Posted to commits@hawq.apache.org by yo...@apache.org on 2017/01/06 17:32:16 UTC

[01/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Repository: incubator-hawq-docs
Updated Branches:
  refs/heads/develop 25242858c -> de1e2e07e


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/reference/HDFSConfigurationParameterReference.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/HDFSConfigurationParameterReference.html.md.erb b/reference/HDFSConfigurationParameterReference.html.md.erb
deleted file mode 100644
index aef4ed2..0000000
--- a/reference/HDFSConfigurationParameterReference.html.md.erb
+++ /dev/null
@@ -1,257 +0,0 @@
----
-title: HDFS Configuration Reference
----
-
-This reference page describes HDFS configuration values that are configured for HAWQ in `hdfs-site.xml`, `core-site.xml`, or `hdfs-client.xml`.
-
-## <a id="topic_ixj_xw1_1w"></a>HDFS Site Configuration (hdfs-site.xml and core-site.xml)
-
-This topic provides a reference of the HDFS site configuration values recommended for HAWQ installations. These parameters are located in either `hdfs-site.xml` or `core-site.xml` of your HDFS deployment.
-
-This table describes the configuration parameters and values that are recommended for HAWQ installations. Only HDFS parameters that need to be modified or customized for HAWQ are listed.
-
-| Parameter                                 | Description                                                                                                                                                                                                        | Recommended Value for HAWQ Installs                                   | Comments                                                                                                                                                               |
-|-------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `dfs.allow.truncate`                      | Allows truncate.                                                                                                                                                                                                   | true                                                                  | HAWQ requires that you enable `dfs.allow.truncate`. The HAWQ service will fail to start if `dfs.allow.truncate` is not set to `true`.                                  |
-| `dfs.block.access.token.enable`           | If `true`, access tokens are used as capabilities for accessing DataNodes. If `false`, no access tokens are checked on accessing DataNodes. | *false* for an unsecured HDFS cluster, or *true* for a secure cluster | |
-| `dfs.block.local-path-access.user`        | Comma-separated list of the users allowed to open block files on legacy short-circuit local read. | gpadmin | |
-| `dfs.client.read.shortcircuit`            | This configuration parameter turns on short-circuit local reads.                                                                                                                                                   | true                                                                  | In Ambari, this parameter corresponds to **HDFS Short-circuit read**. The value for this parameter should be the same in `hdfs-site.xml` and HAWQ's `hdfs-client.xml`. |
-| `dfs.client.socket-timeout`               | The amount of time before a client connection times out when establishing a connection or reading. The value is expressed in milliseconds. | 300000000 | |
-| `dfs.client.use.legacy.blockreader.local` | Setting this value to false specifies that the new version of the short-circuit reader is used. Setting this value to true specifies that the legacy short-circuit reader is used. | false | |
-| `dfs.datanode.data.dir.perm`              | Permissions for the directories on the local filesystem where the DFS DataNode stores its blocks. The permissions can either be octal or symbolic. | 750 | In Ambari, this parameter corresponds to **DataNode directories permission** |
-| `dfs.datanode.handler.count`              | The number of server threads for the DataNode. | 60 | |
-| `dfs.datanode.max.transfer.threads`       | Specifies the maximum number of threads to use for transferring data in and out of the DataNode.                                                                                                                   | 40960                                                                 | In Ambari, this parameter corresponds to **DataNode max data transfer threads**                                                                                        |
-| `dfs.datanode.socket.write.timeout`       | The amount of time before a write operation times out, expressed in milliseconds. | 7200000 | |
-| `dfs.domain.socket.path`                  | (Optional.) The path to a UNIX domain socket to use for communication between the DataNode and local HDFS clients. If the string "\_PORT" is present in this path, it is replaced by the TCP port of the DataNode. | | If set, the value for this parameter should be the same in `hdfs-site.xml` and HAWQ's `hdfs-client.xml`. |
-| `dfs.namenode.accesstime.precision`       | The access time for an HDFS file is precise up to this value. Setting a value of 0 disables access times for HDFS. | 0 | In Ambari, this parameter corresponds to **Access time precision** |
-| `dfs.namenode.handler.count`              | The number of server threads for the NameNode. | 600 | |
-| `dfs.support.append`                      | Whether HDFS is allowed to append to files. | true | |
-| `ipc.client.connection.maxidletime`       | The maximum time in milliseconds after which a client will bring down the connection to the server.                                                                                                                | 3600000                                                               | In core-site.xml                                                                                                                                                       |
-| `ipc.client.connect.timeout`              | Indicates the number of milliseconds a client will wait for the socket to establish a server connection.                                                                                                           | 300000                                                                | In core-site.xml                                                                                                                                                       |
-| `ipc.server.listen.queue.size`            | Indicates the length of the listen queue for servers accepting client connections.                                                                                                                                 | 3300                                                                  | In core-site.xml                                                                                                                                                       |
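-
-For illustration only, the following excerpt shows how a few of these recommended settings might appear in `hdfs-site.xml`; the property names and values are taken from the table above, and you should adapt them to your deployment:
-
-``` xml
-<!-- hdfs-site.xml excerpt with values recommended for HAWQ -->
-<property>
-  <name>dfs.allow.truncate</name>
-  <value>true</value>
-</property>
-<property>
-  <name>dfs.client.read.shortcircuit</name>
-  <value>true</value>
-</property>
-<property>
-  <name>dfs.datanode.max.transfer.threads</name>
-  <value>40960</value>
-</property>
-```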
-
-## <a id="topic_l1c_zw1_1w"></a>HDFS Client Configuration (hdfs-client.xml)
-
-This topic provides a reference of the HAWQ configuration values located in `$GPHOME/etc/hdfs-client.xml`.
-
-This table describes the configuration parameters and their default values:
-
-<table>
-<colgroup>
-<col width="25%" />
-<col width="25%" />
-<col width="25%" />
-<col width="25%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th>Parameter</th>
-<th>Description</th>
-<th>Default Value</th>
-<th>Comments</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td><code class="ph codeph">dfs.client.failover.max.attempts</code></td>
-<td>The maximum number of times that the DFS client retries issuing an RPC call when multiple NameNodes are configured.</td>
-<td>15</td>
-<td> </td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">dfs.client.log.severity</code></td>
-<td>The minimal log severity level. Valid values include: FATAL, ERROR, INFO, DEBUG1, DEBUG2, and DEBUG3.</td>
-<td>INFO</td>
-<td> </td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">dfs.client.read.shortcircuit</code></td>
-<td>Determines whether the DataNode is bypassed when reading file blocks, if the block and client are on the same node. The default value, true, bypasses the DataNode.</td>
-<td>true</td>
-<td>The value for this parameter should be the same in <code class="ph codeph">hdfs-site.xml</code> and HAWQ's <code class="ph codeph">hdfs-client.xml</code>.</td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">dfs.client.use.legacy.blockreader.local</code></td>
-<td>Determines whether the legacy short-circuit reader implementation, based on HDFS-2246, is used. Set this property to true on non-Linux platforms that do not have the new implementation based on HDFS-347.</td>
-<td>false</td>
-<td> </td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">dfs.default.blocksize</code></td>
-<td>Default block size, in bytes.</td>
-<td>134217728</td>
-<td>Default is equivalent to 128 MB.</td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">dfs.default.replica</code></td>
-<td>The default number of replicas.</td>
-<td>3</td>
-<td> </td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">dfs.domain.socket.path</code></td>
-<td>(Optional.) The path to a UNIX domain socket to use for communication between the DataNode and local HDFS clients. If the string &quot;_PORT&quot; is present in this path, it is replaced by the TCP port of the DataNode.</td>
-<td> </td>
-<td>If set, the value for this parameter should be the same in <code class="ph codeph">hdfs-site.xml</code> and HAWQ's <code class="ph codeph">hdfs-client.xml</code>.</td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">dfs.prefetchsize</code></td>
-<td>The number of blocks for which information is pre-fetched.</td>
-<td>10</td>
-<td> </td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">hadoop.security.authentication</code></td>
-<td>Specifies the type of RPC authentication to use. A value of <code class="ph codeph">simple</code> indicates no authentication. A value of <code class="ph codeph">kerberos</code> enables authentication by Kerberos.</td>
-<td>simple</td>
-<td> </td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">input.connect.timeout</code></td>
-<td>The timeout interval, in milliseconds, for when the input stream is setting up a connection to a DataNode.</td>
-<td>600000</td>
-<td>Default is equal to 10 minutes.</td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">input.localread.blockinfo.cachesize</code></td>
-<td>The size of the file block path information cache, in bytes.</td>
-<td>1000</td>
-<td> </td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">input.localread.default.buffersize</code></td>
-<td>The size of the buffer, in bytes, used to hold data from the file block and verify the checksum. This value is used only when <code class="ph codeph">dfs.client.read.shortcircuit</code> is set to true.</td>
-<td>1048576</td>
-<td>Default is equal to 1 MB. Used only when <code class="ph codeph">dfs.client.read.shortcircuit</code> is set to true.
-<p>If an older version of <code class="ph codeph">hdfs-client.xml</code> is retained during upgrade, set <code class="ph codeph">input.localread.default.buffersize</code> to 2097152 to avoid performance degradation.</p></td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">input.read.getblockinfo.retry</code></td>
-<td>The maximum number of times the client should retry getting block information from the NameNode.</td>
-<td>3</td>
-<td></td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">input.read.timeout</code></td>
-<td>The timeout interval, in milliseconds, for when the input stream is reading from a DataNode.</td>
-<td>3600000</td>
-<td>Default is equal to 1 hour.</td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">input.write.timeout</code></td>
-<td>The timeout interval, in milliseconds, for when the input stream is writing to a DataNode.</td>
-<td>3600000</td>
-<td> </td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">output.close.timeout</code></td>
-<td>The timeout interval for closing an output stream, in milliseconds.</td>
-<td>900000</td>
-<td>Default is equal to 15 minutes.</td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">output.connect.timeout</code></td>
-<td>The timeout interval, in milliseconds, for when the output stream is setting up a connection to a DataNode.</td>
-<td>600000</td>
-<td>Default is equal to 10 minutes.</td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">output.default.chunksize</code></td>
-<td>The chunk size of the pipeline, in bytes.</td>
-<td>512</td>
-<td> </td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">output.default.packetsize</code></td>
-<td>The packet size of the pipeline, in bytes.</td>
-<td>65536</td>
-<td>Default is equal to 64 KB.</td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">output.default.write.retry</code></td>
-<td>The maximum number of times that the client should reattempt to set up a failed pipeline.</td>
-<td>10</td>
-<td> </td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">output.packetpool.size</code></td>
-<td>The maximum number of packets in a file's packet pool.</td>
-<td>1024</td>
-<td> </td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">output.read.timeout</code></td>
-<td>The timeout interval, in milliseconds, for when the output stream is reading from a DataNode.</td>
-<td>3600000</td>
-<td>Default is equal to 1 hour.</td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">output.replace-datanode-on-failure</code></td>
-<td>Determines whether the client adds a new DataNode to the pipeline if the number of nodes in the pipeline is less than the specified number of replicas.</td>
-<td>false (if the number of nodes is less than or equal to 4), otherwise true</td>
-<td>When you deploy a HAWQ cluster, the <code class="ph codeph">hawq init</code> utility detects the number of nodes in the cluster and updates this configuration parameter accordingly. However, when expanding an existing cluster to 4 or more nodes, you must manually set this value to true. Set it to false if you remove existing nodes and the cluster falls below 4 nodes.</td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">output.write.timeout</code></td>
-<td>The timeout interval, in milliseconds, for when the output stream is writing to a DataNode.</td>
-<td>3600000</td>
-<td>Default is equal to 1 hour.</td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">rpc.client.connect.retry</code></td>
-<td>The maximum number of times to retry a connection if the RPC client fails to connect to the server.</td>
-<td>10</td>
-<td> </td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">rpc.client.connect.tcpnodelay</code></td>
-<td>Determines whether TCP_NODELAY is used when connecting to the RPC server.</td>
-<td>true</td>
-<td> </td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">rpc.client.connect.timeout</code></td>
-<td>The timeout interval for establishing the RPC client connection, in milliseconds.</td>
-<td>600000</td>
-<td>Default equals 10 minutes.</td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">rpc.client.max.idle</code></td>
-<td>The maximum idle time for an RPC connection, in milliseconds.</td>
-<td>10000</td>
-<td>Default equals 10 seconds.</td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">rpc.client.ping.interval</code></td>
-<td>The interval, in milliseconds, at which the RPC client sends a heartbeat to the server. A value of 0 disables the heartbeat.</td>
-<td>10000</td>
-<td> </td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">rpc.client.read.timeout</code></td>
-<td>The timeout interval, in milliseconds, for when the RPC client is reading from the server.</td>
-<td>3600000</td>
-<td>Default equals 1 hour.</td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">rpc.client.socket.linger.timeout</code></td>
-<td>The value to set for the SO_LINGER socket option when connecting to the RPC server.</td>
-<td>-1</td>
-<td> </td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">rpc.client.timeout</code></td>
-<td>The timeout interval of an RPC invocation, in milliseconds.</td>
-<td>3600000</td>
-<td>Default equals 1 hour.</td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">rpc.client.write.timeout</code></td>
-<td>The timeout interval, in milliseconds, for when the RPC client is writing to the server.</td>
-<td>3600000</td>
-<td>Default equals 1 hour.</td>
-</tr>
-</tbody>
-</table>
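-
-As a sketch only, entries in `$GPHOME/etc/hdfs-client.xml` use the same property/value layout; the example below sets two of the parameters described above, and the values shown are illustrative rather than prescriptive:
-
-``` xml
-<!-- $GPHOME/etc/hdfs-client.xml excerpt -->
-<property>
-  <name>dfs.client.read.shortcircuit</name>
-  <!-- should match the value configured in hdfs-site.xml -->
-  <value>true</value>
-</property>
-<property>
-  <name>output.replace-datanode-on-failure</name>
-  <!-- set manually to true when expanding the cluster to 4 or more nodes -->
-  <value>true</value>
-</property>
-```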
-
-


[09/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/svg/hawq_hcatalog.svg
----------------------------------------------------------------------
diff --git a/mdimages/svg/hawq_hcatalog.svg b/mdimages/svg/hawq_hcatalog.svg
deleted file mode 100644
index 4a99830..0000000
--- a/mdimages/svg/hawq_hcatalog.svg
+++ /dev/null
@@ -1,3 +0,0 @@
-<?xml version="1.0" encoding="utf-8" standalone="no"?>
-<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
-<svg xmlns="http://www.w3.org/2000/svg" xmlns:xl="http://www.w3.org/1999/xlink" version="1.1" viewBox="144 110 600 195" width="50pc" height="195pt" xmlns:dc="http://purl.org/dc/elements/1.1/"><metadata> Produced by OmniGraffle 6.0.5 <dc:date>2015-11-30 20:39Z</dc:date></metadata><defs><filter id="Shadow" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" result="blur" stdDeviation="1.308"/><feOffset in="blur" result="offset" dx="0" dy="2"/><feFlood flood-color="black" flood-opacity=".5" result="flood"/><feComposite in="flood" in2="offset" operator="in"/></filter><filter id="Shadow_2" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" result="blur" stdDeviation="1.3030978"/><feOffset in="blur" result="offset" dx="0" dy="2"/><feFlood flood-color="black" flood-opacity=".2" result="flood"/><feComposite in="flood" in2="offset" operator="in" result="color"/><feMerge><feMergeNode in="color"/><feMergeNode in="SourceGraphic"/></feMerge></filter><font-face font-family="H
 elvetica" font-size="14" units-per-em="1000" underline-position="-75.683594" underline-thickness="49.316406" slope="0" x-height="532.22656" cap-height="719.72656" ascent="770.01953" descent="-229.98047" font-weight="bold"><font-face-src><font-face-name name="Helvetica-Bold"/></font-face-src></font-face><font-face font-family="Helvetica" font-size="12" units-per-em="1000" underline-position="-75.683594" underline-thickness="49.316406" slope="0" x-height="532.22656" cap-height="719.72656" ascent="770.01953" descent="-229.98047" font-weight="bold"><font-face-src><font-face-name name="Helvetica-Bold"/></font-face-src></font-face><font-face font-family="Helvetica" font-size="9" units-per-em="1000" underline-position="-75.683594" underline-thickness="49.316406" slope="0" x-height="522.94922" cap-height="717.28516" ascent="770.01953" descent="-229.98047" font-weight="500"><font-face-src><font-face-name name="Helvetica"/></font-face-src></font-face><marker orient="auto" overflow="visible" m
 arkerUnits="strokeWidth" id="FilledArrow_Marker" viewBox="-1 -4 10 8" markerWidth="10" markerHeight="8" color="black"><g><path d="M 8 0 L 0 -3 L 0 3 Z" fill="currentColor" stroke="currentColor" stroke-width="1"/></g></marker><font-face font-family="Helvetica" font-size="11" units-per-em="1000" underline-position="-75.683594" underline-thickness="49.316406" slope="0" x-height="532.22656" cap-height="719.72656" ascent="770.01953" descent="-229.98047" font-weight="bold"><font-face-src><font-face-name name="Helvetica-Bold"/></font-face-src></font-face><font-face font-family="Helvetica" font-size="11" units-per-em="1000" underline-position="-75.683594" underline-thickness="49.316406" slope="0" x-height="522.94922" cap-height="717.28516" ascent="770.01953" descent="-229.98047" font-weight="500"><font-face-src><font-face-name name="Helvetica"/></font-face-src></font-face></defs><g stroke="none" stroke-opacity="1" stroke-dasharray="none" fill="none" fill-opacity="1"><title>Canvas 1</title><
 g><title>Layer 1</title><g><xl:use xl:href="#id24_Graphic" filter="url(#Shadow)"/><xl:use xl:href="#id10_Graphic" filter="url(#Shadow)"/></g><g filter="url(#Shadow_2)"><path d="M 594 183 L 627.75 123 L 695.25 123 L 729 183 L 695.25 243 L 627.75 243 Z" fill="white"/><path d="M 594 183 L 627.75 123 L 695.25 123 L 729 183 L 695.25 243 L 627.75 243 Z" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(599 174.5)" fill="black"><tspan font-family="Helvetica" font-size="14" font-weight="bold" x="46.16211" y="14" textLength="32.675781">HIVE</tspan></text></g><path d="M 540.59675 203 L 540.59675 183 L 583 183 L 583 173 L 603 193 L 583 213 L 583 203 Z" fill="#a9b7c2"/><path d="M 540.59675 203 L 540.59675 183 L 583 183 L 583 173 L 603 193 L 583 213 L 583 203 Z" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(553.39716 186)" fill="black"><tspan font-family="Helvetica" font-size="12
 " font-weight="bold" x="6.7322738" y="11" textLength="23.33789">PXF</tspan></text><path d="M 540.59675 243 L 540.59675 223 L 583 223 L 583 213 L 603 233 L 583 253 L 583 243 Z" fill="#a9b7c2"/><path d="M 540.59675 243 L 540.59675 223 L 583 223 L 583 213 L 603 233 L 583 253 L 583 243 Z" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(553.39716 226)" fill="black"><tspan font-family="Helvetica" font-size="12" font-weight="bold" x="6.7322738" y="11" textLength="23.33789">PXF</tspan></text><path d="M 540.59675 163 L 540.59675 143 L 583 143 L 583 133 L 603 153 L 583 173 L 583 163 Z" fill="#a9b7c2"/><path d="M 540.59675 163 L 540.59675 143 L 583 143 L 583 133 L 603 153 L 583 173 L 583 163 Z" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(553.39716 146)" fill="black"><tspan font-family="Helvetica" font-size="12" font-weight="bold" x="6.7322738" y="11" textLength="23.33789">P
 XF</tspan></text><g filter="url(#Shadow_2)"><rect x="414" y="234" width="81" height="45" fill="#a9b7c1"/><rect x="414" y="234" width="81" height="45" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(419 251)" fill="black"><tspan font-family="Helvetica" font-size="9" font-weight="500" x="5.729248" y="9" textLength="59.541504">table metadata</tspan></text></g><line x1="358" y1="211.5" x2="442.10014" y2="211.94734" marker-end="url(#FilledArrow_Marker)" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><g filter="url(#Shadow_2)"><circle cx="400.5" cy="184.5" r="13.5000216" fill="#dbdbdb"/><circle cx="400.5" cy="184.5" r="13.5000216" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(394.7 177.5)" fill="black"><tspan font-family="Helvetica" font-size="12" font-weight="bold" x="2.463086" y="11" textLength="6.673828">1</tspan></text></g><line x1="4
 14" y1="243" x2="367.9" y2="243" marker-end="url(#FilledArrow_Marker)" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><g id="id24_Graphic"><path d="M 452 240.8 L 452 183.2 C 452 179.2256 468.128 176 488 176 C 507.872 176 524 179.2256 524 183.2 L 524 240.8 C 524 244.7744 507.872 248 488 248 C 468.128 248 452 244.7744 452 240.8" fill="#a9b7c1"/><path d="M 452 240.8 L 452 183.2 C 452 179.2256 468.128 176 488 176 C 507.872 176 524 179.2256 524 183.2 L 524 240.8 C 524 244.7744 507.872 248 488 248 C 468.128 248 452 244.7744 452 240.8 M 452 183.2 C 452 187.1744 468.128 190.4 488 190.4 C 507.872 190.4 524 187.1744 524 183.2" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(457 207.1)" fill="black"><tspan font-family="Helvetica" font-size="14" font-weight="bold" x=".2758789" y="14" textLength="61.448242">HCatalog</tspan></text></g><line x1="360" y1="153" x2="530.69675" y2="153" marker-end="url(#FilledA
 rrow_Marker)" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><path d="M 254 261 L 225 261 L 198 261 L 198 243.9" marker-end="url(#FilledArrow_Marker)" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><g id="id10_Graphic"><path d="M 250 272.7 L 250 150.3 C 250 141.8544 274.192 135 304 135 C 333.808 135 358 141.8544 358 150.3 L 358 272.7 C 358 281.1456 333.808 288 304 288 C 274.192 288 250 281.1456 250 272.7" fill="white"/><path d="M 250 272.7 L 250 150.3 C 250 141.8544 274.192 135 304 135 C 333.808 135 358 141.8544 358 150.3 L 358 272.7 C 358 281.1456 333.808 288 304 288 C 274.192 288 250 281.1456 250 272.7 M 250 150.3 C 250 158.7456 274.192 165.6 304 165.6 C 333.808 165.6 358 158.7456 358 150.3" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(255 210.65)" fill="black"><tspan font-family="Helvetica" font-size="14" font-weight="bold" x="27.220703" y="14" textLengt
 h="20.220703">HA</tspan><tspan font-family="Helvetica" font-size="14" font-weight="bold" x="46.67578" y="14" textLength="24.103516">WQ</tspan></text></g><g filter="url(#Shadow_2)"><path d="M 172.29774 210.86712 C 155.8125 207 162.3864 174.4452 188.68404 180 C 191.12388 169.17192 221.7045 170.92944 221.50458 180 C 240.67956 168.39864 265.18404 191.53152 248.74776 203.13288 C 268.47048 208.75752 248.49888 239.06232 232.3125 234 C 231.0171 242.43768 202.08072 245.3904 199.54092 234 C 183.15564 246.1644 148.98972 227.46096 172.29774 210.86712 Z" fill="white"/><path d="M 172.29774 210.86712 C 155.8125 207 162.3864 174.4452 188.68404 180 C 191.12388 169.17192 221.7045 170.92944 221.50458 180 C 240.67956 168.39864 265.18404 191.53152 248.74776 203.13288 C 268.47048 208.75752 248.49888 239.06232 232.3125 234 C 231.0171 242.43768 202.08072 245.3904 199.54092 234 C 183.15564 246.1644 148.98972 227.46096 172.29774 210.86712 Z" stroke="black" stroke-linecap="round" stroke-linejoin="round" strok
 e-width="1"/><text transform="translate(179.3 187.5)" fill="black"><tspan font-family="Helvetica" font-size="11" font-weight="bold" x=".75078125" y="10" textLength="62.95459">in-memory: </tspan><tspan font-family="Helvetica" font-size="11" font-weight="500" x="2.2600586" y="23" textLength="56.879883">pg_exttable</tspan><tspan font-family="Helvetica" font-size="11" font-weight="500" x="3.4927246" y="36" textLength="54.41455">pg_class\u2026</tspan></text></g><g filter="url(#Shadow_2)"><circle cx="220.5" cy="265.5" r="13.5000216" fill="#dbdbdb"/><circle cx="220.5" cy="265.5" r="13.5000216" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(214.7 258.5)" fill="black"><tspan font-family="Helvetica" font-size="12" font-weight="bold" x="2.463086" y="11" textLength="6.673828">2</tspan></text></g><g filter="url(#Shadow_2)"><circle cx="431.1501" cy="153" r="13.5000216" fill="#dbdbdb"/><circle cx="431.1501" cy="153" r="13.5000216" stroke="bl
 ack" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(425.3501 146)" fill="black"><tspan font-family="Helvetica" font-size="12" font-weight="bold" x="2.463086" y="11" textLength="6.673828">3</tspan></text></g></g><g><title>Layer 2</title><path d="M 369.59675 221 L 369.59675 201 L 412 201 L 412 191 L 432 211 L 412 231 L 412 221 Z" fill="#a9b7c2"/><path d="M 369.59675 221 L 369.59675 201 L 412 201 L 412 191 L 432 211 L 412 231 L 412 221 Z" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(382.39716 204)" fill="black"><tspan font-family="Helvetica" font-size="12" font-weight="bold" x="6.7322738" y="11" textLength="23.33789">PXF</tspan></text></g></g></svg>


[03/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/query/defining-queries.html.md.erb
----------------------------------------------------------------------
diff --git a/query/defining-queries.html.md.erb b/query/defining-queries.html.md.erb
deleted file mode 100644
index b796511..0000000
--- a/query/defining-queries.html.md.erb
+++ /dev/null
@@ -1,528 +0,0 @@
----
-title: Defining Queries
----
-
-HAWQ is based on the PostgreSQL implementation of the SQL standard. SQL commands are typically entered using the standard PostgreSQL interactive terminal `psql`, but other programs that have similar functionality can be used as well.
-
-
-## <a id="topic3"></a>SQL Lexicon
-
-SQL is a standard language for accessing databases. The language consists of elements that enable data storage, retrieval, analysis, viewing, and so on. You use SQL commands to construct queries and commands that the HAWQ engine understands.
-
-SQL queries consist of a sequence of commands. Commands consist of a sequence of valid tokens in correct syntax order, terminated by a semicolon (`;`).
-
-HAWQ uses PostgreSQL's structure and syntax, with some exceptions. For more information about SQL rules and concepts in PostgreSQL, see "SQL Syntax" in the PostgreSQL documentation.
-
-## <a id="topic4"></a>SQL Value Expressions
-
-SQL value expressions consist of one or more values, symbols, operators, SQL functions, and data. The expressions compare data or perform calculations and return a value as the result. Calculations include logical, arithmetic, and set operations.
-
-The following are value expressions:
-
--   Aggregate expressions
--   Array constructors
--   Column references
--   Constant or literal values
--   Correlated subqueries
--   Field selection expressions
--   Function calls
--   New column values in an `INSERT`
--   Operator invocation column references
--   Positional parameter references, in the body of a function definition or prepared statement
--   Row constructors
--   Scalar subqueries
--   Search conditions in a `WHERE` clause
--   Target lists of a `SELECT` command
--   Type casts
--   Value expressions in parentheses, useful to group sub-expressions and override precedence
--   Window expressions
-
-SQL constructs such as functions and operators are expressions but do not follow any general syntax rules. For more information about these constructs, see [Using Functions and Operators](functions-operators.html#topic26).
-
-### <a id="topic5"></a>Column References
-
-A column reference has the form:
-
-```
-correlation.columnname
-```
-
-Here, `correlation` is the name of a table (possibly qualified with a schema name) or an alias for a table defined with a `FROM` clause or one of the keywords `NEW` or `OLD`. `NEW` and `OLD` can appear only in rewrite rules, but you can use other correlation names in any SQL statement. If the column name is unique across all tables in the query, you can omit the "`correlation.`" part of the column reference.
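-
-For example, assuming a `sales` table with a `customer_id` column (hypothetical names used only for illustration), the following references resolve to the same column:
-
-``` sql
-SELECT sales.customer_id FROM sales;   -- qualified by table name
-SELECT s.customer_id FROM sales s;     -- qualified by an alias defined in the FROM clause
-SELECT customer_id FROM sales;         -- unqualified; allowed because the name is unambiguous
-```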
-
-### <a id="topic6"></a>Positional Parameters
-
-Positional parameters are arguments to SQL statements or functions that you reference by their positions in a series of arguments. For example, `$1` refers to the first argument, `$2` to the second argument, and so on. The values of positional parameters are set from arguments external to the SQL statement or supplied when SQL functions are invoked. Some client libraries support specifying data values separately from the SQL command, in which case parameters refer to the out-of-line data values. A parameter reference has the form:
-
-```
-$number
-```
-
-For example:
-
-``` pre
-CREATE FUNCTION dept(text) RETURNS dept
-    AS $$ SELECT * FROM dept WHERE name = $1 $$
-    LANGUAGE SQL;
-```
-
-Here, the `$1` references the value of the first function argument whenever the function is invoked.
-
-### <a id="topic7"></a>Subscripts
-
-If an expression yields a value of an array type, you can extract a specific element of the array value as follows:
-
-``` pre
-expression[subscript]
-```
-
-You can extract multiple adjacent elements, called an array slice, as follows (including the brackets):
-
-``` pre
-expression[lower_subscript:upper_subscript]
-```
-
-Each subscript is an expression and yields an integer value.
-
-Array expressions usually must be in parentheses, but you can omit the parentheses when the expression to be subscripted is a column reference or positional parameter. You can concatenate multiple subscripts when the original array is multidimensional. For example (including the parentheses):
-
-``` pre
-mytable.arraycolumn[4]
-```
-
-``` pre
-mytable.two_d_column[17][34]
-```
-
-``` pre
-$1[10:42]
-```
-
-``` pre
-(arrayfunction(a,b))[42]
-```
-
-### <a id="topic8"></a>Field Selections
-
-If an expression yields a value of a composite type (row type), you can extract a specific field of the row as follows:
-
-```
-expression.fieldname
-```
-
-The row expression usually must be in parentheses, but you can omit these parentheses when the expression to be selected from is a table reference or positional parameter. For example:
-
-``` pre
-mytable.mycolumn
-```
-
-``` pre
-$1.somecolumn
-```
-
-``` pre
-(rowfunction(a,b)).col3
-```
-
-A qualified column reference is a special case of field selection syntax.
-
-### <a id="topic9"></a>Operator Invocations
-
-Operator invocations have the following possible syntaxes:
-
-``` pre
-expression operator expression   (binary infix operator)
-```
-
-``` pre
-operator expression   (unary prefix operator)
-```
-
-``` pre
-expression operator   (unary postfix operator)
-```
-
-Where *operator* is an operator token, one of the key words `AND`, `OR`, or `NOT`, or a qualified operator name in the form:
-
-``` pre
-OPERATOR(schema.operatorname)
-```
-
-Available operators and whether they are unary or binary depends on the operators that the system or user defines. For more information about built-in operators, see [Built-in Functions and Operators](functions-operators.html#topic29).
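-
-For example, the following statements show a binary infix operator, a keyword operator, and a schema-qualified operator name (all built-in):
-
-``` sql
-SELECT 2 + 3;                        -- binary infix operator
-SELECT NOT true;                     -- keyword operator
-SELECT 2 OPERATOR(pg_catalog.+) 3;   -- qualified operator name
-```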
-
-### <a id="topic10"></a>Function Calls
-
-The syntax for a function call is the name of a function (possibly qualified with a schema name), followed by its argument list enclosed in parentheses:
-
-``` pre
-function ([expression [, expression ... ]])
-```
-
-For example, the following function call computes the square root of 2:
-
-``` pre
-sqrt(2)
-```
-
-### <a id="topic11"></a>Aggregate Expressions
-
-An aggregate expression applies an aggregate function across the rows that a query selects. An aggregate function performs a calculation on a set of values and returns a single value, such as the sum or average of the set of values. The syntax of an aggregate expression is one of the following:
-
--   `aggregate_name(expression [ , ... ] ) [FILTER (WHERE condition)]` – operates across all input rows for which the expected result value is non-null. `ALL` is the default.
--   `aggregate_name(ALL expression [ , ... ] ) [FILTER (WHERE condition)]` – operates identically to the first form because `ALL` is the default.
--   `aggregate_name(DISTINCT expression [ , ... ] ) [FILTER (WHERE condition)]` – operates across all distinct non-null values of input rows.
--   `aggregate_name(*) [FILTER (WHERE condition)]` – operates on all rows with values both null and non-null. Generally, this form is most useful for the `count(*)` aggregate function.
-
-Where *aggregate\_name* is a previously defined aggregate (possibly schema-qualified) and *expression* is any value expression that does not contain an aggregate expression.
-
-For example, `count(*)` yields the total number of input rows, `count(f1)` yields the number of input rows in which `f1` is non-null, and `count(distinct f1)` yields the number of distinct non-null values of `f1`.
-
-You can specify a condition with the `FILTER` clause to limit the input rows to the aggregate function. For example:
-
-``` sql
-SELECT count(*) FILTER (WHERE gender='F') FROM employee;
-```
-
-The `WHERE condition` of the `FILTER` clause cannot contain a set-returning function, subquery, window function, or outer reference. If you use a user-defined aggregate function, declare the state transition function as `STRICT` (see `CREATE AGGREGATE`).
-
-For predefined aggregate functions, see [Built-in Functions and Operators](functions-operators.html#topic29). You can also add custom aggregate functions.
-
-HAWQ provides the `MEDIAN` aggregate function, which returns the fiftieth percentile of the `PERCENTILE_CONT` result, and special aggregate expressions for inverse distribution functions as follows:
-
-``` sql
-PERCENTILE_CONT(percentage) WITHIN GROUP (ORDER BY expression)
-```
-
-``` sql
-PERCENTILE_DISC(percentage) WITHIN GROUP (ORDER BY expression)
-```
-
-Currently you can use only these two expressions with the keyword `WITHIN GROUP`.
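-
-For example, the following query returns the median salary along with the twenty-fifth and seventy-fifth percentiles; it assumes the `employee` table has a numeric `salary` column:
-
-``` sql
-SELECT MEDIAN(salary) AS median_salary,
-       PERCENTILE_CONT(0.25) WITHIN GROUP (ORDER BY salary) AS pct_25,
-       PERCENTILE_DISC(0.75) WITHIN GROUP (ORDER BY salary) AS pct_75
-FROM employee;
-```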
-
-#### <a id="topic12"></a>Limitations of Aggregate Expressions
-
-The following are current limitations of the aggregate expressions:
-
--   HAWQ does not support the following keywords: ALL, DISTINCT, FILTER and OVER. See [Advanced Aggregate Functions](functions-operators.html#topic31__in2073121) for more details.
--   An aggregate expression can appear only in the result list or HAVING clause of a SELECT command. It is forbidden in other clauses, such as WHERE, because those clauses are logically evaluated before the results of aggregates form. This restriction applies to the query level to which the aggregate belongs.
--   When an aggregate expression appears in a subquery, the aggregate is normally evaluated over the rows of the subquery. If the aggregate's arguments contain only outer-level variables, the aggregate belongs to the nearest such outer level and evaluates over the rows of that query. The aggregate expression as a whole is then an outer reference for the subquery in which it appears, and the aggregate expression acts as a constant over any one evaluation of that subquery. See [Scalar Subqueries](#topic15) and [Built-in functions and operators](functions-operators.html#topic29__in204913).
--   HAWQ does not support DISTINCT with multiple input expressions.
-
-### <a id="topic13"></a>Window Expressions
-
-Window expressions allow application developers to more easily compose complex online analytical processing (OLAP) queries using standard SQL commands. For example, with window expressions, users can calculate moving averages or sums over various intervals, reset aggregations and ranks as selected column values change, and express complex ratios in simple terms.
-
-A window expression represents the application of a *window function* to a *window frame*, which is defined in a special `OVER()` clause. A window partition is a set of rows that are grouped together to apply a window function. Unlike aggregate functions, which return a result value for each group of rows, window functions return a result value for every row, but that value is calculated with respect to the rows in a particular window partition. If no partition is specified, the window function is computed over the complete intermediate result set.
-
-The syntax of a window expression is:
-
-``` pre
-window_function ( [expression [, ...]] ) OVER ( window_specification )
-```
-
-Where *`window_function`* is one of the functions listed in [Window functions](functions-operators.html#topic30__in164369), *`expression`* is any value expression that does not contain a window expression, and *`window_specification`* is:
-
-```
-[window_name]
-[PARTITION BY expression [, ...]]
-[[ORDER BY expression [ASC | DESC | USING operator] [, ...]
-    [{RANGE | ROWS}
-       { UNBOUNDED PRECEDING
-       | expression PRECEDING
-       | CURRENT ROW
-       | BETWEEN window_frame_bound AND window_frame_bound }]]
-```
-
-and where `window_frame_bound` can be one of:
-
-``` 
-    UNBOUNDED PRECEDING
-    expression PRECEDING
-    CURRENT ROW
-    expression FOLLOWING
-    UNBOUNDED FOLLOWING
-```
-
-A window expression can appear only in the select list of a `SELECT` command. For example:
-
-``` sql
-SELECT count(*) OVER(PARTITION BY customer_id), * FROM sales;
-```
-
-The `OVER` clause differentiates window functions from other aggregate or reporting functions. The `OVER` clause defines the *`window_specification`* to which the window function is applied. A window specification has the following characteristics:
-
--   The `PARTITION BY` clause defines the window partitions to which the window function is applied. If omitted, the entire result set is treated as one partition.
--   The `ORDER BY` clause defines the expression(s) for sorting rows within a window partition. The `ORDER BY` clause of a window specification is separate and distinct from the `ORDER BY` clause of a regular query expression. The `ORDER BY` clause is required for the window functions that calculate rankings, as it identifies the measure(s) for the ranking values. For OLAP aggregations, the `ORDER BY` clause is required to use window frames (the `ROWS` | `RANGE` clause).
-
-**Note:** Columns of data types without a coherent ordering, such as `time`, are not good candidates for use in the `ORDER BY` clause of a window specification. `Time`, with or without a specified time zone, lacks a coherent ordering because addition and subtraction do not have the expected effects. For example, the following is not generally true: `x::time < x::time + '2 hour'::interval`
-
--   The `ROWS/RANGE` clause defines a window frame for aggregate (non-ranking) window functions. A window frame defines a set of rows within a window partition. When a window frame is defined, the window function computes on the contents of this moving frame rather than the fixed contents of the entire window partition. Window frames are row-based (`ROWS`) or value-based (`RANGE`).
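-
-For example, the following query uses a row-based window frame to compute a per-vendor moving average; the `daily_sales` table and its columns are hypothetical and shown only to illustrate the syntax:
-
-``` sql
-SELECT vendor_id, sale_date, amount,
-       avg(amount) OVER (PARTITION BY vendor_id
-                         ORDER BY sale_date
-                         ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) AS moving_avg
-FROM daily_sales;
-```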
-
-### <a id="topic14"></a>Type Casts
-
-A type cast specifies a conversion from one data type to another. HAWQ accepts two equivalent syntaxes for type casts:
-
-``` sql
-CAST ( expression AS type )
-expression::type
-```
-
-The `CAST` syntax conforms to SQL; the syntax with `::` is historical PostgreSQL usage.
-
-A cast applied to a value expression of a known type is a run-time type conversion. The cast succeeds only if a suitable type conversion function is defined. This differs from the use of casts with constants. A cast applied to a string literal represents the initial assignment of a type to a literal constant value, so it succeeds for any type if the contents of the string literal are acceptable input syntax for the data type.
-
-You can usually omit an explicit type cast if there is no ambiguity about the type a value expression must produce; for example, when it is assigned to a table column, the system automatically applies a type cast. The system applies automatic casting only to casts marked "OK to apply implicitly" in system catalogs. Other casts must be invoked with explicit casting syntax to prevent unexpected conversions from being applied without the user's knowledge.
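-
-For example, the following statement shows both syntaxes; the cast of the string literal assigns an initial type to a constant, while the cast of `now()` is a run-time conversion:
-
-``` sql
-SELECT CAST('2016-01-06' AS date) AS literal_cast,
-       now()::date AS runtime_cast;
-```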
-
-### <a id="topic15"></a>Scalar Subqueries
-
-A scalar subquery is a `SELECT` query in parentheses that returns exactly one row with one column. Do not use a `SELECT` query that returns multiple rows or columns as a scalar subquery. The query runs and uses the returned value in the surrounding value expression. A correlated scalar subquery contains references to the outer query block.
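-
-For example, the following query uses a scalar subquery to express each row's amount as a fraction of the overall total; it assumes the `sales` table has an `amount` column:
-
-``` sql
-SELECT customer_id, amount,
-       amount::numeric / (SELECT sum(amount) FROM sales) AS fraction_of_total
-FROM sales;
-```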
-
-### <a id="topic16"></a>Correlated Subqueries
-
-A correlated subquery (CSQ) is a `SELECT` query with a `WHERE` clause or target list that contains references to the parent outer clause. CSQs efficiently express results in terms of results of another query. HAWQ supports correlated subqueries that provide compatibility with many existing applications. A CSQ is a scalar or table subquery, depending on whether it returns one or multiple rows. HAWQ does not support correlated subqueries with skip-level correlations.
-
-### <a id="topic17"></a>Correlated Subquery Examples
-
-#### <a id="topic18"></a>Example 1 - Scalar correlated subquery
-
-``` sql
-SELECT * FROM t1 WHERE t1.x 
-> (SELECT MAX(t2.x) FROM t2 WHERE t2.y = t1.y);
-```
-
-#### <a id="topic19"></a>Example 2 - Correlated EXISTS subquery
-
-``` sql
-SELECT * FROM t1 WHERE 
-EXISTS (SELECT 1 FROM t2 WHERE t2.x = t1.x);
-```
-
-HAWQ uses one of the following methods to run CSQs:
-
--   Unnest the CSQ into join operations – This method is most efficient, and it is how HAWQ runs most CSQs, including queries from the TPC-H benchmark.
--   Run the CSQ on every row of the outer query – This method is relatively inefficient, and it is how HAWQ runs queries that contain CSQs in the `SELECT` list or are connected by `OR` conditions.
-
-The following examples illustrate how to rewrite some of these types of queries to improve performance.
-
-#### <a id="topic20"></a>Example 3 - CSQ in the Select List
-
-*Original Query*
-
-``` sql
-SELECT T1.a,
-(SELECT COUNT(DISTINCT T2.z) FROM t2 WHERE t1.x = t2.y) dt2 
-FROM t1;
-```
-
-Rewrite this query to perform an inner join with `t1` first and then perform a left join with `t1` again. The rewrite applies for only an equijoin in the correlated condition.
-
-*Rewritten Query*
-
-``` sql
-SELECT t1.a, dt2 FROM t1 
-LEFT JOIN 
-(SELECT t2.y AS csq_y, COUNT(DISTINCT t2.z) AS dt2 
-FROM t1, t2 WHERE t1.x = t2.y 
-GROUP BY t1.x) 
-ON (t1.x = csq_y);
-```
-
-#### <a id="topic21"></a>Example 4 - CSQs connected by OR Clauses
-
-*Original Query*
-
-``` sql
-SELECT * FROM t1 
-WHERE 
-x > (SELECT COUNT(*) FROM t2 WHERE t1.x = t2.x) 
-OR x < (SELECT COUNT(*) FROM t3 WHERE t1.y = t3.y)
-```
-
-Rewrite this query to separate it into two parts with a union on the `OR` conditions.
-
-*Rewritten Query*
-
-``` sql
-SELECT * FROM t1 
-WHERE x > (SELECT count(*) FROM t2 WHERE t1.x = t2.x) 
-UNION 
-SELECT * FROM t1 
-WHERE x < (SELECT count(*) FROM t3 WHERE t1.y = t3.y)
-```
-
-To view the query plan, use `EXPLAIN SELECT` or `EXPLAIN ANALYZE SELECT`. Subplan nodes in the query plan indicate that the query will run on every row of the outer query, and the query is a candidate for rewriting. For more information about these statements, see [Query Profiling](query-profiling.html#topic39).
-
-### <a id="topic22"></a>Advanced Table Functions
-
-HAWQ supports table functions with `TABLE` value expressions. You can sort input rows for advanced table functions with an `ORDER BY` clause. You can redistribute them with a `SCATTER BY` clause, which specifies one or more columns or an expression so that rows with the specified characteristics are available to the same process. This usage is similar to using a `DISTRIBUTED BY` clause when creating a table, but the redistribution occurs when the query runs.
-
-**Note:**
-Based on the distribution of data, HAWQ automatically parallelizes table functions with `TABLE` value parameters over the nodes of the cluster.
-
-### <a id="topic23"></a>Array Constructors
-
-An array constructor is an expression that builds an array value from values for its member elements. A simple array constructor consists of the key word `ARRAY`, a left square bracket `[`, one or more expressions separated by commas for the array element values, and a right square bracket `]`. For example,
-
-``` sql
-SELECT ARRAY[1,2,3+4];
-```
-
-```
-  array
----------
- {1,2,7}
-```
-
-The array element type is the common type of its member expressions, determined using the same rules as for `UNION` or `CASE` constructs.
-
-You can build multidimensional array values by nesting array constructors. In the inner constructors, you can omit the keyword `ARRAY`. For example, the following two `SELECT` statements produce the same result:
-
-``` sql
-SELECT ARRAY[ARRAY[1,2], ARRAY[3,4]];
-SELECT ARRAY[[1,2],[3,4]];
-```
-
-```
-     array
----------------
- {{1,2},{3,4}}
-```
-
-Since multidimensional arrays must be rectangular, inner constructors at the same level must produce sub-arrays of identical dimensions.
-
-Multidimensional array constructor elements are not limited to a sub-`ARRAY` construct; they are anything that produces an array of the proper kind. For example:
-
-``` sql
-CREATE TABLE arr(f1 int[], f2 int[]);
-INSERT INTO arr VALUES (ARRAY[[1,2],[3,4]], 
-ARRAY[[5,6],[7,8]]);
-SELECT ARRAY[f1, f2, '{{9,10},{11,12}}'::int[]] FROM arr;
-```
-
-```
-                     array
-------------------------------------------------
- {{{1,2},{3,4}},{{5,6},{7,8}},{{9,10},{11,12}}}
-```
-
-You can construct an array from the results of a subquery. Write the array constructor with the keyword `ARRAY` followed by a subquery in parentheses. For example:
-
-``` sql
-SELECT ARRAY(SELECT oid FROM pg_proc WHERE proname LIKE 'bytea%');
-```
-
-```
-                          ?column?
------------------------------------------------------------
- {2011,1954,1948,1952,1951,1244,1950,2005,1949,1953,2006,31}
-```
-
-The subquery must return a single column. The resulting one-dimensional array has an element for each row in the subquery result, with an element type matching that of the subquery's output column. The subscripts of an array value built with `ARRAY` always begin with `1`.
-
-### <a id="topic24"></a>Row Constructors
-
-A row constructor is an expression that builds a row value (also called a composite value) from values for its member fields. For example,
-
-``` sql
-SELECT ROW(1,2.5,'this is a test');
-```
-
-A row constructor can include the syntax `rowvalue.*`, which expands to a list of the elements of the row value, just as when you use the syntax `.*` at the top level of a `SELECT` list. For example, if table `t` has columns `f1` and `f2`, the following queries are the same:
-
-``` sql
-SELECT ROW(t.*, 42) FROM t;
-SELECT ROW(t.f1, t.f2, 42) FROM t;
-```
-
-By default, the value created by a `ROW` expression has an anonymous record type. If necessary, it can be cast to a named composite type – either the row type of a table, or a composite type created with `CREATE TYPE AS`. To avoid ambiguity, you can explicitly cast the value. For example:
-
-``` sql
-CREATE TABLE mytable(f1 int, f2 float, f3 text);
-CREATE FUNCTION getf1(mytable) RETURNS int AS 'SELECT $1.f1' 
-LANGUAGE SQL;
-```
-
-In the following query, you do not need to cast the value because there is only one `getf1()` function and therefore no ambiguity:
-
-``` sql
-SELECT getf1(ROW(1,2.5,'this is a test'));
-```
-
-```
- getf1
--------
-     1
-```
-
-``` sql
-CREATE TYPE myrowtype AS (f1 int, f2 text, f3 numeric);
-CREATE FUNCTION getf1(myrowtype) RETURNS int AS 'SELECT 
-$1.f1' LANGUAGE SQL;
-```
-
-Now we need a cast to indicate which function to call:
-
-``` sql
-SELECT getf1(ROW(1,2.5,'this is a test'));
-```
-```
-ERROR:  function getf1(record) is not unique
-```
-
-``` sql
-SELECT getf1(ROW(1,2.5,'this is a test')::mytable);
-```
-
-```
- getf1
--------
-     1
-```
-
-``` sql
-SELECT getf1(CAST(ROW(11,'this is a test',2.5) AS myrowtype));
-```
-
-```
- getf1
--------
-    11
-```
-
-You can use row constructors to build composite values to be stored in a composite-type table column or to be passed to a function that accepts a composite parameter.
-
-### <a id="topic25"></a>Expression Evaluation Rules
-
-The order of evaluation of subexpressions is undefined. The inputs of an operator or function are not necessarily evaluated left-to-right or in any other fixed order.
-
-If you can determine the result of an expression by evaluating only some parts of the expression, then other subexpressions might not be evaluated at all. For example, in the following expression:
-
-``` sql
-SELECT true OR somefunc();
-```
-
-`somefunc()` would probably not be called at all. The same is true in the following expression:
-
-``` sql
-SELECT somefunc() OR true;
-```
-
-This is not the same as the left-to-right evaluation order that Boolean operators enforce in some programming languages.
-
-Do not use functions with side effects as part of complex expressions, especially in `WHERE` and `HAVING` clauses, because those clauses are extensively reprocessed when developing an execution plan. Boolean expressions (`AND`/`OR`/`NOT` combinations) in those clauses can be reorganized in any manner that Boolean algebra laws allow.
-
-Use a `CASE` construct to force evaluation order. The following example is an untrustworthy way to avoid division by zero in a `WHERE` clause:
-
-``` sql
-SELECT ... WHERE x <> 0 AND y/x > 1.5;
-```
-
-The following example shows a trustworthy evaluation order:
-
-``` sql
-SELECT ... WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false 
-END;
-```
-
-This `CASE` construct usage defeats optimization attempts; use it only when necessary.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/query/functions-operators.html.md.erb
----------------------------------------------------------------------
diff --git a/query/functions-operators.html.md.erb b/query/functions-operators.html.md.erb
deleted file mode 100644
index 8f14ee6..0000000
--- a/query/functions-operators.html.md.erb
+++ /dev/null
@@ -1,437 +0,0 @@
----
-title: Using Functions and Operators
----
-
-HAWQ evaluates functions and operators used in SQL expressions.
-
-## <a id="topic27"></a>Using Functions in HAWQ
-
-In HAWQ, functions can only be run on master.
-
-<a id="topic27__in201681"></a>
-
-<span class="tablecap">Table 1. Functions in HAWQ</span>
-
-
-| Function Type | HAWQ Support       | Description                                                                                                               | Comments                                                                                                                                               |
-|---------------|--------------------|---------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `IMMUTABLE`     | Yes                | Relies only on information directly in its argument list. Given the same argument values, always returns the same result. |   |
-| STABLE        | Yes, in most cases | Within a single table scan, returns the same result for same argument values, but results change across SQL statements.   | Results depend on database lookups or parameter values. `current_timestamp` family of functions is `STABLE`; values do not change within an execution. |
-| VOLATILE      | Restricted         | Function values can change within a single table scan. For example: `random()`, `currval()`, `timeofday()`.               | Any function with side effects is volatile, even if its result is predictable. For example: `setval()`.                                                |
-
-HAWQ does not support functions that return a table reference (`rangeFuncs`) or functions that use the `refCursor` datatype.
-
-## <a id="topic28"></a>User-Defined Functions
-
-HAWQ supports user-defined functions. See [Extending SQL](http://www.postgresql.org/docs/8.2/static/extend.html) in the PostgreSQL documentation for more information.
-
-In HAWQ, the shared library files for user-created functions must reside in the same library path location on every host in the HAWQ array (masters and segments).
-
-**Important:**
-HAWQ does not support the following:
-
--   Enhanced table functions
--   PL/Java Type Maps
-
-
-Use the `CREATE FUNCTION` statement to register user-defined functions that are used as described in [Using Functions in HAWQ](#topic27). By default, user-defined functions are declared as `VOLATILE`, so if your user-defined function is `IMMUTABLE` or `STABLE`, you must specify the correct volatility level when you register your function.
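-
-For example, the following sketch registers a hypothetical function with an explicit volatility level; without the `IMMUTABLE` keyword it would be registered as `VOLATILE`:
-
-``` sql
-CREATE FUNCTION add_one(int) RETURNS int AS
-'SELECT $1 + 1'
-LANGUAGE SQL IMMUTABLE;
-```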
-
-### <a id="functionvolatility"></a>Function Volatility
-
-Every function has a **volatility** classification, with the possibilities being `VOLATILE`, `STABLE`, or `IMMUTABLE`. `VOLATILE` is the default if the [CREATE FUNCTION](../reference/sql/CREATE-FUNCTION.html) command does not specify a category. The volatility category is a promise to the optimizer about the behavior of the function:
-
--   A `VOLATILE` function can do anything, including modifying the database. It can return different results on successive calls with the same arguments. The optimizer makes no assumptions about the behavior of such functions. A query using a volatile function will re-evaluate the function at every row where its value is needed.
--   A `STABLE` function cannot modify the database and is guaranteed to return the same results given the same arguments for all rows within a single statement. This category allows the optimizer to optimize multiple calls of the function to a single call.
--   An `IMMUTABLE` function cannot modify the database and is guaranteed to return the same results given the same arguments forever. This category allows the optimizer to pre-evaluate the function when a query calls it with constant arguments. For example, a query like `SELECT ... WHERE x = 2 + 2` can be simplified on sight to `SELECT ... WHERE x = 4`, because the function underlying the integer addition operator is marked `IMMUTABLE`.
-
-For best optimization results, you should label your functions with the strictest volatility category that is valid for them.
-
-Any function with side effects must be labeled `VOLATILE`, so that calls to it cannot be optimized away. Even a function with no side effects needs to be labeled `VOLATILE` if its value can change within a single query; some examples are `random()`, `currval()`, and `timeofday()`.
-
-Another important example is that the `current_timestamp` family of functions qualify as `STABLE`, since their values do not change within a transaction.
-
-There is relatively little difference between the `STABLE` and `IMMUTABLE` categories when considering simple interactive queries that are planned and immediately executed: it doesn't matter a lot whether a function is executed once during planning or once during query execution startup. But there is a big difference if the plan is saved and reused later. Labeling a function `IMMUTABLE` when it really isn't might allow it to be prematurely folded to a constant during planning, resulting in a stale value being re-used during subsequent uses of the plan. This is a hazard when using prepared statements or when using function languages that cache plans (such as PL/pgSQL).
-
-For functions written in SQL or in any of the standard procedural languages, there is a second important property determined by the volatility category, namely the visibility of any data changes that have been made by the SQL command that is calling the function. A `VOLATILE` function will see such changes, a `STABLE` or `IMMUTABLE` function will not. `STABLE` and `IMMUTABLE` functions use a snapshot established as of the start of the calling query, whereas `VOLATILE` functions obtain a fresh snapshot at the start of each query they execute.
-
-Because of this snapshotting behavior, a function containing only `SELECT` commands can safely be marked `STABLE`, even if it selects from tables that might be undergoing modifications by concurrent queries. PostgreSQL will execute all commands of a `STABLE` function using the snapshot established for the calling query, and so it will see a fixed view of the database throughout that query.
-
-The same snapshotting behavior is used for `SELECT` commands within `IMMUTABLE` functions. It is generally unwise to select from database tables within an `IMMUTABLE` function at all, since the immutability will be broken if the table contents ever change. However, PostgreSQL does not enforce that you do not do that.
-
-A common error is to label a function `IMMUTABLE` when its results depend on a configuration parameter. For example, a function that manipulates timestamps might well have results that depend on the `timezone` setting. For safety, such functions should be labeled `STABLE` instead.
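-
-For example, the following sketch defines a hypothetical formatting function whose output depends on the `timezone` setting, so it is labeled `STABLE` rather than `IMMUTABLE`:
-
-``` sql
-CREATE FUNCTION format_ts(timestamptz) RETURNS text AS
-'SELECT to_char($1, ''YYYY-MM-DD HH24:MI TZ'')'
-LANGUAGE SQL STABLE;
-```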
-
-When you create user-defined functions, avoid raising fatal errors or making destructive calls. HAWQ may respond to such errors with a sudden shutdown or restart.
-
-### <a id="nestedUDFs"></a>Nested Function Query Limitations
-
-HAWQ queries employing nested user-defined functions will fail when dispatched to segment node(s). 
-
-HAWQ stores the system catalog only on the master node. User-defined functions are stored in system catalog tables. HAWQ has no built-in knowledge about how to interpret the source text of a user-defined function. Consequently, the text is not parsed by HAWQ.
-
-This behavior may be problematic in queries where a user-defined function includes one or more nested functions. When a query includes a user-defined function, the metadata passed to the query executor includes function invocation information. If the query runs on the HAWQ master node, the nested function is recognized. If the query is dispatched to a segment, the nested function is not found and the query throws an error.
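-
-The following sketch illustrates the limitation; the function and table names are hypothetical:
-
-``` sql
-CREATE FUNCTION inner_func(int) RETURNS int AS
-'SELECT $1 + 1' LANGUAGE SQL;
-
-CREATE FUNCTION outer_func(int) RETURNS int AS
-'SELECT inner_func($1) * 10' LANGUAGE SQL;
-
--- Evaluated on the master; the nested call to inner_func() is resolved:
-SELECT outer_func(5);
-
--- If this query is dispatched to segments, inner_func() cannot be
--- resolved there and the query fails:
-SELECT outer_func(col1) FROM my_table;
-```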
-
-## <a id="userdefinedtypes"></a>User Defined Types
-
-HAWQ can be extended to support new data types. This section describes how to define new base types, which are data types defined below the level of the SQL language. Creating a new base type requires implementing functions to operate on the type in a low-level language, usually C.
-
-A user-defined type must always have input and output functions. These functions determine how the type appears in strings (for input by the user and output to the user) and how the type is organized in memory. The input function takes a null-terminated character string as its argument and returns the internal (in memory) representation of the type. The output function takes the internal representation of the type as argument and returns a null-terminated character string. If we want to do anything more with the type than merely store it, we must provide additional functions to implement whatever operations we'd like to have for the type.
-
-You should be careful to make the input and output functions inverses of each other. If you do not, you will have severe problems when you need to dump your data into a file and then read it back in. This is a particularly common problem when floating-point numbers are involved.
-
-Optionally, a user-defined type can provide binary input and output routines. Binary I/O is normally faster but less portable than textual I/O. As with textual I/O, it is up to you to define exactly what the external binary representation is. Most of the built-in data types try to provide a machine-independent binary representation.
-
-Once we have written the I/O functions and compiled them into a shared library, we can define the `complex` type in SQL. First we declare it as a shell type:
-
-``` sql
-CREATE TYPE complex;
-```
-
-This serves as a placeholder that allows us to reference the type while defining its I/O functions. Now we can define the I/O functions:
-
-``` sql
-CREATE FUNCTION complex_in(cstring)
-    RETURNS complex
-    AS 'filename'
-    LANGUAGE C IMMUTABLE STRICT;
-
-CREATE FUNCTION complex_out(complex)
-    RETURNS cstring
-    AS 'filename'
-    LANGUAGE C IMMUTABLE STRICT;
-
-CREATE FUNCTION complex_recv(internal)
-   RETURNS complex
-   AS 'filename'
-   LANGUAGE C IMMUTABLE STRICT;
-
-CREATE FUNCTION complex_send(complex)
-   RETURNS bytea
-   AS 'filename'
-   LANGUAGE C IMMUTABLE STRICT;
-```
-
-Finally, we can provide the full definition of the data type:
-
-``` sql
-CREATE TYPE complex (
-   internallength = 16, 
-   input = complex_in,
-   output = complex_out,
-   receive = complex_recv,
-   send = complex_send,
-   alignment = double
-);
-```
-
-When you define a new base type, HAWQ automatically provides support for arrays of that type. For historical reasons, the array type has the same name as the base type with the underscore character (\_) prepended.
-
-Once the data type exists, we can declare additional functions to provide useful operations on the data type. Operators can then be defined atop the functions, and if needed, operator classes can be created to support indexing of the data type.
-
-For further details, see the description of the [CREATE TYPE](../reference/sql/CREATE-TYPE.html) command.
-
-## <a id="userdefinedoperators"></a>User Defined Operators
-
-Every operator is "syntactic sugar" for a call to an underlying function that does the real work; so you must first create the underlying function before you can create the operator. However, an operator is *not merely* syntactic sugar, because it carries additional information that helps the query planner optimize queries that use the operator. This additional information is supplied through optional clauses, such as `commutator`, when you create the operator.
-
-HAWQ supports left unary, right unary, and binary operators. Operators can be overloaded; that is, the same operator name can be used for different operators that have different numbers and types of operands. When a query is executed, the system determines the operator to call from the number and types of the provided operands.
-
-Here is an example of creating an operator for adding two complex numbers. We assume we've already created the definition of type `complex`. First we need a function that does the work, then we can define the operator:
-
-``` sql
-CREATE FUNCTION complex_add(complex, complex)
-    RETURNS complex
-    AS 'filename', 'complex_add'
-    LANGUAGE C IMMUTABLE STRICT;
-
-CREATE OPERATOR + (
-    leftarg = complex,
-    rightarg = complex,
-    procedure = complex_add,
-    commutator = +
-);
-```
-
-Now we could execute a query like this:
-
-``` sql
-SELECT (a + b) AS c FROM test_complex;
-```
-
-```
-        c
------------------
- (5.2,6.05)
- (133.42,144.95)
-```
-
-We've shown how to create a binary operator here. To create unary operators, just omit one of `leftarg` (for left unary) or `rightarg` (for right unary). The `procedure` clause and the argument clauses are the only required items in `CREATE OPERATOR`. The `commutator` clause shown in the example is an optional hint to the query optimizer. Further details about `commutator` and other optimizer hints are available in the PostgreSQL documentation on operator optimization.
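-
-For example, a left unary (prefix) negation operator for the `complex` type might look like the following sketch; `complex_neg` is an assumed C function analogous to `complex_add`:
-
-``` sql
-CREATE FUNCTION complex_neg(complex)
-    RETURNS complex
-    AS 'filename', 'complex_neg'
-    LANGUAGE C IMMUTABLE STRICT;
-
-CREATE OPERATOR - (
-    rightarg = complex,
-    procedure = complex_neg
-);
-```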
-
-## <a id="topic29"></a>Built-in Functions and Operators
-
-The following table lists the categories of built-in functions and operators supported by PostgreSQL. All functions and operators are supported in HAWQ as in PostgreSQL with the exception of `STABLE` and `VOLATILE` functions, which are subject to the restrictions noted in [Using Functions in HAWQ](#topic27). See the [Functions and Operators](http://www.postgresql.org/docs/8.2/static/functions.html) section of the PostgreSQL documentation for more information about these built-in functions and operators.
-
-<a id="topic29__in204913"></a>
-
-<table>
-<caption><span class="tablecap">Table 2. Built-in functions and operators</span></caption>
-<colgroup>
-<col width="33%" />
-<col width="33%" />
-<col width="33%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th>Operator/Function Category</th>
-<th>VOLATILE Functions</th>
-<th>STABLE Functions</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions.html#FUNCTIONS-LOGICAL">Logical Operators</a></td>
-<td>&#160;</td>
-<td>&#160;</td>
-</tr>
-<tr class="even">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions-comparison.html">Comparison Operators</a></td>
-<td>&#160;</td>
-<td>&#160;</td>
-</tr>
-<tr class="odd">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions-math.html">Mathematical Functions and Operators</a></td>
-<td>random
-<p>setseed</p></td>
-<td>&#160;</td>
-</tr>
-<tr class="even">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions-string.html">String Functions and Operators</a></td>
-<td><em>All built-in conversion functions</em></td>
-<td>convert
-<p>pg_client_encoding</p></td>
-</tr>
-<tr class="odd">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions-binarystring.html">Binary String Functions and Operators</a></td>
-<td>&#160;</td>
-<td>&#160;</td>
-</tr>
-<tr class="even">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions-bitstring.html">Bit String Functions and Operators</a></td>
-<td>&#160;</td>
-<td>&#160;</td>
-</tr>
-<tr class="odd">
-<td><a href="http://www.postgresql.org/docs/8.3/static/functions-matching.html">Pattern Matching</a></td>
-<td>&#160;</td>
-<td>&#160;</td>
-</tr>
-<tr class="even">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions-formatting.html">Data Type Formatting Functions</a></td>
-<td>&#160;</td>
-<td>to_char
-<p>to_timestamp</p></td>
-</tr>
-<tr class="odd">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions-datetime.html">Date/Time Functions and Operators</a></td>
-<td>timeofday</td>
-<td>age
-<p>current_date</p>
-<p>current_time</p>
-<p>current_timestamp</p>
-<p>localtime</p>
-<p>localtimestamp</p>
-<p>now</p></td>
-</tr>
-<tr class="even">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions-geometry.html">Geometric Functions and Operators</a></td>
-<td>&#160;</td>
-<td>&#160;</td>
-</tr>
-<tr class="odd">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions-net.html">Network Address Functions and Operators</a></td>
-<td>&#160;</td>
-<td>&#160;</td>
-</tr>
-<tr class="even">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions-sequence.html">Sequence Manipulation Functions</a></td>
-<td>currval
-<p>lastval</p>
-<p>nextval</p>
-<p>setval</p></td>
-<td>&#160;</td>
-</tr>
-<tr class="odd">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions-conditional.html">Conditional Expressions</a></td>
-<td>&#160;</td>
-<td>&#160;</td>
-</tr>
-<tr class="even">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions-array.html">Array Functions and Operators</a></td>
-<td>&#160;</td>
-<td><em>All array functions</em></td>
-</tr>
-<tr class="odd">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions-aggregate.html">Aggregate Functions</a></td>
-<td>&#160;</td>
-<td>&#160;</td>
-</tr>
-<tr class="even">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions-subquery.html">Subquery Expressions</a></td>
-<td>&#160;</td>
-<td>&#160;</td>
-</tr>
-<tr class="odd">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions-comparisons.html">Row and Array Comparisons</a></td>
-<td>&#160;</td>
-<td>&#160;</td>
-</tr>
-<tr class="even">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions-srf.html">Set Returning Functions</a></td>
-<td>generate_series</td>
-<td>&#160;</td>
-</tr>
-<tr class="odd">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions-info.html">System Information Functions</a></td>
-<td>&#160;</td>
-<td><em>All session information functions</em>
-<p><em>All access privilege inquiry functions</em></p>
-<p><em>All schema visibility inquiry functions</em></p>
-<p><em>All system catalog information functions</em></p>
-<p><em>All comment information functions</em></p></td>
-</tr>
-<tr class="even">
-<td><a href="http://www.postgresql.org/docs/8.2/static/functions-admin.html">System Administration Functions</a></td>
-<td>set_config
-<p>pg_cancel_backend</p>
-<p>pg_reload_conf</p>
-<p>pg_rotate_logfile</p>
-<p>pg_start_backup</p>
-<p>pg_stop_backup</p>
-<p>pg_size_pretty</p>
-<p>pg_ls_dir</p>
-<p>pg_read_file</p>
-<p>pg_stat_file</p></td>
-<td>current_setting
-<p><em>All database object size functions</em></p></td>
-</tr>
-<tr class="odd">
-<td><a href="http://www.postgresql.org/docs/9.1/interactive/functions-xml.html">XML Functions</a></td>
-<td>&#160;</td>
-<td>xmlagg(xml)
-<p>xmlexists(text, xml)</p>
-<p>xml_is_well_formed(text)</p>
-<p>xml_is_well_formed_document(text)</p>
-<p>xml_is_well_formed_content(text)</p>
-<p>xpath(text, xml)</p>
-<p>xpath(text, xml, text[])</p>
-<p>xpath_exists(text, xml)</p>
-<p>xpath_exists(text, xml, text[])</p>
-<p>xml(text)</p>
-<p>text(xml)</p>
-<p>xmlcomment(xml)</p>
-<p>xmlconcat2(xml, xml)</p></td>
-</tr>
-</tbody>
-</table>
-
-## <a id="topic30"></a>Window Functions
-
-The following built-in window functions are HAWQ extensions to the PostgreSQL database. All window functions are *immutable*. For more information about window functions, see [Window Expressions](defining-queries.html#topic13).
-
-<a id="topic30__in164369"></a>
-
-<span class="tablecap">Table 3. Window functions</span>
-
-| Function                                             | Return Type               | Full Syntax                                                                                               | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                |
-|------------------------------------------------------|---------------------------|-----------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `cume_dist()`                                        | `double precision`        | `CUME_DIST() OVER ( [PARTITION BY ` *expr* `] ORDER BY ` *expr* ` )`                                      | Calculates the cumulative distribution of a value in a group of values. Rows with equal values always evaluate to the same cumulative distribution value.                                                                                                                                                                                                                                                                                                  |
-| `dense_rank()`                                       | `bigint`                  | `DENSE_RANK () OVER ( [PARTITION BY ` *expr* `] ORDER BY ` *expr* `)`                                     | Computes the rank of a row in an ordered group of rows without skipping rank values. Rows with equal values are given the same rank value.                                                                                                                                                                                                                                                                                                                 |
-| `first_value(expr)` | same as input *expr* type | FIRST\_VALUE(expr) OVER ( \[PARTITION BY expr\] ORDER BY expr \[ROWS\|RANGE frame\_expr\] ) | Returns the first value in an ordered set of values. |
-| `lag(expr [,offset] [,default])` | same as input *expr* type | `LAG(` *expr* ` [,` *offset* `] [,` *default* `]) OVER ( [PARTITION BY ` *expr* `] ORDER BY ` *expr* ` )` | Provides access to more than one row of the same table without doing a self join. Given a series of rows returned from a query and a position of the cursor, `LAG` provides access to a row at a given physical offset prior to that position. The default `offset` is 1. *default* sets the value that is returned if the offset goes beyond the scope of the window. If *default* is not specified, the default value is null. |
-| `last_value(expr)` | same as input *expr* type | LAST\_VALUE(expr) OVER ( \[PARTITION BY expr\] ORDER BY expr \[ROWS\|RANGE frame\_expr\] ) | Returns the last value in an ordered set of values. |
-| `lag(expr [,offset] [,default])` | same as input *expr* type | `LAG(expr [,offset] [,default]) OVER ( [PARTITION BY expr] ORDER BY expr )` | Provides access to more than one row of the same table without doing a self join. Given a series of rows returned from a query and a position of the cursor, `LAG` provides access to a row at a given physical offset prior to that position. The default *offset* is 1. *default* sets the value that is returned if the offset goes beyond the scope of the window. If *default* is not specified, the default value is null. |
-| `ntile(expr)` | `bigint` | `NTILE(expr) OVER ( [PARTITION BY expr] ORDER BY expr )` | Divides an ordered data set into a number of buckets (as defined by *expr*) and assigns a bucket number to each row. |
-| `percent_rank()` | `double precision` | `PERCENT_RANK () OVER ( [PARTITION BY expr] ORDER BY expr )` | Calculates the rank of a hypothetical row `R` minus 1, divided by 1 less than the number of rows being evaluated (within a window partition). |
-| `rank()` | `bigint` | `RANK () OVER ( [PARTITION BY expr] ORDER BY expr )` | Calculates the rank of a row in an ordered group of values. Rows with equal values for the ranking criteria receive the same rank. The number of tied rows is added to the rank number to calculate the next rank value. Ranks may not be consecutive numbers in this case. |
-| `row_number()` | `bigint` | `ROW_NUMBER () OVER ( [PARTITION BY expr] ORDER BY expr )` | Assigns a unique number to each row to which it is applied (either each row in a window partition or each row of the query). |
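-
-For example, the following query sketch ranks rows by salary within each department, assuming a hypothetical `employees` table:
-
-``` sql
-SELECT department_id, employee_id, salary,
-       rank() OVER (PARTITION BY department_id ORDER BY salary DESC) AS dept_salary_rank
-FROM employees;
-```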
-
-
-## <a id="topic31"></a>Advanced Aggregate Functions
-
-The following built-in advanced aggregate functions are HAWQ extensions of the PostgreSQL database.
-
-<a id="topic31__in2073121"></a>
-
-<table>
-
-<caption><span class="tablecap">Table 4. Advanced Aggregate Functions</span></caption>
-<colgroup>
-<col width="25%" />
-<col width="25%" />
-<col width="25%" />
-<col width="25%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th>Function</th>
-<th>Return Type</th>
-<th>Full Syntax</th>
-<th>Description</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td><code class="ph codeph">MEDIAN (expr)</code></td>
-<td><code class="ph codeph">timestamp, timestampz, interval, float</code></td>
-<td><code class="ph codeph">MEDIAN (expression)</code>
-<p><em>Example:</em></p>
-<pre class="pre codeblock"><code>SELECT department_id, MEDIAN(salary) 
-FROM employees 
-GROUP BY department_id; </code></pre></td>
-<td>Computes the median value of an expression; equivalent to <code class="ph codeph">PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY expr)</code>.</td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">PERCENTILE_CONT (expr) WITHIN GROUP (ORDER BY expr                   [DESC/ASC])</code></td>
-<td><code class="ph codeph">timestamp, timestampz, interval, float</code></td>
-<td><code class="ph codeph">PERCENTILE_CONT(percentage) WITHIN GROUP (ORDER BY                   expression)</code>
-<p><em>Example:</em></p>
-<pre class="pre codeblock"><code>SELECT department_id,
-PERCENTILE_CONT (0.5) WITHIN GROUP (ORDER BY salary DESC)
-&quot;Median_cont&quot;
-FROM employees GROUP BY department_id;</code></pre></td>
-<td>Performs an inverse distribution function that assumes a continuous distribution model. It takes a percentile value and a sort specification and returns the same data type as the numeric data type of the argument. The returned value is computed by linear interpolation. Nulls are ignored in this calculation.</td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">PERCENTILE_DISC (expr) WITHIN GROUP (ORDER BY                     expr [DESC/ASC]</code>)</td>
-<td><code class="ph codeph">timestamp, timestampz, interval, float</code></td>
-<td><code class="ph codeph">PERCENTILE_DISC(percentage) WITHIN GROUP (ORDER BY                   expression)</code>
-<p><em>Example:</em></p>
-<pre class="pre codeblock"><code>SELECT department_id, 
-PERCENTILE_DISC (0.5) WITHIN GROUP (ORDER BY salary DESC)
-&quot;Median_desc&quot;
-FROM employees GROUP BY department_id;</code></pre></td>
-<td>Performs an inverse distribution function that assumes a discrete distribution model. It takes a percentile value and a sort specification. The returned value is an element from the set. Nulls are ignored in this calculation.</td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">sum(array[])</code></td>
-<td><code class="ph codeph">smallint[]int[], bigint[], float[]</code></td>
-<td><code class="ph codeph">sum(array[[1,2],[3,4]])</code>
-<p><em>Example:</em></p>
-<pre class="pre codeblock"><code>CREATE TABLE mymatrix (myvalue int[]);
-INSERT INTO mymatrix VALUES (array[[1,2],[3,4]]);
-INSERT INTO mymatrix VALUES (array[[0,1],[1,0]]);
-SELECT sum(myvalue) FROM mymatrix;
- sum 
----------------
- {{1,3},{4,4}}</code></pre></td>
-<td>Performs matrix summation. Can take as input a two-dimensional array that is treated as a matrix.</td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">pivot_sum (label[], label, expr)</code></td>
-<td><code class="ph codeph">int[], bigint[], float[]</code></td>
-<td><code class="ph codeph">pivot_sum( array['A1','A2'], attr, value)</code></td>
-<td>A pivot aggregation using sum to resolve duplicate entries.</td>
-</tr>
-</tbody>
-</table>
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/query/gporca/query-gporca-changed.html.md.erb
----------------------------------------------------------------------
diff --git a/query/gporca/query-gporca-changed.html.md.erb b/query/gporca/query-gporca-changed.html.md.erb
deleted file mode 100644
index 041aa4b..0000000
--- a/query/gporca/query-gporca-changed.html.md.erb
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Changed Behavior with GPORCA
----
-
-<span class="shortdesc">When GPORCA is enabled, HAWQ's behavior changes. This topic describes these changes.</span>
-
--   The command `CREATE TABLE AS` distributes table data randomly if the `DISTRIBUTED BY` clause is not specified and no primary or unique keys are specified.
--   Statistics are required on the root table of a partitioned table. The `ANALYZE` command generates statistics on both root and individual partition tables (leaf child tables). See the `ROOTPARTITION` clause for `ANALYZE` command.
--   Additional Result nodes in the query plan:
-    -   Query plan `Assert` operator.
-    -   Query plan `Partition selector` operator.
-    -   Query plan `Split` operator.
--   When running `EXPLAIN`, the query plan generated by GPORCA is different than the plan generated by the legacy query optimizer.
--   HAWQ adds the log file message `Planner produced plan` when GPORCA is enabled and HAWQ falls back to the legacy query optimizer to generate the query plan.
--   HAWQ issues a warning when statistics are missing from one or more table columns. When executing an SQL command with GPORCA, HAWQ issues a warning if the command performance could be improved by collecting statistics on a column or set of columns referenced by the command. The warning is issued on the command line and information is added to the HAWQ log file. For information about collecting statistics on table columns, see the `ANALYZE` command.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/query/gporca/query-gporca-enable.html.md.erb
----------------------------------------------------------------------
diff --git a/query/gporca/query-gporca-enable.html.md.erb b/query/gporca/query-gporca-enable.html.md.erb
deleted file mode 100644
index e8cc93f..0000000
--- a/query/gporca/query-gporca-enable.html.md.erb
+++ /dev/null
@@ -1,95 +0,0 @@
----
-title: Enabling GPORCA
----
-
-<span class="shortdesc">Precompiled versions of HAWQ that include the GPORCA query optimizer enable it by default, no additional configuration is required. To use the GPORCA query optimizer in a HAWQ built from source, your build must include GPORCA. You must also enable specific HAWQ server configuration parameters at or after install time: </span>
-
--   [Set the <code class="ph codeph">optimizer\_analyze\_root\_partition</code> parameter to <code class="ph codeph">on</code>](#topic_r5d_hv1_kr) to enable statistics collection for the root partition of a partitioned table.
--   Set the `optimizer` parameter to `on` to enable GPORCA. You can set the parameter at these levels:
-    -   [A HAWQ system](#topic_byp_lqk_br)
-    -   [A specific HAWQ database](#topic_pzr_3db_3r)
-    -   [A session or query](#topic_lx4_vqk_br)
-
-**Important:** If you intend to execute queries on partitioned tables with GPORCA enabled, you must collect statistics on the partitioned table root partition with the `ANALYZE ROOTPARTITION` command. The command `ANALYZE ROOTPARTITION` collects statistics on the root partition of a partitioned table without collecting statistics on the leaf partitions. If you specify a list of column names for a partitioned table, the statistics for the columns and the root partition are collected. For information on the `ANALYZE` command, see [ANALYZE](../../reference/sql/ANALYZE.html).
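-
-For example, assuming a partitioned table named `sales`, the following command collects statistics on the root partition only:
-
-``` sql
-ANALYZE ROOTPARTITION sales;
-```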
-
-You can also use the HAWQ utility `analyzedb` to update table statistics. The HAWQ utility `analyzedb` can update statistics for multiple tables in parallel. The utility can also check table statistics and update statistics only if the statistics are not current or do not exist. For information about the `analyzedb` utility, see [analyzedb](../../reference/cli/admin_utilities/analyzedb.html#topic1).
-
-As part of routine database maintenance, you should refresh statistics on the root partition when there are significant changes to child leaf partition data.
-
-## <a id="topic_r5d_hv1_kr"></a>Setting the optimizer\_analyze\_root\_partition Parameter
-
-When the configuration parameter `optimizer_analyze_root_partition` is set to `on`, root partition statistics will be collected when `ANALYZE` is run on a partitioned table. Root partition statistics are required by GPORCA.
-
-You will perform different procedures to set optimizer configuration parameters for your whole HAWQ cluster depending upon whether you manage your cluster from the command line or use Ambari. If you use Ambari to manage your HAWQ cluster, you must ensure that you update server configuration parameters only via the Ambari Web UI. If you manage your HAWQ cluster from the command line, you will use the `hawq config` command line utility to set optimizer server configuration parameters.
-
-If you use Ambari to manage your HAWQ cluster:
-
-1. Set the `optimizer_analyze_root_partition` configuration property to `on` via the HAWQ service **Configs > Advanced > Custom hawq-site** drop down. 
-2. Select **Service Actions > Restart All** to load the updated configuration.
-
-If you manage your HAWQ cluster from the command line:
-
-1.  Log in to the HAWQ master host as a HAWQ administrator and source the file `/usr/local/hawq/greenplum_path.sh`.
-
-    ``` shell
-    $ source /usr/local/hawq/greenplum_path.sh
-    ```
-
-1. Use the `hawq config` utility to set `optimizer_analyze_root_partition`:
-
-    ``` shell
-    $ hawq config -c optimizer_analyze_root_partition -v on
-    ```
-2. Reload the HAWQ configuration:
-
-    ``` shell
-    $ hawq stop cluster -u
-    ```
-
-## <a id="topic_byp_lqk_br"></a>Enabling GPORCA for a System
-
-Set the server configuration parameter `optimizer` for the HAWQ system.
-
-If you use Ambari to manage your HAWQ cluster:
-
-1. Set the `optimizer` configuration property to `on` via the HAWQ service **Configs > Advanced > Custom hawq-site** drop down. 
-2. Select **Service Actions > Restart All** to load the updated configuration.
-
-If you manage your HAWQ cluster from the command line:
-
-1.  Log in to the HAWQ master host as a HAWQ administrator and source the file `/usr/local/hawq/greenplum_path.sh`.
-
-    ``` shell
-    $ source /usr/local/hawq/greenplum_path.sh
-    ```
-
-1. Use the `hawq config` utility to set `optimizer`:
-
-    ``` shell
-    $ hawq config -c optimizer -v on
-    ```
-2. Reload the HAWQ configuration:
-
-    ``` shell
-    $ hawq stop cluster -u
-    ```
-
-## <a id="topic_pzr_3db_3r"></a>Enabling GPORCA for a Database
-
-Set the server configuration parameter `optimizer` for individual HAWQ databases with the `ALTER DATABASE` command. For example, this command enables GPORCA for the database *test\_db*.
-
-``` sql
-=> ALTER DATABASE test_db SET optimizer = ON ;
-```
-
-## <a id="topic_lx4_vqk_br"></a>Enabling GPORCA for a Session or a Query
-
-You can use the `SET` command to set `optimizer` server configuration parameter for a session. For example, after you use the `psql` utility to connect to HAWQ, this `SET` command enables GPORCA:
-
-``` sql
-=> SET optimizer = on ;
-```
-
-To set the parameter for a specific query, include the `SET` command prior to running the query.
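-
-For example, assuming a hypothetical `sales` table, the following statements enable GPORCA for a single query in the current session and then disable it again:
-
-``` sql
-SET optimizer = on;
-SELECT count(*) FROM sales;
-SET optimizer = off;
-```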
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/query/gporca/query-gporca-fallback.html.md.erb
----------------------------------------------------------------------
diff --git a/query/gporca/query-gporca-fallback.html.md.erb b/query/gporca/query-gporca-fallback.html.md.erb
deleted file mode 100644
index 999e9a7..0000000
--- a/query/gporca/query-gporca-fallback.html.md.erb
+++ /dev/null
@@ -1,142 +0,0 @@
----
-title: Determining The Query Optimizer In Use
----
-
-<span class="shortdesc"> When GPORCA is enabled, you can determine if HAWQ is using GPORCA or is falling back to the legacy query optimizer. </span>
-
-These are two ways to determine which query optimizer HAWQ used to execute the query:
-
--   Examine `EXPLAIN` query plan output for the query. (Your output may include other settings.)
-    -   When GPORCA generates the query plan, the GPORCA version is displayed near the end of the query plan. For example:
-
-        ``` pre
-         Settings:  optimizer=on
-         Optimizer status:  PQO version 1.627
-        ```
-
-        When HAWQ falls back to the legacy optimizer to generate the plan, `legacy query optimizer` is displayed near the end of the query plan. For example:
-
-        ``` pre
-         Settings:  optimizer=on
-         Optimizer status: legacy query optimizer
-        ```
-
-        When the server configuration parameter `optimizer` is `off`, the following lines are displayed near the end of a query plan.
-
-        ``` pre
-         Settings:  optimizer=off
-         Optimizer status: legacy query optimizer
-        ```
-
-    -   These plan items appear only in the `EXPLAIN` plan output generated by GPORCA. The items are not supported in a legacy optimizer query plan.
-        -   Assert operator
-        -   Sequence operator
-        -   DynamicIndexScan
-        -   DynamicTableScan
-        -   Table Scan
-    -   When GPORCA generates the plan for a query against a partitioned table, the `EXPLAIN` output displays only the number of partitions that are eliminated. The scanned partitions are not shown. The `EXPLAIN` plan generated by the legacy optimizer lists the scanned partitions.
-
--   View the log messages in the HAWQ log file.
-
-    The log file contains messages that indicate which query optimizer was used. In the log file message, the `[OPT]` flag appears when GPORCA attempts to optimize a query. If HAWQ falls back to the legacy optimizer, an error message is added to the log file, indicating the unsupported feature. Also, in the message, the label `Planner produced plan:` appears before the query when HAWQ falls back to the legacy optimizer.
-
-    **Note:** You can configure HAWQ to display log messages on the psql command line by setting the HAWQ server configuration parameter `client_min_messages` to `LOG`. See [Server Configuration Parameter Reference](../../reference/HAWQSiteConfig.html) for information about the parameter.
-
-## <a id="topic_n4w_nb5_xr"></a>Example
-
-This example shows the differences for a query that is run against partitioned tables when GPORCA is enabled.
-
-This `CREATE TABLE` statement creates a table with single level partitions:
-
-``` sql
-CREATE TABLE sales (trans_id int, date date, 
-    amount decimal(9,2), region text)
-   DISTRIBUTED BY (trans_id)
-   PARTITION BY RANGE (date)
-      (START (date '2011-01-01') 
-       INCLUSIVE END (date '2012-01-01') 
-       EXCLUSIVE EVERY (INTERVAL '1 month'),
-   DEFAULT PARTITION outlying_dates);
-```
-
-This query against the table is supported by GPORCA and does not generate errors in the log file:
-
-``` sql
-SELECT * FROM sales;
-```
-
-The `EXPLAIN` plan output lists only the number of selected partitions.
-
-``` 
- ->  Partition Selector for sales (dynamic scan id: 1)  (cost=10.00..100.00 rows=50 width=4)
-       Partitions selected:  13 (out of 13)
-```
-
-Output from the log file indicates that GPORCA attempted to optimize the query:
-
-``` 
-2015-05-06 15:00:53.293451 PDT,"gpadmin","test",p2809,th297883424,"[local]",
-  ,2015-05-06 14:59:21 PDT,1120,con6,cmd1,seg-1,,dx3,x1120,sx1,"LOG","00000"
-  ,"statement: explain select * from sales
-;",,,,,,"explain select * from sales
-;",0,,"postgres.c",1566,
-
-2015-05-06 15:00:54.258412 PDT,"gpadmin","test",p2809,th297883424,"[local]",
-  ,2015-05-06 14:59:21 PDT,1120,con6,cmd1,seg-1,,dx3,x1120,sx1,"LOG","00000","
-[OPT]: Using default search strategy",,,,,,"explain select * from sales
-;",0,,"COptTasks.cpp",677,
-```
-
-The following cube query is not supported by GPORCA.
-
-``` sql
-SELECT count(*) FROM foo GROUP BY cube(a,b);
-```
-
-The following `EXPLAIN` plan output includes the message "Feature not supported by the GPORCA: Cube."
-
-``` sql
-postgres=# EXPLAIN SELECT count(*) FROM foo GROUP BY cube(a,b);
-```
-```
-LOG:  statement: explain select count(*) from foo group by cube(a,b);
-LOG:  2016-04-14 16:26:15:487935 PDT,THD000,NOTICE,"Feature not supported by the GPORCA: Cube",
-LOG:  Planner produced plan :0
-                                                        QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
- Gather Motion 3:1  (slice3; segments: 3)  (cost=9643.62..19400.26 rows=40897 width=28)
-   ->  Append  (cost=9643.62..19400.26 rows=13633 width=28)
-         ->  HashAggregate  (cost=9643.62..9993.39 rows=9328 width=28)
-               Group By: "rollup".unnamed_attr_2, "rollup".unnamed_attr_1, "rollup"."grouping", "rollup"."group_id"
-               ->  Subquery Scan "rollup"  (cost=8018.50..9589.81 rows=1435 width=28)
-                     ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=8018.50..9546.76 rows=1435 width=28)
-                           Hash Key: "rollup".unnamed_attr_2, "rollup".unnamed_attr_1, "grouping", group_id()
-                           ->  GroupAggregate  (cost=8018.50..9460.66 rows=1435 width=28)
-                                 Group By: "rollup"."grouping", "rollup"."group_id"
-                                 ->  Subquery Scan "rollup"  (cost=8018.50..9326.13 rows=2153 width=28)
-                                       ->  GroupAggregate  (cost=8018.50..9261.56 rows=2153 width=28)
-                                             Group By: "rollup".unnamed_attr_2, "rollup"."grouping", "rollup"."group_id"
-                                             ->  Subquery Scan "rollup"  (cost=8018.50..9073.22 rows=2870 width=28)
-                                                   ->  GroupAggregate  (cost=8018.50..8987.12 rows=2870 width=28)
-                                                         Group By: public.foo.b, public.foo.a
-                                                         ->  Sort  (cost=8018.50..8233.75 rows=28700 width=8)
-                                                               Sort Key: public.foo.b, public.foo.a
-                                                               ->  Seq Scan on foo  (cost=0.00..961.00 rows=28700 width=8)
-         ->  HashAggregate  (cost=9116.27..9277.71 rows=4305 width=28)
-               Group By: "rollup".unnamed_attr_1, "rollup".unnamed_attr_2, "rollup"."grouping", "rollup"."group_id"
-               ->  Subquery Scan "rollup"  (cost=8018.50..9062.46 rows=1435 width=28)
-                     ->  Redistribute Motion 3:3  (slice2; segments: 3)  (cost=8018.50..9019.41 rows=1435 width=28)
-                           Hash Key: public.foo.a, public.foo.b, "grouping", group_id()
-                           ->  GroupAggregate  (cost=8018.50..8933.31 rows=1435 width=28)
-                                 Group By: public.foo.a
-                                 ->  Sort  (cost=8018.50..8233.75 rows=28700 width=8)
-                                       Sort Key: public.foo.a
-                                       ->  Seq Scan on foo  (cost=0.00..961.00 rows=28700 width=8)
- Settings:  optimizer=on
- Optimizer status: legacy query optimizer
-(30 rows)
-```
-
-Since this query is not supported by GPORCA, HAWQ falls back to the legacy optimizer.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/query/gporca/query-gporca-features.html.md.erb
----------------------------------------------------------------------
diff --git a/query/gporca/query-gporca-features.html.md.erb b/query/gporca/query-gporca-features.html.md.erb
deleted file mode 100644
index 4941866..0000000
--- a/query/gporca/query-gporca-features.html.md.erb
+++ /dev/null
@@ -1,215 +0,0 @@
----
-title: GPORCA Features and Enhancements
----
-
-GPORCA includes enhancements for specific types of queries and operations.  GPORCA also includes these optimization enhancements:
-
--   Improved join ordering
--   Join-Aggregate reordering
--   Sort order optimization
--   Data skew estimates included in query optimization
-
-## <a id="topic_dwy_zml_gr"></a>Queries Against Partitioned Tables
-
-GPORCA includes these enhancements for queries against partitioned tables:
-
--   Partition elimination is improved.
--   Query plan can contain the `Partition selector` operator.
--   Partitions are not enumerated in `EXPLAIN` plans.
-
-    For queries that involve static partition selection where the partitioning key is compared to a constant, GPORCA lists the number of partitions to be scanned in the `EXPLAIN` output under the Partition Selector operator. This example Partition Selector operator shows the filter and number of partitions selected:
-
-    ``` pre
-    Partition Selector for Part_Table (dynamic scan id: 1) 
-           Filter: a > 10
-           Partitions selected:  1 (out of 3)
-    ```
-
-    For queries that involve dynamic partition selection where the partitioning key is compared to a variable, the number of partitions that are scanned will be known only during query execution. The partitions selected are not shown in the `EXPLAIN` output.
-
--   Plan size is independent of number of partitions.
--   Out of memory errors caused by number of partitions are reduced.
-
-This example `CREATE TABLE` command creates a range partitioned table.
-
-``` sql
-CREATE TABLE sales (order_id int, item_id int, amount numeric(15,2), 
-      date date, yr_qtr int)
-   PARTITION BY RANGE (yr_qtr)
-      (START (201201) INCLUSIVE END (201501) EXCLUSIVE EVERY (100),
-       DEFAULT PARTITION outlying_qtrs);
-```
-
-GPORCA improves on these types of queries against partitioned tables:
-
--   Full table scan. Partitions are not enumerated in plans.
-
-    ``` sql
-    SELECT * FROM sales;
-    ```
-
--   Query with a constant filter predicate. Partition elimination is performed.
-
-    ``` sql
-    SELECT * FROM sales WHERE yr_qtr = 201201;
-    ```
-
--   Range selection. Partition elimination is performed.
-
-    ``` sql
-    SELECT * FROM sales WHERE yr_qtr BETWEEN 201301 AND 201404 ;
-    ```
-
--   Joins involving partitioned tables. In this example, the partitioned dimension table *date\_dim* is joined with fact table *catalog\_sales*:
-
-    ``` sql
-    SELECT * FROM catalog_sales
-       WHERE date_id IN (SELECT id FROM date_dim WHERE month=12);
-    ```
-
-## <a id="topic_vph_wml_gr"></a>Queries that Contain Subqueries
-
-GPORCA handles subqueries more efficiently. A subquery is a query that is nested inside an outer query block. In the following query, the `SELECT` in the `WHERE` clause is a subquery.
-
-``` sql
-SELECT * FROM part
-  WHERE price > (SELECT avg(price) FROM part);
-```
-
-GPORCA also handles queries that contain a correlated subquery (CSQ) more efficiently. A correlated subquery is a subquery that uses values from the outer query. In the following query, the `price` column is used in both the outer query and the subquery.
-
-``` sql
-SELECT * FROM part p1
-  WHERE price > (SELECT avg(price) FROM part p2 
-  WHERE  p2.brand = p1.brand);
-```
-
-GPORCA generates more efficient plans for the following types of subqueries:
-
--   CSQ in the `SELECT` list.
-
-    ``` sql
-    SELECT *,
-     (SELECT min(price) FROM part p2 WHERE p1.brand = p2.brand)
-     AS foo
-    FROM part p1;
-    ```
-
--   CSQ in disjunctive (`OR`) filters.
-
-    ``` sql
-    SELECT * FROM part p1 WHERE p_size > 40 OR 
-          p_retailprice > 
-          (SELECT avg(p_retailprice) 
-              FROM part p2 
-              WHERE p2.p_brand = p1.p_brand);
-    ```
-
--   Nested CSQ with skip level correlations
-
-    ``` sql
-    SELECT * FROM part p1 WHERE p1.p_partkey 
-    IN (SELECT p_partkey FROM part p2 WHERE p2.p_retailprice = 
-         (SELECT min(p_retailprice)
-           FROM part p3 
-           WHERE p3.p_brand = p1.p_brand)
-    );
-    ```
-
-    **Note:** Nested CSQ with skip level correlations are not supported by the legacy query optimizer.
-
--   CSQ with aggregate and inequality. This example contains a CSQ with an inequality.
-
-    ``` sql
-    SELECT * FROM part p1 WHERE p1.p_retailprice =
-     (SELECT min(p_retailprice) FROM part p2 WHERE p2.p_brand <> p1.p_brand);
-    ```
-
--   CSQ that must return one row.
-
-    ``` sql
-    SELECT p_partkey, 
-      (SELECT p_retailprice FROM part p2 WHERE p2.p_brand = p1.p_brand )
-    FROM part p1;
-    ```
-
-## <a id="topic_c3v_rml_gr"></a>Queries that Contain Common Table Expressions
-
-GPORCA handles queries that contain the `WITH` clause. The `WITH` clause, also known as a common table expression (CTE), generates temporary tables that exist only for the query. This example query contains a CTE.
-
-``` sql
-WITH v AS (SELECT a, sum(b) as s FROM T WHERE c < 10 GROUP BY a)
-  SELECT * FROM v AS v1, v AS v2
-  WHERE v1.a <> v2.a AND v1.s < v2.s;
-```
-
-As part of query optimization, GPORCA can push down predicates into a CTE. In the following example query, GPORCA pushes the equality predicates into the CTE.
-
-``` sql
-WITH v AS (SELECT a, sum(b) as s FROM T GROUP BY a)
-  SELECT *
-  FROM v as v1, v as v2, v as v3
-  WHERE v1.a < v2.a
-    AND v1.s < v3.s
-    AND v1.a = 10
-    AND v2.a = 20
-    AND v3.a = 30;
-```
-
-GPORCA can handle these types of CTEs:
-
--   CTE that defines one or multiple tables. In this query, the CTE defines two tables.
-
-    ``` sql
-    WITH cte1 AS (SELECT a, sum(b) as s FROM T 
-                   where c < 10 GROUP BY a),
-          cte2 AS (SELECT a, s FROM cte1 where s > 1000)
-      SELECT *
-      FROM cte1 as v1, cte2 as v2, cte2 as v3
-      WHERE v1.a < v2.a AND v1.s < v3.s;
-    ```
-
--   Nested CTEs.
-
-    ``` sql
-    WITH v AS (WITH w AS (SELECT a, b FROM foo 
-                          WHERE b < 5) 
-               SELECT w1.a, w2.b 
-               FROM w AS w1, w AS w2 
-               WHERE w1.a = w2.a AND w1.a > 2)
-      SELECT v1.a, v2.a, v2.b
-      FROM v as v1, v as v2
-      WHERE v1.a < v2.a; 
-    ```
-
-## <a id="topic_plx_mml_gr"></a>DML Operation Enhancements with GPORCA
-
-GPORCA contains enhancements for DML operations such as `INSERT`.
-
--   A DML node in a query plan is a query plan operator.
-    -   Can appear anywhere in the plan, as a regular node (top slice only for now)
-    -   Can have consumers
--   New query plan operator `Assert` is used for constraints checking.
-
-    This example plan shows the `Assert` operator.
-
-    ```
-    QUERY PLAN
-    ------------------------------------------------------------
-     Insert  (cost=0.00..4.61 rows=3 width=8)
-       ->  Assert  (cost=0.00..3.37 rows=3 width=24)
-             Assert Cond: (dmlsource.a > 2) IS DISTINCT FROM 
-    false
-             ->  Assert  (cost=0.00..2.25 rows=3 width=24)
-                   Assert Cond: NOT dmlsource.b IS NULL
-                   ->  Result  (cost=0.00..1.14 rows=3 width=24)
-                         ->  Table Scan on dmlsource
-    ```
-
-## <a id="topic_anl_t3t_pv"></a>Queries with Distinct Qualified Aggregates (DQA)
-
-GPORCA improves performance for queries that contain distinct qualified aggregates (DQA) without a grouping column and when the table is not distributed on the columns used by the DQA. When encountering these types of queries, GPORCA uses an alternative plan that evaluates the aggregate functions in three stages (local, intermediate, and global aggregations).
-
-See [optimizer\_prefer\_scalar\_dqa\_multistage\_agg](../../reference/guc/parameter_definitions.html#optimizer_prefer_scalar_dqa_multistage_agg) for information on the configuration parameter that controls this behavior.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/query/gporca/query-gporca-limitations.html.md.erb
----------------------------------------------------------------------
diff --git a/query/gporca/query-gporca-limitations.html.md.erb b/query/gporca/query-gporca-limitations.html.md.erb
deleted file mode 100644
index b63f0d2..0000000
--- a/query/gporca/query-gporca-limitations.html.md.erb
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: GPORCA Limitations
----
-
-<span class="shortdesc">There are limitations in HAWQ when GPORCA is enabled. GPORCA and the legacy query optimizer currently coexist in HAWQ because GPORCA does not support all HAWQ features. </span>
-
-
-## <a id="topic_kgn_vxl_vp"></a>Unsupported SQL Query Features
-
-These HAWQ features are unsupported when GPORCA is enabled:
-
--   Indexed expressions
--   `PERCENTILE` window function
--   External parameters
--   SortMergeJoin (SMJ)
--   Ordered aggregations
--   These analytics extensions:
-    -   CUBE
-    -   Multiple grouping sets
--   These scalar operators:
-    -   `ROW`
-    -   `ROWCOMPARE`
-    -   `FIELDSELECT`
--   Multiple `DISTINCT` qualified aggregate functions
--   Inverse distribution functions
-
-## <a id="topic_u4t_vxl_vp"></a>Performance Regressions
-
-When GPORCA is enabled in HAWQ, the following features are known performance regressions:
-
--   Short running queries - For GPORCA, short running queries might encounter additional overhead due to GPORCA enhancements for determining an optimal query execution plan.
--   `ANALYZE` - For GPORCA, the `ANALYZE` command generates root partition statistics for partitioned tables. For the legacy optimizer, these statistics are not generated.
--   DML operations - For GPORCA, DML enhancements including the support of updates on partition and distribution keys might require additional overhead.
-
-Also, enhanced functionality of the features from previous versions could result in additional time required when GPORCA executes SQL statements with the features.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/query/gporca/query-gporca-notes.html.md.erb
----------------------------------------------------------------------
diff --git a/query/gporca/query-gporca-notes.html.md.erb b/query/gporca/query-gporca-notes.html.md.erb
deleted file mode 100644
index ed943e4..0000000
--- a/query/gporca/query-gporca-notes.html.md.erb
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: Considerations when Using GPORCA
----
-
-<span class="shortdesc"> To execute queries optimally with GPORCA, consider certain criteria for the query. </span>
-
-Ensure the following criteria are met:
-
--   The table does not contain multi-column partition keys.
--   The table does not contain multi-level partitioning.
--   The query does not run against master-only tables such as the system table *pg\_attribute*.
--   Statistics have been collected on the root partition of a partitioned table.
-
-If the partitioned table contains more than 20,000 partitions, consider a redesign of the table schema.
-
-GPORCA generates minidumps to describe the optimization context for a given query. Use the minidump files to analyze HAWQ issues. The minidump file is located under the master data directory and uses the following naming format:
-
-`Minidump_date_time.mdp`
-
-For information about the minidump file, see the server configuration parameter `optimizer_minidump`.
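-
-For example, you can check the current setting of this parameter from a `psql` session (a minimal check; the parameter is available only in GPORCA-enabled builds):
-
-``` sql
-SHOW optimizer_minidump;
-```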
-
-When the `EXPLAIN ANALYZE` command uses GPORCA, the `EXPLAIN` plan shows only the number of partitions that are being eliminated. The scanned partitions are not shown. To show the names of the scanned partitions in the segment logs, set the server configuration parameter `gp_log_dynamic_partition_pruning` to `on`. This example `SET` command enables the parameter.
-
-``` sql
-SET gp_log_dynamic_partition_pruning = on;
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/query/gporca/query-gporca-optimizer.html.md.erb
----------------------------------------------------------------------
diff --git a/query/gporca/query-gporca-optimizer.html.md.erb b/query/gporca/query-gporca-optimizer.html.md.erb
deleted file mode 100644
index 11814f8..0000000
--- a/query/gporca/query-gporca-optimizer.html.md.erb
+++ /dev/null
@@ -1,39 +0,0 @@
----
-title: About GPORCA
----
-
-In HAWQ, you can use GPORCA or the legacy query optimizer.
-
-**Note:** To use the GPORCA query optimizer, you must be running a version of HAWQ built with GPORCA, and GPORCA must be enabled in your HAWQ deployment.
-
-These sections describe GPORCA functionality and usage:
-
--   **[Overview of GPORCA](../../query/gporca/query-gporca-overview.html)**
-
-    GPORCA extends the planning and optimization capabilities of the HAWQ legacy optimizer.
-
--   **[GPORCA Features and Enhancements](../../query/gporca/query-gporca-features.html)**
-
-    GPORCA includes enhancements for specific types of queries and operations:
-
--   **[Enabling GPORCA](../../query/gporca/query-gporca-enable.html)**
-
-    Precompiled versions of HAWQ that include the GPORCA query optimizer enable it by default; no additional configuration is required. To use the GPORCA query optimizer in a HAWQ cluster built from source, your build must include GPORCA. You must also enable specific HAWQ server configuration parameters at or after install time:
-
--   **[Considerations when Using GPORCA](../../query/gporca/query-gporca-notes.html)**
-
-    To execute queries optimally with GPORCA, consider certain criteria for the query.
-
--   **[Determining The Query Optimizer In Use](../../query/gporca/query-gporca-fallback.html)**
-
-    When GPORCA is enabled, you can determine if HAWQ is using GPORCA or is falling back to the legacy query optimizer.
-
--   **[Changed Behavior with GPORCA](../../query/gporca/query-gporca-changed.html)**
-
-    When GPORCA is enabled, HAWQ's behavior changes. This topic describes these changes.
-
--   **[GPORCA Limitations](../../query/gporca/query-gporca-limitations.html)**
-
-    There are limitations in HAWQ when GPORCA is enabled. GPORCA and the legacy query optimizer currently coexist in HAWQ because GPORCA does not support all HAWQ features.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/query/gporca/query-gporca-overview.html.md.erb
----------------------------------------------------------------------
diff --git a/query/gporca/query-gporca-overview.html.md.erb b/query/gporca/query-gporca-overview.html.md.erb
deleted file mode 100644
index 56f97eb..0000000
--- a/query/gporca/query-gporca-overview.html.md.erb
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: Overview of GPORCA
----
-
-<span class="shortdesc">GPORCA extends the planning and optimization capabilities of the HAWQ legacy optimizer. </span> GPORCA is extensible and achieves better optimization in multi-core architecture environments. When GPORCA is available in your HAWQ installation and enabled, HAWQ uses GPORCA to generate an execution plan for a query when possible.
-
-GPORCA also enhances HAWQ query performance tuning in the following areas:
-
--   Queries against partitioned tables
--   Queries that contain a common table expression (CTE)
--   Queries that contain subqueries
-
-The legacy and GPORCA query optimizers coexist in HAWQ. The default query optimizer is GPORCA. When GPORCA is available and enabled in your HAWQ installation, HAWQ uses GPORCA to generate an execution plan for a query when possible. If GPORCA cannot be used, the legacy query optimizer is used.
-
-The following flow chart shows how GPORCA fits into the query planning architecture:
-
-<img src="../../images/gporca.png" id="topic1__image_rf5_svc_fv" class="image" width="672" />
-
-You can inspect the log to determine whether GPORCA or the legacy query optimizer produced the plan. The log message "Optimizer produced plan" indicates that GPORCA generated the plan for your query. If the legacy query optimizer generated the plan, the log message reads "Planner produced plan". See [Determining The Query Optimizer In Use](query-gporca-fallback.html#topic1).
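-
-Before inspecting the log, you can also confirm whether GPORCA is enabled for the current session by checking the `optimizer` server configuration parameter. For example:
-
-``` sql
-SHOW optimizer;
-```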
-
-**Note:** All legacy query optimizer (planner) server configuration parameters are ignored by GPORCA. However, if HAWQ falls back to the legacy optimizer, the planner server configuration parameters will impact the query plan generation.
-
-


[46/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/book/redirects.rb
----------------------------------------------------------------------
diff --git a/book/redirects.rb b/book/redirects.rb
new file mode 100644
index 0000000..a09023b
--- /dev/null
+++ b/book/redirects.rb
@@ -0,0 +1,4 @@
+r301 '/', '/docs/userguide/2.1.0.0-incubating/overview/HAWQOverview.html'
+r301 '/index.html', '/docs/userguide/2.1.0.0-incubating/overview/HAWQOverview.html'
+r301 '/docs', '/docs/userguide/2.1.0.0-incubating/overview/HAWQOverview.html'
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/client_auth.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/client_auth.html.md.erb b/clientaccess/client_auth.html.md.erb
deleted file mode 100644
index a13f4e1..0000000
--- a/clientaccess/client_auth.html.md.erb
+++ /dev/null
@@ -1,193 +0,0 @@
----
-title: Configuring Client Authentication
----
-
-When a HAWQ system is first initialized, the system contains one predefined *superuser* role. This role will have the same name as the operating system user who initialized the HAWQ system. This role is referred to as `gpadmin`. By default, the system is configured to only allow local connections to the database from the `gpadmin` role. To allow any other roles to connect, or to allow connections from remote hosts, you configure HAWQ to allow such connections.
-
-## <a id="topic2"></a>Allowing Connections to HAWQ 
-
-Client access and authentication is controlled by the standard PostgreSQL host-based authentication file, `pg_hba.conf`. In HAWQ, the `pg_hba.conf` file of the master instance controls client access and authentication to your HAWQ system. HAWQ segments have `pg_hba.conf` files that are configured to allow only client connections from the master host and never accept client connections. Do not alter the `pg_hba.conf` file on your segments.
-
-See [The pg\_hba.conf File](http://www.postgresql.org/docs/9.0/interactive/auth-pg-hba-conf.html) in the PostgreSQL documentation for more information.
-
-The general format of the `pg_hba.conf` file is a set of records, one per line. HAWQ ignores blank lines and any text after the `#` comment character. A record consists of a number of fields that are separated by spaces and/or tabs. Fields can contain white space if the field value is quoted. Records cannot be continued across lines. Each remote client access record has the following format:
-
-```
-host|hostssl|hostnossl   <database>   <role>   <CIDR-address>|<IP-address>,<IP-mask>   <authentication-method>
-```
-
-Each UNIX-domain socket access record has the following format:
-
-```
-local   <database>   <role>   <authentication-method>
-```
-
-The following table describes the meaning of each field.
-
-|Field|Description|
-|-----|-----------|
-|local|Matches connection attempts using UNIX-domain sockets. Without a record of this type, UNIX-domain socket connections are disallowed.|
-|host|Matches connection attempts made using TCP/IP. Remote TCP/IP connections will not be possible unless the server is started with an appropriate value for the listen\_addresses server configuration parameter.|
-|hostssl|Matches connection attempts made using TCP/IP, but only when the connection is made with SSL encryption. SSL must be enabled at server start time by setting the ssl configuration parameter|
-|hostnossl|Matches connection attempts made over TCP/IP that do not use SSL.|
-|\<database\>|Specifies which database names this record matches. The value `all` specifies that it matches all databases. Multiple database names can be supplied by separating them with commas. A separate file containing database names can be specified by preceding the file name with @.|
-|\<role\>|Specifies which database role names this record matches. The value `all` specifies that it matches all roles. If the specified role is a group and you want all members of that group to be included, precede the role name with a +. Multiple role names can be supplied by separating them with commas. A separate file containing role names can be specified by preceding the file name with @.|
-|\<CIDR-address\>|Specifies the client machine IP address range that this record matches. It contains an IP address in standard dotted decimal notation and a CIDR mask length. IP addresses can only be specified numerically, not as domain or host names. The mask length indicates the number of high-order bits of the client IP address that must match. Bits to the right of this must be zero in the given IP address. There must not be any white space between the IP address, the /, and the CIDR mask length. Typical examples of a CIDR-address are 192.0.2.2/32 for a single host, or 192.0.2.0/24 for a small network, or 192.0.0.0/16 for a larger one. To specify a single host, use a CIDR mask of 32 for IPv4 or 128 for IPv6. In a network address, do not omit trailing zeroes.|
-|\<IP-address\>, \<IP-mask\>|These fields can be used as an alternative to the CIDR-address notation. Instead of specifying the mask length, the actual mask is specified in a separate column. For example, 255.255.255.255 represents a CIDR mask length of 32. These fields only apply to host, hostssl, and hostnossl records.|
-|\<authentication-method\>|Specifies the authentication method to use when connecting. HAWQ supports the [authentication methods](http://www.postgresql.org/docs/9.0/static/auth-methods.html) supported by PostgreSQL 9.0.|
-
-### <a id="topic3"></a>Editing the pg\_hba.conf File 
-
-This example shows how to edit the `pg_hba.conf` file of the master to allow remote client access to all databases from all roles using encrypted password authentication.
-
-**Note:** For a more secure system, consider removing all connections that use trust authentication from your master `pg_hba.conf`. Trust authentication means the role is granted access without any authentication, therefore bypassing all security. Replace trust entries with ident authentication if your system has an ident service available.
-
-#### <a id="ip144328"></a>Editing pg\_hba.conf 
-
-1.  Obtain the master data directory location from the `hawq_master_directory` property value in `hawq-site.xml` and use a text editor to open the `pg_hba.conf` file in this directory.
-2.  Add a line to the file for each type of connection you want to allow. Records are read sequentially, so the order of the records is significant. Typically, earlier records will have tight connection match parameters and weaker authentication methods, while later records will have looser match parameters and stronger authentication methods. For example:
-
-    ```
-    # allow the gpadmin user local access to all databases
-    # using ident authentication
-    local   all   gpadmin   ident         sameuser
-    host    all   gpadmin   127.0.0.1/32  ident
-    host    all   gpadmin   ::1/128       ident
-    # allow the 'dba' role access to any database from any
-    # host with IP address 192.168.x.x and use md5 encrypted
-    # passwords to authenticate the user
-    # Note that to use SHA-256 encryption, replace *md5* with
-    # password in the line below
-    host    all   dba   192.168.0.0/16  md5
-    # allow all roles access to any database from any
-    # host and use ldap to authenticate the user. HAWQ role
-    # names must match the LDAP common name.
-    host    all   all   192.168.0.0/32  ldap ldapserver=usldap1 ldapport=1389 ldapprefix="cn=" ldapsuffix=",ou=People,dc=company,dc=com"
-    ```
-
-3.  Save and close the file.
-4.  Reload the `pg_hba.conf` configuration file for your changes to take effect. Include the `-M fast` option if you have active/open database connections:
-
-    ``` bash
-    $ hawq stop cluster -u [-M fast]
-    ```
-    
-
-
-## <a id="topic4"></a>Limiting Concurrent Connections 
-
-HAWQ allocates some resources on a per-connection basis, so setting the maximum number of connections allowed is recommended.
-
-To limit the number of active concurrent sessions to your HAWQ system, you can configure the `max_connections` server configuration parameter on master or the `seg_max_connections` server configuration parameter on segments. These parameters are *local* parameters, meaning that you must set them in the `hawq-site.xml` file of all HAWQ instances.
-
-When you set `max_connections`, you must also set the dependent parameter `max_prepared_transactions`. This value must be at least as large as the value of `max_connections`, and all HAWQ instances should be set to the same value.
-
-Example `$GPHOME/etc/hawq-site.xml` configuration:
-
-``` xml
-  <property>
-      <name>max_connections</name>
-      <value>500</value>
-  </property>
-  <property>
-      <name>max_prepared_transactions</name>
-      <value>1000</value>
-  </property>
-  <property>
-      <name>seg_max_connections</name>
-      <value>3200</value>
-  </property>
-```
-
-**Note:** Raising the values of these parameters may cause HAWQ to request more shared memory. To mitigate this effect, consider decreasing other memory-related server configuration parameters such as [gp\_cached\_segworkers\_threshold](../reference/guc/parameter_definitions.html#gp_cached_segworkers_threshold).
-
-
-### <a id="ip142411"></a>Setting the number of allowed connections
-
-You will perform different procedures to set connection-related server configuration parameters for your HAWQ cluster depending upon whether you manage your cluster from the command line or use Ambari. If you use Ambari to manage your HAWQ cluster, you must ensure that you update server configuration parameters only via the Ambari Web UI. If you manage your HAWQ cluster from the command line, you will use the `hawq config` command line utility to set server configuration parameters.
-
-If you use Ambari to manage your cluster:
-
-1. Set the `max_connections`, `seg_max_connections`, and `max_prepared_transactions` configuration properties via the HAWQ service **Configs > Advanced > Custom hawq-site** drop down.
-2. Select **Service Actions > Restart All** to load the updated configuration.
-
-If you manage your cluster from the command line:
-
-1.  Log in to the HAWQ master host as a HAWQ administrator and source the file `/usr/local/hawq/greenplum_path.sh`.
-
-    ``` shell
-    $ source /usr/local/hawq/greenplum_path.sh
-    ```
-    
-2.  Use the `hawq config` utility to set the values of the `max_connections`, `seg_max_connections`, and `max_prepared_transactions` parameters to values appropriate for your deployment. For example: 
-
-    ``` bash
-    $ hawq config -c max_connections -v 100
-    $ hawq config -c seg_max_connections -v 6400
-    $ hawq config -c max_prepared_transactions -v 200
-    ```
-
-    The value of `max_prepared_transactions` must be greater than or equal to `max_connections`.
-
-3.  Load the new configuration values by restarting your HAWQ cluster:
-
-    ``` bash
-    $ hawq restart cluster
-    ```
-
-4.  Use the `-s` option to `hawq config` to display server configuration parameter values:
-
-    ``` bash
-    $ hawq config -s max_connections
-    $ hawq config -s seg_max_connections
-    ```
-
-
-## <a id="topic5"></a>Encrypting Client/Server Connections 
-
-Enable SSL for client connections to HAWQ to encrypt the data passed over the network between the client and the database.
-
-HAWQ has native support for SSL connections between the client and the master server. SSL connections prevent third parties from snooping on the packets, and also prevent man-in-the-middle attacks. SSL should be used whenever the client connection goes through an insecure link, and must be used whenever client certificate authentication is used.
-
-Enabling SSL requires that OpenSSL be installed on both the client and the master server systems. HAWQ can be started with SSL enabled by setting the server configuration parameter `ssl` to `on` in the master `hawq-site.xml`. When starting in SSL mode, the server will look for the files `server.key` \(server private key\) and `server.crt` \(server certificate\) in the master data directory. These files must be set up correctly before an SSL-enabled HAWQ system can start.
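-
-For example, the following `hawq-site.xml` property block turns on the `ssl` parameter on the master (a minimal sketch; add it alongside your existing master configuration properties):
-
-``` xml
-  <property>
-      <name>ssl</name>
-      <value>on</value>
-  </property>
-```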
-
-**Important:** Do not protect the private key with a passphrase. The server does not prompt for a passphrase for the private key, and the database startup fails with an error if one is required.
-
-A self-signed certificate can be used for testing, but a certificate signed by a certificate authority \(CA\) should be used in production, so the client can verify the identity of the server. Either a global or local CA can be used. If all the clients are local to the organization, a local CA is recommended.
-
-### <a id="topic6"></a>Creating a Self-signed Certificate without a Passphrase for Testing Only 
-
-To create a quick self-signed certificate for the server for testing, use the following OpenSSL command:
-
-```
-# openssl req -new -text -out server.req
-```
-
-Enter the information requested by the prompts. Be sure to enter the local host name as *Common Name*. The challenge password can be left blank.
-
-The program will generate a key that is passphrase protected, and does not accept a passphrase that is less than four characters long.
-
-To use this certificate with HAWQ, remove the passphrase with the following commands:
-
-```
-# openssl rsa -in privkey.pem -out server.key
-# rm privkey.pem
-```
-
-Enter the old passphrase when prompted to unlock the existing key.
-
-Then, enter the following command to turn the certificate into a self-signed certificate and to copy the key and certificate to a location where the server will look for them.
-
-``` 
-# openssl req -x509 -in server.req -text -key server.key -out server.crt
-```
-
-Finally, change the permissions on the key with the following command. The server will reject the file if the permissions are less restrictive than these.
-
-```
-# chmod og-rwx server.key
-```
-
-For more details on how to create your server private key and certificate, refer to the [OpenSSL documentation](https://www.openssl.org/docs/).

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/disable-kerberos.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/disable-kerberos.html.md.erb b/clientaccess/disable-kerberos.html.md.erb
deleted file mode 100644
index 5646eec..0000000
--- a/clientaccess/disable-kerberos.html.md.erb
+++ /dev/null
@@ -1,85 +0,0 @@
----
-title: Disabling Kerberos Security
----
-
-Follow these steps to disable Kerberos security for HAWQ and PXF for manual installations.
-
-**Note:** If you install or manage your cluster using Ambari, then the HAWQ Ambari plug-in automatically disables security for HAWQ and PXF when you disable security for Hadoop. The following instructions are only necessary for manual installations, or when Hadoop security is disabled outside of Ambari.
-
-1.  Disable Kerberos on the Hadoop cluster on which you use HAWQ.
-2.  Disable security for HAWQ:
-    1.  Log in to the HAWQ database master server as the `gpadmin` user:
-
-        ``` bash
-        $ ssh hawq_master_fqdn
-        ```
-
-    2.  Run the following command to set up HAWQ environment variables:
-
-        ``` bash
-        $ source /usr/local/hawq/greenplum_path.sh
-        ```
-
-    3.  Start HAWQ if necessary:
-
-        ``` bash
-        $ hawq start -a
-        ```
-
-    4.  Run the following command to disable security:
-
-        ``` bash
-        $ hawq config --masteronly -c enable_secure_filesystem -v "off"
-        ```
-
-    5.  Change the permission of the HAWQ HDFS data directory:
-
-        ``` bash
-        $ sudo -u hdfs hdfs dfs -chown -R gpadmin:gpadmin /hawq_data
-        ```
-
-    6.  On the HAWQ master node and on all segment server nodes, edit the `/usr/local/hawq/etc/hdfs-client.xml` file to disable Kerberos security. Comment or remove the following properties in each file:
-
-        ``` xml
-        <!--
-        <property>
-          <name>hadoop.security.authentication</name>
-          <value>kerberos</value>
-        </property>
-
-        <property>
-          <name>dfs.namenode.kerberos.principal</name>
-          <value>nn/_HOST@LOCAL.DOMAIN</value>
-        </property>
-        -->
-        ```
-
-    7.  Restart HAWQ:
-
-        ``` bash
-        $ hawq restart -a -M fast
-        ```
-
-3.  Disable security for PXF:
-    1.  On each PXF node, edit the `/etc/gphd/pxf/conf/pxf-site.xml` file to comment out or remove the following properties:
-
-        ``` xml
-        <!--
-        <property>
-            <name>pxf.service.kerberos.keytab</name>
-            <value>/etc/security/phd/keytabs/pxf.service.keytab</value>
-            <description>path to keytab file owned by pxf service
-            with permissions 0400</description>
-        </property>
-
-        <property>
-            <name>pxf.service.kerberos.principal</name>
-            <value>pxf/_HOST@PHD.LOCAL</value>
-            <description>Kerberos principal pxf service should use.
-            _HOST is replaced automatically with hostnames
-            FQDN</description>
-        </property>
-        -->
-        ```
-
-    2.  Restart the PXF service.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/g-connecting-with-psql.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/g-connecting-with-psql.html.md.erb b/clientaccess/g-connecting-with-psql.html.md.erb
deleted file mode 100644
index 0fa501c..0000000
--- a/clientaccess/g-connecting-with-psql.html.md.erb
+++ /dev/null
@@ -1,35 +0,0 @@
----
-title: Connecting with psql
----
-
-Depending on the default values used or the environment variables you have set, the following examples show how to access a database via `psql`:
-
-``` bash
-$ psql -d gpdatabase -h master_host -p 5432 -U gpadmin
-```
-
-``` bash
-$ psql gpdatabase
-```
-
-``` bash
-$ psql
-```
-
-If a user-defined database has not yet been created, you can access the system by connecting to the `template1` database. For example:
-
-``` bash
-$ psql template1
-```
-
-After connecting to a database, `psql` provides a prompt with the name of the database to which `psql` is currently connected, followed by the string `=>` \(or `=#` if you are the database superuser\). For example:
-
-``` sql
-gpdatabase=>
-```
-
-At the prompt, you may type in SQL commands. A SQL command must end with a `;` \(semicolon\) in order to be sent to the server and executed. For example:
-
-``` sql
-=> SELECT * FROM mytable;
-```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/g-database-application-interfaces.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/g-database-application-interfaces.html.md.erb b/clientaccess/g-database-application-interfaces.html.md.erb
deleted file mode 100644
index 29e22c5..0000000
--- a/clientaccess/g-database-application-interfaces.html.md.erb
+++ /dev/null
@@ -1,96 +0,0 @@
----
-title: HAWQ Database Drivers and APIs
----
-
-You may want to connect your existing Business Intelligence (BI) or Analytics applications with HAWQ. The database application programming interfaces most commonly used with HAWQ are the PostgreSQL C API (libpq) and the ODBC and JDBC APIs.
-
-HAWQ provides the following connectivity tools for connecting to the database:
-
-  - ODBC driver
-  - JDBC driver
-  - `libpq` - PostgreSQL C API
-
-## <a id="dbdriver"></a>HAWQ Drivers
-
-ODBC and JDBC drivers for HAWQ are available as a separate download from [Pivotal Network](https://network.pivotal.io/products/pivotal-hdb).
-
-### <a id="odbc_driver"></a>ODBC Driver
-
-The ODBC API specifies a standard set of C interfaces for accessing database management systems.  For additional information on using the ODBC API, refer to the [ODBC Programmer's Reference](https://msdn.microsoft.com/en-us/library/ms714177(v=vs.85).aspx) documentation.
-
-HAWQ supports the DataDirect ODBC Driver. Installation instructions for this driver are provided on the Pivotal Network driver download page. Refer to [HAWQ ODBC Driver](http://media.datadirect.com/download/docs/odbc/allodbc/#page/odbc%2Fthe-greenplum-wire-protocol-driver.html%23) for HAWQ-specific ODBC driver information.
-
-#### <a id="odbc_driver_connurl"></a>Connection Data Source
-The information required by the HAWQ ODBC driver to connect to a database is typically stored in a named data source. Depending on your platform, you may use [GUI](http://media.datadirect.com/download/docs/odbc/allodbc/index.html#page/odbc%2FData_Source_Configuration_through_a_GUI_14.html%23) or [command line](http://media.datadirect.com/download/docs/odbc/allodbc/index.html#page/odbc%2FData_Source_Configuration_in_the_UNIX_2fLinux_odbc_13.html%23) tools to create your data source definition. On Linux, ODBC data sources are typically defined in a file named `odbc.ini`. 
-
-Commonly-specified HAWQ ODBC data source connection properties include:
-
-| Property Name                                                    | Value Description                                                                                                                                                                                         |
-|-------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Database | Name of the database to which you want to connect. |
-| Driver   | Full path to the ODBC driver library file.                                                                                           |
-| HostName              | HAWQ master host name.                                                                                     |
-| MaxLongVarcharSize      | Maximum size of columns of type long varchar.                                                                                      |
-| Password              | Password used to connect to the specified database.                                                                                       |
-| PortNumber              | HAWQ master database port number.                                                                                      |
-
-Refer to [Connection Option Descriptions](http://media.datadirect.com/download/docs/odbc/allodbc/#page/odbc%2Fgreenplum-connection-option-descriptions.html%23) for a list of ODBC connection properties supported by the HAWQ DataDirect ODBC driver.
-
-Example HAWQ DataDirect ODBC driver data source definition:
-
-``` shell
-[HAWQ-201]
-Driver=/usr/local/hawq_drivers/odbc/lib/ddgplm27.so
-Description=DataDirect 7.1 Greenplum Wire Protocol - for HAWQ
-Database=getstartdb
-HostName=hdm1
-PortNumber=5432
-Password=changeme
-MaxLongVarcharSize=8192
-```
-
-The first line, `[HAWQ-201]`, identifies the name of the data source.
-
-ODBC connection properties may also be specified in a connection string identifying either a data source name, the name of a file data source, or the name of a driver.  A HAWQ ODBC connection string has the following format:
-
-``` shell
-([DSN=<data_source_name>]|[FILEDSN=<filename.dsn>]|[DRIVER=<driver_name>])[;<attribute=<value>[;...]]
-```
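-
-For example, a connection string that references the `HAWQ-201` data source defined above might look like the following (the additional attribute shown is one of the properties listed earlier; the attribute names your driver accepts are documented in the connection option reference linked below):
-
-``` shell
-DSN=HAWQ-201;MaxLongVarcharSize=4096
-```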
-
-For additional information on specifying a HAWQ ODBC connection string, refer to [Using a Connection String](http://media.datadirect.com/download/docs/odbc/allodbc/index.html#page/odbc%2FUsing_a_Connection_String_16.html%23).
-
-### <a id="jdbc_driver"></a>JDBC Driver
-The JDBC API specifies a standard set of Java interfaces to SQL-compliant databases. For additional information on using the JDBC API, refer to the [Java JDBC API](https://docs.oracle.com/javase/8/docs/technotes/guides/jdbc/) documentation.
-
-HAWQ supports the DataDirect JDBC Driver. Installation instructions for this driver are provided on the Pivotal Network driver download page. Refer to [HAWQ JDBC Driver](http://media.datadirect.com/download/docs/jdbc/alljdbc/help.html#page/jdbcconnect%2Fgreenplum-driver.html%23) for HAWQ-specific JDBC driver information.
-
-#### <a id="jdbc_driver_connurl"></a>Connection URL
-Connection URLs for accessing the HAWQ DataDirect JDBC driver must be in the following format:
-
-``` shell
-jdbc:pivotal:greenplum://host:port[;<property>=<value>[;...]]
-```
-
-Commonly-specified HAWQ JDBC connection properties include:
-
-| Property Name                                                    | Value Description                                                                                                                                                                                         |
-|-------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| DatabaseName | Name of the database to which you want to connect. |
-| User                         | Username used to connect to the specified database.                                                                                           |
-| Password              | Password used to connect to the specified database.                                                                                       |
-
-Refer to [Connection Properties](http://media.datadirect.com/download/docs/jdbc/alljdbc/help.html#page/jdbcconnect%2FConnection_Properties_10.html%23) for a list of JDBC connection properties supported by the HAWQ DataDirect JDBC driver.
-
-Example HAWQ JDBC connection string:
-
-``` shell
-jdbc:pivotal:greenplum://hdm1:5432;DatabaseName=getstartdb;User=hdbuser;Password=hdbpass
-```
-
-## <a id="libpq_api"></a>libpq API
-`libpq` is the C API to PostgreSQL/HAWQ. This API provides a set of library functions enabling client programs to pass queries to the PostgreSQL backend server and to receive the results of those queries.
-
-`libpq` is installed in the `lib/` directory of your HAWQ distribution. `libpq-fe.h`, the header file required for developing front-end PostgreSQL applications, can be found in the `include/` directory.
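-
-For example, a client program might be compiled against these locations as follows (a sketch only; the installation path `/usr/local/hawq` and the source file name `my_client.c` are assumptions for illustration):
-
-``` shell
-gcc my_client.c -I/usr/local/hawq/include -L/usr/local/hawq/lib -lpq -o my_client
-```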
-
-For additional information on using the `libpq` API, refer to [libpq - C Library](https://www.postgresql.org/docs/8.2/static/libpq.html) in the PostgreSQL documentation.
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/g-establishing-a-database-session.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/g-establishing-a-database-session.html.md.erb b/clientaccess/g-establishing-a-database-session.html.md.erb
deleted file mode 100644
index a1c5f1c..0000000
--- a/clientaccess/g-establishing-a-database-session.html.md.erb
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Establishing a Database Session
----
-
-Users can connect to HAWQ using a PostgreSQL-compatible client program, such as `psql`. Users and administrators *always* connect to HAWQ through the *master*; the segments cannot accept client connections.
-
-In order to establish a connection to the HAWQ master, you will need to know the following connection information and configure your client program accordingly.
-
-|Connection Parameter|Description|Environment Variable|
-|--------------------|-----------|--------------------|
-|Application name|The application name that is connecting to the database. The default value, held in the `application_name` connection parameter, is *psql*.|`$PGAPPNAME`|
-|Database name|The name of the database to which you want to connect. For a newly initialized system, use the `template1` database to connect for the first time.|`$PGDATABASE`|
-|Host name|The host name of the HAWQ master. The default host is the local host.|`$PGHOST`|
-|Port|The port number that the HAWQ master instance is running on. The default is 5432.|`$PGPORT`|
-|User name|The database user \(role\) name to connect as. This is not necessarily the same as your OS user name. Check with your HAWQ administrator if you are not sure what your database user name is. Note that every HAWQ system has one superuser account that is created automatically at initialization time. This account has the same name as the OS name of the user who initialized the HAWQ system \(typically `gpadmin`\).|`$PGUSER`|
-
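-For example, you can supply these values through environment variables before starting a client program (the host name shown is a placeholder for your HAWQ master host):
-
-``` bash
-$ export PGHOST=hawq_master_host
-$ export PGPORT=5432
-$ export PGDATABASE=template1
-$ export PGUSER=gpadmin
-```
-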
-[Connecting with psql](g-connecting-with-psql.html) provides example commands for connecting to HAWQ.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/g-hawq-database-client-applications.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/g-hawq-database-client-applications.html.md.erb b/clientaccess/g-hawq-database-client-applications.html.md.erb
deleted file mode 100644
index a1e8ff3..0000000
--- a/clientaccess/g-hawq-database-client-applications.html.md.erb
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: HAWQ Client Applications
----
-
-HAWQ comes installed with a number of client utility applications located in the `$GPHOME/bin` directory of your HAWQ master host installation. The following are the most commonly used client utility applications:
-
-|Name|Usage|
-|----|-----|
-|`createdb`|create a new database|
-|`createlang`|define a new procedural language|
-|`createuser`|define a new database role|
-|`dropdb`|remove a database|
-|`droplang`|remove a procedural language|
-|`dropuser`|remove a role|
-|`psql`|PostgreSQL interactive terminal|
-|`reindexdb`|reindex a database|
-|`vacuumdb`|garbage-collect and analyze a database|
-
-When using these client applications, you must connect to a database through the HAWQ master instance. You will need to know the name of your target database, the host name and port number of the master, and what database user name to connect as. This information can be provided on the command-line using the options `-d`, `-h`, `-p`, and `-U` respectively. If an argument is found that does not belong to any option, it will be interpreted as the database name first.
-
-All of these options have default values which will be used if the option is not specified. The default host is the local host. The default port number is 5432. The default user name is your OS system user name, as is the default database name. Note that OS user names and HAWQ user names are not necessarily the same.
-
-If the default values are not correct, you can set the environment variables `PGDATABASE`, `PGHOST`, `PGPORT`, and `PGUSER` to the appropriate values, or use a `~/.pgpass` file to supply frequently-used passwords.
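-
-For example, the following command creates a new database using explicit connection options (the host and database names are placeholders):
-
-``` bash
-$ createdb -h hawq_master_host -p 5432 -U gpadmin mydatabase
-```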

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/g-supported-client-applications.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/g-supported-client-applications.html.md.erb b/clientaccess/g-supported-client-applications.html.md.erb
deleted file mode 100644
index 202f625..0000000
--- a/clientaccess/g-supported-client-applications.html.md.erb
+++ /dev/null
@@ -1,8 +0,0 @@
----
-title: Supported Client Applications
----
-
-Users can connect to HAWQ using various client applications:
-
--   A number of [HAWQ Client Applications](g-hawq-database-client-applications.html) are provided with your HAWQ installation. The `psql` client application provides an interactive command-line interface to HAWQ.
--   Using standard database application interfaces, such as ODBC and JDBC, users can connect their client applications to HAWQ.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/g-troubleshooting-connection-problems.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/g-troubleshooting-connection-problems.html.md.erb b/clientaccess/g-troubleshooting-connection-problems.html.md.erb
deleted file mode 100644
index 0328606..0000000
--- a/clientaccess/g-troubleshooting-connection-problems.html.md.erb
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Troubleshooting Connection Problems
----
-
-A number of things can prevent a client application from successfully connecting to HAWQ. This topic explains some of the common causes of connection problems and how to correct them.
-
-|Problem|Solution|
-|-------|--------|
-|No pg\_hba.conf entry for host or user|To enable HAWQ to accept remote client connections, you must configure your HAWQ master instance so that connections are allowed from the client hosts and database users that will be connecting to HAWQ. This is done by adding the appropriate entries to the pg\_hba.conf configuration file \(located in the master instance's data directory\). For more detailed information, see [Allowing Connections to HAWQ](client_auth.html).|
-|HAWQ is not running|If the HAWQ master instance is down, users will not be able to connect. You can verify that the HAWQ system is up by running the `hawq state` utility on the HAWQ master host.|
-|Network problems<br/><br/>Interconnect timeouts|If users connect to the HAWQ master host from a remote client, network problems can prevent a connection \(for example, DNS host name resolution problems, the host system is down, and so on.\). To ensure that network problems are not the cause, connect to the HAWQ master host from the remote client host. For example: `ping hostname`. <br/><br/>If the system cannot resolve the host names and IP addresses of the hosts involved in HAWQ, queries and connections will fail. For some operations, connections to the HAWQ master use `localhost` and others use the actual host name, so you must be able to resolve both. If you encounter this error, first make sure you can connect to each host in your HAWQ array from the master host over the network. In the `/etc/hosts` file of the master and all segments, make sure you have the correct host names and IP addresses for all hosts involved in the HAWQ array. The `127.0.0.1` IP must resolve to `localhost`.|
-|Too many clients already|By default, HAWQ is configured to allow a maximum of 200 concurrent user connections on the master and 1280 connections on a segment. A connection attempt that causes that limit to be exceeded will be refused. This limit is controlled by the `max_connections` parameter on the master instance and by the `seg_max_connections` parameter on segment instances. If you change this setting for the master, you must also make appropriate changes at the segments.|
-|Query failure|Reverse DNS must be configured in your HAWQ cluster network. In cases where reverse DNS has not been configured, failing queries will generate "Failed to reverse DNS lookup for ip \<ip-address\>" warning messages to the HAWQ master node log file. |
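-
-For example, to confirm from the HAWQ master host that the system is running:
-
-``` bash
-$ hawq state
-```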

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/index.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/index.md.erb b/clientaccess/index.md.erb
deleted file mode 100644
index c88adeb..0000000
--- a/clientaccess/index.md.erb
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Managing Client Access
----
-
-This section explains how to configure client connections and authentication for HAWQ:
-
-*  <a class="subnav" href="./client_auth.html">Configuring Client Authentication</a>
-*  <a class="subnav" href="./ldap.html">Using LDAP Authentication with TLS/SSL</a>
-*  <a class="subnav" href="./kerberos.html">Using Kerberos Authentication</a>
-*  <a class="subnav" href="./disable-kerberos.html">Disabling Kerberos Security</a>
-*  <a class="subnav" href="./roles_privs.html">Managing Roles and Privileges</a>
-*  <a class="subnav" href="./g-establishing-a-database-session.html">Establishing a Database Session</a>
-*  <a class="subnav" href="./g-supported-client-applications.html">Supported Client Applications</a>
-*  <a class="subnav" href="./g-hawq-database-client-applications.html">HAWQ Client Applications</a>
-*  <a class="subnav" href="./g-connecting-with-psql.html">Connecting with psql</a>
-*  <a class="subnav" href="./g-database-application-interfaces.html">Database Application Interfaces</a>
-*  <a class="subnav" href="./g-troubleshooting-connection-problems.html">Troubleshooting Connection Problems</a>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/kerberos.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/kerberos.html.md.erb b/clientaccess/kerberos.html.md.erb
deleted file mode 100644
index 2e7cfe5..0000000
--- a/clientaccess/kerberos.html.md.erb
+++ /dev/null
@@ -1,308 +0,0 @@
----
-title: Using Kerberos Authentication
----
-
-**Note:** The following steps for enabling Kerberos *are not required* if you install HAWQ using Ambari.
-
-You can control access to HAWQ with a Kerberos authentication server.
-
-HAWQ supports the Generic Security Service Application Program Interface \(GSSAPI\) with Kerberos authentication. GSSAPI provides automatic authentication \(single sign-on\) for systems that support it. You specify the HAWQ users \(roles\) that require Kerberos authentication in the HAWQ configuration file `pg_hba.conf`. The login fails if Kerberos authentication is not available when a role attempts to log in to HAWQ.
-
-Kerberos provides a secure, encrypted authentication service. It does not encrypt data exchanged between the client and database and provides no authorization services. To encrypt data exchanged over the network, you must use an SSL connection. To manage authorization for access to HAWQ databases and objects such as schemas and tables, you use settings in the `pg_hba.conf` file and privileges given to HAWQ users and roles within the database. For information about managing authorization privileges, see [Managing Roles and Privileges](roles_privs.html).
-
-For more information about Kerberos, see [http://web.mit.edu/kerberos/](http://web.mit.edu/kerberos/).
-
-## <a id="kerberos_prereq"></a>Requirements for Using Kerberos with HAWQ 
-
-The following items are required for using Kerberos with HAWQ:
-
--   Kerberos Key Distribution Center \(KDC\) server using the `krb5-server` library
--   Kerberos version 5 `krb5-libs` and `krb5-workstation` packages installed on the HAWQ master host
--   System time on the Kerberos server and HAWQ master host must be synchronized. \(Install Linux `ntp` package on both servers.\)
--   Network connectivity between the Kerberos server and the HAWQ master
--   Java 1.7.0\_17 or later is required to use Kerberos-authenticated JDBC on Red Hat Enterprise Linux 6.x
--   Java 1.6.0\_21 or later is required to use Kerberos-authenticated JDBC on Red Hat Enterprise Linux 4.x or 5.x
-
-## <a id="nr166539"></a>Enabling Kerberos Authentication for HAWQ 
-
-Complete the following tasks to set up Kerberos authentication with HAWQ:
-
-1.  Verify your system satisfies the prerequisites for using Kerberos with HAWQ. See [Requirements for Using Kerberos with HAWQ](#kerberos_prereq).
-2.  Set up, or identify, a Kerberos Key Distribution Center \(KDC\) server to use for authentication. See [Install and Configure a Kerberos KDC Server](#task_setup_kdc).
-3.  Create and deploy principals for your HDFS cluster, and ensure that Kerberos authentication is enabled and functioning for all HDFS services. See your Hadoop documentation for additional details.
-4.  In a Kerberos database on the KDC server, set up a Kerberos realm and principals on the server. For HAWQ, a principal is a HAWQ role that uses Kerberos authentication. In the Kerberos database, a realm groups together Kerberos principals that are HAWQ roles.
-5.  Create Kerberos keytab files for HAWQ. To access HAWQ, you create a service key known only by Kerberos and HAWQ. On the Kerberos server, the service key is stored in the Kerberos database.
-
-    On the HAWQ master, the service key is stored in key tables, which are files known as keytabs. The service keys are usually stored in the keytab file `/etc/krb5.keytab`. This service key is the equivalent of the service's password, and must be kept secure. Data that is meant to be read-only by the service is encrypted using this key.
-
-6.  Install the Kerberos client packages and the keytab file on HAWQ master.
-7.  Create a Kerberos ticket for `gpadmin` on the HAWQ master node using the keytab file. The ticket contains the Kerberos authentication credentials that grant access to HAWQ.
-
-With Kerberos authentication configured for HAWQ, you can use Kerberos for PSQL and JDBC.
-
-[Set up HAWQ with Kerberos for PSQL](#topic6)
-
-[Set up HAWQ with Kerberos for JDBC](#topic9)
-
-## <a id="task_setup_kdc"></a>Install and Configure a Kerberos KDC Server 
-
-Steps to set up a Kerberos Key Distribution Center \(KDC\) server on a Red Hat Enterprise Linux host for use with HAWQ.
-
-Follow these steps to install and configure a Kerberos Key Distribution Center \(KDC\) server on a Red Hat Enterprise Linux host.
-
-1.  Install the Kerberos server packages:
-
-    ```
-    sudo yum install krb5-libs krb5-server krb5-workstation
-    ```
-
-2.  Edit the `/etc/krb5.conf` configuration file. The following example shows a Kerberos server with a default `KRB.EXAMPLE.COM` realm.
-
-    ```
-    [logging]
-     default = FILE:/var/log/krb5libs.log
-     kdc = FILE:/var/log/krb5kdc.log
-     admin_server = FILE:/var/log/kadmind.log
-
-    [libdefaults]
-     default_realm = KRB.EXAMPLE.COM
-     dns_lookup_realm = false
-     dns_lookup_kdc = false
-     ticket_lifetime = 24h
-     renew_lifetime = 7d
-     forwardable = true
-     default_tgs_enctypes = aes128-cts des3-hmac-sha1 des-cbc-crc des-cbc-md5
-     default_tkt_enctypes = aes128-cts des3-hmac-sha1 des-cbc-crc des-cbc-md5
-     permitted_enctypes = aes128-cts des3-hmac-sha1 des-cbc-crc des-cbc-md5
-
-    [realms]
-     KRB.EXAMPLE.COM = {
-      kdc = kerberos-gpdb:88
-      admin_server = kerberos-gpdb:749
-      default_domain = kerberos-gpdb
-     }
-
-    [domain_realm]
-     .kerberos-gpdb = KRB.EXAMPLE.COM
-     kerberos-gpdb = KRB.EXAMPLE.COM
-
-    [appdefaults]
-     pam = {
-        debug = false
-        ticket_lifetime = 36000
-        renew_lifetime = 36000
-        forwardable = true
-        krb4_convert = false
-       }
-    ```
-
-    The `kdc` and `admin_server` keys in the `[realms]` section specify the host \(`kerberos-gpdb`\) and port where the Kerberos server is running. IP numbers can be used in place of host names.
-
-    If your Kerberos server manages authentication for other realms, you would instead add the `KRB.EXAMPLE.COM` realm in the `[realms]` and `[domain_realm]` section of the `kdc.conf` file. See the [Kerberos documentation](http://web.mit.edu/kerberos/krb5-latest/doc/) for information about the `kdc.conf` file.
-
-3.  To create a Kerberos KDC database, run the `kdb5_util` utility:
-
-    ```
-    kdb5_util create -s
-    ```
-
-    The `kdb5_util` `create` option creates the database to store keys for the Kerberos realms that are managed by this KDC server. The `-s` option creates a stash file. Without the stash file, every time the KDC server starts it requests a password.
-
-4.  Add an administrative user to the KDC database with the `kadmin.local` utility. Because it does not itself depend on Kerberos authentication, the `kadmin.local` utility allows you to add an initial administrative user to the local Kerberos server. To add the user `gpadmin` as an administrative user to the KDC database, run the following command:
-
-    ```
-    kadmin.local -q "addprinc gpadmin/admin"
-    ```
-
-    Most users do not need administrative access to the Kerberos server. They can use `kadmin` to manage their own principals \(for example, to change their own password\). For information about `kadmin`, see the [Kerberos documentation](http://web.mit.edu/kerberos/krb5-latest/doc/).
-
-5.  If needed, edit the `/var/kerberos/krb5kdc/kadm5.acl` file to grant the appropriate permissions to `gpadmin`.
-6.  Start the Kerberos daemons:
-
-    ```
-    /sbin/service krb5kdc start
-    /sbin/service kadmin start
-    ```
-
-7.  To start Kerberos automatically upon restart:
-
-    ```
-    /sbin/chkconfig krb5kdc on
-    /sbin/chkconfig kadmin on
-    ```
-
-
-## <a id="task_m43_vwl_2p"></a>Create HAWQ Roles in the KDC Database 
-
-Add principals to the Kerberos realm for HAWQ.
-
-Start `kadmin.local` in interactive mode, then add two principals to the HAWQ Realm.
-
-1.  Start `kadmin.local` in interactive mode:
-
-    ```
-    kadmin.local
-    ```
-
-2.  Add principals:
-
-    ```
-    kadmin.local: addprinc gpadmin/kerberos-gpdb@KRB.EXAMPLE.COM
-    kadmin.local: addprinc postgres/master.test.com@KRB.EXAMPLE.COM
-    ```
-
-    The `addprinc` commands prompt for passwords for each principal. The first `addprinc` creates a HAWQ user as a principal, `gpadmin/kerberos-gpdb`. The second `addprinc` command creates the `postgres` process on the HAWQ master host as a principal in the Kerberos KDC. This principal is required when using Kerberos authentication with HAWQ.
-
-3.  Create a Kerberos keytab file with `kadmin.local`. The following example creates a keytab file `gpdb-kerberos.keytab` in the current directory with authentication information for the two principals.
-
-    ```
-    kadmin.local: xst -k gpdb-kerberos.keytab
-        gpadmin/kerberos-gpdb@KRB.EXAMPLE.COM
-        postgres/master.test.com@KRB.EXAMPLE.COM
-    ```
-
-    You will copy this file to the HAWQ master host.
-
-4.  Exit `kadmin.local` interactive mode with the `quit` command: `kadmin.local: quit`
-
-## <a id="topic6"></a>Install and Configure the Kerberos Client 
-
-Steps to install the Kerberos client on the HAWQ master host.
-
-Install the Kerberos client libraries on the HAWQ master and configure the Kerberos client.
-
-1.  Install the Kerberos packages on the HAWQ master.
-
-    ```
-    sudo yum install krb5-libs krb5-workstation
-    ```
-
-2.  Ensure that the `/etc/krb5.conf` file is the same as the one that is on the Kerberos server.
-3.  Copy the `gpdb-kerberos.keytab` file that was generated on the Kerberos server to the HAWQ master host.
-4.  Remove any existing tickets with the Kerberos utility `kdestroy`. Run the utility as root.
-
-    ```
-    sudo kdestroy
-    ```
-
-5.  Use the Kerberos utility `kinit` to request a ticket using the keytab file on the HAWQ master for `gpadmin/kerberos-gpdb@KRB.EXAMPLE.COM`. The `-t` option specifies the keytab file on the HAWQ master.
-
-    ```
-    # kinit -k -t gpdb-kerberos.keytab gpadmin/kerberos-gpdb@KRB.EXAMPLE.COM
-    ```
-
-6.  Use the Kerberos utility `klist` to display the contents of the Kerberos ticket cache on the HAWQ master. The following is an example:
-
-    ```screen
-    # klist
-    Ticket cache: FILE:/tmp/krb5cc_108061
-    Default principal: gpadmin/kerberos-gpdb@KRB.EXAMPLE.COM
-    Valid starting     Expires            Service principal
-    03/28/13 14:50:26  03/29/13 14:50:26  krbtgt/KRB.EXAMPLE.COM@KRB.EXAMPLE.COM
-        renew until 03/28/13 14:50:26
-    ```
-
-
-### <a id="topic7"></a>Set up HAWQ with Kerberos for PSQL 
-
-Configure HAWQ to use Kerberos.
-
-After you have set up Kerberos on the HAWQ master, you can configure HAWQ to use Kerberos. For information on setting up the HAWQ master, see [Install and Configure the Kerberos Client](#topic6).
-
-1.  Create a HAWQ administrator role in the database `template1` for the Kerberos principal that is used as the database administrator. The following example uses `gpadmin/kerberos-gpdb`.
-
-    ``` bash
-    $ psql template1 -c 'CREATE ROLE "gpadmin/kerberos-gpdb" LOGIN SUPERUSER;'
-
-    ```
-
-    The role you create in the database `template1` will be available in any new HAWQ database that you create.
-
-2.  Modify `hawq-site.xml` to specify the location of the keytab file. For example, adding this property to `hawq-site.xml` specifies `/home/gpadmin/gpdb-kerberos.keytab` as the location of the keytab file.
-
-    ``` xml
-      <property>
-          <name>krb_server_keyfile</name>
-          <value>/home/gpadmin/gpdb-kerberos.keytab</value>
-      </property>
-    ```
-
-3.  Modify the HAWQ file `pg_hba.conf` to enable Kerberos support. Then restart HAWQ \(`hawq restart -a`\). For example, adding the following line to `pg_hba.conf` adds GSSAPI and Kerberos support. The value for `krb_realm` is the Kerberos realm that is used for authentication to HAWQ.
-
-    ```
-    host all all 0.0.0.0/0 gss include_realm=0 krb_realm=KRB.EXAMPLE.COM
-    ```
-
-    For information about the `pg_hba.conf` file, see [The pg\_hba.conf file](http://www.postgresql.org/docs/9.0/static/auth-pg-hba-conf.html) in the Postgres documentation.
-
-4.  Create a ticket using `kinit` and show the tickets in the Kerberos ticket cache with `klist`.
-5.  As a test, log in to the database as the `gpadmin` role with the Kerberos credentials `gpadmin/kerberos-gpdb`:
-
-    ``` bash
-    $ psql -U "gpadmin/kerberos-gpdb" -h master.test template1
-    ```
-
-    A username map can be defined in the `pg_ident.conf` file and specified in the `pg_hba.conf` file to simplify logging in to HAWQ. For example, this `psql` command logs in to the default HAWQ database on `mdw.proddb` as the Kerberos principal `adminuser/mdw.proddb`:
-
-    ``` bash
-    $ psql -U "adminuser/mdw.proddb" -h mdw.proddb
-    ```
-
-    If the default user is `adminuser`, the `pg_ident.conf` file and the `pg_hba.conf` file can be configured so that the `adminuser` can log in to the database as the Kerberos principal `adminuser/mdw.proddb` without specifying the `-U` option:
-
-    ``` bash
-    $ psql -h mdw.proddb
-    ```
-
-    The `pg_ident.conf` file defines the username map. This file is located in the HAWQ master data directory (identified by the `hawq_master_directory` property value in `hawq-site.xml`):
-
-    ```
-    # MAPNAME   SYSTEM-USERNAME        GP-USERNAME
-    mymap       /^(.*)mdw\.proddb$     adminuser
-    ```
-
-    The map can be specified in the `pg_hba.conf` file as part of the line that enables Kerberos support:
-
-    ```
-    host all all 0.0.0.0/0 krb5 include_realm=0 krb_realm=proddb map=mymap
-    ```
-
-    For more information about specifying username maps, see [Username maps](http://www.postgresql.org/docs/9.0/static/auth-username-maps.html) in the Postgres documentation.
-
-6.  If a Kerberos principal is not a HAWQ user, a message similar to the following is displayed from the `psql` command line when the user attempts to log in to the database:
-
-    ```
-    psql: krb5_sendauth: Bad response
-    ```
-
-    The principal must be added as a HAWQ user.
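-
-    A minimal sketch, assuming a principal named `user1@KRB.EXAMPLE.COM` and a `pg_hba.conf` entry that uses `include_realm=0` \(so the realm is stripped from the role name\), adds the principal as a HAWQ role:
-
-    ``` sql
-    -- "user1" is a hypothetical principal name; substitute your own
-    =# CREATE ROLE "user1" LOGIN;
-    ```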
-
-
-### <a id="topic9"></a>Set up HAWQ with Kerberos for JDBC 
-
-Enable Kerberos-authenticated JDBC access to HAWQ.
-
-You can configure HAWQ to use Kerberos authentication for JDBC connections.
-
-1.  Ensure that Kerberos is installed and configured on the HAWQ master. See [Install and Configure the Kerberos Client](#topic6).
-2.  Create the file `.java.login.config` in the folder `/home/gpadmin` and add the following text to the file:
-
-    ```
-    pgjdbc {
-      com.sun.security.auth.module.Krb5LoginModule required
-      doNotPrompt=true
-      useTicketCache=true
-      debug=true
-      client=true;
-    };
-    ```
-
-3.  Create a Java application that connects to HAWQ using Kerberos authentication. The following example database connection URL uses a PostgreSQL JDBC driver and specifies parameters for Kerberos authentication:
-
-    ```
-    jdbc:postgresql://mdw:5432/mytest?kerberosServerName=postgres&jaasApplicationName=pgjdbc&user=gpadmin/kerberos-gpdb
-    ```
-
-    The parameter names and values specified depend on how the Java application performs Kerberos authentication.
-
-4.  Test the Kerberos login by running a sample Java application from HAWQ.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/ldap.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/ldap.html.md.erb b/clientaccess/ldap.html.md.erb
deleted file mode 100644
index 27b204f..0000000
--- a/clientaccess/ldap.html.md.erb
+++ /dev/null
@@ -1,116 +0,0 @@
----
-title: Using LDAP Authentication with TLS/SSL
----
-
-You can control access to HAWQ with an LDAP server and, optionally, secure the connection with encryption by adding parameters to pg\_hba.conf file entries.
-
-HAWQ supports LDAP authentication with the TLS/SSL protocol to encrypt communication with an LDAP server:
-
--   LDAP authentication with STARTTLS and TLS protocol – STARTTLS starts with a clear text connection \(no encryption\) and upgrades it to a secure connection \(with encryption\).
--   LDAP authentication with a secure connection and TLS/SSL \(LDAPS\) – HAWQ uses the TLS or SSL protocol based on the protocol that is used by the LDAP server.
-
-If no protocol is specified, HAWQ communicates with the LDAP server with a clear text connection.
-
-To use LDAP authentication, the HAWQ master host must be configured as an LDAP client. See your LDAP documentation for information about configuring LDAP clients.
-
-## Enabling LDAP Authentication with STARTTLS and TLS
-
-To enable STARTTLS with the TLS protocol, specify the `ldaptls` parameter with the value 1. The default port is 389. In this example, the authentication method parameters include the `ldaptls` parameter.
-
-```
-ldap ldapserver=ldap.example.com ldaptls=1 ldapprefix="uid=" ldapsuffix=",ou=People,dc=example,dc=com"
-```
-
-Specify a non-default port with the `ldapport` parameter. In this example, the authentication method includes the `ldaptls` parameter and the `ldapport` parameter to specify the port 550.
-
-```
-ldap ldapserver=ldap.example.com ldaptls=1 ldapport=550 ldapprefix="uid=" ldapsuffix=",ou=People,dc=example,dc=com"
-```
-
-## Enabling LDAP Authentication with a Secure Connection and TLS/SSL
-
-To enable a secure connection with TLS/SSL, add `ldaps://` as the prefix to the LDAP server name specified in the `ldapserver` parameter. The default port is 636.
-
-This example `ldapserver` parameter specifies a secure connection and the TLS/SSL protocol for the LDAP server `ldap.example.com`.
-
-```
-ldapserver=ldaps://ldap.example.com
-```
-
-To specify a non-default port, add a colon \(:\) and the port number after the LDAP server name. This example `ldapserver` parameter includes the `ldaps://` prefix and the non-default port 550.
-
-```
-ldapserver=ldaps://ldap.example.com:550
-```
-
-### Notes
-
-HAWQ logs an error if either of the following is specified in a pg\_hba.conf file entry:
-
--   Both the `ldaps://` prefix and the `ldaptls=1` parameter.
--   Both the `ldaps://` prefix and the `ldapport` parameter.
-
-Enabling encrypted communication for LDAP authentication only encrypts the communication between HAWQ and the LDAP server.
-
-## Configuring Authentication with a System-wide OpenLDAP System
-
-If you have a system-wide OpenLDAP system and logins are configured to use LDAP with TLS or SSL in the pg_hba.conf file, logins may fail with the following message:
-
-```shell
-could not start LDAP TLS session: error code '-11'
-```
-
-To use an existing OpenLDAP system for authentication, HAWQ must be set up to use the LDAP server's CA certificate to validate user certificates. Follow these steps on both the master and standby hosts to configure HAWQ:
-
-1. Copy the base64-encoded root CA chain file from the Active Directory or LDAP server to
-the HAWQ master and standby master hosts. This example uses the directory `/etc/pki/tls/certs`.
-
-2. Change to the directory where you copied the CA certificate file and, as the root user, generate the hash for OpenLDAP:
-
-    ```
-    # cd /etc/pki/tls/certs
-    # openssl x509 -noout -hash -in <ca-certificate-file>
-    # ln -s <ca-certificate-file> <ca-certificate-file>.0
-    ```
-
-3. Configure an OpenLDAP configuration file for HAWQ with the CA certificate directory and certificate file specified.
-
-    As the root user, edit the OpenLDAP configuration file `/etc/openldap/ldap.conf`:
-
-    ```
-    SASL_NOCANON on
-    URI ldaps://ldapA.example.priv ldaps://ldapB.example.priv ldaps://ldapC.example.priv
-    BASE dc=example,dc=priv
-    TLS_CACERTDIR /etc/pki/tls/certs
-    TLS_CACERT /etc/pki/tls/certs/<ca-certificate-file>
-    ```
-
-    **Note**: For certificate validation to succeed, the hostname in the certificate must match a hostname in the URI property. Otherwise, you must also add `TLS_REQCERT allow` to the file.
-
-4. As the gpadmin user, edit `/usr/local/hawq/greenplum_path.sh` and add the following line.
-
-    ```bash
-    export LDAPCONF=/etc/openldap/ldap.conf
-    ```
-
-## Examples
-
-These are example entries from a pg\_hba.conf file.
-
-This example specifies LDAP authentication with no encryption between HAWQ and the LDAP server.
-
-```
-host all plainuser 0.0.0.0/0 ldap ldapserver=ldap.example.com ldapprefix="uid=" ldapsuffix=",ou=People,dc=example,dc=com"
-```
-
-This example specifies LDAP authentication with the STARTTLS and TLS protocol between HAWQ and the LDAP server.
-
-```
-host all tlsuser 0.0.0.0/0 ldap ldapserver=ldap.example.com ldaptls=1 ldapprefix="uid=" ldapsuffix=",ou=People,dc=example,dc=com"
-```
-
-This example specifies LDAP authentication with a secure connection and TLS/SSL protocol between HAWQ and the LDAP server.
-
-```
-host all ldapsuser 0.0.0.0/0 ldap ldapserver=ldaps://ldap.example.com ldapprefix="uid=" ldapsuffix=",ou=People,dc=example,dc=com"
-```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/roles_privs.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/roles_privs.html.md.erb b/clientaccess/roles_privs.html.md.erb
deleted file mode 100644
index 4bdf3ee..0000000
--- a/clientaccess/roles_privs.html.md.erb
+++ /dev/null
@@ -1,285 +0,0 @@
----
-title: Managing Roles and Privileges
----
-
-The HAWQ authorization mechanism stores roles and permissions to access database objects in the database and is administered using SQL statements or command-line utilities.
-
-HAWQ manages database access permissions using *roles*. The concept of roles subsumes the concepts of *users* and *groups*. A role can be a database user, a group, or both. Roles can own database objects \(for example, tables\) and can assign privileges on those objects to other roles to control access to the objects. Roles can be members of other roles, thus a member role can inherit the object privileges of its parent role.
-
-Every HAWQ system contains a set of database roles \(users and groups\). Those roles are separate from the users and groups managed by the operating system on which the server runs. However, for convenience you may want to maintain a relationship between operating system user names and HAWQ role names, since many of the client applications use the current operating system user name as the default.
-
-In HAWQ, users log in and connect through the master instance, which then verifies their role and access privileges. The master then issues commands to the segment instances behind the scenes as the currently logged in role.
-
-Roles are defined at the system level, meaning they are valid for all databases in the system.
-
-In order to bootstrap the HAWQ system, a freshly initialized system always contains one predefined *superuser* role \(also referred to as the system user\). This role will have the same name as the operating system user that initialized the HAWQ system. Customarily, this role is named `gpadmin`. In order to create more roles you first have to connect as this initial role.
-
-## <a id="topic2"></a>Security Best Practices for Roles and Privileges 
-
--   **Secure the gpadmin system user.** HAWQ requires a UNIX user id to install and initialize the HAWQ system. This system user is referred to as `gpadmin` in the HAWQ documentation. This `gpadmin` user is the default database superuser in HAWQ, as well as the file system owner of the HAWQ installation and its underlying data files. This default administrator account is fundamental to the design of HAWQ. The system cannot run without it, and there is no way to limit the access of this gpadmin user id. Use roles to manage who has access to the database for specific purposes. You should only use the `gpadmin` account for system maintenance tasks such as expansion and upgrade. Anyone who logs on to a HAWQ host as this user id can read, alter or delete any data; specifically system catalog data and database access rights. Therefore, it is very important to secure the gpadmin user id and only provide access to essential system administrators. Administrators should only log in to HAWQ as
  `gpadmin` when performing certain system maintenance tasks \(such as upgrade or expansion\). Database users should never log on as `gpadmin`, and ETL or production workloads should never run as `gpadmin`.
--   **Assign a distinct role to each user that logs in.** For logging and auditing purposes, each user that is allowed to log in to HAWQ should be given their own database role. For applications or web services, consider creating a distinct role for each application or service. See [Creating New Roles \(Users\)](#topic3).
--   **Use groups to manage access privileges.** See [Role Membership](#topic5).
--   **Limit users who have the SUPERUSER role attribute.** Roles that are superusers bypass all access privilege checks in HAWQ, as well as resource queuing. Only system administrators should be given superuser rights. See [Altering Role Attributes](#topic4).
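-
-    As a sketch \(the role name `jsmith` here is hypothetical\), a superuser can remove the attribute from an existing role with `ALTER ROLE`:
-
-    ``` sql
-    -- "jsmith" is a hypothetical role name
-    =# ALTER ROLE jsmith NOSUPERUSER;
-    ```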
-
-## <a id="topic3"></a>Creating New Roles \(Users\) 
-
-A user-level role is considered to be a database role that can log in to the database and initiate a database session. Therefore, when you create a new user-level role using the `CREATE ROLE` command, you must specify the `LOGIN` privilege. For example:
-
-``` sql
-=# CREATE ROLE jsmith WITH LOGIN;
-```
-
-A database role may have a number of attributes that define what sort of tasks that role can perform in the database. You can set these attributes when you create the role, or later using the `ALTER ROLE` command. See [Table 1](#iq139556) for a description of the role attributes you can set.
-
-### <a id="topic4"></a>Altering Role Attributes 
-
-A database role may have a number of attributes that define what sort of tasks that role can perform in the database.
-
-<a id="iq139556"></a>
-
-|Attributes|Description|
-|----------|-----------|
-|SUPERUSER &#124; NOSUPERUSER|Determines if the role is a superuser. You must yourself be a superuser to create a new superuser. NOSUPERUSER is the default.|
-|CREATEDB &#124; NOCREATEDB|Determines if the role is allowed to create databases. NOCREATEDB is the default.|
-|CREATEROLE &#124; NOCREATEROLE|Determines if the role is allowed to create and manage other roles. NOCREATEROLE is the default.|
-|INHERIT &#124; NOINHERIT|Determines whether a role inherits the privileges of roles it is a member of. A role with the INHERIT attribute can automatically use whatever database privileges have been granted to all roles it is directly or indirectly a member of. INHERIT is the default.|
-|LOGIN &#124; NOLOGIN|Determines whether a role is allowed to log in. A role having the LOGIN attribute can be thought of as a user. Roles without this attribute are useful for managing database privileges \(groups\). NOLOGIN is the default.|
-|CONNECTION LIMIT *connlimit*|If role can log in, this specifies how many concurrent connections the role can make. -1 \(the default\) means no limit.|
-|PASSWORD '*password*'|Sets the role's password. If you do not plan to use password authentication you can omit this option. If no password is specified, the password will be set to null and password authentication will always fail for that user. A null password can optionally be written explicitly as PASSWORD NULL.|
-|ENCRYPTED &#124; UNENCRYPTED|Controls whether the password is stored encrypted in the system catalogs. The default behavior is determined by the configuration parameter `password_encryption` \(currently set to `md5`; for SHA-256 encryption, change this setting to `password`\). If the presented password string is already in encrypted format, then it is stored encrypted as-is, regardless of whether ENCRYPTED or UNENCRYPTED is specified \(since the system cannot decrypt the specified encrypted password string\). This allows reloading of encrypted passwords during dump/restore.|
-|VALID UNTIL '*timestamp*'|Sets a date and time after which the role's password is no longer valid. If omitted the password will be valid for all time.|
-|RESOURCE QUEUE *queue\_name*|Assigns the role to the named resource queue for workload management. Any statement that role issues is then subject to the resource queue's limits. Note that the RESOURCE QUEUE attribute is not inherited; it must be set on each user-level \(LOGIN\) role.|
-|DENY \{deny\_interval &#124; deny\_point\}|Restricts access during an interval, specified by day or day and time. For more information see [Time-based Authentication](#topic13).|
-
-You can set these attributes when you create the role, or later using the `ALTER ROLE` command. For example:
-
-``` sql
-=# ALTER ROLE jsmith WITH PASSWORD 'passwd123';
-=# ALTER ROLE admin VALID UNTIL 'infinity';
-=# ALTER ROLE jsmith LOGIN;
-=# ALTER ROLE jsmith RESOURCE QUEUE adhoc;
-=# ALTER ROLE jsmith DENY DAY 'Sunday';
-```
-
-## <a id="topic5"></a>Role Membership 
-
-It is frequently convenient to group users together to ease management of object privileges: that way, privileges can be granted to, or revoked from, a group as a whole. In HAWQ this is done by creating a role that represents the group, and then granting membership in the group role to individual user roles.
-
-Use the `CREATE ROLE` SQL command to create a new group role. For example:
-
-``` sql
-=# CREATE ROLE admin CREATEROLE CREATEDB;
-```
-
-Once the group role exists, you can add and remove members \(user roles\) using the `GRANT` and `REVOKE` commands. For example:
-
-``` sql
-=# GRANT admin TO john, sally;
-=# REVOKE admin FROM bob;
-```
-
-For managing object privileges, you would then grant the appropriate permissions to the group-level role only \(see [Table 2](#iq139925)\). The member user roles then inherit the object privileges of the group role. For example:
-
-``` sql
-=# GRANT ALL ON TABLE mytable TO admin;
-=# GRANT ALL ON SCHEMA myschema TO admin;
-=# GRANT ALL ON DATABASE mydb TO admin;
-```
-
-The role attributes `LOGIN`, `SUPERUSER`, `CREATEDB`, and `CREATEROLE` are never inherited as ordinary privileges on database objects are. User members must actually `SET ROLE` to a specific role having one of these attributes in order to make use of the attribute. In the above example, we gave `CREATEDB` and `CREATEROLE` to the `admin` role. If `sally` is a member of `admin`, she could issue the following command to assume the role attributes of the parent role:
-
-``` sql
-=> SET ROLE admin;
-```
-
-## <a id="topic6"></a>Managing Object Privileges 
-
-When an object \(table, view, sequence, database, function, language, schema, or tablespace\) is created, it is assigned an owner. The owner is normally the role that executed the creation statement. For most kinds of objects, the initial state is that only the owner \(or a superuser\) can do anything with the object. To allow other roles to use it, privileges must be granted. HAWQ supports the following privileges for each object type:
-
-<a id="iq139925"></a>
-
-|Object Type|Privileges|
-|-----------|----------|
-|Tables, Views, Sequences|SELECT <br/> INSERT <br/> RULE <br/> ALL|
-|External Tables|SELECT <br/> RULE <br/> ALL|
-|Databases|CONNECT<br/>CREATE<br/>TEMPORARY &#124; TEMP <br/> ALL|
-|Functions|EXECUTE|
-|Procedural Languages|USAGE|
-|Schemas|CREATE <br/> USAGE <br/> ALL|
-|Custom Protocol|SELECT <br/> INSERT <br/> RULE <br/> ALL|
-
-**Note:** Privileges must be granted for each object individually. For example, granting ALL on a database does not grant full access to the objects within that database. It only grants all of the database-level privileges \(CONNECT, CREATE, TEMPORARY\) to the database itself.
-
-Use the `GRANT` SQL command to give a specified role privileges on an object. For example:
-
-``` sql
-=# GRANT INSERT ON mytable TO jsmith;
-```
-
-To revoke privileges, use the `REVOKE` command. For example:
-
-``` sql
-=# REVOKE ALL PRIVILEGES ON mytable FROM jsmith;
-```
-
-You can also use the `DROP OWNED` and `REASSIGN OWNED` commands for managing objects owned by deprecated roles \(Note: only an object's owner or a superuser can drop an object or reassign ownership\). For example:
-
-``` sql
-=# REASSIGN OWNED BY sally TO bob;
-=# DROP OWNED BY visitor;
-```
-
-### <a id="topic7"></a>Simulating Row and Column Level Access Control 
-
-Row-level or column-level access is not supported, nor is labeled security. Row-level and column-level access can be simulated using views to restrict the columns and/or rows that are selected. Row-level labels can be simulated by adding an extra column to the table to store sensitivity information, and then using views to control row-level access based on this column. Roles can then be granted access to the views rather than the base table.
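-
-The following minimal sketch \(the table, view, column, and role names are all hypothetical\) restricts both columns and rows by granting access to a view instead of the base table:
-
-``` sql
--- "customer", "customer_public", "sensitivity", and "analyst" are hypothetical names
-=# CREATE TABLE customer (id int, name text, ssn text, sensitivity text);
-=# CREATE VIEW customer_public AS SELECT id, name FROM customer WHERE sensitivity = 'low';
-=# GRANT SELECT ON customer_public TO analyst;
-```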
-
-## <a id="topic8"></a>Encrypting Data 
-
-PostgreSQL provides an optional package of encryption/decryption functions called `pgcrypto`, which can also be installed and used in HAWQ. The `pgcrypto` package is not installed by default with HAWQ. However, you can download a `pgcrypto` package from [Pivotal Network](https://network.pivotal.io). 
-
-If you are building HAWQ from source files, then you should enable `pgcrypto` support as an option when compiling HAWQ.
-
-The `pgcrypto` functions allow database administrators to store certain columns of data in encrypted form. This adds an extra layer of protection for sensitive data, as data stored in HAWQ in encrypted form cannot be read by users who do not have the encryption key, nor be read directly from the disks.
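-
-For example, assuming the `pgcrypto` package is installed, its symmetric-key functions can encrypt a value when it is stored and decrypt it when it is read \(the table name and passphrase below are hypothetical\):
-
-``` sql
--- "secrets" is a hypothetical table; 'mypassphrase' is a hypothetical key
-=# CREATE TABLE secrets (id int, cardnum bytea);
-=# INSERT INTO secrets VALUES (1, pgp_sym_encrypt('4111-1111-1111-1111', 'mypassphrase'));
-=# SELECT id, pgp_sym_decrypt(cardnum, 'mypassphrase') FROM secrets;
-```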
-
-**Note:** The `pgcrypto` functions run inside the database server, which means that all the data and passwords move between `pgcrypto` and the client application in clear-text. For optimal security, consider also using SSL connections between the client and the HAWQ master server.
-
-## <a id="topic9"></a>Encrypting Passwords 
-
-This technical note outlines how to use a server parameter to implement SHA-256 encrypted password storage. Note that in order to use SHA-256 encryption for storage, the client authentication method must be set to `password` rather than the default, `MD5`. \(See [Encrypting Client/Server Connections](client_auth.html) for more details.\) This means that the password is transmitted in clear text over the network; to avoid this, set up SSL to encrypt the client server communication channel.
-
-### <a id="topic10"></a>Enabling SHA-256 Encryption 
-
-You can set your chosen encryption method system-wide or on a per-session basis. There are three encryption methods available: `SHA-256`, `SHA-256-FIPS`, and `MD5` \(for backward compatibility\). The `SHA-256-FIPS` method requires that FIPS compliant libraries are used.
-
-#### <a id="topic11"></a>System-wide 
-
-You will perform different procedures to set the encryption method (`password_hash_algorithm` server parameter) system-wide depending upon whether you manage your cluster from the command line or use Ambari. If you use Ambari to manage your HAWQ cluster, you must ensure that you update encryption method configuration parameters only via the Ambari Web UI. If you manage your HAWQ cluster from the command line, you will use the `hawq config` command line utility to set encryption method configuration parameters.
-
-If you use Ambari to manage your HAWQ cluster:
-
-1. Set the `password_hash_algorithm` configuration property via the HAWQ service **Configs > Advanced > Custom hawq-site** drop down. Valid values include `SHA-256` \(or `SHA-256-FIPS` to use the FIPS-compliant libraries for SHA-256\).
-2. Select **Service Actions > Restart All** to load the updated configuration.
-
-If you manage your HAWQ cluster from the command line:
-
-1.  Log in to the HAWQ master host as a HAWQ administrator and source the file `/usr/local/hawq/greenplum_path.sh`.
-
-    ``` shell
-    $ source /usr/local/hawq/greenplum_path.sh
-    ```
-
-2. Use the `hawq config` utility to set `password_hash_algorithm` to `SHA-256` \(or `SHA-256-FIPS` to use the FIPS-compliant libraries for SHA-256\):
-
-    ``` shell
-    $ hawq config -c password_hash_algorithm -v 'SHA-256'
-    ```
-        
-    Or:
-        
-    ``` shell
-    $ hawq config -c password_hash_algorithm -v 'SHA-256-FIPS'
-    ```
-
-3. Reload the HAWQ configuration:
-
-    ``` shell
-    $ hawq stop cluster -u
-    ```
-
-4.  Verify the setting:
-
-    ``` bash
-    $ hawq config -s password_hash_algorithm
-    ```
-
-#### <a id="topic12"></a>Individual Session 
-
-To set the `password_hash_algorithm` server parameter for an individual database session:
-
-1.  Log in to your HAWQ instance as a superuser.
-2.  Set the `password_hash_algorithm` to `SHA-256` \(or `SHA-256-FIPS` to use the FIPS-compliant libraries for SHA-256\):
-
-    ``` sql
-    =# SET password_hash_algorithm = 'SHA-256';
-    SET
-    ```
-
-    or:
-
-    ``` sql
-    =# SET password_hash_algorithm = 'SHA-256-FIPS';
-    SET
-    ```
-
-3.  Verify the setting:
-
-    ``` sql
-    =# SHOW password_hash_algorithm;
-    password_hash_algorithm
-    ```
-
-    You will see:
-
-    ```
-    SHA-256
-    ```
-
-    or:
-
-    ```
-    SHA-256-FIPS
-    ```
-
-    **Example**
-
-    Following is an example of how the new setting works:
-
-4.  Log in as a superuser and verify the password hash algorithm setting:
-
-    ``` sql
-    =# SHOW password_hash_algorithm;
-    password_hash_algorithm
-    -------------------------------
-    SHA-256-FIPS
-    ```
-
-5.  Create a new role that has a password and login privileges.
-
-    ``` sql
-    =# CREATE ROLE testdb WITH PASSWORD 'testdb12345#' LOGIN;
-    ```
-
-6.  Change the client authentication method to allow for storage of SHA-256 encrypted passwords:
-
-    Open the `pg_hba.conf` file on the master and add the following line:
-
-    ```
-    host all testdb 0.0.0.0/0 password
-    ```
-
-7.  Restart the cluster.
-8.  Log in to the database as the user you just created, `testdb`.
-
-    ``` bash
-    $ psql -U testdb
-    ```
-
-9.  Enter the correct password at the prompt.
-10. Verify that the password is stored as a SHA-256 hash.
-
-    Note that password hashes are stored in `pg_authid.rolpassword`.
-
-    1.  Log in as a superuser.
-    2.  Execute the following:
-
-        ``` sql
-        =# SELECT rolpassword FROM pg_authid WHERE rolname = 'testdb';
-        rolpassword
-        -----------
-        sha256<64 hexadecimal characters>
-        ```
-
-
-## <a id="topic13"></a>Time-based Authentication 
-
-HAWQ enables the administrator to restrict access to certain times by role. Use the `CREATE ROLE` or `ALTER ROLE` commands to specify time-based constraints.
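-
-As a brief sketch \(the role name is hypothetical\), the `DENY` clause can block access on a given day or during a day-and-time interval:
-
-``` sql
--- "jsmith" is a hypothetical role name
-=# ALTER ROLE jsmith DENY DAY 'Sunday';
-=# ALTER ROLE jsmith DENY BETWEEN DAY 'Saturday' TIME '02:00' AND DAY 'Saturday' TIME '04:00';
-```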

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/BasicDataOperations.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/BasicDataOperations.html.md.erb b/datamgmt/BasicDataOperations.html.md.erb
deleted file mode 100644
index 66328c7..0000000
--- a/datamgmt/BasicDataOperations.html.md.erb
+++ /dev/null
@@ -1,64 +0,0 @@
----
-title: Basic Data Operations
----
-
-This topic describes basic data operations that you perform in HAWQ.
-
-## <a id="topic3"></a>Inserting Rows
-
-Use the `INSERT` command to create rows in a table. This command requires the table name and a value for each column in the table; you may optionally specify the column names in any order. If you do not specify column names, list the data values in the order of the columns in the table, separated by commas.
-
-For example, to specify the column names and the values to insert:
-
-``` sql
-INSERT INTO products (name, price, product_no) VALUES ('Cheese', 9.99, 1);
-```
-
-To specify only the values to insert:
-
-``` sql
-INSERT INTO products VALUES (1, 'Cheese', 9.99);
-```
-
-Usually, the data values are literals (constants), but you can also use scalar expressions. For example:
-
-``` sql
-INSERT INTO films SELECT * FROM tmp_films WHERE date_prod < '2004-05-07';
-```
-
-You can insert multiple rows in a single command. For example:
-
-``` sql
-INSERT INTO products (product_no, name, price) VALUES
-    (1, 'Cheese', 9.99),
-    (2, 'Bread', 1.99),
-    (3, 'Milk', 2.99);
-```
-
-To insert data into a partitioned table, you specify the root partitioned table, the table created with the `CREATE TABLE` command. You also can specify a leaf child table of the partitioned table in an `INSERT` command. An error is returned if the data is not valid for the specified leaf child table. Specifying a child table that is not a leaf child table in the `INSERT` command is not supported.
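-
-For example (the table names below are hypothetical), you can insert through the root partitioned table or directly into one of its leaf child tables:
-
-``` sql
--- "sales" and its leaf child table "sales_1_prt_jan17" are hypothetical names
-INSERT INTO sales VALUES (1, '2017-01-05', 29.99);
-INSERT INTO sales_1_prt_jan17 VALUES (2, '2017-01-12', 14.50);
-```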
-
-To insert large amounts of data, use external tables or the `COPY` command. These load mechanisms are more efficient than `INSERT` for inserting large quantities of rows. See [Loading and Unloading Data](load/g-loading-and-unloading-data.html#topic1) for more information about bulk data loading.
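-
-As a minimal sketch (the file path is hypothetical), `COPY` loads many rows from a file on the HAWQ master host in a single command:
-
-``` sql
--- '/data/products.csv' is a hypothetical path on the master host
-COPY products FROM '/data/products.csv' WITH DELIMITER ',';
-```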
-
-## <a id="topic9"></a>Vacuuming the System Catalog Tables
-
-Only HAWQ system catalog tables use multiversion concurrency control (MVCC). Deleted or updated data rows in the catalog tables occupy physical space on disk even though new transactions cannot see them. Periodically running the `VACUUM` command removes these expired rows.
-
-The `VACUUM` command also collects table-level statistics such as the number of rows and pages.
-
-For example:
-
-``` sql
-VACUUM pg_class;
-```
-
-### <a id="topic10"></a>Configuring the Free Space Map
-
-Expired rows are held in the *free space map*. The free space map must be sized large enough to hold all expired rows in your database. If not, a regular `VACUUM` command cannot reclaim space occupied by expired rows that overflow the free space map.
-
-**Note:** `VACUUM FULL` is not recommended with HAWQ because it is not safe for large tables and may take an unacceptably long time to complete. See [VACUUM](../reference/sql/VACUUM.html#topic1).
-
-Size the free space map with the following server configuration parameters (see the example after this list):
-
--   `max_fsm_pages`
--   `max_fsm_relations`
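-
-For example, you can check the current values from `psql` with `SHOW`; changing them requires updating the server configuration rather than the session:
-
-``` sql
--- Inspect the current free space map sizing
-SHOW max_fsm_pages;
-SHOW max_fsm_relations;
-```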

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/ConcurrencyControl.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/ConcurrencyControl.html.md.erb b/datamgmt/ConcurrencyControl.html.md.erb
deleted file mode 100644
index 2ced135..0000000
--- a/datamgmt/ConcurrencyControl.html.md.erb
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: Concurrency Control
----
-
-This topic discusses the mechanisms used in HAWQ to provide concurrency control.
-
-HAWQ and PostgreSQL do not use locks for concurrency control. They maintain data consistency using a multiversion model, Multiversion Concurrency Control (MVCC). MVCC achieves transaction isolation for each database session, and each query transaction sees a snapshot of data. This ensures the transaction sees consistent data that is not affected by other concurrent transactions.
-
-Because MVCC does not use explicit locks for concurrency control, lock contention is minimized and HAWQ maintains reasonable performance in multiuser environments. Locks acquired for querying (reading) data do not conflict with locks acquired for writing data.
-
-HAWQ provides multiple lock modes to control concurrent access to data in tables. Most HAWQ SQL commands automatically acquire the appropriate locks to ensure that referenced tables are not dropped or modified in incompatible ways while a command executes. For applications that cannot adapt easily to MVCC behavior, you can use the `LOCK` command to acquire explicit locks. However, proper use of MVCC generally provides better performance.
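-
-As a brief sketch (the table name is hypothetical), an explicit lock is acquired inside a transaction and held until the transaction commits or rolls back:
-
-``` sql
--- "mytable" is a hypothetical table name
-BEGIN;
-LOCK TABLE mytable IN ACCESS EXCLUSIVE MODE;
--- ... statements that require exclusive access to mytable ...
-COMMIT;
-```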
-
-<caption><span class="tablecap">Table 1. Lock Modes in HAWQ</span></caption>
-
-<a id="topic_f5l_qnh_kr__ix140861"></a>
-
-| Lock Mode              | Associated SQL Commands                                                             | Conflicts With                                                                                                          |
-|------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|
-| ACCESS SHARE           | `SELECT`                                                                            | ACCESS EXCLUSIVE                                                                                                        |
-| ROW EXCLUSIVE          | `INSERT`, `COPY`                                                                    | SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE                                                                 |
-| SHARE UPDATE EXCLUSIVE | `VACUUM` (without `FULL`), `ANALYZE`                                                | SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE                                         |
-| SHARE                  | `CREATE INDEX`                                                                      | ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE                                 |
-| SHARE ROW EXCLUSIVE    |                                                                                     | ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE                          |
-| ACCESS EXCLUSIVE       | `ALTER TABLE`, `DROP TABLE`, `TRUNCATE`, `REINDEX`, `CLUSTER`, `VACUUM FULL`        | ACCESS SHARE, ROW SHARE, ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE |


[35/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/svg/hawq_architecture_components.svg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/svg/hawq_architecture_components.svg b/markdown/mdimages/svg/hawq_architecture_components.svg
new file mode 100644
index 0000000..78d421a
--- /dev/null
+++ b/markdown/mdimages/svg/hawq_architecture_components.svg
@@ -0,0 +1,1083 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<svg
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
+   xmlns:cc="http://creativecommons.org/ns#"
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   version="1.1"
+   viewBox="0 0 960 720"
+   stroke-miterlimit="10"
+   id="svg3984"
+   inkscape:version="0.91 r13725"
+   sodipodi:docname="hawq_architecture_components.svg"
+   width="960"
+   height="720"
+   style="fill:none;stroke:none;stroke-linecap:square;stroke-miterlimit:10"
+   inkscape:export-filename="/Users/stymon/workspace/docs-apache-hawq/hawq/images/hawq_architecture_components.png"
+   inkscape:export-xdpi="92.099998"
+   inkscape:export-ydpi="92.099998">
+  <metadata
+     id="metadata4339">
+    <rdf:RDF>
+      <cc:Work
+         rdf:about="">
+        <dc:format>image/svg+xml</dc:format>
+        <dc:type
+           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+        <dc:title />
+      </cc:Work>
+    </rdf:RDF>
+  </metadata>
+  <defs
+     id="defs4337">
+    <marker
+       inkscape:isstock="true"
+       style="overflow:visible;"
+       id="marker10787"
+       refX="0.0"
+       refY="0.0"
+       orient="auto"
+       inkscape:stockid="Arrow1Lend">
+      <path
+         transform="scale(0.8) rotate(180) translate(12.5,0)"
+         style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         id="path10789" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow1Lstart"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="marker10121"
+       style="overflow:visible"
+       inkscape:isstock="true">
+      <path
+         id="path10123"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
+         transform="scale(0.8) translate(12.5,0)" />
+    </marker>
+    <marker
+       inkscape:isstock="true"
+       style="overflow:visible"
+       id="marker9953"
+       refX="0.0"
+       refY="0.0"
+       orient="auto"
+       inkscape:stockid="Arrow1Lstart">
+      <path
+         transform="scale(0.8) translate(12.5,0)"
+         style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         id="path9955" />
+    </marker>
+    <marker
+       inkscape:isstock="true"
+       style="overflow:visible"
+       id="marker9779"
+       refX="0.0"
+       refY="0.0"
+       orient="auto"
+       inkscape:stockid="Arrow1Lstart">
+      <path
+         transform="scale(0.8) translate(12.5,0)"
+         style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         id="path9781" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow1Lstart"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="marker9605"
+       style="overflow:visible"
+       inkscape:isstock="true">
+      <path
+         id="path9607"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
+         transform="scale(0.8) translate(12.5,0)" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow1Lend"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="marker4821"
+       style="overflow:visible;"
+       inkscape:isstock="true">
+      <path
+         id="path4823"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
+         transform="scale(0.8) rotate(180) translate(12.5,0)" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow1Lend"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="Arrow1Lend"
+       style="overflow:visible;"
+       inkscape:isstock="true"
+       inkscape:collect="always">
+      <path
+         id="path4522"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
+         transform="scale(0.8) rotate(180) translate(12.5,0)" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow1Lstart"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="Arrow1Lstart"
+       style="overflow:visible"
+       inkscape:isstock="true"
+       inkscape:collect="always">
+      <path
+         id="path4519"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
+         transform="scale(0.8) translate(12.5,0)" />
+    </marker>
+  </defs>
+  <sodipodi:namedview
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1"
+     objecttolerance="10"
+     gridtolerance="10"
+     guidetolerance="10"
+     inkscape:pageopacity="0"
+     inkscape:pageshadow="2"
+     inkscape:window-width="1264"
+     inkscape:window-height="851"
+     id="namedview4335"
+     showgrid="false"
+     inkscape:zoom="0.88611111"
+     inkscape:cx="387.77249"
+     inkscape:cy="319.83874"
+     inkscape:window-x="221"
+     inkscape:window-y="172"
+     inkscape:window-maximized="0"
+     inkscape:current-layer="svg3984"
+     fit-margin-top="0"
+     fit-margin-left="0"
+     fit-margin-right="0"
+     fit-margin-bottom="0"
+     inkscape:snap-global="false" />
+  <clipPath
+     id="p.0">
+    <path
+       d="M 0,0 960,0 960,720 0,720 0,0 Z"
+       id="path3987"
+       inkscape:connector-curvature="0"
+       style="clip-rule:nonzero" />
+  </clipPath>
+  <g
+     clip-path="url(#p.0)"
+     id="g3989">
+    <path
+       d="m 0,0 960,0 0,720 -960,0 z"
+       id="path3991"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="m 602.9554,414.2446 0,0 c 0,-13.48706 10.93341,-24.42044 24.42041,-24.42044 l 97.67883,0 0,0 c 6.47668,0 12.68817,2.57285 17.26788,7.15256 4.57972,4.57974 7.15259,10.79117 7.15259,17.26788 l 0,145.33237 c 0,13.487 -10.93341,24.42041 -24.42047,24.42041 l -97.67883,0 c -13.487,0 -24.42041,-10.93341 -24.42041,-24.42041 z"
+       id="path3993"
+       inkscape:connector-curvature="0"
+       style="fill:#3d85c6;fill-rule:nonzero" />
+    <path
+       d="m 602.9554,414.2446 0,0 c 0,-13.48706 10.93341,-24.42044 24.42041,-24.42044 l 97.67883,0 0,0 c 6.47668,0 12.68817,2.57285 17.26788,7.15256 4.57972,4.57974 7.15259,10.79117 7.15259,17.26788 l 0,145.33237 c 0,13.487 -10.93341,24.42041 -24.42047,24.42041 l -97.67883,0 c -13.487,0 -24.42041,-10.93341 -24.42041,-24.42041 z"
+       id="path3995"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 370.95538,414.2446 0,0 c 0,-13.48706 10.93341,-24.42044 24.42044,-24.42044 l 97.6788,0 0,0 c 6.47672,0 12.68814,2.57285 17.26785,7.15256 4.57975,4.57974 7.15262,10.79117 7.15262,17.26788 l 0,145.33237 c 0,13.487 -10.93344,24.42041 -24.42047,24.42041 l -97.6788,0 c -13.48703,0 -24.42044,-10.93341 -24.42044,-24.42041 z"
+       id="path3997"
+       inkscape:connector-curvature="0"
+       style="fill:#3d85c6;fill-rule:nonzero" />
+    <path
+       d="m 370.95538,414.2446 0,0 c 0,-13.48706 10.93341,-24.42044 24.42044,-24.42044 l 97.6788,0 0,0 c 6.47672,0 12.68814,2.57285 17.26785,7.15256 4.57975,4.57974 7.15262,10.79117 7.15262,17.26788 l 0,145.33237 c 0,13.487 -10.93344,24.42041 -24.42047,24.42041 l -97.6788,0 c -13.48703,0 -24.42044,-10.93341 -24.42044,-24.42041 z"
+       id="path3999"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 130.95538,414.2655 0,0 c 0,-13.48703 10.93339,-24.42044 24.42044,-24.42044 l 97.6788,0 0,0 c 6.47671,0 12.68814,2.57285 17.26785,7.15259 4.57975,4.57971 7.15259,10.79114 7.15259,17.26785 l 0,145.33234 c 0,13.48706 -10.93341,24.42047 -24.42044,24.42047 l -97.6788,0 c -13.48704,0 -24.42044,-10.93341 -24.42044,-24.42047 z"
+       id="path4001"
+       inkscape:connector-curvature="0"
+       style="fill:#3d85c6;fill-rule:nonzero" />
+    <path
+       d="m 130.95538,414.2655 0,0 c 0,-13.48703 10.93339,-24.42044 24.42044,-24.42044 l 97.6788,0 0,0 c 6.47671,0 12.68814,2.57285 17.26785,7.15259 4.57975,4.57971 7.15259,10.79114 7.15259,17.26785 l 0,145.33234 c 0,13.48706 -10.93341,24.42047 -24.42044,24.42047 l -97.6788,0 c -13.48704,0 -24.42044,-10.93341 -24.42044,-24.42047 z"
+       id="path4003"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 260,162.79868 0,0 c 0,-21.20758 17.19214,-38.39973 38.39972,-38.39973 l 275.98798,0 c 10.1842,0 19.95136,4.04568 27.15271,11.24702 7.20135,7.20134 11.24701,16.96846 11.24701,27.15271 l 0,153.59427 c 0,21.20758 -17.19214,38.39972 -38.39972,38.39972 l -275.98798,0 C 277.19214,354.79267 260,337.60053 260,316.39295 Z"
+       id="path4005"
+       inkscape:connector-curvature="0"
+       style="fill:#efefef;fill-rule:nonzero" />
+    <path
+       d="m 260,162.79868 0,0 c 0,-21.20758 17.19214,-38.39973 38.39972,-38.39973 l 275.98798,0 c 10.1842,0 19.95136,4.04568 27.15271,11.24702 7.20135,7.20134 11.24701,16.96846 11.24701,27.15271 l 0,153.59427 c 0,21.20758 -17.19214,38.39972 -38.39972,38.39972 l -275.98798,0 C 277.19214,354.79267 260,337.60053 260,316.39295 Z"
+       id="path4007"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 555.3648,15.619595 0,0 c 0,-4.766269 3.86383,-8.630093 8.63013,-8.630093 l 129.25946,0 c 2.28888,0 4.48394,0.9092393 6.10241,2.5276961 1.61847,1.6184559 2.52771,3.8135539 2.52771,6.1023969 l 0,34.51934 c 0,4.76627 -3.86383,8.630093 -8.63012,8.630093 l -129.25946,0 c -4.7663,0 -8.63013,-3.863823 -8.63013,-8.630093 z"
+       id="path4009"
+       inkscape:connector-curvature="0"
+       style="fill:#efefef;fill-rule:nonzero" />
+    <path
+       d="m 555.3648,15.619595 0,0 c 0,-4.766269 3.86383,-8.630093 8.63013,-8.630093 l 129.25946,0 c 2.28888,0 4.48394,0.9092393 6.10241,2.5276961 1.61847,1.6184559 2.52771,3.8135539 2.52771,6.1023969 l 0,34.51934 c 0,4.76627 -3.86383,8.630093 -8.63012,8.630093 l -129.25946,0 c -4.7663,0 -8.63013,-3.863823 -8.63013,-8.630093 z"
+       id="path4011"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 608.76154,24.25364 1.60937,0.40625 q -0.51562,1.984375 -1.82812,3.03125 -1.3125,1.03125 -3.21875,1.03125 -1.96875,0 -3.20313,-0.796875 -1.21875,-0.796875 -1.875,-2.3125 -0.64062,-1.53125 -0.64062,-3.265625 0,-1.90625 0.71875,-3.3125 0.73437,-1.421875 2.07812,-2.15625 1.34375,-0.734375 2.95313,-0.734375 1.82812,0 3.0625,0.9375 1.25,0.921875 1.73437,2.609375 l -1.57812,0.375 q -0.42188,-1.328125 -1.23438,-1.9375 -0.79687,-0.609375 -2.01562,-0.609375 -1.40625,0 -2.35938,0.671875 -0.9375,0.671875 -1.32812,1.8125 -0.375,1.125 -0.375,2.328125 0,1.5625 0.45312,2.71875 0.45313,1.15625 1.40625,1.734375 0.96875,0.5625 2.07813,0.5625 1.34375,0 2.28125,-0.78125 0.9375,-0.78125 1.28125,-2.3125 z m 9.38885,3.171875 q -0.82812,0.71875 -1.60937,1.015625 -0.76563,0.28125 -1.64063,0.28125 -1.45312,0 -2.23437,-0.703125 -0.78125,-0.71875 -0.78125,-1.828125 0,-0.640625 0.29687,-1.171875 0.29688,-0.546875 0.76563,-0.859375 0.48437,-0.328125 1.07812,-0.5 0.45313,-0.109375 1.32813,-0.21875 1.81
 25,-0.21875 2.67187,-0.515625 0,-0.3125 0,-0.390625 0,-0.90625 -0.42187,-1.28125 -0.5625,-0.515625 -1.70313,-0.515625 -1.04687,0 -1.54687,0.375 -0.5,0.359375 -0.75,1.3125 l -1.45313,-0.203125 q 0.20313,-0.9375 0.65625,-1.515625 0.45313,-0.578125 1.3125,-0.890625 0.85938,-0.3125 2,-0.3125 1.14063,0 1.84375,0.265625 0.70313,0.265625 1.03125,0.671875 0.32813,0.40625 0.46875,1.015625 0.0781,0.375 0.0781,1.375 l 0,2 q 0,2.078125 0.0937,2.640625 0.0937,0.546875 0.375,1.046875 l -1.5625,0 q -0.23438,-0.46875 -0.29688,-1.09375 z m -0.125,-3.328125 q -0.8125,0.328125 -2.4375,0.5625 -0.92187,0.125 -1.3125,0.296875 -0.375,0.171875 -0.59375,0.5 -0.20312,0.3125 -0.20312,0.703125 0,0.59375 0.45312,1 0.45313,0.390625 1.32813,0.390625 0.85937,0 1.53125,-0.375 0.67187,-0.390625 1,-1.03125 0.23437,-0.515625 0.23437,-1.5 l 0,-0.546875 z m 7.27771,3.078125 0.20313,1.328125 q -0.625,0.125 -1.125,0.125 -0.8125,0 -1.26563,-0.25 -0.4375,-0.265625 -0.625,-0.671875 -0.1875,-0.421875 -0.1875,-1.765625 l 0,-5.
 078125 -1.09375,0 0,-1.15625 1.09375,0 0,-2.1875 1.48438,-0.890625 0,3.078125 1.51562,0 0,1.15625 -1.51562,0 0,5.15625 q 0,0.640625 0.0781,0.828125 0.0781,0.171875 0.25,0.28125 0.1875,0.109375 0.53125,0.109375 0.23438,0 0.65625,-0.0625 z m 7.29865,0.25 q -0.82813,0.71875 -1.60938,1.015625 -0.76562,0.28125 -1.64062,0.28125 -1.45313,0 -2.23438,-0.703125 -0.78125,-0.71875 -0.78125,-1.828125 0,-0.640625 0.29688,-1.171875 0.29687,-0.546875 0.76562,-0.859375 0.48438,-0.328125 1.07813,-0.5 0.45312,-0.109375 1.32812,-0.21875 1.8125,-0.21875 2.67188,-0.515625 0,-0.3125 0,-0.390625 0,-0.90625 -0.42188,-1.28125 -0.5625,-0.515625 -1.70312,-0.515625 -1.04688,0 -1.54688,0.375 -0.5,0.359375 -0.75,1.3125 l -1.45312,-0.203125 q 0.20312,-0.9375 0.65625,-1.515625 0.45312,-0.578125 1.3125,-0.890625 0.85937,-0.3125 2,-0.3125 1.14062,0 1.84375,0.265625 0.70312,0.265625 1.03125,0.671875 0.32812,0.40625 0.46875,1.015625 0.0781,0.375 0.0781,1.375 l 0,2 q 0,2.078125 0.0937,2.640625 0.0937,0.546875 0.375,1.04
 6875 l -1.5625,0 q -0.23437,-0.46875 -0.29687,-1.09375 z m -0.125,-3.328125 q -0.8125,0.328125 -2.4375,0.5625 -0.92188,0.125 -1.3125,0.296875 -0.375,0.171875 -0.59375,0.5 -0.20313,0.3125 -0.20313,0.703125 0,0.59375 0.45313,1 0.45312,0.390625 1.32812,0.390625 0.85938,0 1.53125,-0.375 0.67188,-0.390625 1,-1.03125 0.23438,-0.515625 0.23438,-1.5 l 0,-0.546875 z m 3.98083,4.421875 0,-12.171875 1.48438,0 0,12.171875 -1.48438,0 z m 3.31855,-4.40625 q 0,-2.453125 1.35937,-3.625 1.14063,-0.984375 2.78125,-0.984375 1.8125,0 2.96875,1.203125 1.15625,1.1875 1.15625,3.28125 0,1.703125 -0.51562,2.6875 -0.51563,0.96875 -1.48438,1.515625 -0.96875,0.53125 -2.125,0.53125 -1.85937,0 -3,-1.1875 -1.14062,-1.1875 -1.14062,-3.421875 z m 1.53125,0 q 0,1.6875 0.73437,2.53125 0.75,0.84375 1.875,0.84375 1.10938,0 1.84375,-0.84375 0.73438,-0.84375 0.73438,-2.578125 0,-1.640625 -0.75,-2.484375 -0.73438,-0.84375 -1.82813,-0.84375 -1.125,0 -1.875,0.84375 -0.73437,0.84375 -0.73437,2.53125 z m 8.38708,5.140625 1.45
 313,0.21875 q 0.0937,0.671875 0.51562,0.96875 0.54688,0.421875 1.51563,0.421875 1.03125,0 1.59375,-0.421875 0.5625,-0.40625 0.76562,-1.15625 0.125,-0.453125 0.10938,-1.921875 -0.98438,1.15625 -2.4375,1.15625 -1.8125,0 -2.8125,-1.3125 -1,-1.3125 -1,-3.140625 0,-1.265625 0.45312,-2.328125 0.46875,-1.078125 1.32813,-1.65625 0.875,-0.578125 2.03125,-0.578125 1.5625,0 2.57812,1.265625 l 0,-1.0625 1.375,0 0,7.609375 q 0,2.0625 -0.42187,2.921875 -0.40625,0.859375 -1.32813,1.359375 -0.90625,0.5 -2.23437,0.5 -1.57813,0 -2.54688,-0.71875 -0.96875,-0.703125 -0.9375,-2.125 z m 1.23438,-5.296875 q 0,1.734375 0.6875,2.53125 0.70312,0.796875 1.73437,0.796875 1.03125,0 1.71875,-0.796875 0.70313,-0.796875 0.70313,-2.484375 0,-1.625 -0.71875,-2.4375 -0.71875,-0.828125 -1.73438,-0.828125 -0.98437,0 -1.6875,0.8125 -0.70312,0.8125 -0.70312,2.40625 z"
+       id="path4013"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 600.5002,45.613014 1.51562,-0.140625 q 0.10938,0.921875 0.5,1.515625 0.39063,0.578125 1.21875,0.9375 0.84375,0.359375 1.875,0.359375 0.92188,0 1.625,-0.265625 0.70313,-0.28125 1.04688,-0.75 0.35937,-0.484375 0.35937,-1.046875 0,-0.578125 -0.34375,-1 -0.32812,-0.4375 -1.09375,-0.71875 -0.48437,-0.203125 -2.17187,-0.59375 -1.67188,-0.40625 -2.34375,-0.765625 -0.875,-0.453125 -1.29688,-1.125 -0.42187,-0.6875 -0.42187,-1.515625 0,-0.921875 0.51562,-1.71875 0.53125,-0.8125 1.53125,-1.21875 1,-0.421875 2.23438,-0.421875 1.34375,0 2.375,0.4375 1.04687,0.4375 1.59375,1.28125 0.5625,0.84375 0.59375,1.921875 l -1.53125,0.109375 q -0.125,-1.15625 -0.84375,-1.734375 -0.71875,-0.59375 -2.125,-0.59375 -1.45313,0 -2.125,0.53125 -0.67188,0.53125 -0.67188,1.296875 0,0.65625 0.46875,1.078125 0.46875,0.421875 2.42188,0.875 1.96875,0.4375 2.70312,0.765625 1.0625,0.484375 1.5625,1.234375 0.51563,0.75 0.51563,1.734375 0,0.96875 -0.5625,1.828125 -0.54688,0.859375 -1.59375,1.34375 -1.04688,0.46
 875 -2.34375,0.46875 -1.65625,0 -2.78125,-0.46875 -1.10938,-0.484375 -1.75,-1.453125 -0.625,-0.96875 -0.65625,-2.1875 z m 17.94836,1.0625 1.54688,0.203125 q -0.375,1.34375 -1.35938,2.09375 -0.98437,0.75 -2.51562,0.75 -1.9375,0 -3.07813,-1.1875 -1.125,-1.203125 -1.125,-3.34375 0,-2.234375 1.14063,-3.453125 1.14062,-1.234375 2.96875,-1.234375 1.78125,0 2.89062,1.203125 1.125,1.203125 1.125,3.390625 0,0.125 -0.0156,0.390625 l -6.5625,0 q 0.0781,1.453125 0.8125,2.234375 0.75,0.765625 1.84375,0.765625 0.82812,0 1.40625,-0.421875 0.57812,-0.4375 0.92187,-1.390625 z m -4.90625,-2.40625 4.92188,0 q -0.0937,-1.109375 -0.5625,-1.671875 -0.71875,-0.859375 -1.85938,-0.859375 -1.01562,0 -1.71875,0.6875 -0.70312,0.6875 -0.78125,1.84375 z m 8.49646,5.25 0,-8.8125 1.34375,0 0,1.328125 q 0.51563,-0.9375 0.9375,-1.234375 0.4375,-0.296875 0.96875,-0.296875 0.75,0 1.53125,0.484375 l -0.51562,1.390625 q -0.54688,-0.328125 -1.09375,-0.328125 -0.48438,0 -0.875,0.296875 -0.39063,0.296875 -0.5625,0.8125 -0.
 25,0.796875 -0.25,1.75 l 0,4.609375 -1.48438,0 z m 8.22351,0 -3.34375,-8.8125 1.57813,0 1.89062,5.28125 q 0.3125,0.84375 0.5625,1.765625 0.20313,-0.6875 0.5625,-1.671875 l 1.95313,-5.375 1.53125,0 -3.32813,8.8125 -1.40625,0 z m 6.22657,-10.453125 0,-1.71875 1.5,0 0,1.71875 -1.5,0 z m 0,10.453125 0,-8.8125 1.5,0 0,8.8125 -1.5,0 z m 9.59979,-3.234375 1.46875,0.203125 q -0.23438,1.515625 -1.23438,2.375 -0.98437,0.859375 -2.4375,0.859375 -1.8125,0 -2.90625,-1.1875 -1.09375,-1.1875 -1.09375,-3.390625 0,-1.421875 0.46875,-2.484375 0.46875,-1.078125 1.4375,-1.609375 0.96875,-0.546875 2.10938,-0.546875 1.4375,0 2.34375,0.734375 0.90625,0.71875 1.17187,2.0625 l -1.45312,0.21875 q -0.20313,-0.890625 -0.73438,-1.328125 -0.53125,-0.453125 -1.28125,-0.453125 -1.125,0 -1.82812,0.8125 -0.70313,0.796875 -0.70313,2.546875 0,1.78125 0.67188,2.59375 0.6875,0.796875 1.78125,0.796875 0.875,0 1.46875,-0.53125 0.59375,-0.546875 0.75,-1.671875 z m 8.94531,0.390625 1.54688,0.203125 q -0.375,1.34375 -1.35938
 ,2.09375 -0.98437,0.75 -2.51562,0.75 -1.9375,0 -3.07813,-1.1875 -1.125,-1.203125 -1.125,-3.34375 0,-2.234375 1.14063,-3.453125 1.14062,-1.234375 2.96875,-1.234375 1.78125,0 2.89062,1.203125 1.125,1.203125 1.125,3.390625 0,0.125 -0.0156,0.390625 l -6.5625,0 q 0.0781,1.453125 0.8125,2.234375 0.75,0.765625 1.84375,0.765625 0.82812,0 1.40625,-0.421875 0.57812,-0.4375 0.92187,-1.390625 z m -4.90625,-2.40625 4.92188,0 q -0.0937,-1.109375 -0.5625,-1.671875 -0.71875,-0.859375 -1.85938,-0.859375 -1.01562,0 -1.71875,0.6875 -0.70312,0.6875 -0.78125,1.84375 z"
+       id="path4015"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 192.41995,15.619595 0,0 c 0,-4.766269 3.86382,-8.630093 8.63008,-8.630093 l 129.2595,0 c 2.28885,0 4.48395,0.9092393 6.10239,2.5276961 1.61847,1.6184559 2.52771,3.8135539 2.52771,6.1023969 l 0,34.51934 c 0,4.76627 -3.86383,8.630093 -8.6301,8.630093 l -129.2595,0 c -4.76626,0 -8.63008,-3.863823 -8.63008,-8.630093 z"
+       id="path4017"
+       inkscape:connector-curvature="0"
+       style="fill:#ff9900;fill-rule:nonzero" />
+    <path
+       d="m 192.41995,15.619595 0,0 c 0,-4.766269 3.86382,-8.630093 8.63008,-8.630093 l 129.2595,0 c 2.28885,0 4.48395,0.9092393 6.10239,2.5276961 1.61847,1.6184559 2.52771,3.8135539 2.52771,6.1023969 l 0,34.51934 c 0,4.76627 -3.86383,8.630093 -8.6301,8.630093 l -129.2595,0 c -4.76626,0 -8.63008,-3.863823 -8.63008,-8.630093 z"
+       id="path4019"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 245.06015,39.799263 0,-5.765625 -5.23438,-7.828125 2.1875,0 2.67188,4.09375 q 0.75,1.15625 1.39062,2.296875 0.60938,-1.0625 1.48438,-2.40625 l 2.625,-3.984375 2.10937,0 -5.4375,7.828125 0,5.765625 -1.79687,0 z m 7.11545,0 5.23437,-13.59375 1.9375,0 5.5625,13.59375 -2.04687,0 -1.59375,-4.125 -5.6875,0 -1.48438,4.125 -1.92187,0 z m 3.92187,-5.578125 4.60938,0 -1.40625,-3.78125 q -0.65625,-1.703125 -0.96875,-2.8125 -0.26563,1.3125 -0.73438,2.59375 l -1.5,4 z m 10.05295,5.578125 0,-13.59375 6.03125,0 q 1.8125,0 2.75,0.359375 0.95313,0.359375 1.51563,1.296875 0.5625,0.921875 0.5625,2.046875 0,1.453125 -0.9375,2.453125 -0.92188,0.984375 -2.89063,1.25 0.71875,0.34375 1.09375,0.671875 0.78125,0.734375 1.48438,1.8125 l 2.375,3.703125 -2.26563,0 -1.79687,-2.828125 q -0.79688,-1.21875 -1.3125,-1.875 -0.5,-0.65625 -0.90625,-0.90625 -0.40625,-0.265625 -0.8125,-0.359375 -0.3125,-0.07813 -1.01563,-0.07813 l -2.07812,0 0,6.046875 -1.79688,0 z m 1.79688,-7.59375 3.85937,0 q 1.23438,0 1.9
 2188,-0.25 0.70312,-0.265625 1.0625,-0.828125 0.375,-0.5625 0.375,-1.21875 0,-0.96875 -0.70313,-1.578125 -0.70312,-0.625 -2.21875,-0.625 l -4.29687,0 0,4.5 z m 11.62918,7.59375 0,-13.59375 1.84375,0 7.14062,10.671875 0,-10.671875 1.71875,0 0,13.59375 -1.84375,0 -7.14062,-10.6875 0,10.6875 -1.71875,0 z"
+       id="path4021"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 145.27034,447.79276 0,0 c 0,-3.1398 2.54533,-5.68515 5.68515,-5.68515 l 106.51945,0 c 1.50782,0 2.95386,0.59897 4.02002,1.66516 1.06617,1.06616 1.66514,2.51221 1.66514,4.01999 l 0,22.73993 c 0,3.13983 -2.54532,5.68515 -5.68515,5.68515 l -106.51946,0 c -3.13982,0 -5.68515,-2.54532 -5.68515,-5.68515 z"
+       id="path4023"
+       inkscape:connector-curvature="0"
+       style="fill:#efefef;fill-rule:nonzero" />
+    <path
+       d="m 145.27034,447.79276 0,0 c 0,-3.1398 2.54533,-5.68515 5.68515,-5.68515 l 106.51945,0 c 1.50782,0 2.95386,0.59897 4.02002,1.66516 1.06617,1.06616 1.66514,2.51221 1.66514,4.01999 l 0,22.73993 c 0,3.13983 -2.54532,5.68515 -5.68515,5.68515 l -106.51946,0 c -3.13982,0 -5.68515,-2.54532 -5.68515,-5.68515 z"
+       id="path4025"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 167.73932,461.70773 1.6875,-0.14062 q 0.125,1.01562 0.5625,1.67187 0.4375,0.65625 1.35937,1.0625 0.9375,0.40625 2.09375,0.40625 1.03125,0 1.8125,-0.3125 0.79688,-0.3125 1.1875,-0.84375 0.39063,-0.53125 0.39063,-1.15625 0,-0.64062 -0.375,-1.10937 -0.375,-0.48438 -1.23438,-0.8125 -0.54687,-0.21875 -2.42187,-0.65625 -1.875,-0.45313 -2.625,-0.85938 -0.96875,-0.51562 -1.45313,-1.26562 -0.46875,-0.75 -0.46875,-1.6875 0,-1.03125 0.57813,-1.92188 0.59375,-0.90625 1.70312,-1.35937 1.125,-0.46875 2.5,-0.46875 1.51563,0 2.67188,0.48437 1.15625,0.48438 1.76562,1.4375 0.625,0.9375 0.67188,2.14063 l -1.71875,0.125 q -0.14063,-1.28125 -0.95313,-1.9375 -0.79687,-0.67188 -2.35937,-0.67188 -1.625,0 -2.375,0.60938 -0.75,0.59375 -0.75,1.4375 0,0.73437 0.53125,1.20312 0.51562,0.46875 2.70312,0.96875 2.20313,0.5 3.01563,0.875 1.1875,0.54688 1.75,1.39063 0.57812,0.82812 0.57812,1.92187 0,1.09375 -0.625,2.0625 -0.625,0.95313 -1.79687,1.48438 -1.15625,0.53125 -2.60938,0.53125 -1.84375,0 -3.09375
 ,-0.53125 -1.25,-0.54688 -1.96875,-1.625 -0.70312,-1.07813 -0.73437,-2.45313 z m 19.5842,1.20313 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 l -7.34375,0 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 5.5,0 q -0.10937,-1.23437 -0.625,-1.85937 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z m 8.81322,6.6875 1.60937,0.25 q 0.10938,0.75 0.57813,1.09375 0.60937,0.45313 1.6875,0.45313 1.17187,0 1.79687,-0.46875 0.625,-0.45313 0.85938,-1.28125 0.125,-0.51563 0.10937,-2.15625 -1.09375,1.29687 -2.71875,1.29687 -2.03125,0 -3.15625,-1.46875 -1.10937,-1.46875 -1.10937,-3.51562 0,-1.40625 0.51562,-2.59375 0.51563,-1
 .20313 1.48438,-1.84375 0.96875,-0.65625 2.26562,-0.65625 1.75,0 2.875,1.40625 l 0,-1.1875 1.54688,0 0,8.51562 q 0,2.3125 -0.46875,3.26563 -0.46875,0.96875 -1.48438,1.51562 -1.01562,0.5625 -2.5,0.5625 -1.76562,0 -2.85937,-0.79687 -1.07813,-0.79688 -1.03125,-2.39063 z m 1.375,-5.92187 q 0,1.95312 0.76562,2.84375 0.78125,0.89062 1.9375,0.89062 1.14063,0 1.92188,-0.89062 0.78125,-0.89063 0.78125,-2.78125 0,-1.8125 -0.8125,-2.71875 -0.79688,-0.92188 -1.92188,-0.92188 -1.10937,0 -1.89062,0.90625 -0.78125,0.89063 -0.78125,2.67188 z m 9.29759,5.10937 0,-9.85937 1.5,0 0,1.39062 q 0.45313,-0.71875 1.21875,-1.15625 0.78125,-0.45312 1.76563,-0.45312 1.09375,0 1.79687,0.45312 0.70313,0.45313 0.98438,1.28125 1.17187,-1.73437 3.04689,-1.73437 1.46875,0 2.25,0.8125 0.79687,0.8125 0.79687,2.5 l 0,6.76562 -1.67187,0 0,-6.20312 q 0,-1 -0.15625,-1.4375 -0.15625,-0.45313 -0.59375,-0.71875 -0.42188,-0.26563 -1,-0.26563 -1.03127,0 -1.71877,0.6875 -0.6875,0.6875 -0.6875,2.21875 l 0,5.71875 -1.67187,0 0,-6
 .40625 q 0,-1.10937 -0.40625,-1.65625 -0.40625,-0.5625 -1.34375,-0.5625 -0.70313,0 -1.3125,0.375 -0.59375,0.35938 -0.85938,1.07813 -0.26562,0.71875 -0.26562,2.0625 l 0,5.10937 -1.67188,0 z m 22.29082,-3.17187 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 l -7.34375,0 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 5.5,0 q -0.10937,-1.23437 -0.625,-1.85937 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z m 9.1101,5.875 0,-9.85937 1.5,0 0,1.40625 q 1.09375,-1.625 3.14062,-1.625 0.89063,0 1.64063,0.32812 0.75,0.3125 1.10937,0.84375 0.375,0.51563 0.53125,1.21875 0.0937,0.46875 0.0937,1.625 l 0,6.0625 -1.67187,0 
 0,-6 q 0,-1.01562 -0.20313,-1.51562 -0.1875,-0.51563 -0.6875,-0.8125 -0.5,-0.29688 -1.17187,-0.29688 -1.0625,0 -1.84375,0.67188 -0.76563,0.67187 -0.76563,2.57812 l 0,5.375 -1.67187,0 z m 14.03196,-1.5 0.23438,1.48438 q -0.70313,0.14062 -1.26563,0.14062 -0.90625,0 -1.40625,-0.28125 -0.5,-0.29687 -0.70312,-0.75 -0.20313,-0.46875 -0.20313,-1.98437 l 0,-5.65625 -1.23437,0 0,-1.3125 1.23437,0 0,-2.4375 1.65625,-1 0,3.4375 1.6875,0 0,1.3125 -1.6875,0 0,5.75 q 0,0.71875 0.0781,0.92187 0.0937,0.20313 0.29687,0.32813 0.20313,0.125 0.57813,0.125 0.26562,0 0.73437,-0.0781 z"
+       id="path4027"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 617.6798,447.79276 0,0 c 0,-3.1398 2.54529,-5.68515 5.68512,-5.68515 l 106.51947,0 c 1.50781,0 2.95386,0.59897 4.02002,1.66516 1.06616,1.06616 1.66516,2.51221 1.66516,4.01999 l 0,22.73993 c 0,3.13983 -2.54535,5.68515 -5.68518,5.68515 l -106.51947,0 c -3.13983,0 -5.68512,-2.54532 -5.68512,-5.68515 z"
+       id="path4029"
+       inkscape:connector-curvature="0"
+       style="fill:#efefef;fill-rule:nonzero" />
+    <path
+       d="m 617.6798,447.79276 0,0 c 0,-3.1398 2.54529,-5.68515 5.68512,-5.68515 l 106.51947,0 c 1.50781,0 2.95386,0.59897 4.02002,1.66516 1.06616,1.06616 1.66516,2.51221 1.66516,4.01999 l 0,22.73993 c 0,3.13983 -2.54535,5.68515 -5.68518,5.68515 l -106.51947,0 c -3.13983,0 -5.68512,-2.54532 -5.68512,-5.68515 z"
+       id="path4031"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 640.1488,461.70773 1.6875,-0.14062 q 0.125,1.01562 0.5625,1.67187 0.4375,0.65625 1.35938,1.0625 0.9375,0.40625 2.09375,0.40625 1.03125,0 1.8125,-0.3125 0.79687,-0.3125 1.1875,-0.84375 0.39062,-0.53125 0.39062,-1.15625 0,-0.64062 -0.375,-1.10937 -0.375,-0.48438 -1.23437,-0.8125 -0.54688,-0.21875 -2.42188,-0.65625 -1.875,-0.45313 -2.625,-0.85938 -0.96875,-0.51562 -1.45312,-1.26562 -0.46875,-0.75 -0.46875,-1.6875 0,-1.03125 0.57812,-1.92188 0.59375,-0.90625 1.70313,-1.35937 1.125,-0.46875 2.5,-0.46875 1.51562,0 2.67187,0.48437 1.15625,0.48438 1.76563,1.4375 0.625,0.9375 0.67187,2.14063 l -1.71875,0.125 q -0.14062,-1.28125 -0.95312,-1.9375 -0.79688,-0.67188 -2.35938,-0.67188 -1.625,0 -2.375,0.60938 -0.75,0.59375 -0.75,1.4375 0,0.73437 0.53125,1.20312 0.51563,0.46875 2.70313,0.96875 2.20312,0.5 3.01562,0.875 1.1875,0.54688 1.75,1.39063 0.57813,0.82812 0.57813,1.92187 0,1.09375 -0.625,2.0625 -0.625,0.95313 -1.79688,1.48438 -1.15625,0.53125 -2.60937,0.53125 -1.84375,0 -3.09375,
 -0.53125 -1.25,-0.54688 -1.96875,-1.625 -0.70313,-1.07813 -0.73438,-2.45313 z m 19.58417,1.20313 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 l -7.34375,0 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 5.5,0 q -0.10937,-1.23437 -0.625,-1.85937 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z m 8.81323,6.6875 1.60938,0.25 q 0.10937,0.75 0.57812,1.09375 0.60938,0.45313 1.6875,0.45313 1.17188,0 1.79688,-0.46875 0.625,-0.45313 0.85937,-1.28125 0.125,-0.51563 0.10938,-2.15625 -1.09375,1.29687 -2.71875,1.29687 -2.03125,0 -3.15625,-1.46875 -1.10938,-1.46875 -1.10938,-3.51562 0,-1.40625 0.51563,-2.59375 0.51562,-1
 .20313 1.48437,-1.84375 0.96875,-0.65625 2.26563,-0.65625 1.75,0 2.875,1.40625 l 0,-1.1875 1.54687,0 0,8.51562 q 0,2.3125 -0.46875,3.26563 -0.46875,0.96875 -1.48437,1.51562 -1.01563,0.5625 -2.5,0.5625 -1.76563,0 -2.85938,-0.79687 -1.07812,-0.79688 -1.03125,-2.39063 z m 1.375,-5.92187 q 0,1.95312 0.76563,2.84375 0.78125,0.89062 1.9375,0.89062 1.14062,0 1.92187,-0.89062 0.78125,-0.89063 0.78125,-2.78125 0,-1.8125 -0.8125,-2.71875 -0.79687,-0.92188 -1.92187,-0.92188 -1.10938,0 -1.89063,0.90625 -0.78125,0.89063 -0.78125,2.67188 z m 9.29761,5.10937 0,-9.85937 1.5,0 0,1.39062 q 0.45313,-0.71875 1.21875,-1.15625 0.78125,-0.45312 1.76563,-0.45312 1.09375,0 1.79687,0.45312 0.70313,0.45313 0.98438,1.28125 1.17187,-1.73437 3.04687,-1.73437 1.46875,0 2.25,0.8125 0.79688,0.8125 0.79688,2.5 l 0,6.76562 -1.67188,0 0,-6.20312 q 0,-1 -0.15625,-1.4375 -0.15625,-0.45313 -0.59375,-0.71875 -0.42187,-0.26563 -1,-0.26563 -1.03125,0 -1.71875,0.6875 -0.6875,0.6875 -0.6875,2.21875 l 0,5.71875 -1.67187,0 0,-6
 .40625 q 0,-1.10937 -0.40625,-1.65625 -0.40625,-0.5625 -1.34375,-0.5625 -0.70313,0 -1.3125,0.375 -0.59375,0.35938 -0.85938,1.07813 -0.26562,0.71875 -0.26562,2.0625 l 0,5.10937 -1.67188,0 z m 22.29077,-3.17187 1.71875,0.21875 q -0.40625,1.5 -1.51562,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42188,-1.32812 -1.26562,-1.32813 -1.26562,-3.73438 0,-2.48437 1.26562,-3.85937 1.28125,-1.375 3.32813,-1.375 1.98437,0 3.23437,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 l -7.34375,0 q 0.0937,1.625 0.92187,2.48438 0.82813,0.85937 2.0625,0.85937 0.90625,0 1.54688,-0.46875 0.65625,-0.48437 1.04687,-1.54687 z m -5.48437,-2.70313 5.5,0 q -0.10938,-1.23437 -0.625,-1.85937 -0.79688,-0.96875 -2.07813,-0.96875 -1.14062,0 -1.9375,0.78125 -0.78125,0.76562 -0.85937,2.04687 z m 9.1101,5.875 0,-9.85937 1.5,0 0,1.40625 q 1.09375,-1.625 3.14063,-1.625 0.89062,0 1.64062,0.32812 0.75,0.3125 1.10938,0.84375 0.375,0.51563 0.53125,1.21875 0.0937,0.46875 0.0937,1.625 l 0,6.0625 -1.67188,0 
 0,-6 q 0,-1.01562 -0.20312,-1.51562 -0.1875,-0.51563 -0.6875,-0.8125 -0.5,-0.29688 -1.17188,-0.29688 -1.0625,0 -1.84375,0.67188 -0.76562,0.67187 -0.76562,2.57812 l 0,5.375 -1.67188,0 z m 14.03199,-1.5 0.23437,1.48438 q -0.70312,0.14062 -1.26562,0.14062 -0.90625,0 -1.40625,-0.28125 -0.5,-0.29687 -0.70313,-0.75 -0.20312,-0.46875 -0.20312,-1.98437 l 0,-5.65625 -1.23438,0 0,-1.3125 1.23438,0 0,-2.4375 1.65625,-1 0,3.4375 1.6875,0 0,1.3125 -1.6875,0 0,5.75 q 0,0.71875 0.0781,0.92187 0.0937,0.20313 0.29688,0.32813 0.20312,0.125 0.57812,0.125 0.26563,0 0.73438,-0.0781 z"
+       id="path4033"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 389.47507,447.79276 0,0 c 0,-3.1398 2.54532,-5.68515 5.68515,-5.68515 l 106.51947,0 c 1.50778,0 2.95383,0.59897 4.01999,1.66516 1.06619,1.06616 1.66516,2.51221 1.66516,4.01999 l 0,22.73993 c 0,3.13983 -2.54535,5.68515 -5.68515,5.68515 l -106.51947,0 c -3.13983,0 -5.68515,-2.54532 -5.68515,-5.68515 z"
+       id="path4035"
+       inkscape:connector-curvature="0"
+       style="fill:#efefef;fill-rule:nonzero" />
+    <path
+       d="m 389.47507,447.79276 0,0 c 0,-3.1398 2.54532,-5.68515 5.68515,-5.68515 l 106.51947,0 c 1.50778,0 2.95383,0.59897 4.01999,1.66516 1.06619,1.06616 1.66516,2.51221 1.66516,4.01999 l 0,22.73993 c 0,3.13983 -2.54535,5.68515 -5.68515,5.68515 l -106.51947,0 c -3.13983,0 -5.68515,-2.54532 -5.68515,-5.68515 z"
+       id="path4037"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 411.94406,461.70773 1.6875,-0.14062 q 0.125,1.01562 0.5625,1.67187 0.4375,0.65625 1.35937,1.0625 0.9375,0.40625 2.09375,0.40625 1.03125,0 1.8125,-0.3125 0.79688,-0.3125 1.1875,-0.84375 0.39063,-0.53125 0.39063,-1.15625 0,-0.64062 -0.375,-1.10937 -0.375,-0.48438 -1.23438,-0.8125 -0.54687,-0.21875 -2.42187,-0.65625 -1.875,-0.45313 -2.625,-0.85938 -0.96875,-0.51562 -1.45313,-1.26562 -0.46875,-0.75 -0.46875,-1.6875 0,-1.03125 0.57813,-1.92188 0.59375,-0.90625 1.70312,-1.35937 1.125,-0.46875 2.5,-0.46875 1.51563,0 2.67188,0.48437 1.15625,0.48438 1.76562,1.4375 0.625,0.9375 0.67188,2.14063 l -1.71875,0.125 q -0.14063,-1.28125 -0.95313,-1.9375 -0.79687,-0.67188 -2.35937,-0.67188 -1.625,0 -2.375,0.60938 -0.75,0.59375 -0.75,1.4375 0,0.73437 0.53125,1.20312 0.51562,0.46875 2.70312,0.96875 2.20313,0.5 3.01563,0.875 1.1875,0.54688 1.75,1.39063 0.57812,0.82812 0.57812,1.92187 0,1.09375 -0.625,2.0625 -0.625,0.95313 -1.79687,1.48438 -1.15625,0.53125 -2.60938,0.53125 -1.84375,0 -3.09375
 ,-0.53125 -1.25,-0.54688 -1.96875,-1.625 -0.70312,-1.07813 -0.73437,-2.45313 z m 19.5842,1.20313 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 l -7.34375,0 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 5.5,0 q -0.10937,-1.23437 -0.625,-1.85937 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z m 8.8132,6.6875 1.60938,0.25 q 0.10937,0.75 0.57812,1.09375 0.60938,0.45313 1.6875,0.45313 1.17188,0 1.79688,-0.46875 0.625,-0.45313 0.85937,-1.28125 0.125,-0.51563 0.10938,-2.15625 -1.09375,1.29687 -2.71875,1.29687 -2.03125,0 -3.15625,-1.46875 -1.10938,-1.46875 -1.10938,-3.51562 0,-1.40625 0.51563,-2.59375 0.51562,-1.
 20313 1.48437,-1.84375 0.96875,-0.65625 2.26563,-0.65625 1.75,0 2.875,1.40625 l 0,-1.1875 1.54687,0 0,8.51562 q 0,2.3125 -0.46875,3.26563 -0.46875,0.96875 -1.48437,1.51562 -1.01563,0.5625 -2.5,0.5625 -1.76563,0 -2.85938,-0.79687 -1.07812,-0.79688 -1.03125,-2.39063 z m 1.375,-5.92187 q 0,1.95312 0.76563,2.84375 0.78125,0.89062 1.9375,0.89062 1.14062,0 1.92187,-0.89062 0.78125,-0.89063 0.78125,-2.78125 0,-1.8125 -0.8125,-2.71875 -0.79687,-0.92188 -1.92187,-0.92188 -1.10938,0 -1.89063,0.90625 -0.78125,0.89063 -0.78125,2.67188 z m 9.29761,5.10937 0,-9.85937 1.5,0 0,1.39062 q 0.45313,-0.71875 1.21875,-1.15625 0.78125,-0.45312 1.76563,-0.45312 1.09375,0 1.79687,0.45312 0.70313,0.45313 0.98438,1.28125 1.17187,-1.73437 3.04687,-1.73437 1.46875,0 2.25,0.8125 0.79688,0.8125 0.79688,2.5 l 0,6.76562 -1.67188,0 0,-6.20312 q 0,-1 -0.15625,-1.4375 -0.15625,-0.45313 -0.59375,-0.71875 -0.42187,-0.26563 -1,-0.26563 -1.03125,0 -1.71875,0.6875 -0.6875,0.6875 -0.6875,2.21875 l 0,5.71875 -1.67187,0 0,-6.
 40625 q 0,-1.10937 -0.40625,-1.65625 -0.40625,-0.5625 -1.34375,-0.5625 -0.70313,0 -1.3125,0.375 -0.59375,0.35938 -0.85938,1.07813 -0.26562,0.71875 -0.26562,2.0625 l 0,5.10937 -1.67188,0 z m 22.2908,-3.17187 1.71875,0.21875 q -0.40625,1.5 -1.51562,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42188,-1.32812 -1.26562,-1.32813 -1.26562,-3.73438 0,-2.48437 1.26562,-3.85937 1.28125,-1.375 3.32813,-1.375 1.98437,0 3.23437,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 l -7.34375,0 q 0.0937,1.625 0.92187,2.48438 0.82813,0.85937 2.0625,0.85937 0.90625,0 1.54688,-0.46875 0.65625,-0.48437 1.04687,-1.54687 z m -5.48437,-2.70313 5.5,0 q -0.10938,-1.23437 -0.625,-1.85937 -0.79688,-0.96875 -2.07813,-0.96875 -1.14062,0 -1.9375,0.78125 -0.78125,0.76562 -0.85937,2.04687 z m 9.11008,5.875 0,-9.85937 1.5,0 0,1.40625 q 1.09375,-1.625 3.14062,-1.625 0.89063,0 1.64063,0.32812 0.75,0.3125 1.10937,0.84375 0.375,0.51563 0.53125,1.21875 0.0937,0.46875 0.0937,1.625 l 0,6.0625 -1.67187,0 0
 ,-6 q 0,-1.01562 -0.20313,-1.51562 -0.1875,-0.51563 -0.6875,-0.8125 -0.5,-0.29688 -1.17187,-0.29688 -1.0625,0 -1.84375,0.67188 -0.76563,0.67187 -0.76563,2.57812 l 0,5.375 -1.67187,0 z m 14.03198,-1.5 0.23437,1.48438 q -0.70312,0.14062 -1.26562,0.14062 -0.90625,0 -1.40625,-0.28125 -0.5,-0.29687 -0.70313,-0.75 -0.20312,-0.46875 -0.20312,-1.98437 l 0,-5.65625 -1.23438,0 0,-1.3125 1.23438,0 0,-2.4375 1.65625,-1 0,3.4375 1.6875,0 0,1.3125 -1.6875,0 0,5.75 q 0,0.71875 0.0781,0.92187 0.0937,0.20313 0.29688,0.32813 0.20312,0.125 0.57812,0.125 0.26563,0 0.73438,-0.0781 z"
+       id="path4039"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 252,154.79868 0,0 c 0,-21.20758 17.19214,-38.39973 38.39972,-38.39973 l 275.98798,0 c 10.1842,0 19.95136,4.04568 27.15271,11.24702 7.20135,7.20134 11.24701,16.96846 11.24701,27.15271 l 0,153.59427 c 0,21.20758 -17.19214,38.39972 -38.39972,38.39972 l -275.98798,0 C 269.19214,346.79267 252,329.60053 252,308.39295 Z"
+       id="path4041"
+       inkscape:connector-curvature="0"
+       style="fill:#efefef;fill-rule:nonzero" />
+    <path
+       d="m 252,154.79868 0,0 c 0,-21.20758 17.19214,-38.39973 38.39972,-38.39973 l 275.98798,0 c 10.1842,0 19.95136,4.04568 27.15271,11.24702 7.20135,7.20134 11.24701,16.96846 11.24701,27.15271 l 0,153.59427 c 0,21.20758 -17.19214,38.39972 -38.39972,38.39972 l -275.98798,0 C 269.19214,346.79267 252,329.60053 252,308.39295 Z"
+       id="path4043"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 230.41995,409.83597 83.33859,-88.09451"
+       id="path4045"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="m 230.41995,409.83597 78.6282,-83.11533"
+       id="path4047"
+       inkscape:connector-curvature="0"
+       style="fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round;stroke-dasharray:2, 6" />
+    <path
+       d="m 309.04816,326.72064 0.0882,3.17957 2.61282,-6.03476 -5.88062,2.94339 z"
+       id="path4049"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
+    <path
+       d="M 648.32544,410.00262 313.77429,321.71914"
+       id="path4051"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="M 648.32544,410.00262 320.40158,323.46801"
+       id="path4053"
+       inkscape:connector-curvature="0"
+       style="fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round;stroke-dasharray:2, 6" />
+    <path
+       d="m 320.40158,323.46802 2.7486,-1.60083 -6.54886,0.59799 5.40112,3.75144 z"
+       id="path4055"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
+    <path
+       d="M 362.65616,115.20998 265.67978,58.76903"
+       id="path4057"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="M 356.73224,111.76222 271.6037,62.216783"
+       id="path4059"
+       inkscape:connector-curvature="0"
+       style="fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 356.73227,111.76221 -3.07529,0.81254 6.4722,1.16451 -4.20947,-5.05231 z"
+       id="path4061"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
+    <path
+       d="m 271.6037,62.216785 3.07526,-0.812538 -6.4722,-1.164498 4.20947,5.052304 z"
+       id="path4063"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
+    <path
+       d="m 355.75198,702.83203 65.70078,0"
+       id="path4065"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="m 355.75198,702.83203 58.84662,0"
+       id="path4067"
+       inkscape:connector-curvature="0"
+       style="fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round;stroke-dasharray:2, 6" />
+    <path
+       d="m 414.59857,702.83203 -2.24915,2.24915 6.17954,-2.24915 -6.17954,-2.24921 z"
+       id="path4069"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
+    <path
+       d="m 429.81235,685.7769 95.2756,0 0,34.11023 -95.2756,0 z"
+       id="path4071"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="m 440.0936,710.1369 0,-11.45313 1.51562,0 0,4.70313 5.95313,0 0,-4.70313 1.51562,0 0,11.45313 -1.51562,0 0,-5.40625 -5.95313,0 0,5.40625 -1.51562,0 z m 17.00781,-2.67188 1.45313,0.17188 q -0.34375,1.28125 -1.28125,1.98437 -0.92188,0.70313 -2.35938,0.70313 -1.82812,0 -2.89062,-1.125 -1.0625,-1.125 -1.0625,-3.14063 0,-2.09375 1.07812,-3.25 1.07813,-1.15625 2.79688,-1.15625 1.65625,0 2.70312,1.14063 1.0625,1.125 1.0625,3.17187 0,0.125 0,0.375 l -6.1875,0 q 0.0781,1.375 0.76563,2.10938 0.70312,0.71875 1.73437,0.71875 0.78125,0 1.32813,-0.40625 0.54687,-0.40625 0.85937,-1.29688 z m -4.60937,-2.28125 4.625,0 q -0.0937,-1.04687 -0.53125,-1.5625 -0.67188,-0.8125 -1.73438,-0.8125 -0.96875,0 -1.64062,0.65625 -0.65625,0.64063 -0.71875,1.71875 z m 13.24218,3.92188 q -0.78125,0.67187 -1.5,0.95312 -0.71875,0.26563 -1.54687,0.26563 -1.375,0 -2.10938,-0.67188 -0.73437,-0.67187 -0.73437,-1.70312 0,-0.60938 0.28125,-1.10938 0.28125,-0.51562 0.71875,-0.8125 0.45312,-0.3125 1.01562,-0.46875 0
 .42188,-0.10937 1.25,-0.20312 1.70313,-0.20313 2.51563,-0.48438 0,-0.29687 0,-0.375 0,-0.85937 -0.39063,-1.20312 -0.54687,-0.48438 -1.60937,-0.48438 -0.98438,0 -1.46875,0.35938 -0.46875,0.34375 -0.6875,1.21875 l -1.375,-0.1875 q 0.1875,-0.875 0.60937,-1.42188 0.4375,-0.54687 1.25,-0.82812 0.8125,-0.29688 1.875,-0.29688 1.0625,0 1.71875,0.25 0.67188,0.25 0.98438,0.625 0.3125,0.375 0.4375,0.95313 0.0781,0.35937 0.0781,1.29687 l 0,1.875 q 0,1.96875 0.0781,2.48438 0.0937,0.51562 0.35937,1 l -1.46875,0 q -0.21875,-0.4375 -0.28125,-1.03125 z m -0.10937,-3.14063 q -0.76563,0.3125 -2.29688,0.53125 -0.875,0.125 -1.23437,0.28125 -0.35938,0.15625 -0.5625,0.46875 -0.1875,0.29688 -0.1875,0.65625 0,0.5625 0.42187,0.9375 0.4375,0.375 1.25,0.375 0.8125,0 1.4375,-0.34375 0.64063,-0.35937 0.9375,-0.98437 0.23438,-0.46875 0.23438,-1.40625 l 0,-0.51563 z m 3.58594,4.17188 0,-8.29688 1.26562,0 0,1.25 q 0.48438,-0.875 0.89063,-1.15625 0.40625,-0.28125 0.90625,-0.28125 0.70312,0 1.4375,0.45313 l -0.48438,
 1.29687 q -0.51562,-0.29687 -1.03125,-0.29687 -0.45312,0 -0.82812,0.28125 -0.35938,0.26562 -0.51563,0.76562 -0.23437,0.75 -0.23437,1.64063 l 0,4.34375 -1.40625,0 z m 8.40625,-1.26563 0.20312,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76562,0 -1.1875,-0.23437 -0.42187,-0.25 -0.59375,-0.64063 -0.17187,-0.40625 -0.17187,-1.67187 l 0,-4.76563 -1.03125,0 0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.60938 0.0625,0.78125 0.0781,0.17188 0.25,0.28125 0.17187,0.0937 0.48437,0.0937 0.23438,0 0.60938,-0.0625 z m 2.67968,1.26563 -1.3125,0 0,-11.45313 1.40625,0 0,4.07813 q 0.89063,-1.10938 2.28125,-1.10938 0.76563,0 1.4375,0.3125 0.6875,0.29688 1.125,0.85938 0.45313,0.5625 0.70313,1.35937 0.25,0.78125 0.25,1.67188 0,2.14062 -1.0625,3.3125 -1.04688,1.15625 -2.53125,1.15625 -1.46875,0 -2.29688,-1.23438 l 0,1.04688 z m -0.0156,-4.21875 q 0,1.5 0.40625,2.15625 0.65625,1.09375 1.79687,1.09375 0.92188,0 1.59375,-0.79688 0.67188,-0.8125 0.67188,-2.
 39062 0,-1.625 -0.65625,-2.39063 -0.64063,-0.78125 -1.54688,-0.78125 -0.92187,0 -1.59375,0.79688 -0.67187,0.79687 -0.67187,2.3125 z m 13.28906,1.54687 1.45313,0.17188 q -0.34375,1.28125 -1.28125,1.98437 -0.92188,0.70313 -2.35938,0.70313 -1.82812,0 -2.89062,-1.125 -1.0625,-1.125 -1.0625,-3.14063 0,-2.09375 1.07812,-3.25 1.07813,-1.15625 2.79688,-1.15625 1.65625,0 2.70312,1.14063 1.0625,1.125 1.0625,3.17187 0,0.125 0,0.375 l -6.1875,0 q 0.0781,1.375 0.76563,2.10938 0.70312,0.71875 1.73437,0.71875 0.78125,0 1.32813,-0.40625 0.54687,-0.40625 0.85937,-1.29688 z m -4.60937,-2.28125 4.625,0 q -0.0937,-1.04687 -0.53125,-1.5625 -0.67188,-0.8125 -1.73438,-0.8125 -0.96875,0 -1.64062,0.65625 -0.65625,0.64063 -0.71875,1.71875 z m 13.24218,3.92188 q -0.78125,0.67187 -1.5,0.95312 -0.71875,0.26563 -1.54687,0.26563 -1.375,0 -2.10938,-0.67188 -0.73437,-0.67187 -0.73437,-1.70312 0,-0.60938 0.28125,-1.10938 0.28125,-0.51562 0.71875,-0.8125 0.45312,-0.3125 1.01562,-0.46875 0.42188,-0.10937 1.25,-0.20312
  1.70313,-0.20313 2.51563,-0.48438 0,-0.29687 0,-0.375 0,-0.85937 -0.39063,-1.20312 -0.54687,-0.48438 -1.60937,-0.48438 -0.98438,0 -1.46875,0.35938 -0.46875,0.34375 -0.6875,1.21875 l -1.375,-0.1875 q 0.1875,-0.875 0.60937,-1.42188 0.4375,-0.54687 1.25,-0.82812 0.8125,-0.29688 1.875,-0.29688 1.0625,0 1.71875,0.25 0.67188,0.25 0.98438,0.625 0.3125,0.375 0.4375,0.95313 0.0781,0.35937 0.0781,1.29687 l 0,1.875 q 0,1.96875 0.0781,2.48438 0.0937,0.51562 0.35937,1 l -1.46875,0 q -0.21875,-0.4375 -0.28125,-1.03125 z m -0.10937,-3.14063 q -0.76563,0.3125 -2.29688,0.53125 -0.875,0.125 -1.23437,0.28125 -0.35938,0.15625 -0.5625,0.46875 -0.1875,0.29688 -0.1875,0.65625 0,0.5625 0.42187,0.9375 0.4375,0.375 1.25,0.375 0.8125,0 1.4375,-0.34375 0.64063,-0.35937 0.9375,-0.98437 0.23438,-0.46875 0.23438,-1.40625 l 0,-0.51563 z m 6.66406,2.90625 0.20313,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76563,0 -1.1875,-0.23437 -0.42188,-0.25 -0.59375,-0.64063 -0.17188,-0.40625 -0.17188,-1.67187 l 0,-4.76563 -1.0312
 5,0 0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.60938 0.0625,0.78125 0.0781,0.17188 0.25,0.28125 0.17188,0.0937 0.48438,0.0937 0.23437,0 0.60937,-0.0625 z"
+       id="path4073"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 677.8845,212.63277 0,0 c 0,-6.04482 4.90027,-10.9451 10.94507,-10.9451 l 124.62952,0 c 2.90283,0 5.68677,1.15314 7.73938,3.20575 2.05255,2.0526 3.20569,4.83653 3.20569,7.73935 l 0,43.7791 c 0,6.0448 -4.90027,10.9451 -10.94507,10.9451 l -124.62952,0 c -6.0448,0 -10.94507,-4.9003 -10.94507,-10.9451 z"
+       id="path4075"
+       inkscape:connector-curvature="0"
+       style="fill:#efefef;fill-rule:nonzero" />
+    <path
+       d="m 677.8845,212.63277 0,0 c 0,-6.04482 4.90027,-10.9451 10.94507,-10.9451 l 124.62952,0 c 2.90283,0 5.68677,1.15314 7.73938,3.20575 2.05255,2.0526 3.20569,4.83653 3.20569,7.73935 l 0,43.7791 c 0,6.0448 -4.90027,10.9451 -10.94507,10.9451 l -124.62952,0 c -6.0448,0 -10.94507,-4.9003 -10.94507,-10.9451 z"
+       id="path4077"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 732.6905,234.22356 0,-13.64063 1.53125,0 0,1.28125 q 0.53125,-0.75 1.20313,-1.125 0.6875,-0.375 1.64062,-0.375 1.26563,0 2.23438,0.65625 0.96875,0.64063 1.45312,1.82813 0.5,1.1875 0.5,2.59375 0,1.51562 -0.54687,2.73437 -0.54688,1.20313 -1.57813,1.84375 -1.03125,0.64063 -2.17187,0.64063 -0.84375,0 -1.51563,-0.34375 -0.65625,-0.35938 -1.07812,-0.89063 l 0,4.79688 -1.67188,0 z m 1.51563,-8.65625 q 0,1.90625 0.76562,2.8125 0.78125,0.90625 1.875,0.90625 1.10938,0 1.89063,-0.9375 0.79687,-0.9375 0.79687,-2.92188 0,-1.875 -0.78125,-2.8125 -0.76562,-0.9375 -1.84375,-0.9375 -1.0625,0 -1.89062,1 -0.8125,1 -0.8125,2.89063 z m 8.18823,1.9375 1.65625,-0.26563 q 0.14062,1 0.76562,1.53125 0.64063,0.51563 1.78125,0.51563 1.15625,0 1.70313,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48438,-0.89063 -0.34375,-0.21875 -1.70312,-0.5625 -1.84375,-0.46875 -2.5625,-0.79687 -0.70313,-0.34375 -1.07813,-0.9375 -0.35937,-0.60938 -0.35937,-1.32813 0,-0.65625 0.29687,-1.21875 0.3125,-0.5625
  0.82813,-0.9375 0.39062,-0.28125 1.0625,-0.48437 0.67187,-0.20313 1.4375,-0.20313 1.17187,0 2.04687,0.34375 0.875,0.32813 1.28125,0.90625 0.42188,0.5625 0.57813,1.51563 l -1.625,0.21875 q -0.10938,-0.75 -0.65625,-1.17188 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64063,0.39063 -0.48437,0.375 -0.48437,0.875 0,0.32812 0.20312,0.59375 0.20313,0.26562 0.64063,0.4375 0.25,0.0937 1.46875,0.4375 1.76562,0.46875 2.46875,0.76562 0.70312,0.29688 1.09375,0.875 0.40625,0.57813 0.40625,1.4375 0,0.82813 -0.48438,1.57813 -0.48437,0.73437 -1.40625,1.14062 -0.92187,0.39063 -2.07812,0.39063 -1.92188,0 -2.9375,-0.79688 -1,-0.79687 -1.28125,-2.35937 z m 16.28125,6.71875 0,-4.82813 q -0.39063,0.54688 -1.09375,0.90625 -0.6875,0.35938 -1.48438,0.35938 -1.75,0 -3.01562,-1.39063 -1.26563,-1.40625 -1.26563,-3.84375 0,-1.48437 0.51563,-2.65625 0.51562,-1.1875 1.48437,-1.79687 0.98438,-0.60938 2.15625,-0.60938 1.82813,0 2.875,1.54688 l 0,-1.32813 1.5,0 0,13.64063 -1.67187,0 z m -5.14063,-8.73438 q 0,1.90625 
 0.79688,2.85938 0.79687,0.9375 1.90625,0.9375 1.0625,0 1.82812,-0.89063 0.78125,-0.90625 0.78125,-2.76562 0,-1.95313 -0.8125,-2.95313 -0.8125,-1 -1.90625,-1 -1.09375,0 -1.84375,0.9375 -0.75,0.92188 -0.75,2.875 z m 9.20386,4.95313 0,-13.59375 1.67187,0 0,13.59375 -1.67187,0 z m 2.92609,0.23437 3.9375,-14.0625 1.34375,0 -3.9375,14.0625 -1.34375,0 z"
+       id="path4079"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 712.4629,240.78606 0,-1.9375 1.65625,0 0,1.9375 -1.65625,0 z m -2.125,15.48439 0.3125,-1.42189 q 0.5,0.125 0.79687,0.125 0.51563,0 0.76563,-0.34375 0.25,-0.32813 0.25,-1.6875 l 0,-10.35938 1.65625,0 0,10.39063 q 0,1.82812 -0.46875,2.54687 -0.59375,0.92189 -2,0.92189 -0.67188,0 -1.3125,-0.17187 z m 12.66046,-3.82814 0,-1.25 q -0.9375,1.46875 -2.75,1.46875 -1.17187,0 -2.17187,-0.64063 -0.98438,-0.65625 -1.53125,-1.8125 -0.53125,-1.17187 -0.53125,-2.6875 0,-1.46875 0.48437,-2.67187 0.5,-1.20313 1.46875,-1.84375 0.98438,-0.64063 2.20313,-0.64063 0.89062,0 1.57812,0.375 0.70313,0.375 1.14063,0.98438 l 0,-4.875 1.65625,0 0,13.59375 -1.54688,0 z m -5.28125,-4.92188 q 0,1.89063 0.79688,2.82813 0.8125,0.9375 1.89062,0.9375 1.09375,0 1.85938,-0.89063 0.76562,-0.89062 0.76562,-2.73437 0,-2.01563 -0.78125,-2.95313 -0.78125,-0.95312 -1.92187,-0.95312 -1.10938,0 -1.85938,0.90625 -0.75,0.90625 -0.75,2.85937 z m 10.81317,4.92188 -1.54687,0 0,-13.59375 1.65625,0 0,4.84375 q 1.0625,-1.328
 13 2.70312,-1.32813 0.90625,0 1.71875,0.375 0.8125,0.35938 1.32813,1.03125 0.53125,0.65625 0.82812,1.59375 0.29688,0.9375 0.29688,2 0,2.53125 -1.25,3.92188 -1.25,1.375 -3,1.375 -1.75,0 -2.73438,-1.45313 l 0,1.23438 z m -0.0156,-5 q 0,1.76562 0.46875,2.5625 0.79687,1.28125 2.14062,1.28125 1.09375,0 1.89063,-0.9375 0.79687,-0.95313 0.79687,-2.84375 0,-1.92188 -0.76562,-2.84375 -0.76563,-0.92188 -1.84375,-0.92188 -1.09375,0 -1.89063,0.95313 -0.79687,0.95312 -0.79687,2.75 z m 15.28198,1.39062 1.64062,0.21875 q -0.26562,1.6875 -1.375,2.65625 -1.10937,0.95313 -2.73437,0.95313 -2.01563,0 -3.25,-1.3125 -1.21875,-1.32813 -1.21875,-3.79688 0,-1.59375 0.51562,-2.78125 0.53125,-1.20312 1.60938,-1.79687 1.09375,-0.60938 2.35937,-0.60938 1.60938,0 2.625,0.8125 1.01563,0.8125 1.3125,2.3125 l -1.625,0.25 q -0.23437,-1 -0.82812,-1.5 -0.59375,-0.5 -1.42188,-0.5 -1.26562,0 -2.0625,0.90625 -0.78125,0.90625 -0.78125,2.85938 0,1.98437 0.76563,2.89062 0.76562,0.89063 1.98437,0.89063 0.98438,0 1.64063,-0.5
 9375 0.65625,-0.60938 0.84375,-1.85938 z m 1.64062,3.84375 3.9375,-14.0625 1.34375,0 -3.9375,14.0625 -1.34375,0 z m 5.80829,-5.15625 q 0,-2.73437 1.53125,-4.0625 1.26563,-1.09375 3.09375,-1.09375 2.03125,0 3.3125,1.34375 1.29688,1.32813 1.29688,3.67188 0,1.90625 -0.57813,3 -0.5625,1.07812 -1.65625,1.6875 -1.07812,0.59375 -2.375,0.59375 -2.0625,0 -3.34375,-1.32813 -1.28125,-1.32812 -1.28125,-3.8125 z m 1.71875,0 q 0,1.89063 0.82813,2.82813 0.82812,0.9375 2.07812,0.9375 1.25,0 2.0625,-0.9375 0.82813,-0.95313 0.82813,-2.89063 0,-1.82812 -0.82813,-2.76562 -0.82812,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07812,0.9375 -0.82813,0.9375 -0.82813,2.82812 z m 15.67261,4.92188 0,-1.25 q -0.9375,1.46875 -2.75,1.46875 -1.17188,0 -2.17188,-0.64063 -0.98437,-0.65625 -1.53125,-1.8125 -0.53125,-1.17187 -0.53125,-2.6875 0,-1.46875 0.48438,-2.67187 0.5,-1.20313 1.46875,-1.84375 0.98437,-0.64063 2.20312,-0.64063 0.89063,0 1.57813,0.375 0.70312,0.375 1.14062,0.98438 l 0,-4.875 1.65625,0 0,13.59375 -1.54687,0 
 z m -5.28125,-4.92188 q 0,1.89063 0.79687,2.82813 0.8125,0.9375 1.89063,0.9375 1.09375,0 1.85937,-0.89063 0.76563,-0.89062 0.76563,-2.73437 0,-2.01563 -0.78125,-2.95313 -0.78125,-0.95312 -1.92188,-0.95312 -1.10937,0 -1.85937,0.90625 -0.75,0.90625 -0.75,2.85937 z m 10.81323,4.92188 -1.54687,0 0,-13.59375 1.65625,0 0,4.84375 q 1.0625,-1.32813 2.70312,-1.32813 0.90625,0 1.71875,0.375 0.8125,0.35938 1.32813,1.03125 0.53125,0.65625 0.82812,1.59375 0.29688,0.9375 0.29688,2 0,2.53125 -1.25,3.92188 -1.25,1.375 -3,1.375 -1.75,0 -2.73438,-1.45313 l 0,1.23438 z m -0.0156,-5 q 0,1.76562 0.46875,2.5625 0.79687,1.28125 2.14062,1.28125 1.09375,0 1.89063,-0.9375 0.79687,-0.95313 0.79687,-2.84375 0,-1.92188 -0.76562,-2.84375 -0.76563,-0.92188 -1.84375,-0.92188 -1.09375,0 -1.89063,0.95313 -0.79687,0.95312 -0.79687,2.75 z m 15.28192,1.39062 1.64062,0.21875 q -0.26562,1.6875 -1.375,2.65625 -1.10937,0.95313 -2.73437,0.95313 -2.01563,0 -3.25,-1.3125 -1.21875,-1.32813 -1.21875,-3.79688 0,-1.59375 0.51562,
 -2.78125 0.53125,-1.20312 1.60938,-1.79687 1.09375,-0.60938 2.35937,-0.60938 1.60938,0 2.625,0.8125 1.01563,0.8125 1.3125,2.3125 l -1.625,0.25 q -0.23437,-1 -0.82812,-1.5 -0.59375,-0.5 -1.42188,-0.5 -1.26562,0 -2.0625,0.90625 -0.78125,0.90625 -0.78125,2.85938 0,1.98437 0.76563,2.89062 0.76562,0.89063 1.98437,0.89063 0.98438,0 1.64063,-0.59375 0.65625,-0.60938 0.84375,-1.85938 z"
+       id="path4081"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 678.6772,232.54068 -73.88977,-0.94487"
+       id="path4083"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="m 666.67816,232.38724 -49.89172,-0.638"
+       id="path4085"
+       inkscape:connector-curvature="0"
+       style="fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 666.6359,235.69043 9.11768,-3.18713 -9.03321,-3.41925 z"
+       id="path4087"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
+    <path
+       d="m 616.8287,228.44604 -9.11774,3.18713 9.03327,3.41925 z"
+       id="path4089"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
+    <path
+       d="M 628.6247,58.769028 543.58533,116.40682"
+       id="path4091"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="M 618.6913,65.50165 553.51869,109.6742"
+       id="path4093"
+       inkscape:connector-curvature="0"
+       style="fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 620.54474,68.23619 5.65967,-7.826756 -9.36652,2.357666 z"
+       id="path4095"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
+    <path
+       d="m 551.6653,106.93966 -5.65973,7.82676 9.36652,-2.35767 z"
+       id="path4097"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
+    <path
+       d="m 441.77298,321.8084 2.45666,68"
+       id="path4099"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="m 442.2062,333.80057 1.59021,44.01566"
+       id="path4101"
+       inkscape:connector-curvature="0"
+       style="fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 445.50754,333.6813 -3.629,-8.95102 -2.97363,9.18955 z"
+       id="path4103"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
+    <path
+       d="m 440.4951,377.9355 3.629,8.95102 2.97363,-9.18955 z"
+       id="path4105"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
+    <path
+       d="m 441.77298,321.8084 206.55118,88.18896"
+       id="path4107"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="M 452.80914,326.52042 637.28796,405.2854"
+       id="path4109"
+       inkscape:connector-curvature="0"
+       style="fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 454.1063,323.48227 -9.64435,-0.52579 7.05002,6.60205 z"
+       id="path4111"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
+    <path
+       d="m 635.99084,408.32352 9.64435,0.52579 -7.05005,-6.60205 z"
+       id="path4113"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
+    <path
+       d="m 230.41995,409.83597 211.3386,-88.03149"
+       id="path4115"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="M 241.49736,405.22174 430.68109,326.41867"
+       id="path4117"
+       inkscape:connector-curvature="0"
+       style="fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 240.22711,402.17227 -7.10815,6.53943 9.64863,-0.44046 z"
+       id="path4119"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
+    <path
+       d="m 431.95135,329.46817 7.10815,-6.53946 -9.64862,0.44049 z"
+       id="path4121"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
+    <path
+       d="m 124.80052,127.79265 138.3622,0 0,51.77953 -138.3622,0 z"
+       id="path4123"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="m 513.7402,162.28366 0,0 c 0,-5.23305 4.24219,-9.47527 9.47522,-9.47527 l 66.08887,0 c 2.513,0 4.9231,0.9983 6.70001,2.77524 1.77698,1.77696 2.77527,4.18703 2.77527,6.70003 l 0,37.89987 c 0,5.23305 -4.24225,9.47527 -9.47528,9.47527 l -66.08887,0 c -5.23303,0 -9.47522,-4.24222 -9.47522,-9.47527 z"
+       id="path4127"
+       inkscape:connector-curvature="0"
+       style="fill:#efefef;fill-rule:nonzero" />
+    <path
+       d="m 513.7402,162.28366 0,0 c 0,-5.23305 4.24219,-9.47527 9.47522,-9.47527 l 66.08887,0 c 2.513,0 4.9231,0.9983 6.70001,2.77524 1.77698,1.77696 2.77527,4.18703 2.77527,6.70003 l 0,37.89987 c 0,5.23305 -4.24225,9.47527 -9.47528,9.47527 l -66.08887,0 c -5.23303,0 -9.47522,-4.24222 -9.47522,-9.47527 z"
+       id="path4129"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 535.7806,177.0336 0,-9.3125 3.51563,0 q 0.92187,0 1.40625,0.0937 0.6875,0.10937 1.15625,0.4375 0.46875,0.3125 0.75,0.89062 0.28125,0.57813 0.28125,1.28125 0,1.1875 -0.76563,2.01563 -0.75,0.8125 -2.71875,0.8125 l -2.39062,0 0,3.78125 -1.23438,0 z m 1.23438,-4.875 2.40625,0 q 1.1875,0 1.6875,-0.4375 0.51562,-0.45313 0.51562,-1.26563 0,-0.57812 -0.29687,-0.98437 -0.29688,-0.42188 -0.78125,-0.5625 -0.3125,-0.0781 -1.15625,-0.0781 l -2.375,0 0,3.32813 z m 11.90539,4.04687 q -0.625,0.53125 -1.21875,0.76563 -0.57812,0.21875 -1.25,0.21875 -1.125,0 -1.71875,-0.54688 -0.59375,-0.54687 -0.59375,-1.39062 0,-0.48438 0.21875,-0.89063 0.23438,-0.42187 0.59375,-0.67187 0.375,-0.25 0.82813,-0.375 0.32812,-0.0781 1.01562,-0.17188 1.375,-0.15625 2.03125,-0.39062 0.0156,-0.23438 0.0156,-0.29688 0,-0.70312 -0.32813,-0.98437 -0.4375,-0.39063 -1.29687,-0.39063 -0.8125,0 -1.20313,0.28125 -0.375,0.28125 -0.5625,1 l -1.10937,-0.14062 q 0.14062,-0.71875 0.48437,-1.15625 0.35938,-0.45313 1.01563,-0
 .6875 0.67187,-0.23438 1.53125,-0.23438 0.875,0 1.40625,0.20313 0.54687,0.20312 0.79687,0.51562 0.25,0.29688 0.35938,0.76563 0.0469,0.29687 0.0469,1.0625 l 0,1.51562 q 0,1.59375 0.0781,2.01563 0.0781,0.42187 0.28125,0.8125 l -1.1875,0 q -0.17188,-0.35938 -0.23438,-0.82813 z m -0.0937,-2.5625 q -0.625,0.26563 -1.85937,0.4375 -0.70313,0.10938 -1,0.23438 -0.29688,0.125 -0.45313,0.375 -0.15625,0.23437 -0.15625,0.53125 0,0.45312 0.34375,0.76562 0.34375,0.29688 1.01563,0.29688 0.65625,0 1.17187,-0.28125 0.51563,-0.29688 0.76563,-0.79688 0.17187,-0.375 0.17187,-1.14062 l 0,-0.42188 z m 3.09998,3.39063 0,-6.73438 1.03125,0 0,1.01563 q 0.39062,-0.71875 0.71875,-0.9375 0.34375,-0.23438 0.73437,-0.23438 0.57813,0 1.17188,0.35938 l -0.39063,1.0625 q -0.42187,-0.25 -0.82812,-0.25 -0.375,0 -0.6875,0.23437 -0.29688,0.21875 -0.42188,0.625 -0.1875,0.60938 -0.1875,1.32813 l 0,3.53125 -1.14062,0 z m 4.00085,-2.01563 1.125,-0.17187 q 0.0937,0.67187 0.53125,1.04687 0.4375,0.35938 1.21875,0.35938 0.78125
 ,0 1.15625,-0.3125 0.39063,-0.32813 0.39063,-0.76563 0,-0.39062 -0.34375,-0.60937 -0.23438,-0.15625 -1.17188,-0.39063 -1.25,-0.3125 -1.73437,-0.54687 -0.48438,-0.23438 -0.73438,-0.64063 -0.25,-0.40625 -0.25,-0.90625 0,-0.45312 0.20313,-0.82812 0.20312,-0.39063 0.5625,-0.64063 0.26562,-0.20312 0.71875,-0.32812 0.46875,-0.14063 1,-0.14063 0.78125,0 1.375,0.23438 0.60937,0.21875 0.89062,0.60937 0.29688,0.39063 0.40625,1.04688 l -1.125,0.15625 q -0.0781,-0.53125 -0.4375,-0.8125 -0.35937,-0.29688 -1.03125,-0.29688 -0.78125,0 -1.125,0.26563 -0.34375,0.25 -0.34375,0.60937 0,0.21875 0.14063,0.39063 0.14062,0.1875 0.4375,0.3125 0.17187,0.0625 1.01562,0.28125 1.21875,0.32812 1.6875,0.53125 0.48438,0.20312 0.75,0.60937 0.28125,0.39063 0.28125,0.96875 0,0.57813 -0.34375,1.07813 -0.32812,0.5 -0.95312,0.78125 -0.625,0.28125 -1.42188,0.28125 -1.3125,0 -2,-0.54688 -0.6875,-0.54687 -0.875,-1.625 z m 11.72656,-0.15625 1.1875,0.14063 q -0.28125,1.04687 -1.04687,1.625 -0.75,0.5625 -1.92188,0.5625 -1.48
 437,0 -2.35937,-0.90625 -0.85938,-0.92188 -0.85938,-2.5625 0,-1.70313 0.875,-2.64063 0.89063,-0.9375 2.28125,-0.9375 1.35938,0 2.20313,0.92188 0.85937,0.92187 0.85937,2.57812 0,0.10938 0,0.3125 l -5.03125,0 q 0.0625,1.10938 0.625,1.70313 0.5625,0.59375 1.40625,0.59375 0.64063,0 1.07813,-0.32813 0.45312,-0.34375 0.70312,-1.0625 z m -3.75,-1.84375 3.76563,0 q -0.0781,-0.85937 -0.4375,-1.28125 -0.54688,-0.65625 -1.40625,-0.65625 -0.79688,0 -1.32813,0.53125 -0.53125,0.51563 -0.59375,1.40625 z m 6.53748,4.01563 0,-6.73438 1.03125,0 0,1.01563 q 0.39062,-0.71875 0.71875,-0.9375 0.34375,-0.23438 0.73437,-0.23438 0.57813,0 1.17188,0.35938 l -0.39063,1.0625 q -0.42187,-0.25 -0.82812,-0.25 -0.375,0 -0.6875,0.23437 -0.29688,0.21875 -0.42188,0.625 -0.1875,0.60938 -0.1875,1.32813 l 0,3.53125 -1.14062,0 z m 3.59466,0.15625 2.70313,-9.625 0.90625,0 -2.6875,9.625 -0.92188,0 z"
+       id="path4131"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 530.31683,193.0336 3.57812,-9.3125 1.3125,0 3.8125,9.3125 -1.40625,0 -1.07812,-2.8125 -3.89063,0 -1.03125,2.8125 -1.29687,0 z m 2.6875,-3.82813 3.15625,0 -0.98438,-2.57812 q -0.4375,-1.17188 -0.65625,-1.92188 -0.17187,0.89063 -0.5,1.78125 l -1.01562,2.71875 z m 7.07727,3.82813 0,-6.73438 1.03125,0 0,0.95313 q 0.73438,-1.10938 2.14063,-1.10938 0.60937,0 1.10937,0.21875 0.51563,0.21875 0.76563,0.57813 0.26562,0.34375 0.35937,0.84375 0.0625,0.3125 0.0625,1.10937 l 0,4.14063 -1.14062,0 0,-4.09375 q 0,-0.70313 -0.14063,-1.04688 -0.125,-0.34375 -0.46875,-0.54687 -0.32812,-0.21875 -0.78125,-0.21875 -0.73437,0 -1.26562,0.46875 -0.53125,0.45312 -0.53125,1.75 l 0,3.6875 -1.14063,0 z m 11.8031,-0.82813 q -0.625,0.53125 -1.21875,0.76563 -0.57812,0.21875 -1.25,0.21875 -1.125,0 -1.71875,-0.54688 -0.59375,-0.54687 -0.59375,-1.39062 0,-0.48438 0.21875,-0.89063 0.23438,-0.42187 0.59375,-0.67187 0.375,-0.25 0.82813,-0.375 0.32812,-0.0781 1.01562,-0.17188 1.375,-0.15625 2.03125,-0.39062 0.
 0156,-0.23438 0.0156,-0.29688 0,-0.70312 -0.32813,-0.98437 -0.4375,-0.39063 -1.29687,-0.39063 -0.8125,0 -1.20313,0.28125 -0.375,0.28125 -0.5625,1 l -1.10937,-0.14062 q 0.14062,-0.71875 0.48437,-1.15625 0.35938,-0.45313 1.01563,-0.6875 0.67187,-0.23438 1.53125,-0.23438 0.875,0 1.40625,0.20313 0.54687,0.20312 0.79687,0.51562 0.25,0.29688 0.35938,0.76563 0.0469,0.29687 0.0469,1.0625 l 0,1.51562 q 0,1.59375 0.0781,2.01563 0.0781,0.42187 0.28125,0.8125 l -1.1875,0 q -0.17188,-0.35938 -0.23438,-0.82813 z m -0.0937,-2.5625 q -0.625,0.26563 -1.85937,0.4375 -0.70313,0.10938 -1,0.23438 -0.29688,0.125 -0.45313,0.375 -0.15625,0.23437 -0.15625,0.53125 0,0.45312 0.34375,0.76562 0.34375,0.29688 1.01563,0.29688 0.65625,0 1.17187,-0.28125 0.51563,-0.29688 0.76563,-0.79688 0.17187,-0.375 0.17187,-1.14062 l 0,-0.42188 z m 3.08435,3.39063 0,-9.3125 1.14063,0 0,9.3125 -1.14063,0 z m 2.94544,2.59375 -0.14063,-1.0625 q 0.375,0.0937 0.65625,0.0937 0.39063,0 0.60938,-0.125 0.23437,-0.125 0.375,-0.35938 0.10
 937,-0.17187 0.35937,-0.84375 0.0312,-0.0937 0.0937,-0.28125 l -2.5625,-6.75 1.23438,0 1.40625,3.89063 q 0.26562,0.75 0.48437,1.5625 0.20313,-0.78125 0.46875,-1.53125 l 1.45313,-3.92188 1.14062,0 -2.5625,6.84375 q -0.42187,1.10938 -0.64062,1.53125 -0.3125,0.5625 -0.70313,0.82813 -0.39062,0.26562 -0.9375,0.26562 -0.32812,0 -0.73437,-0.14062 z m 6.10156,-2.59375 0,-0.92188 4.29687,-4.9375 q -0.73437,0.0469 -1.29687,0.0469 l -2.73438,0 0,-0.92188 5.5,0 0,0.75 -3.64062,4.28125 -0.71875,0.78125 q 0.78125,-0.0625 1.45312,-0.0625 l 3.10938,0 0,0.98438 -5.96875,0 z m 11.88281,-2.17188 1.1875,0.14063 q -0.28125,1.04687 -1.04687,1.625 -0.75,0.5625 -1.92188,0.5625 -1.48437,0 -2.35937,-0.90625 -0.85938,-0.92188 -0.85938,-2.5625 0,-1.70313 0.875,-2.64063 0.89063,-0.9375 2.28125,-0.9375 1.35938,0 2.20313,0.92188 0.85937,0.92187 0.85937,2.57812 0,0.10938 0,0.3125 l -5.03125,0 q 0.0625,1.10938 0.625,1.70313 0.5625,0.59375 1.40625,0.59375 0.64063,0 1.07813,-0.32813 0.45312,-0.34375 0.70312,-1.0625 z
  m -3.75,-1.84375 3.76563,0 q -0.0781,-0.85937 -0.4375,-1.28125 -0.54688,-0.65625 -1.40625,-0.65625 -0.79688,0 -1.32813,0.53125 -0.53125,0.51563 -0.59375,1.40625 z m 6.53748,4.01563 0,-6.73438 1.03125,0 0,1.01563 q 0.39062,-0.71875 0.71875,-0.9375 0.34375,-0.23438 0.73437,-0.23438 0.57813,0 1.17188,0.35938 l -0.39063,1.0625 q -0.42187,-0.25 -0.82812,-0.25 -0.375,0 -0.6875,0.23437 -0.29688,0.21875 -0.42188,0.625 -0.1875,0.60938 -0.1875,1.32813 l 0,3.53125 -1.14062,0 z"
+       id="path4133"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 394.13516,162.28366 0,0 c 0,-5.23305 4.24222,-9.47527 9.47525,-9.47527 l 76.3251,0 c 2.513,0 4.92307,0.9983 6.70001,2.77524 1.77695,1.77696 2.77524,4.18703 2.77524,6.70003 l 0,37.89987 c 0,5.23305 -4.24222,9.47527 -9.47525,9.47527 l -76.3251,0 c -5.23303,0 -9.47525,-4.24222 -9.47525,-9.47527 z"
+       id="path4135"
+       inkscape:connector-curvature="0"
+       style="fill:#efefef;fill-rule:nonzero" />
+    <path
+       d="m 394.13516,162.28366 0,0 c 0,-5.23305 4.24222,-9.47527 9.47525,-9.47527 l 76.3251,0 c 2.513,0 4.92307,0.9983 6.70001,2.77524 1.77695,1.77696 2.77524,4.18703 2.77524,6.70003 l 0,37.89987 c 0,5.23305 -4.24222,9.47527 -9.47525,9.47527 l -76.3251,0 c -5.23303,0 -9.47525,-4.24222 -9.47525,-9.47527 z"
+       id="path4137"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 411.13965,181.07922 q 0,-2.67188 1.4375,-4.17188 1.4375,-1.51562 3.70313,-1.51562 1.5,0 2.6875,0.71875 1.1875,0.70312 1.8125,1.96875 0.64062,1.26562 0.64062,2.875 0,1.64062 -0.67187,2.9375 -0.65625,1.28125 -1.85938,1.95312 -1.20312,0.65625 -2.60937,0.65625 -1.51563,0 -2.71875,-0.73437 -1.1875,-0.73438 -1.8125,-2 -0.60938,-1.26563 -0.60938,-2.6875 z m 1.46875,0.0312 q 0,1.9375 1.04688,3.0625 1.04687,1.10937 2.625,1.10937 1.59375,0 2.625,-1.125 1.04687,-1.125 1.04687,-3.20312 0,-1.3125 -0.45312,-2.28125 -0.4375,-0.98438 -1.29688,-1.51563 -0.84375,-0.54687 -1.90625,-0.54687 -1.51562,0 -2.60937,1.04687 -1.07813,1.03125 -1.07813,3.45313 z m 10.19699,8.1875 0,-10.76563 1.20313,0 0,1.01563 q 0.42187,-0.59375 0.95312,-0.89063 0.54688,-0.29687 1.3125,-0.29687 0.98438,0 1.75,0.51562 0.76563,0.51563 1.14063,1.45313 0.39062,0.92187 0.39062,2.03125 0,1.20312 -0.42187,2.15625 -0.42188,0.95312 -1.25,1.46875 -0.8125,0.5 -1.71875,0.5 -0.65625,0 -1.1875,-0.26563 -0.51563,-0.28125 -0.84375
 ,-0.71875 l 0,3.79688 -1.32813,0 z m 1.20313,-6.82813 q 0,1.5 0.60937,2.21875 0.60938,0.71875 1.46875,0.71875 0.875,0 1.5,-0.73437 0.625,-0.75 0.625,-2.3125 0,-1.48438 -0.60937,-2.21875 -0.60938,-0.75 -1.45313,-0.75 -0.84375,0 -1.5,0.79687 -0.64062,0.78125 -0.64062,2.28125 z m 9.83859,2.67188 0.1875,1.15625 q -0.5625,0.125 -1,0.125 -0.71875,0 -1.125,-0.23438 -0.39063,-0.23437 -0.54688,-0.59375 -0.15625,-0.375 -0.15625,-1.5625 l 0,-4.46875 -0.96875,0 0,-1.03125 0.96875,0 0,-1.92187 1.3125,-0.79688 0,2.71875 1.32813,0 0,1.03125 -1.32813,0 0,4.54688 q 0,0.5625 0.0625,0.73437 0.0781,0.15625 0.23438,0.25 0.15625,0.0937 0.4375,0.0937 0.23437,0 0.59375,-0.0469 z m 1.19699,-8.04688 0,-1.51562 1.3125,0 0,1.51562 -1.3125,0 z m 0,9.21875 0,-7.78125 1.3125,0 0,7.78125 -1.3125,0 z m 3.24051,0 0,-7.78125 1.1875,0 0,1.09375 q 0.35937,-0.57812 0.96875,-0.92187 0.60937,-0.34375 1.39062,-0.34375 0.85938,0 1.40625,0.35937 0.5625,0.35938 0.78125,1 0.92188,-1.35937 2.40625,-1.35937 1.15625,0 1.78125,0.6
 4062 0.625,0.64063 0.625,1.96875 l 0,5.34375 -1.3125,0 0,-4.90625 q 0,-0.78125 -0.125,-1.125 -0.125,-0.35937 -0.46875,-0.5625 -0.34375,-0.21875 -0.79687,-0.21875 -0.8125,0 -1.35938,0.54688 -0.54687,0.54687 -0.54687,1.75 l 0,4.51562 -1.3125,0 0,-5.04687 q 0,-0.89063 -0.32813,-1.32813 -0.3125,-0.4375 -1.04687,-0.4375 -0.5625,0 -1.03125,0.29688 -0.46875,0.29687 -0.6875,0.85937 -0.20313,0.5625 -0.20313,1.625 l 0,4.03125 -1.32812,0 z m 12.2244,-9.21875 0,-1.51562 1.3125,0 0,1.51562 -1.3125,0 z m 0,9.21875 0,-7.78125 1.3125,0 0,7.78125 -1.3125,0 z m 2.55303,0 0,-1.0625 4.95313,-5.6875 q -0.84375,0.0469 -1.5,0.0469 l -3.15625,0 0,-1.07813 6.34375,0 0,0.875 -4.20313,4.9375 -0.8125,0.90625 q 0.89063,-0.0781 1.65625,-0.0781 l 3.59375,0 0,1.14062 -6.875,0 z m 13.34375,-2.5 1.35938,0.15625 q -0.3125,1.20313 -1.1875,1.85938 -0.875,0.65625 -2.23438,0.65625 -1.70312,0 -2.70312,-1.04688 -1,-1.04687 -1,-2.95312 0,-1.95313 1.01562,-3.03125 1.01563,-1.09375 2.625,-1.09375 1.5625,0 2.54688,1.0625 0.984
 37,1.0625 0.98437,2.98437 0,0.125 0,0.35938 l -5.8125,0 q 0.0781,1.28125 0.71875,1.96875 0.65625,0.67187 1.64063,0.67187 0.71875,0 1.23437,-0.375 0.51563,-0.39062 0.8125,-1.21875 z m -4.32812,-2.14062 4.34375,0 q -0.0937,-0.98438 -0.5,-1.46875 -0.625,-0.76563 -1.625,-0.76563 -0.92188,0 -1.54688,0.60938 -0.60937,0.60937 -0.67187,1.625 z m 7.13547,4.64062 0,-7.78125 1.1875,0 0,1.1875 q 0.45312,-0.82812 0.84375,-1.09375 0.39062,-0.26562 0.84375,-0.26562 0.67187,0 1.35937,0.42187 l -0.45312,1.21875 q -0.48438,-0.28125 -0.96875,-0.28125 -0.4375,0 -0.78125,0.26563 -0.34375,0.25 -0.48438,0.71875 -0.21875,0.70312 -0.21875,1.53125 l 0,4.07812 -1.32812,0 z"
+       id="path4139"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 386.19815,274.43326 0,0 c 0,-5.23303 4.24222,-9.47525 9.47528,-9.47525 l 92.1991,0 c 2.51297,0 4.92304,0.99829 6.70001,2.77524 1.77695,1.77694 2.77524,4.18701 2.77524,6.70001 l 0,37.89987 c 0,5.23306 -4.24222,9.47528 -9.47525,9.47528 l -92.1991,0 0,0 c -5.23306,0 -9.47528,-4.24222 -9.47528,-9.47528 z"
+       id="path4141"
+       inkscape:connector-curvature="0"
+       style="fill:#efefef;fill-rule:nonzero" />
+    <path
+       d="m 386.19815,274.43326 0,0 c 0,-5.23303 4.24222,-9.47525 9.47528,-9.47525 l 92.1991,0 c 2.51297,0 4.92304,0.99829 6.70001,2.77524 1.77695,1.77694 2.77524,4.18701 2.77524,6.70001 l 0,37.89987 c 0,5.23306 -4.24222,9.47528 -9.47525,9.47528 l -92.1991,0 0,0 c -5.23306,0 -9.47528,-4.24222 -9.47528,-9.47528 z"
+       id="path4143"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 405.86636,289.4632 0,-10.73438 3.70312,0 q 1.25,0 1.90625,0.15625 0.92188,0.20313 1.57813,0.76563 0.84375,0.71875 1.26562,1.84375 0.42188,1.10937 0.42188,2.54687 0,1.21875 -0.28125,2.17188 -0.28125,0.9375 -0.73438,1.5625 -0.45312,0.60937 -0.98437,0.96875 -0.53125,0.35937 -1.28125,0.54687 -0.75,0.17188 -1.71875,0.17188 l -3.875,0 z m 1.42187,-1.26563 2.29688,0 q 1.0625,0 1.65625,-0.1875 0.60937,-0.20312 0.96875,-0.5625 0.5,-0.51562 0.78125,-1.35937 0.28125,-0.85938 0.28125,-2.07813 0,-1.6875 -0.54688,-2.57812 -0.54687,-0.90625 -1.34375,-1.21875 -0.57812,-0.21875 -1.84375,-0.21875 l -2.25,0 0,8.20312 z m 9.00617,-7.95312 0,-1.51563 1.3125,0 0,1.51563 -1.3125,0 z m 0,9.21875 0,-7.78125 1.3125,0 0,7.78125 -1.3125,0 z m 2.72488,-2.32813 1.29688,-0.20312 q 0.10937,0.78125 0.60937,1.20312 0.5,0.42188 1.40625,0.42188 0.90625,0 1.34375,-0.35938 0.4375,-0.375 0.4375,-0.875 0,-0.45312 -0.39062,-0.70312 -0.26563,-0.1875 -1.34375,-0.45313 -1.45313,-0.35937 -2.01563,-0.625 -0.54687,-0
 .28125 -0.84375,-0.75 -0.28125,-0.46875 -0.28125,-1.04687 0,-0.51563 0.23438,-0.95313 0.23437,-0.45312 0.64062,-0.73437 0.3125,-0.23438 0.84375,-0.39063 0.53125,-0.15625 1.14063,-0.15625 0.90625,0 1.59375,0.26563 0.70312,0.26562 1.03125,0.71875 0.32812,0.4375 0.45312,1.20312 l -1.28125,0.17188 q -0.0937,-0.60938 -0.51562,-0.9375 -0.42188,-0.34375 -1.1875,-0.34375 -0.90625,0 -1.29688,0.3125 -0.39062,0.29687 -0.39062,0.70312 0,0.25 0.15625,0.45313 0.17187,0.21875 0.51562,0.35937 0.1875,0.0625 1.15625,0.32813 1.40625,0.375 1.95313,0.60937 0.5625,0.23438 0.875,0.70313 0.3125,0.45312 0.3125,1.125 0,0.65625 -0.39063,1.23437 -0.375,0.57813 -1.10937,0.90625 -0.71875,0.3125 -1.64063,0.3125 -1.51562,0 -2.3125,-0.625 -0.78125,-0.625 -1,-1.875 z m 7.84375,5.3125 0,-10.76562 1.20313,0 0,1.01562 q 0.42187,-0.59375 0.95312,-0.89062 0.54688,-0.29688 1.3125,-0.29688 0.98438,0 1.75,0.51563 0.76563,0.51562 1.14063,1.45312 0.39062,0.92188 0.39062,2.03125 0,1.20313 -0.42187,2.15625 -0.42188,0.95313 -1.2
 5,1.46875 -0.8125,0.5 -1.71875,0.5 -0.65625,0 -1.1875,-0.26562 -0.51563,-0.28125 -0.84375,-0.71875 l 0,3.79687 -1.32813,0 z m 1.20313,-6.82812 q 0,1.5 0.60937,2.21875 0.60938,0.71875 1.46875,0.71875 0.875,0 1.5,-0.73438 0.625,-0.75 0.625,-2.3125 0,-1.48437 -0.60937,-2.21875 -0.60938,-0.75 -1.45313,-0.75 -0.84375,0 -1.5,0.79688 -0.64062,0.78125 -0.64062,2.28125 z m 12.02612,2.89062 q -0.73437,0.60938 -1.40625,0.875 -0.67187,0.25 -1.45312,0.25 -1.28125,0 -1.96875,-0.625 -0.6875,-0.625 -0.6875,-1.59375 0,-0.57812 0.25,-1.04687 0.26562,-0.46875 0.6875,-0.75 0.42187,-0.29688 0.95312,-0.4375 0.375,-0.10938 1.17188,-0.20313 1.59375,-0.1875 2.34375,-0.45312 0.0156,-0.26563 0.0156,-0.34375 0,-0.8125 -0.375,-1.14063 -0.51562,-0.4375 -1.5,-0.4375 -0.9375,0 -1.39062,0.32813 -0.4375,0.3125 -0.64063,1.14062 l -1.29687,-0.17187 q 0.17187,-0.82813 0.57812,-1.32813 0.40625,-0.51562 1.17188,-0.78125 0.76562,-0.28125 1.76562,-0.28125 1,0 1.60938,0.23438 0.625,0.23437 0.92187,0.59375 0.29688,0.34375 0.
[... additional SVG path and style data for the diagram image omitted ...]

<TRUNCATED>


[22/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqfilespace.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqfilespace.html.md.erb b/markdown/reference/cli/admin_utilities/hawqfilespace.html.md.erb
new file mode 100644
index 0000000..eeb7b39
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqfilespace.html.md.erb
@@ -0,0 +1,147 @@
+---
+title: hawq filespace
+---
+
+Creates a filespace using a configuration file that defines a file system location. Filespaces describe the physical file system resources to be used by a tablespace.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq filespace [<connection_options>] 
+  -o <output_directory_name> | --output <output_directory_name>
+  [-l <logfile_directory> | --logdir <logfile_directory>] 
+
+hawq filespace [<connection_options>]  
+  -c <fs_config_file> | --config <fs_config_file> 
+  [-l <logfile_directory> | --logdir <logfile_directory>] 
+
+hawq filespace [<connection_options>]
+  --movefilespace <filespace> --location <dfslocation>
+  [-l <logfile_directory> | --logdir <logfile_directory>] 
+
+hawq filespace -v | --version 
+
+hawq filespace -? | --help
+```
+where:
+
+``` pre
+<connection_options> =
+  [-h <host> | --host <host>] 
+  [-p <port> | --port <port>] 
+  [-U <username> | --username <username>] 
+  [-W | --password] 
+```
+
+## <a id="topic1__section3"></a>Description
+
+A tablespace requires a file system location to store its database files. This file system location for all components in a HAWQ system is referred to as a *filespace*. Once a filespace is defined, it can be used by one or more tablespaces.
+
+The `--movefilespace` option allows you to relocate a filespace and its components within a dfs file system.
+
+When used with the `-o` option, the `hawq filespace` utility looks up your system configuration information in the system catalog tables and prompts you for the appropriate file system location needed to create the filespace. It then outputs a configuration file that can be used to create a filespace. If a file name is not specified, a `hawqfilespace_config_`*\#* file will be created in the current directory by default.
+
+Once you have a configuration file, you can run `hawq filespace` with the `-c` option to create the filespace in the HAWQ system.
+
+**Note:** If segments are down due to a power or NIC failure, you may see inconsistencies during filespace creation, and you may not be able to bring up the cluster.
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-o, -\\\-output &lt;output\_directory\_name&gt;  </dt>
+<dd>The directory location and file name to output the generated filespace configuration file. You will be prompted to enter a name for the filespace and file system location. The file system locations must exist on all hosts in your system prior to running the `hawq filespace` command. You will specify the number of replicas to create. The default is 3 replicas. After the utility creates the configuration file, you can manually edit the file to make any required changes to the filespace layout before creating the filespace in HAWQ.</dd>
+
+<dt>-c, -\\\-config &lt;fs\_config\_file&gt;  </dt>
+<dd>A configuration file containing:
+
+-   An initial line denoting the new filespace name. For example:
+
+    filespace:&lt;myfs&gt;
+</dd>
+
+<dt>-\\\-movefilespace &lt;filespace&gt;  </dt>
+<dd>Create the filespace in a new location on a distributed file system. Updates the dfs url in the HAWQ database, so that data in the original location can be moved or deleted. Data in the original location is not affected by this command.</dd>
+
+<dt>-\\\-location &lt;dfslocation&gt;  </dt>
+<dd>Specifies the new URL location to which a dfs file system should be moved.</dd>
+
+<dt>-l, -\\\-logdir &lt;logfile\_directory&gt;  </dt>
+<dd>The directory to write the log file. Defaults to `~/hawqAdminLogs`.</dd>
+
+<dt>-v, -\\\-version (show utility version)  </dt>
+<dd>Displays the version of this utility.</dd>
+
+<dt>-?, -\\\-help (help)  </dt>
+<dd>Displays the command usage and syntax.</dd>
+
+**&lt;connection_options&gt;**
+
+<dt>-h, -\\\-host &lt;hostname&gt;  </dt>
+<dd>The host name of the machine on which the HAWQ master database server is running. If not specified, reads from the environment variable `PGHOST` or defaults to localhost.</dd>
+
+<dt>-p, -\\\-port &lt;port&gt;  </dt>
+<dd>The TCP port on which the HAWQ master database server is listening for connections. If not specified, reads from the environment variable `PGPORT` or defaults to 5432.</dd>
+
+<dt>-U, -\\\-username &lt;superuser\_name&gt;  </dt>
+<dd>The database superuser role name to connect as. If not specified, reads from the environment variable `PGUSER` or defaults to the current system user name. Only database superusers are allowed to create filespaces.</dd>
+
+<dt>-W, -\\\-password  </dt>
+<dd>Force a password prompt.</dd>
+
+## <a id="topic1__section6"></a>Example 1
+
+Create a filespace configuration file. Depending on your system setup, you may need to specify the host and port. You will be prompted to enter a name for the filespace and a replica number. You will then be asked for the DFS location. The file system locations must exist on all hosts in your system prior to running the `hawq filespace` command:
+
+``` shell
+$ hawq filespace -o .
+```
+
+``` pre
+Enter a name for this filespace
+> fastdisk
+Enter replica num for filespace. If 0, default replica num is used (default=3)
+0
+Please specify the DFS location for the filespace (for example: localhost:9000/fs)
+location> localhost:9000/hawqfs
+
+20160203:11:35:42:272716 hawqfilespace:localhost:gpadmin-[INFO]:-[created]
+20160203:11:35:42:272716 hawqfilespace:localhost:gpadmin-[INFO]:-
+To add this filespace to the database please run the command:
+   hawqfilespace --config ./hawqfilespace_config_20160203_112711
+Checking your configuration: 
+
+Your system has 1 hosts with 2 primary segments 
+per host.
+
+Configuring hosts: [sdw1, sdw2] 
+
+Enter a file system location for the master:
+master location> /hawq_master_filespc
+```
+
+Example filespace configuration file:
+
+``` pre
+filespace:fastdisk
+mdw:1:/hawq_master_filespc/gp-1
+sdw1:2:/hawq_pri_filespc/gp0
+sdw2:3:/hawq_pri_filespc/gp1
+```
+
+Execute the configuration file to create the filespace:
+
+``` shell
+$ hawq filespace --config hawq_filespace_config_1
+```
+
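+After the filespace exists, it is typically consumed by a tablespace. The following is a minimal follow-up sketch, assuming the `fastdisk` filespace created above, default connection settings, and that the `pg_filespace` catalog table is available as in other Greenplum-derived catalogs:
+
+``` shell
+# Confirm that the new filespace is registered in the catalog
+$ psql -d postgres -c "SELECT fsname FROM pg_filespace;"
+
+# Create a tablespace that stores its objects in the new filespace
+$ psql -d postgres -c "CREATE TABLESPACE fastspace FILESPACE fastdisk;"
+```
+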
+## Example 2
+
+Create the filespace at `cdbfast_fs_a` and move an hdfs filesystem to it:
+
+``` shell
+$ hawq filespace --movefilespace=cdbfast_fs_a
+      --location=hdfs://gphd-cluster/cdbfast_fs_a/
+```
+
+## <a id="topic1__section7"></a>See Also
+
+[CREATE TABLESPACE](../../sql/CREATE-TABLESPACE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqinit.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqinit.html.md.erb b/markdown/reference/cli/admin_utilities/hawqinit.html.md.erb
new file mode 100644
index 0000000..de45ef3
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqinit.html.md.erb
@@ -0,0 +1,156 @@
+---
+title: hawq init
+---
+
+The `hawq init cluster` command initializes a HAWQ system and starts it.
+
+Use the `hawq init master` and `hawq init segment` commands to individually initialize the master or segment nodes, respectively. Specify any format options at this time. The `hawq init standby` command initializes a standby master host for a HAWQ system.
+
+Use the `hawq init <object> --standby-host` option to define the host for a standby at initialization.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq init <object> [--options]
+
+hawq init standby | cluster
+  [--standby-host <address_of_standby_host>] 
+  [<options>]
+
+hawq init -? | --help
+```
+where:
+
+``` pre
+<object> = cluster | master | segment | standby
+
+<options> =   
+  [-a] [-l <logfile_directory>] [-q] [-v] [-t] 
+  [-n]   
+  [--locale=<locale>] [--lc-collate=<locale>] 
+  [--lc-ctype=<locale>] [--lc-messages=<locale>] 
+  [--lc-monetary=<locale>] [--lc-numeric=<locale>] 
+  [--lc-time=<locale>] 
+  [--bucket_number <number>] 
+  [--max_connections <number>] 
+  [--shared_buffers <number>]
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq init <object>` utility creates a HAWQ instance using configuration parameters defined in `$GPHOME/etc/hawq-site.xml`. Before running this utility, verify that you have installed the HAWQ software on all the hosts in the array.
+
+In a HAWQ DBMS, each database instance (the master and all segments) must be initialized across all of the hosts in the system in a way that allows them to work together as a unified DBMS. The `hawq init cluster` utility initializes the HAWQ master and each segment instance, and configures the system as a whole. When `hawq init cluster` is run, the cluster comes online automatically without needing to explicitly start it. You can start a single-node cluster without any user-defined changes to the default `hawq-site.xml` file. For larger clusters, use the `template-hawq-site.xml` file to specify the configuration.
+
+To use the template for initializing a new cluster configuration, replace the items contained within the % markers. For example, replace `value%master.host%value` and `%master.host%` with the master host name. After modification, rename the file to the name of the default configuration file: `hawq-site.xml`.
+
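+One way to fill in the template is a simple text substitution. This is only a sketch: the template normally contains additional `%...%` placeholders that must also be replaced, and the template path shown here is an assumption:
+
+``` shell
+# Replace the master host placeholder and install the result as hawq-site.xml
+$ sed -e 's/%master.host%/mdw/g' \
+      $GPHOME/etc/template-hawq-site.xml > $GPHOME/etc/hawq-site.xml
+```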
+
+-   Before initializing HAWQ, set the `$GPHOME` environment variable to point to the location of your HAWQ installation on the master host and exchange SSH keys between all host addresses in the array, using `hawq ssh-exkeys`.
+-   To initialize and start a HAWQ cluster, enter the following command on the master host:
+
+    ```shell
+    $ hawq init cluster
+    ```
+
+This utility performs the following tasks:
+
+-   Verifies that the parameters in the configuration file are correct.
+-   Ensures that a connection can be established to each host address. If a host address cannot be reached, the utility will exit.
+-   Verifies the locale settings.
+-   Initializes the master instance.
+-   Initializes the standby master instance (if specified).
+-   Initializes the segment instances.
+-   Configures the HAWQ system and checks for errors.
+-   Starts the HAWQ system.
+
+The `hawq init standby` utility can be run on either  the currently active *primary* master host or on the standby node.
+
+`hawq init standby` performs the following steps:
+
+-   Updates the HAWQ system catalog to add the new standby master host information.
+-   Edits the `pg_hba.conf` file of the HAWQ master to allow access from the newly added standby master.
+-   Sets up the standby master instance on the alternate master host.
+-   Starts the synchronization process.
+
+A backup, or standby, master host serves as a 'warm standby' in the event that the primary master host becomes non-operational. The standby master is kept up to date by transaction log replication processes (the `walsender` and `walreceiver`), which run on the primary and standby master hosts and keep the data between them synchronized. To add a standby master to the system, use the command `hawq init standby`, for example: `hawq init standby host09`. To configure the standby host name at initialization, without needing to define it later with `hawq config`, use the `--standby-host` option. To create the standby above, you would specify `hawq init standby --standby-host=host09` or `hawq init cluster --standby-host=host09`.
+
+If the primary master fails, the log replication process is shut down. Run the `hawq activate standby` utility to activate the standby master in its place;  upon activation of the standby master, the replicated logs are used to reconstruct the state of the master host at the time of the last successfully committed transaction.
+
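+For example, after a primary master failure, activation might look like the following sketch (run on the standby master host; any additional recovery steps depend on your deployment):
+
+``` shell
+$ hawq activate standby
+```
+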
+## Objects
+
+<dt>cluster  </dt>
+<dd>Initialize a HAWQ cluster.</dd>
+
+<dt>master  </dt>
+<dd>Initialize the HAWQ master.</dd>
+
+<dt>segment  </dt>
+<dd>Initialize a local segment node.</dd>
+
+<dt>standby  </dt>
+<dd>Initialize a HAWQ standby master.</dd>
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-a, (do not prompt)  </dt>
+<dd>Do not prompt the user for confirmation.</dd>
+
+
+<dt>-l, -\\\-logdir \<logfile\_directory\>  </dt>
+<dd>The directory to write the log file. Defaults to `~/hawq/AdminLogs`.</dd>
+
+<dt>-q, -\\\-quiet (no screen output)  </dt>
+<dd>Run in quiet mode. Command output is not displayed on the screen, but is still written to the log file.</dd>
+
+<dt>-v, -\\\-verbose  </dt>
+<dd>Displays detailed status, progress and error messages and writes them to the log files.</dd>
+
+<dt>-t, -\\\-timeout  </dt>
+<dd>Sets timeout value in seconds. The default is 60 seconds.</dd>
+
+<dt>-n, -\\\-no-update  </dt>
+<dd>Resync the standby with the master, but do not update system catalog tables.</dd>
+
+<dt>-\\\-locale=\<locale\>   </dt>
+<dd>Sets the default locale used by HAWQ. If not specified, the `LC_ALL`, `LC_COLLATE`, or `LANG` environment variable of the master host determines the locale. If these are not set, the default locale is `C` (`POSIX`). A locale identifier consists of a language identifier and a region identifier, and optionally a character set encoding. For example, `sv_SE` is Swedish as spoken in Sweden, `en_US` is U.S. English, and `fr_CA` is French Canadian. If more than one character set can be useful for a locale, then the specifications look like this: `en_US.UTF-8` (locale specification and character set encoding). On most systems, the command `locale` will show the locale environment settings and `locale -a` will show a list of all available locales.</dd>
+
+<dt>-\\\-lc-collate=\<locale\>  </dt>
+<dd>Similar to `--locale`, but sets the locale used for collation (sorting data). The sort order cannot be changed after HAWQ is initialized, so it is important to choose a collation locale that is compatible with the character set encodings that you plan to use for your data. There is a special collation name of `C` or `POSIX` (byte-order sorting as opposed to dictionary-order sorting). The `C` collation can be used with any character encoding.</dd>
+
+<dt>-\\\-lc-ctype=\<locale\>  </dt>
+<dd>Similar to `--locale`, but sets the locale used for character classification (what character sequences are valid and how they are interpreted). This cannot be changed after HAWQ is initialized, so it is important to choose a character classification locale that is compatible with the data you plan to store in HAWQ.</dd>
+
+<dt>-\\\-lc-messages=\<locale\>  </dt>
+<dd>Similar to `--locale`, but sets the locale used for messages output by HAWQ. The current version of HAWQ does not support multiple locales for output messages (all messages are in English), so changing this setting will not have any effect.</dd>
+
+<dt>-\\\-lc-monetary=\<locale\>  </dt>
+<dd>Similar to `--locale`, but sets the locale used for formatting currency amounts.</dd>
+
+<dt>-\\\-lc-numeric=\<locale\>  </dt>
+<dd>Similar to `--locale`, but sets the locale used for formatting numbers.</dd>
+
+<dt>-\\\-lc-time=\<locale\>  </dt>
+<dd>Similar to `--locale`, but sets the locale used for formatting dates and times.</dd>
+
+<dt>-\\\-bucket\_number=\<number\>   </dt>
+<dd>Sets value of `default_hash_table_bucket_number`, which sets the default number of hash buckets for creating virtual segments. This parameter overrides the default value of `default_hash_table_bucket_number` set in `hawq-site.xml` by an Ambari install. If not specified, `hawq init` will use the value in `hawq-site.xml`.</dd>
+
+<dt>-\\\-max\_connections=\<number\>   </dt>
+<dd>Sets the number of client connections allowed to the master. The default is 250.</dd>
+
+<dt>-\\\-shared\_buffers \<number\>  </dt>
+<dd>Sets the number of shared\_buffers to be used when initializing HAWQ.</dd>
+
+<dt>-s, -\\\-standby-host \<name\_of\_standby\_host\>  </dt>
+<dd>Adds a standby host name to hawq-site.xml and syncs it to all the nodes. If a standby host name was already defined in hawq-site.xml, using this option will overwrite the existing value.</dd>
+
+<dt>-?, -\\\-help  </dt>
+<dd>Displays the online help.</dd>
+
+## <a id="topic1__section6"></a>Examples
+
+Initialize a HAWQ array with an optional standby master host:
+
+``` shell
+$ hawq init standby 
+```
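+
+To initialize a new cluster and define its standby master in the same step, a sketch based on the `--standby-host` option described above (the host name `host09` is a placeholder):
+
+``` shell
+$ hawq init cluster --standby-host=host09
+```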

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqload.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqload.html.md.erb b/markdown/reference/cli/admin_utilities/hawqload.html.md.erb
new file mode 100644
index 0000000..b9fe441
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqload.html.md.erb
@@ -0,0 +1,420 @@
+---
+title: hawq load
+---
+
+Acts as an interface to the external table parallel loading feature. Executes a load specification defined in a YAML-formatted control file to invoke the HAWQ parallel file server (`gpfdist`).
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq load -f <control_file> [-l <log_file>]   
+  [--gpfdist_timeout <seconds>] 
+  [[-v | -V] 
+  [-q]]
+  [-D]
+  [<connection_options>]
+
+hawq load -? 
+
+hawq load --version
+```
+where:
+
+``` pre
+<connection_options> =
+  [-h <host>] 
+  [-p <port>] 
+  [-U <username>] 
+  [-d <database>]
+  [-W]
+```
+
+## <a id="topic1__section3"></a>Prerequisites
+
+The client machine where `hawq load` is executed must have the following:
+
+-   Python 2.6.2 or later, `pygresql` (the Python interface to PostgreSQL), and `pyyaml`. Note that Python and the required Python libraries are included with the HAWQ server installation, so if you have HAWQ installed on the machine where `hawq load` is running, you do not need a separate Python installation.
+    **Note:** HAWQ Loaders for Windows supports only Python 2.5 (available from [www.python.org](http://python.org)).
+
+-   The [gpfdist](gpfdist.html#topic1) parallel file distribution program installed and in your `$PATH`. This program is located in `$GPHOME/bin` of your HAWQ server installation.
+-   Network access to and from all hosts in your HAWQ array (master and segments).
+-   Network access to and from the hosts where the data to be loaded resides (ETL servers).
+
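+A quick way to sanity-check these prerequisites on the client machine before running a load; this is only a sketch, and it assumes `$GPHOME` points at the HAWQ installation:
+
+``` shell
+$ python --version              # expect Python 2.6.2 or later
+$ which gpfdist                 # should resolve, typically to $GPHOME/bin/gpfdist
+$ python -c "import pg, yaml"   # the usual import names for pygresql and pyyaml
+```
+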
+## <a id="topic1__section4"></a>Description
+
+`hawq load` is a data loading utility that acts as an interface to HAWQ's external table parallel loading feature. Using a load specification defined in a YAML-formatted control file, `hawq load` executes a load by invoking the HAWQ parallel file server ([gpfdist](gpfdist.html#topic1)), creating an external table definition based on the source data defined, and executing an `INSERT` operation to load the source data into the target table in the database.
+
+The operation, including any SQL commands specified in the `SQL` collection of the YAML control file (see [Control File Format](#topic1__section7)), are performed as a single transaction to prevent inconsistent data when performing multiple, simultaneous load operations on a target table.
+
+## <a id="args"></a>Arguments
+
+<dt>-f &lt;control\_file&gt;  </dt>
+<dd>A YAML file that contains the load specification details. See [Control File Format](#topic1__section7).</dd>
+
+## <a id="topic1__section5"></a>Options
+
+<dt>-\\\-gpfdist\_timeout &lt;seconds&gt;  </dt>
+<dd>Sets the timeout for the `gpfdist` parallel file distribution program to send a response. Enter a value from `0` to `30` seconds (entering `0` disables timeouts). Note that you might need to increase this value when operating on high-traffic networks.</dd>
+
+<dt>-l &lt;log\_file&gt;  </dt>
+<dd>Specifies where to write the log file. Defaults to `~/hawq/Adminlogs/hawq_load_YYYYMMDD`. For more information about the log file, see [Log File Format](#topic1__section9).</dd>
+
+<dt>-q (no screen output)  </dt>
+<dd>Run in quiet mode. Command output is not displayed on the screen, but is still written to the log file.</dd>
+
+<dt>-D (debug mode)  </dt>
+<dd>Check for error conditions, but do not execute the load.</dd>
+
+<dt>-v (verbose mode)  </dt>
+<dd>Show verbose output of the load steps as they are executed.</dd>
+
+<dt>-V (very verbose mode)  </dt>
+<dd>Shows very verbose output.</dd>
+
+<dt>-? (show help)  </dt>
+<dd>Show help, then exit.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Show the version of this utility, then exit.</dd>
+
+**Connection Options**
+
+<dt>-d &lt;database&gt;  </dt>
+<dd>The database to load into. If not specified, reads from the load control file, the environment variable `$PGDATABASE` or defaults to the current system user name.</dd>
+
+<dt>-h &lt;hostname&gt;  </dt>
+<dd>Specifies the host name of the machine on which the HAWQ master database server is running. If not specified, reads from the load control file, the environment variable `$PGHOST` or defaults to `localhost`.</dd>
+
+<dt>-p &lt;port&gt;  </dt>
+<dd>Specifies the TCP port on which the HAWQ master database server is listening for connections. If not specified, reads from the load control file, the environment variable `$PGPORT` or defaults to 5432.</dd>
+
+<dt>-U &lt;username&gt;  </dt>
+<dd>The database role name to connect as. If not specified, reads from the load control file, the environment variable `$PGUSER` or defaults to the current system user name.</dd>
+
+<dt>-W (force password prompt)  </dt>
+<dd>Force a password prompt. If not specified, reads the password from the environment variable `$PGPASSWORD` or from a password file specified by `$PGPASSFILE` or in `~/.pgpass`. If these are not set, then `hawq load` will prompt for a password even if `-W` is not supplied.</dd>
+
+## <a id="topic1__section7"></a>Control File Format
+
+The `hawq load` control file uses the [YAML 1.1](http://yaml.org/spec/1.1/) document format and then implements its own schema for defining the various steps of a HAWQ load operation. The control file must be a valid YAML document.
+
+The `hawq load` program processes the control file document in order and uses indentation (spaces) to determine the document hierarchy and the relationships of the sections to one another. The use of white space is significant. White space should not be used simply for formatting purposes, and tabs should not be used at all.
+
+The basic structure of a load control file is:
+
+``` pre
+---
+VERSION: 1.0.0.1
+DATABASE: db_name
+USER: db_username
+HOST: master_hostname
+PORT: master_port
+GPLOAD:
+   INPUT:
+    - SOURCE:
+         LOCAL_HOSTNAME:
+           - hostname_or_ip
+         PORT: http_port
+       | PORT_RANGE: [start_port_range, end_port_range]
+         FILE: 
+           - /path/to/input_file
+         SSL: true | false
+         CERTIFICATES_PATH: /path/to/certificates
+    - COLUMNS:
+           - field_name: data_type
+    - TRANSFORM: 'transformation'
+    - TRANSFORM_CONFIG: 'configuration-file-path' 
+    - MAX_LINE_LENGTH: integer 
+    - FORMAT: text | csv
+    - DELIMITER: 'delimiter_character'
+    - ESCAPE: 'escape_character' | 'OFF'
+    - NULL_AS: 'null_string'
+    - FORCE_NOT_NULL: true | false
+    - QUOTE: 'csv_quote_character'
+    - HEADER: true | false
+    - ENCODING: database_encoding
+    - ERROR_LIMIT: integer
+    - ERROR_TABLE: schema.table_name
+   OUTPUT:
+    - TABLE: schema.table_name
+    - MODE: insert | update | merge
+    - MATCH_COLUMNS:
+           - target_column_name
+    - UPDATE_COLUMNS:
+           - target_column_name
+    - UPDATE_CONDITION: 'boolean_condition'
+    - MAPPING:
+           target_column_name: source_column_name | 'expression'
+   PRELOAD:
+    - TRUNCATE: true | false
+    - REUSE_TABLES: true | false
+   SQL:
+    - BEFORE: "sql_command"
+    - AFTER: "sql_command"
+```
+
+**Control File Schema Elements**  
+
+The control file contains the schema elements for:
+
+-   Version
+-   Database
+-   User
+-   Host
+-   Port
+-   GPLOAD file
+
+<dt>VERSION  </dt>
+<dd>Optional. The version of the `hawq load` control file schema, for example: 1.0.0.1.</dd>
+
+<dt>DATABASE  </dt>
+<dd>Optional. Specifies which database in HAWQ to connect to. If not specified, defaults to `$PGDATABASE` if set or the current system user name. You can also specify the database on the command line using the `-d` option.</dd>
+
+<dt>USER  </dt>
+<dd>Optional. Specifies which database role to use to connect. If not specified, defaults to the current user or `$PGUSER` if set. You can also specify the database role on the command line using the `-U` option.
+
+If the user running `hawq load` is not a HAWQ superuser, then the server configuration parameter `gp_external_grant_privileges` must be set to `on` for the load to be processed.</dd>
+
+<dt>HOST  </dt>
+<dd>Optional. Specifies HAWQ master host name. If not specified, defaults to localhost or `$PGHOST` if set. You can also specify the master host name on the command line using the `-h` option.</dd>
+
+<dt>PORT  </dt>
+<dd>Optional. Specifies HAWQ master port. If not specified, defaults to 5432 or `$PGPORT` if set. You can also specify the master port on the command line using the `-p` option.</dd>
+
+<dt>GPLOAD  </dt>
+<dd>Required. Begins the load specification section. A `GPLOAD` specification must have an `INPUT` and an `OUTPUT` section defined.</dd>
+
+<dt>INPUT  </dt>
+<dd>Required element. Defines the location and the format of the input data to be loaded. `hawq load` will start one or more instances of the [gpfdist](gpfdist.html#topic1) file distribution program on the current host and create the required external table definition(s) in HAWQ that point to the source data. Note that the host from which you run `hawq load` must be accessible over the network by all HAWQ hosts (master and segments).</dd>
+
+<dt>SOURCE  </dt>
+<dd>Required. The `SOURCE` block of an `INPUT` specification defines the location of a source file. An `INPUT` section can have more than one `SOURCE` block defined. Each `SOURCE` block defined corresponds to one instance of the [gpfdist](gpfdist.html#topic1) file distribution program that will be started on the local machine. Each `SOURCE` block defined must have a `FILE` specification.</dd>
+
+<dt>LOCAL\_HOSTNAME  </dt>
+<dd>Optional. Specifies the host name or IP address of the local machine on which `hawq load` is running. If this machine is configured with multiple network interface cards (NICs), you can specify the host name or IP of each individual NIC to allow network traffic to use all NICs simultaneously. The default is to use the local machine's primary host name or IP only.</dd>
+
+<dt>PORT  </dt>
+<dd>Optional. Specifies the specific port number that the [gpfdist](gpfdist.html#topic1) file distribution program should use. You can also supply a `PORT_RANGE` to select an available port from the specified range. If both `PORT` and `PORT_RANGE` are defined, then `PORT` takes precedence. If neither `PORT` or `PORT_RANGE` are defined, the default is to select an available port between 8000 and 9000.
+
+If multiple host names are declared in `LOCAL_HOSTNAME`, this port number is used for all hosts. This configuration is desired if you want to use all NICs to load the same file or set of files in a given directory location.</dd>
+
+<dt>PORT\_RANGE  </dt>
+<dd>Optional. Can be used instead of `PORT` to supply a range of port numbers from which `hawq load` can choose an available port for this instance of the [gpfdist](gpfdist.html#topic1) file distribution program.</dd>
+
+<dt>FILE  </dt>
+<dd>Required. Specifies the location of a file, named pipe, or directory location on the local file system that contains data to be loaded. You can declare more than one file so long as the data is of the same format in all files specified.
+
+If the files are compressed using `gzip` or `bzip2` (have a `.gz` or `.bz2` file extension), the files will be uncompressed automatically (provided that `gunzip` or `bunzip2` is in your path).
+
+When specifying which source files to load, you can use the wildcard character (`*`) or other C-style pattern matching to denote multiple files. The files specified are assumed to be relative to the current directory from which `hawq load` is executed (or you can declare an absolute path).</dd>
+
+<dt>SSL  </dt>
+<dd>Optional. Specifies usage of SSL encryption.</dd>
+
+<dt>CERTIFICATES\_PATH  </dt>
+<dd>Required when SSL is `true`; cannot be specified when SSL is `false` or unspecified. The location specified in `CERTIFICATES_PATH` must contain the following files:
+
+-   The server certificate file, `server.crt`
+-   The server private key file, `server.key`
+-   The trusted certificate authorities, `root.crt`
+
+The root directory (`/`) cannot be specified as `CERTIFICATES_PATH`.</dd>
+
+<dt>COLUMNS  </dt>
+<dd>Optional. Specifies the schema of the source data file(s) in the format of `field_name:data_type`. The `DELIMITER` character in the source file is what separates two data value fields (columns). A row is determined by a line feed character (`0x0a`).
+
+If the input `COLUMNS` are not specified, then the schema of the output `TABLE` is implied, meaning that the source data must have the same column order, number of columns, and data format as the target table.
+
+The default source-to-target mapping is based on a match of column names as defined in this section and the column names in the target `TABLE`. This default mapping can be overridden using the `MAPPING` section.</dd>
+
+<dt>TRANSFORM  </dt>
+<dd>Optional. Specifies the name of the input XML transformation passed to `hawq load`. <span class="ph">For more information about XML transformations, see [&quot;Loading and Unloading Data&quot;](../../../datamgmt/load/g-loading-and-unloading-data.html#topic1).</span></dd>
+
+<dt>TRANSFORM\_CONFIG  </dt>
+<dd>Optional. Specifies the location of the XML transformation configuration file that is specified in the `TRANSFORM` parameter, above.</dd>
+
+<dt>MAX\_LINE\_LENGTH  </dt>
+<dd>Optional. An integer that specifies the maximum length of a line in the XML transformation data passed to `hawq load`.</dd>
+
+<dt>FORMAT  </dt>
+<dd>Optional. Specifies the format of the source data file(s) - either plain text (`TEXT`) or comma-separated values (`CSV`) format. Defaults to `TEXT` if not specified.<span class="ph"> For more information about the format of the source data, see [&quot;Loading and Unloading Data&quot;](../../../datamgmt/load/g-loading-and-unloading-data.html#topic1).</span></dd>
+
+<dt>DELIMITER  </dt>
+<dd>Optional. Specifies a single ASCII character that separates columns within each row (line) of data. The default is a tab character in TEXT mode and a comma in CSV mode. You can also specify a non-printable ASCII character via a backslash escape sequence using the decimal representation of the ASCII character. For example, `\014` represents the shift-out character.</dd>
+
+<dt>ESCAPE  </dt>
+<dd>Specifies the single character that is used for C escape sequences (such as `\n`, `\t`, `\100`, and so on) and for escaping data characters that might otherwise be taken as row or column delimiters. Make sure to choose an escape character that is not used anywhere in your actual column data. The default escape character is a \\ (backslash) for text-formatted files and a `"` (double quote) for csv-formatted files; however, it is possible to specify another character to represent an escape. It is also possible to disable escaping in text-formatted files by specifying the value `'OFF'` as the escape value. This is very useful for data such as text-formatted web log data that has many embedded backslashes that are not intended to be escapes.</dd>
+
+<dt>NULL\_AS  </dt>
+<dd>Optional. Specifies the string that represents a null value. The default is `\N` (backslash-N) in `TEXT` mode, and an empty value with no quotations in `CSV` mode. You might prefer an empty string even in `TEXT` mode for cases where you do not want to distinguish nulls from empty strings. Any source data item that matches this string will be considered a null value.</dd>
+
+<dt>FORCE\_NOT\_NULL  </dt>
+<dd>Optional. In CSV mode, processes each specified column as though it were quoted and hence not a NULL value. For the default null string in CSV mode (nothing between two delimiters), this causes missing values to be evaluated as zero-length strings.</dd>
+
+<dt>QUOTE  </dt>
+<dd>Required when `FORMAT` is `CSV`. Specifies the quotation character for `CSV` mode. The default is double-quote (`"`).</dd>
+
+<dt>HEADER  </dt>
+<dd>Optional. Specifies that the first line in the data file(s) is a header row (contains the names of the columns) and should not be included as data to be loaded. If using multiple data source files, all files must have a header row. The default is to assume that the input files do not have a header row.</dd>
+
+<dt>ENCODING  </dt>
+<dd>Optional. Character set encoding of the source data. Specify a string constant (such as `'SQL_ASCII'`), an integer encoding number, or `'DEFAULT'` to use the default client encoding. If not specified, the default client encoding is used.</dd>
+
+<dt>ERROR\_LIMIT  </dt>
+<dd>Optional. Sets the error limit count for HAWQ segment instances during input processing. Error rows will be written to the table specified in `ERROR_TABLE`. The value of ERROR\_LIMIT must be 2 or greater.</dd>
+
+<dt>ERROR\_TABLE  </dt>
+<dd>Optional when `ERROR_LIMIT` is declared. Specifies an error table where rows with formatting errors will be logged when running in single row error isolation mode. You can then examine this error table to see error rows that were not loaded (if any). If the `ERROR_TABLE` specified already exists, it will be used. If it does not exist, it will be automatically generated.
+
+For more information about handling load errors, see "[Loading and Unloading Data](../../../datamgmt/load/g-loading-and-unloading-data.html#topic1)".</dd>
+
+<dt>OUTPUT   </dt>
+<dd>Required element. Defines the target table and final data column values that are to be loaded into the database.</dd>
+
+<dt>TABLE  </dt>
+<dd>Required. The name of the target table to load into.</dd>
+
+<dt>MODE  </dt>
+<dd>Optional. Defaults to `INSERT` if not specified. There are three available load modes:</dd>
+
+<dt>INSERT  </dt>
+<dd>Loads data into the target table using the following method:
+
+``` pre
+INSERT INTO target_table SELECT * FROM input_data;
+```
+</dd>
+
+<dt>UPDATE</dt>
+<dd>Updates the `UPDATE_COLUMNS` of the target table where the rows have `MATCH_COLUMNS` attribute values equal to those of the input data, and the optional `UPDATE_CONDITION` is true.</dd>
+
+<dt>MERGE</dt>
+<dd>Inserts new rows and updates the `UPDATE_COLUMNS` of existing rows where `MATCH_COLUMNS` attribute values are equal to those of the input data, and the optional `UPDATE_CONDITION` is true. New rows are identified when the `MATCH_COLUMNS` value in the source data does not have a corresponding value in the existing data of the target table. In those cases, the **entire row** from the source file is inserted, not only the `MATCH` and `UPDATE` columns. If there are multiple new `MATCH_COLUMNS` values that are the same, only one new row for that value will be inserted. Use `UPDATE_CONDITION` to filter out the rows to discard.</dd>
+
+<dt>MATCH\_COLUMNS  </dt>
+<dd>Required if `MODE` is `UPDATE` or `MERGE`. Specifies the column(s) to use as the join condition for the update. The attribute value in the specified target column(s) must be equal to that of the corresponding source data column(s) in order for the row to be updated in the target table.</dd>
+
+<dt>UPDATE\_COLUMNS  </dt>
+<dd>Required if `MODE` is `UPDATE` or `MERGE`. Specifies the column(s) to update for the rows that meet the `MATCH_COLUMNS` criteria and the optional `UPDATE_CONDITION`.</dd>
+
+<dt>UPDATE\_CONDITION  </dt>
+<dd>Optional. Specifies a Boolean condition (similar to what you would declare in a `WHERE` clause) that must be met for a row in the target table to be updated (or inserted in the case of a `MERGE`).</dd>
+
+<dt>MAPPING  </dt>
+<dd>Optional. If a mapping is specified, it overrides the default source-to-target column mapping. The default source-to-target mapping is based on a match of column names as defined in the source `COLUMNS` section and the column names of the target `TABLE`. A mapping is specified as either:
+
+`target_column_name: source_column_name`
+
+or
+
+`target_column_name: 'expression'`
+
+Where &lt;expression&gt; is any expression that you would specify in the `SELECT` list of a query, such as a constant value, a column reference, an operator invocation, a function call, and so on.</dd>
+
+<dt>PRELOAD  </dt>
+<dd>Optional. Specifies operations to run prior to the load operation. Currently, the only preload operation is `TRUNCATE`.</dd>
+
+<dt>TRUNCATE  </dt>
+<dd>Optional. If set to true, `hawq load` will remove all rows in the target table prior to loading it.</dd>
+
+<dt>REUSE\_TABLES  </dt>
+<dd>Optional. If set to true, `hawq load` will not drop the external table objects and staging table objects it creates. These objects will be reused for future load operations that use the same load specifications. Reusing objects improves performance of trickle loads (ongoing small loads to the same target table).</dd>
+
+<dt>SQL  </dt>
+<dd>Optional. Defines SQL commands to run before and/or after the load operation. Commands that contain spaces or special characters must be enclosed in quotes. You can specify multiple `BEFORE` and/or `AFTER` commands. List commands in the desired order of execution.</dd>
+
+<dt>BEFORE  </dt>
+<dd>Optional. A SQL command to run before the load operation starts. Enclose commands in quotes.</dd>
+
+<dt>AFTER  </dt>
+<dd>Optional. A SQL command to run after the load operation completes. Enclose commands in quotes.</dd>
+
+## Notes
+
+If your database object names were created using a double-quoted identifier (delimited identifier), you must specify the delimited name within single quotes in the `hawq load` control file. For example, if you create a table as follows:
+
+``` sql
+CREATE TABLE "MyTable" ("MyColumn" text);
+```
+
+Your YAML-formatted `hawq load` control file would refer to the above table and column names as follows:
+
+``` pre
+- COLUMNS:
+   - '"MyColumn"': text
+OUTPUT:
+   - TABLE: public.'"MyTable"'
+```
+
+## <a id="topic1__section9"></a>Log File Format
+
+Log files output by `hawq load` have the following format:
+
+``` pre
+timestamp|level|message
+```
+
+Where &lt;timestamp&gt; takes the form: `YYYY-MM-DD HH:MM:SS`, &lt;level&gt; is one of `DEBUG`, `LOG`, `INFO`, `ERROR`, and &lt;message&gt; is a normal text message.
+
+Some `INFO` messages that may be of interest in the log files are (where *\#* corresponds to the actual number of seconds, units of data, or failed rows):
+
+``` pre
+INFO|running time: #.## seconds
+INFO|transferred #.# kB of #.# kB.
+INFO|hawq load succeeded
+INFO|hawq load succeeded with warnings
+INFO|hawq load failed
+INFO|1 bad row
+INFO|# bad rows
+```
+
+## <a id="topic1__section10"></a>Examples
+
+Run a load job as defined in `my_load.yml`:
+
+``` shell
+$ hawq load -f my_load.yml
+```
+
+Example load control file:
+
+``` pre
+---
+VERSION: 1.0.0.1
+DATABASE: ops
+USER: gpadmin
+HOST: mdw-1
+PORT: 5432
+GPLOAD:
+   INPUT:
+    - SOURCE:
+         LOCAL_HOSTNAME:
+           - etl1-1
+           - etl1-2
+           - etl1-3
+           - etl1-4
+         PORT: 8081
+         FILE: 
+           - /var/load/data/*
+    - COLUMNS:
+           - name: text
+           - amount: float4
+           - category: text
+           - desc: text
+           - date: date
+    - FORMAT: text
+    - DELIMITER: '|'
+    - ERROR_LIMIT: 25
+    - ERROR_TABLE: payables.err_expenses
+   OUTPUT:
+    - TABLE: payables.expenses
+    - MODE: INSERT
+   SQL:
+   - BEFORE: "INSERT INTO audit VALUES('start', current_timestamp)"
+   - AFTER: "INSERT INTO audit VALUES('end', 
+current_timestamp)"
+```
+
+## <a id="topic1__section11"></a>See Also
+
+[gpfdist](gpfdist.html#topic1), [CREATE EXTERNAL TABLE](../../sql/CREATE-EXTERNAL-TABLE.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqregister.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqregister.html.md.erb b/markdown/reference/cli/admin_utilities/hawqregister.html.md.erb
new file mode 100644
index 0000000..c230d6d
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqregister.html.md.erb
@@ -0,0 +1,254 @@
+---
+title: hawq register
+---
+
+Loads and registers AO or Parquet-formatted tables in HDFS into a corresponding table in HAWQ.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+Usage 1:
+hawq register [<connection_options>] [-f <hdfsfilepath>] [-e <Eof>] <tablename>
+
+Usage 2:
+hawq register [<connection_options>] [-c <configfilepath>][-F] <tablename>
+
+Connection Options:
+     [-h | --host <hostname>] 
+     [-p | --port <port>] 
+     [-U | --user <username>] 
+     [-d | --database <database>]
+     
+Misc. Options:
+     [-f | --filepath <filepath>] 
+     [-e | --eof <eof>]
+     [-F | --force ] 
+     [-c | --config <yml_config>]  
+hawq register help | -? 
+hawq register --version
+```
+
+## <a id="topic1__section3"></a>Prerequisites
+
+The client machine where `hawq register` is executed must meet the following conditions:
+
+-   All hosts in your HAWQ cluster (master and segments) must have network access between them and the hosts containing the data to be loaded.
+-   The Hadoop client must be configured and the hdfs filepath specified.
+-   The files to be registered and the HAWQ table must be located in the same HDFS cluster.
+-   The target table DDL is configured with the correct data type mapping.
+
+## <a id="topic1__section4"></a>Description
+
+`hawq register` is a utility that loads and registers existing data files or folders in HDFS into HAWQ internal tables, allowing HAWQ to read the data directly and to use internal table processing for operations such as transactions, with high performance and without needing to load or copy the data. Data from the file or directory specified by \<hdfsfilepath\> is loaded into the appropriate HAWQ table directory in HDFS, and the utility updates the corresponding HAWQ metadata for the files. 
+
+You can use `hawq register` to:
+
+-  Load and register external Parquet-formatted file data generated by an external system such as Hive or Spark.
+-  Recover cluster data from a backup cluster.
+
+Two usage models are available.
+
+### Usage Model 1: Register file data to an existing table
+
+`hawq register [-h hostname] [-p port] [-U username] [-d databasename] [-f filepath] [-e eof] <tablename>`
+
+Metadata for the Parquet file(s) and the destination table must be consistent. HAWQ tables and Parquet files use different data types, so the data types must be mapped. Refer to the section [Data Type Mapping](hawqregister.html#topic1__section7) below. You must verify that the structure of the Parquet files and the HAWQ table are compatible before running `hawq register`. 
+
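+For example, registering a single Parquet file that an external system wrote to HDFS into an existing table might look like the following sketch (the HDFS path and table name are placeholders):
+
+``` shell
+$ hawq register -d postgres -f /hawq_data/hive_out/part-00000.parquet parquet_table
+```
+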
+#### Limitations
+
+Only HAWQ or Hive-generated Parquet tables are supported.
+Hash tables and partitioned tables are not supported in this use model.
+
+### Usage Model 2: Use information from a YAML configuration file to register data
+ 
+`hawq register [-h hostname] [-p port] [-U username] [-d databasename] [-c configfile] [--force] <tablename>`
+
+Files generated by the `hawq extract` command are registered through use of metadata in a YAML configuration file. Both AO and Parquet tables can be registered. Tables need not exist in HAWQ before being registered.
+
+The register process behaves differently depending on the following conditions: 
+
+-  Existing tables have files appended to the existing HAWQ table.
+-  If a table does not exist, it is created and registered into HAWQ. 
+-  If the -\\\-force option is used, the data in existing catalog tables is erased and re-registered.
+
+
+### Limitations for Registering Hive Tables to HAWQ
+The data types currently supported for registering Hive tables into HAWQ tables are: boolean, int, smallint, tinyint, bigint, float, double, string, binary, char, and varchar.  
+
+The following HIVE data types cannot be converted to HAWQ equivalents: timestamp, decimal, array, struct, map, and union.   
+
+Only single-level partitioned tables are supported.
+
+### Data Type Mapping<a id="topic1__section7"></a>
+
+HAWQ tables and Parquet files, as well as Hive tables and HAWQ tables, use different data types, so mapping is required for compatibility. You are responsible for making sure your data is mapped to the appropriate data types before running `hawq register`. The tables below show equivalent data types, if available.
+
+<span class="tablecap">Table 1. HAWQ to Parquet Mapping</span>
+
+|HAWQ Data Type   | Parquet Data Type  |
+| :------------| :---------------|
+| bool        | boolean       |
+| int2/int4/date        | int32       |
+| int8/money       | int64      |
+| time/timestamptz/timestamp       | int64      |
+| float4        | float       |
+|float8        | double       |
+|bit/varbit/bytea/numeric       | Byte array       |
+|char/bpchar/varchar/name| Byte array |
+| text/xml/interval/timetz  | Byte array  |
+| macaddr/inet/cidr  | Byte array  |
+
+**Additional HAWQ-to-Parquet Mapping**
+
+**point**:  
+
+``` 
+group {
+    required int x;
+    required int y;
+}
+```
+
+**circle:** 
+
+```
+group {
+    required int x;
+    required int y;
+    required int r;
+}
+```
+
+**box:**  
+
+```
+group {
+    required int x1;
+    required int y1;
+    required int x2;
+    required int y2;
+}
+```
+
+**iseg:** 
+
+
+```
+group {
+    required int x1;
+    required int y1;
+    required int x2;
+    required int y2;
+}
+``` 
+
+**path**:
+  
+```
+group {
+    repeated group {
+        required int x;
+        required int y;
+    }
+}
+```
+
+
+<span class="tablecap">Table 2. HIVE to HAWQ Mapping</span>
+
+|HIVE Data Type   | HAWQ Data Type  |
+| :------------| :---------------|
+| boolean        | bool       |
+| tinyint        | int2       |
+| smallint       | int2/smallint      |
+| int            | int4 / int |
+| bigint         | int8 / bigint      |
+| float        | float4       |
+| double	| float8 |
+| string        | varchar       |
+| binary      | bytea       |
+| char | char |
+| varchar  | varchar  |
+
+
+## <a id="topic1__section5"></a>Options
+
+**General Options**
+
+<dt>-? (show help) </dt>  
+<dd>Show help, then exit.</dd>
+
+<dt>-\\\-version  </dt> 
+<dd>Show the version of this utility, then exit.</dd>
+
+
+**Connection Options**
+
+<dt>-h , -\\\-host \<hostname\> </dt>
+<dd>Specifies the host name of the machine on which the HAWQ master database server is running. If not specified, reads from the environment variable `$PGHOST` or defaults to `localhost`.</dd>
+
+<dt> -p , -\\\-port \<port\> </dt> 
+<dd>Specifies the TCP port on which the HAWQ master database server is listening for connections. If not specified, reads from the environment variable `$PGPORT` or defaults to 5432.</dd>
+
+<dt>-U , -\\\-user \<username\> </dt> 
+<dd>The database role name to connect as. If not specified, reads from the environment variable `$PGUSER` or defaults to the current system user name.</dd>
+
+<dt>-d  , -\\\-database \<databasename\>  </dt>
+<dd>The database to register the Parquet HDFS data into. The default is `postgres`.</dd>
+
+<dt>-f , -\\\-filepath \<hdfspath\></dt>
+<dd>The path of the file or directory in HDFS containing the files to be registered.</dd>
+ 
+<dt>\<tablename\> </dt>
+<dd>The HAWQ table that will store the data to be registered. If the --config option is not supplied, the table cannot use hash distribution. Random table distribution is strongly preferred. If hash distribution must be used, make sure that the distribution policy for the data files described in the YAML file is consistent with the table being registered into.</dd>
+
+#### Miscellaneous Options
+
+The following options are used with specific use models.
+
+<dt>-e , -\\\-eof \<eof\></dt>
+<dd>Specifies the end of the file to be registered. \<eof\> represents the valid content length of the file to be used, in bytes: a value between 0 and the actual size of the file. If this option is not included, the actual file size, or the size of the files within a folder, is used. Used with Usage Model 1.</dd>
+
+<dt>-F , -\\\-force</dt>
+<dd>Used for disaster recovery of a cluster. Clears all HDFS-related catalog contents in `pg_aoseg.pg_paqseg_$relid` and re-registers files to a specified table. The HDFS files are not removed or modified. To use this option for recovery, data is assumed to be periodically imported to the cluster to be recovered. Used with Usage Model 2.</dd>
+
+<dt>-c , -\\\-config \<yml_config\> </dt> 
+<dd>Registers files specified by YAML-format configuration files into HAWQ. Used with Usage Model 2.</dd>
+
+
+## <a id="topic1__section6"></a>Example: Usage Model 2
+
+This example shows how to register files using a YAML configuration file. This file is usually generated by the `hawq extract` command. 
+
+Create a table and insert data into the table:
+
+```
+=> CREATE TABLE paq1(a int, b varchar(10)) with (appendonly=true, orientation=parquet);
+=> INSERT INTO paq1 values(generate_series(1,1000), 'abcde');
+```
+
+Extract the table's metadata.
+
+```
+hawq extract -o paq1.yml paq1
+```
+
+Use the YAML file to register the new table paq2:
+
+```
+hawq register --config paq1.yml paq2
+```
+
+Query the new table to verify that the content was registered:
+
+```
+=> SELECT count(*) FROM paq2;
+```
+The result should return 1000.
+
+## See Also
+
+[hawq extract](hawqextract.html#topic1)
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqrestart.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqrestart.html.md.erb b/markdown/reference/cli/admin_utilities/hawqrestart.html.md.erb
new file mode 100644
index 0000000..6d80e90
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqrestart.html.md.erb
@@ -0,0 +1,112 @@
+---
+title: hawq restart
+---
+
+Shuts down and then restarts a HAWQ system after shutdown is complete.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq restart <object> [-l|--logdir <logfile_directory>] [-q|--quiet] [-v|--verbose]    
+        [-M|--mode smart | fast | immediate] [-u|--reload] [-m|--masteronly] [-R|--restrict]
+        [-t|--timeout <timeout_seconds>]  [-U | --special-mode maintenance]
+        [--ignore-bad-hosts cluster | allsegments]
+     
+```
+
+``` pre
+hawq restart -? | -h | --help 
+
+hawq restart --version
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq restart` utility is used to shut down and restart the HAWQ server processes. It is essentially equivalent to performing a `hawq stop -M smart` operation followed by `hawq start`.
+
+The \<object\> in the command specifies which entity should be restarted: a cluster, a single segment, the master node, the standby node, or all segments in the cluster.
+
+When the `hawq restart` command runs, the utility uploads changes made to the master `pg_hba.conf` file or to the runtime configuration parameters in the master `hawq-site.xml` file without interruption of service. Note that any active sessions will not pick up the changes until they reconnect to the database.
+
+## Objects
+
+<dt>cluster  </dt>
+<dd>Restart a HAWQ cluster.</dd>
+
+<dt>master  </dt>
+<dd>Restart HAWQ master.</dd>
+
+<dt>segment  </dt>
+<dd>Restart a local segment node.</dd>
+
+<dt>standby  </dt>
+<dd>Restart a HAWQ standby.</dd>
+
+<dt>allsegments  </dt>
+<dd>Restart all segments.</dd>
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-a (do not prompt)  </dt>
+<dd>Do not prompt the user for confirmation.</dd>
+
+<dt>-l, -\\\-logdir \<logfile\_directory\>  </dt>
+<dd>Specifies the log directory for logs of the management tools. The default is `~/hawq/Adminlogs/`.</dd>
+
+<dt>-q, -\\\-quiet   </dt>
+<dd>Run in quiet mode. Command output is not displayed on the screen, but is still written to the log file.</dd>
+
+<dt>-v, -\\\-verbose  </dt>
+<dd>Displays detailed status, progress and error messages output by the utility.</dd>
+
+<dt>-t,  -\\\-timeout \<timeout\_seconds\>  </dt>
+<dd>Specifies a timeout in seconds to wait for a segment instance to start up. If a segment instance was shutdown abnormally (due to power failure or killing its `postgres` database listener process, for example), it may take longer to start up due to the database recovery and validation process. If not specified, the default timeout is 60 seconds.</dd>
+
+<dt>-M, -\\\-mode smart | fast | immediate  </dt>
+<dd>Smart shutdown is the default. Shutdown fails with a warning message if active connections are found.
+
+Fast shutdown interrupts and rolls back any transactions currently in progress.
+
+Immediate shutdown aborts transactions in progress and kills all `postgres` processes without allowing the database server to complete transaction processing or clean up any temporary or in-process work files. Because of this, immediate shutdown is not recommended. In some instances, it can cause database corruption that requires manual recovery.</dd>
+
+<dt>-u, -\\\-reload  </dt>
+<dd>Utility mode. This mode runs on the master only and accepts only incoming sessions that specify `gp_session_role=utility`. It allows scripts to reload the parameter values and connect, while protecting the system from normal clients that might try to connect during startup.</dd>
+
+<dt>-R, -\\\-restrict   </dt>
+<dd>Starts HAWQ in restricted mode (only database superusers are allowed to connect).</dd>
+
+<dt>-U, -\\\-special-mode maintenance   </dt>
+<dd>(Superuser only) Start HAWQ in \[maintenance | upgrade\] mode. In maintenance mode, the `gp_maintenance_conn` parameter is set.</dd>
+
+<dt>-\\\-ignore\-bad\-hosts cluster | allsegments  </dt>
+<dd>Overrides copying configuration files to a host on which SSH validation fails. If SSH connectivity to a skipped host is later reestablished, make sure the configuration files are re-synched.</dd>
+
+<dt>-? , -h , -\\\-help (help)  </dt>
+<dd>Displays the online help.</dd>
+
+<dt>-\\\-version (show utility version)  </dt>
+<dd>Displays the version of this utility.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Restart a HAWQ cluster:
+
+``` shell
+$ hawq restart cluster
+```
+
+Restart a HAWQ system in restricted mode (only allow superuser connections):
+
+``` shell
+$ hawq restart cluster -R
+```
+
+Start the HAWQ master instance only and connect in utility mode:
+
+``` shell
+$ hawq start master -m
+$ PGOPTIONS='-c gp_session_role=utility' psql
+```
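+
+Restart only the HAWQ master, allowing extra time for startup recovery (the 180-second timeout shown is illustrative; the default is 60 seconds):
+
+``` shell
+$ hawq restart master -t 180
+```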
+
+## <a id="topic1__section6"></a>See Also
+
+[hawq stop](hawqstop.html#topic1), [hawq start](hawqstart.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqscp.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqscp.html.md.erb b/markdown/reference/cli/admin_utilities/hawqscp.html.md.erb
new file mode 100644
index 0000000..77f64a8
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqscp.html.md.erb
@@ -0,0 +1,95 @@
+---
+title: hawq scp
+---
+
+Copies files between multiple hosts at once.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq scp -f <hostfile_hawqssh> | -h <hostname> [-h <hostname> ...] 
+    [--ignore-bad-hosts] [-J <character>] [-r] [-v] 
+    [[<user>@]<hostname>:]<file_to_copy> [...]
+    [[<user>@]<hostname>:]<copy_to_path>
+
+hawq scp -? 
+
+hawq scp --version
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq scp` utility allows you to copy one or more files from the specified hosts to other specified hosts in one command using SCP (secure copy). For example, you can copy a file from the HAWQ master host to all of the segment hosts at the same time.
+
+To specify the hosts involved in the SCP session, use the `-f` option to specify a file containing a list of host names, or use the `-h` option to specify individual host names on the command line. At least one host name (`-h`) or a host file (`-f`) is required. The `-J` option allows you to specify a single character to substitute for the *hostname* in the `<file_to_copy>` and `<copy_to_path>` destination strings. If `-J` is not specified, the default substitution character is an equal sign (`=`). For example, the following command will copy `.bashrc` from the local host to `/home/gpadmin` on all hosts named in `hostfile_hawqssh`:
+
+``` shell
+$ hawq scp -f hostfile_hawqssh .bashrc =:/home/gpadmin
+```
+
+If a user name is not specified in the host list or with *user*`@` in the file path, `hawq scp` will copy files as the currently logged in user. To determine the currently logged in user, invoke the `whoami` command. By default, `hawq scp` copies to `$HOME` of the session user on the remote hosts after login. To ensure the file is copied to the correct location on the remote hosts, use absolute paths.
+
+Before using `hawq scp`, you must have a trusted host setup between the hosts involved in the SCP session. You can use the utility `hawq ssh-exkeys` to update the known host files and exchange public keys between hosts if you have not done so already.
+
+## <a id="topic1__section9"></a>Arguments
+<dt>-f \<hostfile\_hawqssh\>  </dt>
+<dd>Specifies the name of a file that contains a list of hosts that will participate in this SCP session. The syntax of the host file is one host per line as follows:
+
+``` pre
+<hostname>
+```
+</dd>
+
+<dt>-h \<hostname\>  </dt>
+<dd>Specifies a single host name that will participate in this SCP session. You can use the `-h` option multiple times to specify multiple host names.</dd>
+
+<dt>\<file\_to\_copy\>  </dt>
+<dd>The name (or absolute path) of a file or directory that you want to copy to other hosts (or file locations). This can be either a file on the local host or on another named host.</dd>
+
+<dt>\<copy\_to\_path\>  </dt>
+<dd>The path where you want the file(s) to be copied on the named hosts. If an absolute path is not used, the file will be copied relative to `$HOME` of the session user. You can also use the equal sign '`=`' (or another character that you specify with the `-J` option) in place of a \<hostname\>. This will then substitute in each host name as specified in the supplied host file (`-f`) or with the `-h` option.</dd>
+
+## <a id="topic1__section4"></a>Options
+
+<dt>
+-\\\-ignore-bad-hosts 
+</dt>
+<dd>
+Overrides copying configuration files to a host on which SSH validation fails. If SSH to a skipped host is reestablished, make sure the files are re-synched once it is reachable.
+</dd>
+
+<dt>-J \<character\>  </dt>
+<dd>The `-J` option allows you to specify a single character to substitute for the \<hostname\> in the `<file_to_copy>` and `<copy_to_path>` destination strings. If `-J` is not specified, the default substitution character is an equal sign (`=`).</dd>
+
+
+<dt>-v (verbose mode)  </dt>
+<dd>Reports additional messages in addition to the SCP command output.</dd>
+
+<dt>-r (recursive mode)  </dt>
+<dd>If \<file\_to\_copy\> is a directory, copies the contents of \<file\_to\_copy\> and all subdirectories.</dd>
+
+<dt>-? (help)  </dt>
+<dd>Displays the online help.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Copy the file named `installer.tar` to `/` on all the hosts in the file `hostfile_hawqssh`.
+
+``` shell
+$ hawq scp -f hostfile_hawqssh installer.tar =:/
+```
+
+Copy the file named *myfuncs.so* to the specified location on the hosts named `sdw1` and `sdw2`:
+
+``` shell
+$ hawq scp -h sdw1 -h sdw2 myfuncs.so =:/usr/local/-db/lib
+```
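+
+Recursively copy a local directory to the same location on all hosts in the host file; the directory path shown is only an example:
+
+``` shell
+$ hawq scp -r -f hostfile_hawqssh /usr/local/hawq/etc =:/usr/local/hawq/
+```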
+
+## See Also
+
+[hawq ssh](hawqssh.html#topic1), [hawq ssh-exkeys](hawqssh-exkeys.html#topic1)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqssh-exkeys.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqssh-exkeys.html.md.erb b/markdown/reference/cli/admin_utilities/hawqssh-exkeys.html.md.erb
new file mode 100644
index 0000000..2567faf
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqssh-exkeys.html.md.erb
@@ -0,0 +1,105 @@
+---
+title: hawq ssh-exkeys
+---
+
+Exchanges SSH public keys between hosts.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq ssh-exkeys -f <hostfile_exkeys> | -h <hostname> [-h <hostname> ...] [-p <password>]
+
+hawq ssh-exkeys -e <hostfile_exkeys> -x <hostfile_hawqexpand>  [-p <password>]
+
+hawq ssh-exkeys --version
+
+hawq ssh-exkeys [-? | --help]
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq ssh-exkeys` utility exchanges SSH keys between the specified host names (or host addresses). This allows SSH connections between HAWQ hosts and network interfaces without a password prompt. The utility is used to initially prepare a HAWQ system for password-free SSH access, and also to add SSH keys for new hosts when expanding a HAWQ system.
+
+To specify the hosts involved in an initial SSH key exchange, use the `-f` option to specify a file containing a list of host names (recommended), or use the `-h` option to specify individual host names on the command line. At least one host name (`-h`) or a host file is required. Note that the local host is included in the key exchange by default.
+
+To specify new expansion hosts to be added to an existing HAWQ system, use the `-e` and `-x` options. The `-e` option specifies a file containing a list of existing hosts in the system that already have SSH keys. The `-x` option specifies a file containing a list of new hosts that need to participate in the SSH key exchange.
+
+Keys are exchanged as the currently logged in user. A good practice is performing the key exchange process twice: once as `root` and once as the `gpadmin` user (the designated owner of your HAWQ installation). The HAWQ management utilities require that the same non-root user be created on all hosts in the HAWQ system, and the utilities must be able to connect as that user to all hosts without a password prompt.
+
+The `hawq ssh-exkeys` utility performs key exchange using the following steps:
+
+-   Creates an RSA identification key pair for the current user if one does not already exist. The public key of this pair is added to the `authorized_keys` file of the current user.
+-   Updates the `known_hosts` file of the current user with the host key of each host specified using the `-h`, `-f`, `-e`, and `-x` options.
+-   Connects to each host using `ssh` and obtains the `authorized_keys`, `known_hosts`, and `id_rsa.pub` files to set up password-free access.
+-   Adds keys from the `id_rsa.pub` files obtained from each host to the `authorized_keys` file of the current user.
+-   Updates the `authorized_keys`, `known_hosts`, and `id_rsa.pub` files on all hosts with new host information (if any).
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-e \<hostfile\_exkeys\>  </dt>
+<dd>When doing a system expansion, this is the name and location of a file containing all configured host names and host addresses (interface names) for each host in your *current* HAWQ system (master, standby master and segments), one name per line without blank lines or extra spaces. Hosts specified in this file cannot be specified in the host file used with `-x`.</dd>
+
+<dt>-f \<hostfile\_exkeys\>  </dt>
+<dd>Specifies the name and location of a file containing all configured host names and host addresses (interface names) for each host in your HAWQ system (master, standby master and segments), one name per line without blank lines or extra spaces.</dd>
+
+<dt>-h \<hostname\>  </dt>
+<dd>Specifies a single host name (or host address) that will participate in the SSH key exchange. You can use the `-h` option multiple times to specify multiple host names and host addresses.</dd>
+
+<dt>-p \<password\>  </dt>
+<dd>Specifies the password used to log in to the hosts. The hosts should share the same password. This option is useful when invoking `hawq ssh-exkeys` in a script.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility.</dd>
+
+<dt>-x \<hostfile\_hawqexpand\>  </dt>
+<dd>When doing a system expansion, this is the name and location of a file containing all configured host names and host addresses (interface names) for each new segment host you are adding to your HAWQ system, one name per line without blank lines or extra spaces. Hosts specified in this file cannot be specified in the host file used with `-e`.</dd>
+
+<dt>-?, --help (help)  </dt>
+<dd>Displays the online help.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Exchange SSH keys between all host names and addresses listed in the file `hostfile_exkeys`:
+
+``` shell
+$ hawq ssh-exkeys -f hostfile_exkeys
+```
+
+Exchange SSH keys between the hosts `sdw1`, `sdw2`, and `sdw3`:
+
+``` shell
+$ hawq ssh-exkeys -h sdw1 -h sdw2 -h sdw3
+```
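+
+When invoking the utility from a script, supply the shared password with `-p` (the password shown is a placeholder):
+
+``` shell
+$ hawq ssh-exkeys -f hostfile_exkeys -p changeme
+```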
+
+Exchange SSH keys between existing hosts `sdw1`, `sdw2`, and `sdw3`, and new hosts `sdw4` and `sdw5` as part of a system expansion operation:
+
+``` shell
+$ cat hostfile_exkeys
+mdw
+mdw-1
+mdw-2
+smdw
+smdw-1
+smdw-2
+sdw1
+sdw1-1
+sdw1-2
+sdw2
+sdw2-1
+sdw2-2
+sdw3
+sdw3-1
+sdw3-2
+$ cat hostfile_hawqexpand
+sdw4
+sdw4-1
+sdw4-2
+sdw5
+sdw5-1
+sdw5-2
+$ hawq ssh-exkeys -e hostfile_exkeys -x hostfile_hawqexpand
+```
+
+## <a id="topic1__section6"></a>See Also
+
+[hawq ssh](hawqssh.html#topic1), [hawq scp](hawqscp.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqssh.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqssh.html.md.erb b/markdown/reference/cli/admin_utilities/hawqssh.html.md.erb
new file mode 100644
index 0000000..ee31308
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqssh.html.md.erb
@@ -0,0 +1,105 @@
+---
+title: hawq ssh
+---
+
+Provides SSH access to multiple hosts at once.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq ssh -f <hostfile_hawqssh> | -h <hostname> [-h <hostname> ...]
+    [-e]
+    [-u <username>]
+    [-v]
+    [<bash_command>]
+
+hawq ssh [-? | --help]
+
+hawq ssh --version
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq ssh` utility allows you to run bash shell commands on multiple hosts at once using SSH (secure shell). You can execute a single command by specifying it on the command-line, or omit the command to enter into an interactive command-line session.
+
+To specify the hosts involved in the SSH session, use the `-f` option to specify a file containing a list of host names, or use the `-h` option to specify individual host names on the command line. At least one host name (`-h`) or a host file (`-f`) is required. Note that the current host is ***not*** included in the session by default; to include the local host, you must explicitly declare it in the list of hosts involved in the session.
+
+Before using `hawq ssh`, you must have a trusted host setup between the hosts involved in the SSH session. You can use the utility `hawq ssh-exkeys` to update the known host files and exchange public keys between hosts if you have not done so already.
+
+If you do not specify a command on the command-line, `hawq ssh` will go into interactive mode. At the `hawq ssh` command prompt (`=>`), you can enter a command as you would in a regular bash terminal command-line, and the command will be executed on all hosts involved in the session. To end an interactive session, press `CTRL`+`D` on the keyboard or type `exit` or `quit`.
+
+If a user name is not specified in the host file or via the `-u` option, `hawq ssh` will execute commands as the currently logged in user. To determine the currently logged in user, invoke the `whoami` command. By default, `hawq ssh` goes to `$HOME` of the session user on the remote hosts after login. To ensure commands are executed correctly on all remote hosts, you should always enter absolute paths.
+
+## <a id="args"></a>Arguments
+<dt>-f \<hostfile\_hawqssh\>  </dt>
+<dd>Specifies the name of a file that contains a list of hosts that will participate in this SSH session. The host name is required, and you can optionally specify an alternate user name and/or SSH port number per host. The syntax of the host file is one host per line as follows:
+
+``` pre
+[username@]hostname[:ssh_port]
+```
+</dd>
+
+<dt>-h \<hostname\>  </dt>
+<dd>Specifies a single host name that will participate in this SSH session. You can use the `-h` option multiple times to specify multiple host names.</dd>
+
+
+## <a id="topic1__section4"></a>Options
+
+<dt>\<bash\_command\>   </dt>
+<dd>A bash shell command to execute on all hosts involved in this session (optionally enclosed in quotes). If not specified, `hawq ssh` will start an interactive session.</dd>
+
+<dt>-e (echo)  </dt>
+<dd>Optional. Echoes the commands passed to each host and their resulting output while running in non-interactive mode.</dd>
+
+<dt>-u \<username\>  </dt>
+<dd>Specifies the userid for the SSH session.</dd>
+
+<dt>-v (verbose mode)  </dt>
+<dd>Reports additional messages in addition to the command output when running in non-interactive mode.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility.</dd>
+
+<dt>-?, -\\\-help </dt>
+<dd>Displays the online help.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Start an interactive group SSH session with all hosts listed in the file `hostfile_hawqssh`:
+
+``` shell
+$ hawq ssh -f hostfile_hawqssh
+```
+
+At the `hawq ssh` interactive command prompt, run a shell command on all the hosts involved in this session.
+
+``` pre
+=> ls -a /data/path-to-masterdd/*
+```
+
+Exit an interactive session:
+
+``` pre
+=> exit
+=> quit
+```
+
+Start a non-interactive group SSH session with the hosts named `sdw1` and `sdw2` and pass a file containing several commands named `command_file` to `hawq ssh`:
+
+``` shell
+$ hawq ssh -h sdw1 -h sdw2 -v -e < command_file
+```
+
+Execute single commands in non-interactive mode on hosts `sdw2` and `localhost`:
+
+``` shell
+$ hawq ssh -h sdw2 -h localhost -v -e 'ls -a /data/primary/*'
+$ hawq ssh -h sdw2 -h localhost -v -e 'echo $GPHOME'
+$ hawq ssh -h sdw2 -h localhost -v -e 'ls -1 | wc -l'
+```
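+
+A host file can also carry an alternate user name and SSH port per host, using the `[username@]hostname[:ssh_port]` syntax described above; the entries below are illustrative only:
+
+``` shell
+$ cat hostfile_hawqssh
+gpadmin@sdw1
+gpadmin@sdw2:2222
+sdw3
+```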
+
+## See Also
+
+[hawq ssh-exkeys](hawqssh-exkeys.html#topic1), [hawq scp](hawqscp.html#topic1)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqstart.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqstart.html.md.erb b/markdown/reference/cli/admin_utilities/hawqstart.html.md.erb
new file mode 100644
index 0000000..ff7b427
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqstart.html.md.erb
@@ -0,0 +1,119 @@
+---
+title: hawq start
+---
+
+Starts a HAWQ system.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq start <object> [-l| --logdir <logfile_directory>] [-q| --quiet] 
+        [-v|--verbose] [-m|--masteronly]  [-t|--timeout <timeout_seconds>] 
+        [-R | --restrict] [-U | --special-mode maintenance]
+        [--ignore-bad-hosts cluster | allsegments]
+     
+```
+
+``` pre
+hawq start -? | -h | --help 
+
+hawq start --version
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq start` utility is used to start the HAWQ server processes. When you start a HAWQ system, you are actually starting several `postgres` database server listener processes at once (the master and all of the segment instances). The `hawq start` utility handles the startup of the individual instances. Each instance is started in parallel.
+
+The *object* in the command specifies which entity should be started: a cluster, a single segment, the master node, the standby node, or all segments in the cluster.
+
+The first time an administrator runs `hawq start cluster`, the utility creates a static hosts cache file named `$GPHOME/etc/slaves` to store the segment host names. Subsequently, the utility uses this list of hosts to start the system more efficiently. The utility will create a new hosts cache file at each startup.
+
+The `hawq start master` command starts only the HAWQ master, without segment or standby nodes. These can be started later, using `hawq start segment` and/or `hawq start standby`.
+
+**Note:** Typically you should always use `hawq start cluster` or `hawq restart cluster` to start the cluster. If you do end up using `hawq start standby|master|segment` to start nodes individually, make sure you always start the standby before the active master. Otherwise, the standby can become unsynchronized with the active master.
+
+Before you can start a HAWQ system, you must have initialized the system or node by using `hawq init <object>` first.
+
+## Objects
+
+<dt>cluster  </dt>
+<dd>Start a HAWQ cluster.</dd>
+
+<dt>master  </dt>
+<dd>Start HAWQ master.</dd>
+
+<dt>segment  </dt>
+<dd>Start a local segment node.</dd>
+
+<dt>standby  </dt>
+<dd>Start a HAWQ standby.</dd>
+
+<dt>allsegments  </dt>
+<dd>Start all segments.</dd>
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-l , -\\\-logdir \<logfile\_directory\>  </dt>
+<dd>Specifies the log directory for logs of the management tools. The default is `~/hawq/Adminlogs/`.</dd>
+
+<dt>-q , -\\\-quiet   </dt>
+<dd>Run in quiet mode. Command output is not displayed on the screen, but is still written to the log file.</dd>
+
+<dt>-v , -\\\-verbose  </dt>
+<dd>Displays detailed status, progress and error messages output by the utility.</dd>
+
+<dt>-m , -\\\-masteronly  </dt>
+<dd>Optional. Starts the HAWQ master instance only, in utility mode, which may be useful for maintenance tasks. This mode only allows connections to the master in utility mode. For example:
+
+``` shell
+$ PGOPTIONS='-c gp_role=utility' psql
+```
+</dd>
+
+<dt>-R , -\\\-restrict (restricted mode)  </dt>
+<dd>Starts HAWQ in restricted mode (only database superusers are allowed to connect).</dd>
+
+<dt>-t , -\\\-timeout \<timeout\_seconds\>  </dt>
+<dd>Specifies a timeout in seconds to wait for a segment instance to start up. If a segment instance was shutdown abnormally (due to power failure or killing its `postgres` database listener process, for example), it may take longer to start up due to the database recovery and validation process. If not specified, the default timeout is 60 seconds.</dd>
+
+<dt>-U , -\\\-special-mode maintenance   </dt>
+<dd>(Superuser only) Start HAWQ in \[maintenance | upgrade\] mode. In maintenance mode, the `gp_maintenance_conn` parameter is set.</dd>
+
+<dt>-\\\-ignore-bad-hosts cluster | allsegments  </dt>
+<dd>Overrides copying configuration files to a host on which SSH validation fails. If SSH connectivity to a skipped host is later reestablished, make sure the configuration files are re-synched.</dd>
+
+<dt>-? , -h , -\\\-help (help)  </dt>
+<dd>Displays the online help.</dd>
+
+<dt>-\\\-version (show utility version)  </dt>
+<dd>Displays the version of this utility.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Start a HAWQ system:
+
+``` shell
+$ hawq start cluster
+```
+
+Start a HAWQ master in maintenance mode:
+
+``` shell
+$ hawq start master -m
+```
+
+Start a HAWQ system in restricted mode (only allow superuser connections):
+
+``` shell
+$ hawq start cluster -R
+```
+
+Start the HAWQ master instance only and connect in utility mode:
+
+``` shell
+$ hawq start master -m
+$ PGOPTIONS='-c gp_session_role=utility' psql
+```
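+
+When starting nodes individually, start the standby before the active master, as noted in the description above:
+
+``` shell
+$ hawq start standby
+$ hawq start master
+```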
+
+## <a id="topic1__section6"></a>See Also
+
+[hawq stop](hawqstop.html#topic1), [hawq init](hawqinit.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqstate.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqstate.html.md.erb b/markdown/reference/cli/admin_utilities/hawqstate.html.md.erb
new file mode 100644
index 0000000..3927442
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqstate.html.md.erb
@@ -0,0 +1,65 @@
+---
+title: hawq state
+---
+
+Shows the status of a running HAWQ system.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq state 
+     [-b]
+     [-l <logfile_directory> | --logdir <logfile_directory>]
+     [(-v | --verbose) | (-q | --quiet)]  
+     
+hawq state [-h | --help]
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq state` utility displays information about a running HAWQ instance. A HAWQ system comprises multiple PostgreSQL database instances (segments) spanning multiple machines, and the `hawq state` utility can provide additional status information, such as:
+
+-   Total segment count.
+-   Which segments are down.
+-   Master and segment configuration information (hosts, data directories, etc.).
+-   The ports used by the system.
+-   Whether a standby master is present, and if it is active.
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-b (brief status)  </dt>
+<dd>Display a brief summary of the state of the HAWQ system. This is the default mode.</dd>
+
+<dt>-l, -\\\-logdir \<logfile\_directory\>  </dt>
+<dd>Specifies the directory to check for logfiles. The default is `$GPHOME/hawqAdminLogs`. 
+
+Log files within the directory are named according to the command being invoked, for example: `hawq_config_<log_id>.log`, `hawq_state_<log_id>.log`, and so on.</dd>
+
+<dt>-q, -\\\-quiet  </dt>
+<dd>Run in quiet mode. Except for warning messages, command output is not displayed on the screen. However, this information is still written to the log file.</dd>
+
+<dt>-v, -\\\-verbose  </dt>
+<dd>Displays error messages and outputs detailed status and progress information.</dd>
+
+<dt>-h, -\\\-help (help)  </dt>
+<dd>Displays the online help.</dd>
+
+## <a id="topic1__section6"></a>Examples
+
+Show brief status information of a HAWQ system:
+
+``` shell
+$ hawq state -b
+```
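+
+Show detailed status, progress, and error messages:
+
+``` shell
+$ hawq state -v
+```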
+
+Write log files to the directory `TodaysLogs` instead of the default `hawqAdminLogs`:
+
+```shell
+$ hawq state -l TodaysLogs
+$ ls TodaysLogs
+hawq_config_20160707.log  hawq_init_20160707.log   master.initdb
+```
+
+## <a id="topic1__section7"></a>See Also
+
+[hawq start](hawqstart.html#topic1), [gplogfilter](gplogfilter.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqstop.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqstop.html.md.erb b/markdown/reference/cli/admin_utilities/hawqstop.html.md.erb
new file mode 100644
index 0000000..dd54156
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqstop.html.md.erb
@@ -0,0 +1,104 @@
+---
+title: hawq stop
+---
+
+Stops or restarts a HAWQ system.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq stop <object> [-a | --prompt]
+       [-M (smart|fast|immediate) | --mode (smart|fast|immediate)]   
+       [-t <timeout_seconds> | --timeout <timeout_seconds>]  
+       [-l <logfile_directory> | --logdir <logfile_directory>]
+       [(-v | --verbose) | (-q | --quiet)]
+
+hawq stop [-? | -h | --help]
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq stop` utility is used to stop the database servers that comprise a HAWQ system. When you stop a HAWQ system, you are actually stopping several `postgres` database server processes at once (the master and all of the segment instances). The `hawq stop` utility handles the shutdown of the individual instances. Each instance is shut down in parallel.
+
+By default, you are not allowed to shut down HAWQ if there are any client connections to the database. Use the `-M fast` option to roll back all in-progress transactions and terminate any connections before shutting down. If there are any transactions in progress, the default behavior is to wait for them to commit before shutting down.
+
+With the `-u` option, the utility uploads changes made to the master `pg_hba.conf` file or to *runtime* configuration parameters in the master `hawq-site.xml` file without interruption of service. Note that any active sessions will not pick up the changes until they reconnect to the database.
+If the HAWQ cluster has active connections, use the command `hawq stop cluster -u -M fast` to ensure that changes to the parameters are reloaded.  
+
+## Objects
+
+<dt>cluster  </dt>
+<dd>Stop a HAWQ cluster.</dd>
+
+<dt>master  </dt>
+<dd>Stop a HAWQ master instance that was started in maintenance mode.</dd>
+
+<dt>segment  </dt>
+<dd>Stop a local segment node.</dd>
+
+<dt>standby  </dt>
+<dd>Stop the HAWQ standby master process.</dd>
+
+<dt>allsegments  </dt>
+<dd>Stop all segments.</dd>
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-a, -\\\-prompt  </dt>
+<dd>Do not prompt the user for confirmation before executing.</dd>
+
+<dt>-l, -\\\-logdir \<logfile\_directory\>  </dt>
+<dd>The directory to write the log file. The default is `~/hawq/Adminlogs/`.</dd>
+
+<dt>-M, -\\\-mode (smart | fast | immediate)  </dt>
+<dd>Smart shutdown is the default. Shutdown fails with a warning message if active connections are found.
+
+Fast shutdown interrupts and rolls back any transactions currently in progress.
+
+Immediate shutdown aborts transactions in progress and kills all `postgres` processes without allowing the database server to complete transaction processing or clean up any temporary or in-process work files. Because of this, immediate shutdown is not recommended. In some instances, it can cause database corruption that requires manual recovery.</dd>
+
+<dt>-q, -\\\-quiet  </dt>
+<dd>Run in quiet mode. Command output is not displayed on the screen, but is still written to the log file.</dd>
+
+<dt>-t, -\\\-timeout \<timeout\_seconds\>  </dt>
+<dd>Specifies a timeout threshold (in seconds) to wait for a segment instance to shut down. If a segment instance does not shut down in the specified number of seconds, `hawq stop` displays a message indicating that one or more segments are still in the process of shutting down and that you cannot restart HAWQ until the segment instance(s) are stopped. This option is useful in situations where `hawq stop` is executed and there are very large transactions that need to roll back. These large transactions can take over a minute to roll back and surpass the default timeout period of 600 seconds.</dd>
+
+<dt>-u, -\\\-reload   </dt>
+<dd>This option reloads configuration parameter values without restarting the HAWQ cluster.</dd>
+
+<dt>-v, -\\\-verbose  </dt>
+<dd>Displays detailed status, progress and error messages output by the utility.</dd>
+
+<dt>-?, -h, -\\\-help (help) </dt>
+<dd>Displays the online help.</dd>
+
+
+## <a id="topic1__section5"></a>Examples
+
+Stop a HAWQ system in smart mode:
+
+``` shell
+$ hawq stop cluster -M smart
+```
+
+Stop a HAWQ system in fast mode:
+
+``` shell
+$ hawq stop cluster -M fast
+```
+
+Stop a master instance that was started in maintenance mode:
+
+``` shell
+$ hawq stop master -m
+```
+
+Reload the `hawq-site.xml` and `pg_hba.conf` files after making configuration changes, but do not shut down the HAWQ cluster:
+
+``` shell
+$ hawq stop cluster -u
+```
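+
+Stop a HAWQ system in fast mode, allowing extra time for large transactions to roll back (the 900-second value shown is illustrative; the default timeout is 600 seconds):
+
+``` shell
+$ hawq stop cluster -M fast -t 900
+```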
+
+## <a id="topic1__section6"></a>See Also
+
+[hawq start](hawqstart.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/client_utilities/createdb.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/client_utilities/createdb.html.md.erb b/markdown/reference/cli/client_utilities/createdb.html.md.erb
new file mode 100644
index 0000000..31b0c80
--- /dev/null
+++ b/markdown/reference/cli/client_utilities/createdb.html.md.erb
@@ -0,0 +1,105 @@
+---
+title: createdb
+---
+
+Creates a new database.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+
+createdb [<connection_options>] [<database_options>] [-e | --echo] [<dbname> ['<description>']]
+
+createdb --help 
+
+createdb --version
+
+```
+where:
+
+``` pre
+<connection_options> =
+	[-h <host> | --host <host>] 
+	[-p <port> | --port <port>] 
+	[-U <username> | --username <username>] 
+    [-W | --password] 
+         
+<database_options> =
+    [-D <tablespace> | --tablespace <tablespace>]
+    [-E <encoding> | --encoding <encoding>]
+    [-O <username> | --owner <username>] 
+    [-T <template>| --template <template>] 
+```
+
+## <a id="topic1__section3"></a>Description
+
+`createdb` creates a new database in a HAWQ system.
+
+Normally, the database user who executes this command becomes the owner of the new database. However, a different owner can be specified via the `-O` option if the executing user has appropriate privileges.
+
+`createdb` is a wrapper around the SQL command `CREATE DATABASE`.
+
+## <a id="topic1__section4"></a>Options
+
+<dt>**\<dbname\>**</dt>
+<dd>The name of the database to be created. The name must be unique among all other databases in the HAWQ system. If not specified, reads from the environment variable `PGDATABASE`, then `PGUSER` or defaults to the current system user.</dd>
+
+<dt>\<description\></dt>
+<dd>Optional comment to be associated with the newly created database. Descriptions containing white space must be enclosed in quotes.</dd>
+
+<dt>-e, -\\\-echo  </dt>
+<dd>Echo the commands that `createdb` generates and sends to the server.</dd>
+
+**\<database_options\>**
+
+<dt>-D, -\\\-tablespace \<tablespace\>  </dt>
+<dd>The default tablespace for the database.</dd>
+
+<dt>-E, -\\\-encoding \<encoding\> </dt>
+<dd>Character set encoding to use in the new database. Specify a string constant (such as `'UTF8'`), an integer encoding number, or `DEFAULT` to use the default encoding.</dd>
+
+<dt>-O, -\\\-owner \<username\>  </dt>
+<dd>The name of the database user who will own the new database. Defaults to the user executing this command.</dd>
+
+<dt>-T, -\\\-template \<template\>  </dt>
+<dd>The name of the template from which to create the new database. Defaults to `template1`.</dd>
+
+**\<connection_options\>**
+ 
+<dt>-h, -\\\-host \<hostname\>  </dt>
+<dd>The host name of the machine on which the HAWQ master database server is running. If not specified, reads from the environment variable `PGHOST` or defaults to localhost.</dd>
+
+<dt>-p, -\\\-port \<port\>  </dt>
+<dd>The TCP port on which the HAWQ master database server is listening for connections. If not specified, reads from the environment variable `PGPORT` or defaults to 5432.</dd>
+
+<dt>-U, -\\\-username \<username\>  </dt>
+<dd>The database role name to connect as. If not specified, reads from the environment variable `PGUSER` or defaults to the current system role name.</dd>
+
+<dt>-w, -\\\-no-password  </dt>
+<dd>Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a `.pgpass` file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.</dd>
+
+<dt>-W, -\\\-password  </dt>
+<dd>Force a password prompt.</dd>
+
+
+**Other Options**
+
+<dt>-\\\-help  </dt>
+<dd>Displays the online help.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility.</dd>
+
+## <a id="topic1__section6"></a>Examples
+
+To create the database `testdb` using the default options:
+
+``` shell
+$ createdb testdb
+```
+
+To create the database `demo` using the HAWQ master on host `gpmaster`, port `54321`, using the `LATIN1` encoding scheme:
+
+``` shell
+$ createdb -p 54321 -h gpmaster -E LATIN1 demo
+```
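+
+To create a database owned by another role and attach a descriptive comment (the database name, owner, and comment shown are examples only):
+
+``` shell
+$ createdb -O gpadmin analytics 'analytics sandbox database'
+```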



[24/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_partition_columns.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_partition_columns.html.md.erb b/markdown/reference/catalog/pg_partition_columns.html.md.erb
new file mode 100644
index 0000000..2205a24
--- /dev/null
+++ b/markdown/reference/catalog/pg_partition_columns.html.md.erb
@@ -0,0 +1,20 @@
+---
+title: pg_partition_columns
+---
+
+The `pg_partition_columns` system view is used to show the partition key columns of a partitioned table.
+
+<a id="topic1__ha179967"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_partition\_columns</span>
+
+| column                      | type     | references | description                                                                                                                          |
+|-----------------------------|----------|------------|--------------------------------------------------------------------------------------------------------------------------------------|
+| `schemaname`                | name     | �          | The name of the schema the partitioned table is in.                                                                                  |
+| `tablename`                 | name     | �          | The table name of the top-level parent table.                                                                                        |
+| `columnname`                | name     | �          | The name of the partition key column.                                                                                                |
+| `partitionlevel`            | smallint | �          | The level of this subpartition in the hierarchy.                                                                                     |
+| `position_in_partition_key` | integer  | �          | For list partitions you can have a composite (multi-column) partition key. This shows the position of the column in a composite key. |
+
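+A quick way to inspect the view is to filter on the parent table name; `sales` here is a hypothetical partitioned table:
+
+```
+=> SELECT columnname, partitionlevel, position_in_partition_key
+   FROM pg_partition_columns
+   WHERE tablename = 'sales';
+```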
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_partition_encoding.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_partition_encoding.html.md.erb b/markdown/reference/catalog/pg_partition_encoding.html.md.erb
new file mode 100644
index 0000000..e1dbabb
--- /dev/null
+++ b/markdown/reference/catalog/pg_partition_encoding.html.md.erb
@@ -0,0 +1,18 @@
+---
+title: pg_partition_encoding
+---
+
+The `pg_partition_encoding` system catalog table describes the available column compression options for a partition template.
+
+<a id="topic1__hb177831"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_partition\_encoding</span>
+
+| column             | type       | modifiers | storage  | description |
+|--------------------|------------|----------|----------|-------------|
+| `parencoid`        | oid        | not null | plain    | �           |
+| `parencattnum`     | smallint   | not null | plain    | �           |
+| `parencattoptions` | text \[ \] | �        | extended | �           |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_partition_rule.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_partition_rule.html.md.erb b/markdown/reference/catalog/pg_partition_rule.html.md.erb
new file mode 100644
index 0000000..9648132
--- /dev/null
+++ b/markdown/reference/catalog/pg_partition_rule.html.md.erb
@@ -0,0 +1,28 @@
+---
+title: pg_partition_rule
+---
+
+The `pg_partition_rule` system catalog table is used to track partitioned tables, their check constraints, and data containment rules. Each row of `pg_partition_rule` represents either a leaf partition (a bottom-level partition that contains data) or a branch partition (a top- or mid-level partition that is used to define the partition hierarchy but does not contain any data).
+
+<a id="topic1__hc179425"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_partition\_rule</span>
+
+
+| column              | type     | references                 | description                                                                                                                                                                                                                                                                                                                                                                  |
+|---------------------|----------|----------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `paroid`            | oid      | pg\_partition.oid          | Row identifier of the partitioning level (from [pg\_partition](pg_partition.html#topic1)) to which this partition belongs. In the case of a branch partition, the corresponding table (identified by `pg_partition_rule`) is an empty container table. In case of a leaf partition, the table contains the rows for that partition containment rule. |
+| `parchildrelid`     | oid      | pg\_class.oid              | The table identifier of the partition (child table).                                                                                                                                                                                                                                                                                                                         |
+| `parparentrule`     | oid      | pg\_partition\_rule.paroid | The row identifier of the rule associated with the parent table of this partition.                                                                                                                                                                                                                                                                                           |
+| `parname`           | name     | �                          | The given name of this partition.                                                                                                                                                                                                                                                                                                                                            |
+| `parisdefault`      | boolean  | �                          | Whether or not this partition is a default partition.                                                                                                                                                                                                                                                                                                                        |
+| `parruleord`        | smallint | �                          | For range partitioned tables, the rank of this partition on this level of the partition hierarchy.                                                                                                                                                                                                                                                                           |
+| `parrangestartincl` | boolean  | �                          | For range partitioned tables, whether or not the starting value is inclusive.                                                                                                                                                                                                                                                                                                |
+| `parrangeendincl`   | boolean  | �                          | For range partitioned tables, whether or not the ending value is inclusive.                                                                                                                                                                                                                                                                                                  |
+| `parrangestart`     | text     | �                          | For range partitioned tables, the starting value of the range.                                                                                                                                                                                                                                                                                                               |
+| `parrangeend`       | text     | �                          | For range partitioned tables, the ending value of the range.                                                                                                                                                                                                                                                                                                                 |
+| `parrangeevery`     | text     | �                          | For range partitioned tables, the interval value of the `EVERY` clause.                                                                                                                                                                                                                                                                                                      |
+| `parlistvalues`     | text     | �                          | For list partitioned tables, the list of values assigned to this partition.                                                                                                                                                                                                                                                                                                  |
+| `parreloptions`     | text     | �                          | An array describing the storage characteristics of the particular partition.                                                                                                                                                                                                                                                                                                 |
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_partition_templates.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_partition_templates.html.md.erb b/markdown/reference/catalog/pg_partition_templates.html.md.erb
new file mode 100644
index 0000000..ff397fb
--- /dev/null
+++ b/markdown/reference/catalog/pg_partition_templates.html.md.erb
@@ -0,0 +1,30 @@
+---
+title: pg_partition_templates
+---
+
+The `pg_partition_templates` system view is used to show the subpartitions that were created using a subpartition template.
+
+<a id="topic1__hd179967"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_partition\_templates</span>
+
+
+| column                    | type     | references | description                                                                                                                                                                                                      |
+|---------------------------|----------|------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `schemaname`              | name     | �          | The name of the schema the partitioned table is in.                                                                                                                                                              |
+| `tablename`               | name     | �          | The table name of the top-level parent table.                                                                                                                                                                    |
+| `partitionname`           | name     | �          | The name of the subpartition (this is the name to use if referring to the partition in an `ALTER TABLE` command). `NULL` if the partition was not given a name at create time or generated by an `EVERY` clause. |
+| `partitiontype`           | text     | �          | The type of subpartition (range or list).                                                                                                                                                                        |
+| `partitionlevel`          | smallint | �          | The level of this subpartition in the hierarchy.                                                                                                                                                                 |
+| `partitionrank`           | bigint   | �          | For range partitions, the rank of the partition compared to other partitions of the same level.                                                                                                                  |
+| `partitionposition`       | smallint | �          | The rule order position of this subpartition.                                                                                                                                                                    |
+| `partitionlistvalues`     | text     | �          | For list partitions, the list value(s) associated with this subpartition.                                                                                                                                        |
+| `partitionrangestart`     | text     | �          | For range partitions, the start value of this subpartition.                                                                                                                                                      |
+| `partitionstartinclusive` | boolean  | �          | `T` if the start value is included in this subpartition. `F` if it is excluded.                                                                                                                                  |
+| `partitionrangeend`       | text     | �          | For range partitions, the end value of this subpartition.                                                                                                                                                        |
+| `partitionendinclusive`   | boolean  | �          | `T` if the end value is included in this subpartition. `F` if it is excluded.                                                                                                                                    |
+| `partitioneveryclause`    | text     | �          | The `EVERY` clause (interval) of this subpartition.                                                                                                                                                              |
+| `partitionisdefault`      | boolean  | �          | `T` if this is a default subpartition, otherwise `F`.                                                                                                                                                            |
+| `partitionboundary`       | text     | �          | The entire partition specification for this subpartition.                                                                                                                                                        |
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_partitions.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_partitions.html.md.erb b/markdown/reference/catalog/pg_partitions.html.md.erb
new file mode 100644
index 0000000..2c0b26a
--- /dev/null
+++ b/markdown/reference/catalog/pg_partitions.html.md.erb
@@ -0,0 +1,30 @@
+---
+title: pg_partitions
+---
+
+The `pg_partitions` system view is used to show the structure of a partitioned table.
+
+<a id="topic1__he143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_partitions</span>
+
+| column                     | type     | references | description                                                                                                                                                                                                   |
+|----------------------------|----------|------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `schemaname`               | name     | �          | The name of the schema the partitioned table is in.                                                                                                                                                           |
+| `tablename`                | name     | �          | The name of the top-level parent table.                                                                                                                                                                       |
+| `partitiontablename`       | name     | �          | The relation name of the partitioned table (this is the table name to use if accessing the partition directly).                                                                                               |
+| `partitionname`            | name     | �          | The name of the partition (this is the name to use if referring to the partition in an `ALTER TABLE` command). `NULL` if the partition was not given a name at create time or generated by an `EVERY` clause. |
+| `parentpartitiontablename` | name     | �          | The relation name of the parent table one level up from this partition.                                                                                                                                       |
+| `parentpartitionname`      | name     | �          | The given name of the parent table one level up from this partition.                                                                                                                                          |
+| `partitiontype`            | text     | �          | The type of partition (range or list).                                                                                                                                                                        |
+| `partitionlevel`           | smallint | �          | The level of this partition in the hierarchy.                                                                                                                                                                 |
+| `partitionrank`            | bigint   | �          | For range partitions, the rank of the partition compared to other partitions of the same level.                                                                                                               |
+| `partitionposition`        | smallint | �          | The rule order position of this partition.                                                                                                                                                                    |
+| `partitionlistvalues`      | text     | �          | For list partitions, the list value(s) associated with this partition.                                                                                                                                        |
+| `partitionrangestart`      | text     | �          | For range partitions, the start value of this partition.                                                                                                                                                      |
+| `partitionstartinclusive`  | boolean  | �          | `T` if the start value is included in this partition. `F` if it is excluded.                                                                                                                                  |
+| `partitionrangeend`        | text     | �          | For range partitions, the end value of this partition.                                                                                                                                                        |
+| `partitionendinclusive`    | boolean  | �          | `T` if the end value is included in this partition. `F` if it is excluded.                                                                                                                                    |
+| `partitioneveryclause`     | text     | �          | The `EVERY` clause (interval) of this partition.                                                                                                                                                              |
+| `partitionisdefault`       | boolean  | �          | `T` if this is a default partition, otherwise `F`.                                                                                                                                                            |
+| `partitionboundary`        | text     | �          | The entire partition specification for this partition.                                                                                                                                                        |
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_pltemplate.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_pltemplate.html.md.erb b/markdown/reference/catalog/pg_pltemplate.html.md.erb
new file mode 100644
index 0000000..0aee00a
--- /dev/null
+++ b/markdown/reference/catalog/pg_pltemplate.html.md.erb
@@ -0,0 +1,22 @@
+---
+title: pg_pltemplate
+---
+
+The `pg_pltemplate` system catalog table stores template information for procedural languages. A template for a language allows the language to be created in a particular database by a simple `CREATE LANGUAGE` command, with no need to specify implementation details. Unlike most system catalogs, `pg_pltemplate` is shared across all databases of a HAWQ system: there is only one copy of `pg_pltemplate` per system, not one per database. This allows the information to be accessible in each database as it is needed.
+
+No commands currently manipulate procedural language templates; to change the built-in information, a superuser must modify the table using ordinary `INSERT`, `DELETE`, or `UPDATE` commands.
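+
+As a sketch, the first statement below lists the installed templates; the second shows the kind of ordinary `UPDATE` a superuser could issue, assuming a `plperl` entry exists and that `$libdir/plperl` is the intended library path:
+
+```sql
+-- List the procedural language templates known to this system.
+SELECT tmplname, tmpltrusted, tmplhandler, tmpllibrary
+FROM   pg_pltemplate;
+
+-- Point an existing template at a different shared library (illustrative only).
+UPDATE pg_pltemplate
+SET    tmpllibrary = '$libdir/plperl'
+WHERE  tmplname = 'plperl';
+```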
+
+<a id="topic1__hf150092"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_pltemplate</span>
+
+| column           | type        | references | description                                           |
+|------------------|-------------|------------|-------------------------------------------------------|
+| `tmplname`       | name        | �          | Name of the language this template is for             |
+| `tmpltrusted`    | boolean     | �          | True if language is considered trusted                |
+| `tmplhandler`    | text        | �          | Name of call handler function                         |
+| `tmplvalidator`  | text        | �          | Name of validator function, or `NULL` if none           |
+| `tmpllibrary`    | text        | �          | Path of shared library that implements language       |
+| `tmplacl`        | aclitem\[\] | �          | Access privileges for template (not yet implemented). |
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_proc.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_proc.html.md.erb b/markdown/reference/catalog/pg_proc.html.md.erb
new file mode 100644
index 0000000..4d1d194
--- /dev/null
+++ b/markdown/reference/catalog/pg_proc.html.md.erb
@@ -0,0 +1,36 @@
+---
+title: pg_proc
+---
+
+The `pg_proc` system catalog table stores information about functions (or procedures), both built-in functions and those defined by `CREATE FUNCTION`. The table contains data for aggregate and window functions as well as plain functions. If `proisagg` is true, there should be a matching row in `pg_aggregate`. If `proiswin` is true, there should be a matching row in `pg_window`.
+
+For compiled functions, both built-in and dynamically loaded, `prosrc` contains the function's C-language name (link symbol). For all other currently-known language types, `prosrc` contains the function's source text. `probin` is unused except for dynamically-loaded C functions, for which it gives the name of the shared library file containing the function.
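+
+As an illustration of how these columns relate, the following query (a minimal sketch) lists aggregate functions together with their implementation language and return type:
+
+```sql
+-- Aggregates have proisagg = true and a matching row in pg_aggregate.
+SELECT p.proname,
+       l.lanname AS language,
+       t.typname AS return_type,
+       p.pronargs
+FROM   pg_proc p
+       JOIN pg_language l ON l.oid = p.prolang
+       JOIN pg_type     t ON t.oid = p.prorettype
+WHERE  p.proisagg
+ORDER  BY p.proname;
+```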
+
+<a id="topic1__hg150092"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_proc</span>
+
+| column           | type        | references        | description                                                                                                                                                                                                                                                                                                                                        |
+|------------------|-------------|-------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `proname`        | name        | �                 | Name of the function.                                                                                                                                                                                                                                                                                                                              |
+| `pronamespace`   | oid         | pg\_namespace.oid | The OID of the namespace that contains this function.                                                                                                                                                                                                                                                                                              |
+| `proowner`       | oid         | pg\_authid.oid    | Owner of the function.                                                                                                                                                                                                                                                                                                                             |
+| `prolang`        | oid         | pg\_language.oid  | Implementation language or call interface of this function.                                                                                                                                                                                                                                                                                        |
+| `proisagg`       | boolean     | �                 | Function is an aggregate function.                                                                                                                                                                                                                                                                                                                 |
+| `prosecdef`      | boolean     | �                 | Function is a security definer (for example, a 'setuid' function).                                                                                                                                                                                                                                                                                 |
+| `proisstrict`    | boolean     | �                 | Function returns NULL if any call argument is NULL. In that case the function will not actually be called at all. Functions that are not strict must be prepared to handle NULL inputs.                                                                                                                                                            |
+| `proretset`      | boolean     | �                 | Function returns a set (multiple values of the specified data type).                                                                                                                                                                                                                                                                               |
+| `provolatile`    | char        | �                 | Tells whether the function's result depends only on its input arguments, or is affected by outside factors. `i` = *immutable* (always delivers the same result for the same inputs), `s` = *stable* (results for fixed inputs do not change within a scan), or `v` = *volatile* (results may change at any time, or the function has side effects). |
+| `pronargs`       | smallint    | �                 | Number of arguments.                                                                                                                                                                                                                                                                                                                               |
+| `prorettype`     | oid         | pg\_type.oid      | Data type of the return value.                                                                                                                                                                                                                                                                                                                     |
+| `proiswin`       | boolean     | �                 | Function is neither an aggregate nor a scalar function, but a pure window function.                                                                                                                                                                                                                                                                |
+| `proargtypes`    | oidvector   | pg\_type.oid      | An array with the data types of the function arguments. This includes only input arguments (including `INOUT` arguments), and thus represents the call signature of the function.                                                                                                                                                                  |
+| `proallargtypes` | oid\[\]     | pg\_type.oid      | An array with the data types of the function arguments. This includes all arguments (including `OUT` and `INOUT` arguments); however, if all the arguments are `IN` arguments, this field will be null. Note that subscripting is 1-based, whereas for historical reasons proargtypes is subscripted from 0.                                       |
+| `proargmodes`    | char\[\]    | �                 | An array with the modes of the function arguments: `i` = `IN`, `o` = `OUT` , `b` = `INOUT`. If all the arguments are IN arguments, this field will be null. Note that subscripts correspond to positions of proallargtypes not proargtypes.                                                                                                        |
+| `proargnames`    | text\[\]    | �                 | An array with the names of the function arguments. Arguments without a name are set to empty strings in the array. If none of the arguments have a name, this field will be null. Note that subscripts correspond to positions of proallargtypes not proargtypes.                                                                                  |
+| `prosrc`         | text        | �                 | This tells the function handler how to invoke the function. It might be the actual source code of the function for interpreted languages, a link symbol, a file name, or just about anything else, depending on the implementation language/call convention.                                                                                       |
+| `probin`         | bytea       | �                 | Additional information about how to invoke the function. Again, the interpretation is language-specific.                                                                                                                                                                                                                                           |
+| `proacl`         | aclitem\[\] | �                 | Access privileges for the function as given by `GRANT`/`REVOKE`.                                                                                                                                                                                                                                                                                   |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_resqueue.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_resqueue.html.md.erb b/markdown/reference/catalog/pg_resqueue.html.md.erb
new file mode 100644
index 0000000..0b8d414
--- /dev/null
+++ b/markdown/reference/catalog/pg_resqueue.html.md.erb
@@ -0,0 +1,30 @@
+---
+title: pg_resqueue
+---
+
+The `pg_resqueue` system catalog table contains information about HAWQ resource queues, which are used for managing resources. This table is populated only on the master. This table is defined in the `pg_global` tablespace, meaning it is globally shared across all databases in the system.
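+
+For example, a query such as the following (a minimal sketch) shows the limits configured for each resource queue:
+
+```sql
+-- Summarize the configured limits of all resource queues.
+SELECT rsqname, activestats, memorylimit, corelimit,
+       resovercommit, vsegresourcequota, status
+FROM   pg_resqueue
+ORDER  BY rsqname;
+```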
+
+<a id="topic1__hi141982"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_resqueue</span>
+
+| column                  | type                     | references | description                                                                                                                                                                              |
+|-------------------------|--------------------------|------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `rsqname`               | name                     | �          | The name of the resource queue.                                                                                                                                                          |
+| `parentoid`             | oid                      | �          | OID of the parent queue of the resource queue.                                                                                                                                           |
+| `activestats`           | integer                  | �          | The maximum number of parallel active statements allowed for the resource queue.                                                                                                         |
+| `memorylimit`           | text                     | �          | The maximum amount of memory that can be consumed by the resource queue (expressed as a percentage of the cluster's memory). |
+| `corelimit`             | text                     | �          | The maximum number of cores that can be consumed by the resource queue (expressed as a percentage of the cluster's cores). |
+| `resovercommit`         | real                     | �          | The ratio of resource consumption overcommit for the resource queue.                                                                                                                     |
+| `allocpolicy`           | text                     | �          | The resource allocation policy name for the resource queue.                                                                                                                              |
+| `vsegresourcequota`     | text                     | �          | The virtual segment resource quota for the resource queue.                                                                                                                               |
+| `nvsegupperlimit`       | integer                  | �          | The upper limit on the number of virtual segments allowed for one statement execution. |
+| `nvseglowerlimit`       | integer                  | �          | The lower limit on the number of virtual segments allowed for one statement execution. |
+| `nvsegupperlimitperseg` | real                     | �          | The upper limit on the number of virtual segments allowed for one statement execution. The limit is averaged over the number of segments in the cluster. |
+| `nvseglowerlimitperseg` | real                     | �          | The lower limit on the number of virtual segments allowed for one statement execution. The limit is averaged over the number of segments in the cluster. |
+| `creationtime`          | timestamp with time zone | �          | Time when the resource queue was created.                                                                                                                                                |
+| `updatetime`            | timestamp with time zone | �          | Time when the resource queue was last changed.                                                                                                                                           |
+| `status`                | text                     | �          | Current status of the resource queue. Possible values are `branch`, which indicates a branch resource queue (it has children), and `NULL`, which indicates a leaf-level queue (no children). |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_resqueue_status.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_resqueue_status.html.md.erb b/markdown/reference/catalog/pg_resqueue_status.html.md.erb
new file mode 100644
index 0000000..7c841c2
--- /dev/null
+++ b/markdown/reference/catalog/pg_resqueue_status.html.md.erb
@@ -0,0 +1,94 @@
+---
+title: pg_resqueue_status
+---
+
+The `pg_resqueue_status` view allows administrators to see status and activity for a workload management resource queue. It shows how many queries are waiting to run and how many queries are currently active in the system from a particular resource queue.
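+
+For example, the following query (a minimal sketch) shows the current load on each queue, including running holders and queued waiters:
+
+```sql
+-- Check queue activity: resource holders, waiters, and pause status.
+SELECT rsqname, segmem, segcore, segsize,
+       rsqholders, resqwaiters, paused
+FROM   pg_resqueue_status;
+```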
+
+<a id="topic1__fp141982"></a>
+<span class="tablecap">Table 1. pg\_resqueue\_status</span>
+
+<table>
+<colgroup>
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>column</th>
+<th>type</th>
+<th>references</th>
+<th>description</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><code class="ph codeph">rsqname</code></td>
+<td>name</td>
+<td>pg_resqueue.rsqname</td>
+<td>The name of the resource queue.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">segmem</code></td>
+<td>text</td>
+<td>�</td>
+<td>The calculated virtual segment memory resource quota.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">segcore</code></td>
+<td>text</td>
+<td>�</td>
+<td>The calculated virtual segment core resource quota.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">segsize</code></td>
+<td>text</td>
+<td>�</td>
+<td>The number of virtual segments that can be allocated to the resource queue.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">segsizemax</code></td>
+<td>text</td>
+<td>�</td>
+<td>The maximum number of virtual segments that can be allocated to the resource queue.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">inusemem</code></td>
+<td>text</td>
+<td>�</td>
+<td>Aggregated in-use memory by running statements.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">inusecore</code></td>
+<td>text</td>
+<td>�</td>
+<td>Aggregated in-use core by running statements.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">rsqholders</code></td>
+<td>text</td>
+<td>�</td>
+<td>The number of resource holders for running statements. A resource holder is a running statement whose resources allocated from the resource manager have not yet been returned. In other words, the statement still holds resources allocated from the resource manager.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">resqwaiters</code></td>
+<td>text</td>
+<td>�</td>
+<td>The number of resource requests that are queued and waiting for the resource.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">paused</code></td>
+<td>text</td>
+<td>�</td>
+<td>The dynamic pause status of the resource queue. There are three possible statuses:
+<ul>
+<li><code class="ph codeph">T</code> : Queue is paused for the allocation of resources to queued and incoming requests.</li>
+<li><code class="ph codeph">F</code> : Queue is in a normal working status.</li>
+<li><code class="ph codeph">R</code> : Queue is paused and may have encountered resource fragmentation.</li>
+</ul></td>
+</tr>
+</tbody>
+</table>
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_rewrite.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_rewrite.html.md.erb b/markdown/reference/catalog/pg_rewrite.html.md.erb
new file mode 100644
index 0000000..9b2a76b
--- /dev/null
+++ b/markdown/reference/catalog/pg_rewrite.html.md.erb
@@ -0,0 +1,20 @@
+---
+title: pg_rewrite
+---
+
+The `pg_rewrite` system catalog table stores rewrite rules for tables and views. `pg_class.relhasrules` must be true if a table has any rules in this catalog.
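+
+For example, the following query (a minimal sketch) lists the rewrite rules in the current database together with the relation each rule is attached to:
+
+```sql
+-- List rules and the relations they rewrite.
+SELECT c.relname, r.rulename, r.ev_type, r.is_instead
+FROM   pg_rewrite r
+       JOIN pg_class c ON c.oid = r.ev_class
+ORDER  BY c.relname, r.rulename;
+```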
+
+<a id="topic1__hm149830"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_rewrite</span>
+
+| column       | type     | references    | description                                                                                             |
+|--------------|----------|---------------|---------------------------------------------------------------------------------------------------------|
+| `rulename`   | name     | �             | Rule name.                                                                                              |
+| `ev_class`   | oid      | pg\_class.oid | The table this rule is for.                                                                             |
+| `ev_attr`    | smallint | �             | The column this rule is for (currently, always zero to indicate the whole table).                       |
+| `ev_type`    | char     | �             | Event type that the rule is for: <ul><li>1 = `SELECT`</li> <li>2 = `UPDATE`</li> <li>3 = `INSERT`</li> <li>4 = `DELETE`</li> </ul>                       |
+| `is_instead` | boolean  | �             | True if the rule is an `INSTEAD` rule.                                                                    |
+| `ev_qual`    | text     | �             | Expression tree (in the form of a `nodeToString()` representation) for the rule's qualifying condition. |
+| `ev_action`  | text     | �             | Query tree (in the form of a `nodeToString()` representation) for the rule's action.                    |
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_roles.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_roles.html.md.erb b/markdown/reference/catalog/pg_roles.html.md.erb
new file mode 100644
index 0000000..9e70f46
--- /dev/null
+++ b/markdown/reference/catalog/pg_roles.html.md.erb
@@ -0,0 +1,31 @@
+---
+title: pg_roles
+---
+
+The view `pg_roles` provides access to information about database roles. This is simply a publicly readable view of [pg\_authid](pg_authid.html#topic1) that blanks out the password field. This view explicitly exposes the OID column of the underlying table, since that is needed to do joins to other catalogs.
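+
+Because the OID columns are exposed, the view joins directly to other catalogs; for example, the following sketch lists each role with its assigned resource queue:
+
+```sql
+-- Roles and their resource queues (rolresqueue references pg_resqueue.oid).
+SELECT r.rolname, r.rolsuper, r.rolcanlogin, q.rsqname
+FROM   pg_roles r
+       LEFT JOIN pg_resqueue q ON q.oid = r.rolresqueue
+ORDER  BY r.rolname;
+```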
+
+<a id="topic1__hn141982"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_roles</span>
+
+| column              | type                     | references       | description                                                                                                         |
+|---------------------|--------------------------|------------------|---------------------------------------------------------------------------------------------------------------------|
+| `rolname`           | name                     | �                | Role name                                                                                                           |
+| `rolsuper`          | boolean                  | �                | Role has superuser privileges                                                                                       |
+| `rolinherit`        | boolean                  | �                | Role automatically inherits privileges of roles it is a member of                                                   |
+| `rolcreaterole`     | boolean                  | �                | Role may create more roles                                                                                          |
+| `rolcreatedb`       | boolean                  | �                | Role may create databases                                                                                           |
+| `rolcatupdate`      | boolean                  | �                | Role may update system catalogs directly. (Even a superuser may not do this unless this column is true.)            |
+| `rolcanlogin`       | boolean                  | �                | Role may log in. That is, this role can be given as the initial session authorization identifier                    |
+| `rolconnlimit`      | integer                  | �                | For roles that can log in, this sets the maximum number of concurrent connections this role can make. -1 means no limit |
+| `rolpassword`       | text                     | �                | Not the password (always reads as \*\*\*\*\*\*\*\*)                                                                 |
+| `rolvaliduntil`     | timestamp with time zone | �                | Password expiry time (only used for password authentication); NULL if no expiration                                 |
+| `rolconfig`         | text\[\]                 | �                | Session defaults for run-time configuration variables                                                               |
+| `rolresqueue`       | oid                      | pg\_resqueue.oid | Object ID of the resource queue this role is assigned to.                                                           |
+| `oid`               | oid                      | pg\_authid.oid   | Object ID of role                                                                                                   |
+| `rolcreaterextgpfd` | boolean                  | �                | Role may create readable external tables that use the gpfdist protocol.                                             |
+| `rolcreaterexthttp` | boolean                  | �                | Role may create readable external tables that use the http protocol.                                                |
+| `rolcreatewextgpfd` | boolean                  | �                | Role may create writable external tables that use the gpfdist protocol.                                             |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_shdepend.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_shdepend.html.md.erb b/markdown/reference/catalog/pg_shdepend.html.md.erb
new file mode 100644
index 0000000..b966155
--- /dev/null
+++ b/markdown/reference/catalog/pg_shdepend.html.md.erb
@@ -0,0 +1,28 @@
+---
+title: pg_shdepend
+---
+
+The `pg_shdepend` system catalog table records the dependency relationships between database objects and shared objects, such as roles. This information allows HAWQ to ensure that those objects are unreferenced before attempting to delete them. See also [pg\_depend](pg_depend.html#topic1), which performs a similar function for dependencies involving objects within a single database. Unlike most system catalogs, `pg_shdepend` is shared across all databases of a HAWQ system: there is only one copy of `pg_shdepend` per system, not one per database.
+
+In all cases, a `pg_shdepend` entry indicates that the referenced object may not be dropped without also dropping the dependent object. However, there are several subflavors identified by `deptype`:
+
+-   **SHARED\_DEPENDENCY\_OWNER (o)**: The referenced object (which must be a role) is the owner of the dependent object.
+-   **SHARED\_DEPENDENCY\_ACL (a)**: The referenced object (which must be a role) is mentioned in the ACL (access control list) of the dependent object.
+-   **SHARED\_DEPENDENCY\_PIN (p)**: There is no dependent object; this type of entry is a signal that the system itself depends on the referenced object, and so that object must never be deleted. Entries of this type are created only by system initialization. The columns for the dependent object contain zeroes.
+
+<a id="topic1__ho143898"></a>
+
+<span class="tablecap">Table 1. pg\_catalog.pg\_shdepend</span>
+
+| column         | type    | references       | description                                                                                                |
+|----------------|---------|------------------|------------------------------------------------------------------------------------------------------------|
+| `dbid`         | oid     | pg\_database.oid | The OID of the database the dependent object is in, or zero for a shared object.                           |
+| `classid`      | oid     | pg\_class.oid    | The OID of the system catalog the dependent object is in.                                                  |
+| `objid`        | oid     | any OID column   | The OID of the specific dependent object.                                                                  |
+| `objsubid`     | integer | �                | For a table column, this is the column number. For all other object types, this column is zero.            |
+| `refclassid`   | oid     | pg\_class.oid    | The OID of the system catalog the referenced object is in (must be a shared catalog).                      |
+| `refobjid`     | oid     | any OID column   | The OID of the specific referenced object.                                                                 |
+| `refobjsubid`  | integer | �                | For a table column, this is the referenced column number. For all other object types, this column is zero. |
+| `deptype`      | char    | �                | A code defining the specific semantics of this dependency relationship.                                    |
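+
+For example, the following query (a minimal sketch) lists ownership and ACL dependencies recorded for each role by joining the referenced OID back to [pg\_authid](pg_authid.html#topic1):
+
+```sql
+-- Dependent objects whose referenced shared object is a role:
+-- ownership ('o') or ACL ('a') dependencies.
+SELECT r.rolname, d.deptype, d.dbid, d.classid, d.objid
+FROM   pg_shdepend d
+       JOIN pg_authid r ON r.oid = d.refobjid
+WHERE  d.deptype IN ('o', 'a')
+ORDER  BY r.rolname;
+```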
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_shdescription.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_shdescription.html.md.erb b/markdown/reference/catalog/pg_shdescription.html.md.erb
new file mode 100644
index 0000000..133e326
--- /dev/null
+++ b/markdown/reference/catalog/pg_shdescription.html.md.erb
@@ -0,0 +1,18 @@
+---
+title: pg_shdescription
+---
+
+The `pg_shdescription` system catalog table stores optional descriptions (comments) for shared database objects. Descriptions can be manipulated with the `COMMENT` command and viewed with `psql`'s `\d` meta-commands. See also [pg\_description](pg_description.html#topic1), which performs a similar function for descriptions involving objects within a single database. Unlike most system catalogs, `pg_shdescription` is shared across all databases of a HAWQ system: there is only one copy of `pg_shdescription` per system, not one per database.
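+
+As a sketch, the statements below add a comment to a shared object (here, the `template1` database, used only as an example) and then read it back through this catalog:
+
+```sql
+-- Comment on a shared object, then retrieve the stored description.
+COMMENT ON DATABASE template1 IS 'Default template database';
+
+SELECT d.datname, s.description
+FROM   pg_shdescription s
+       JOIN pg_database d ON d.oid = s.objoid
+WHERE  s.classoid = 'pg_database'::regclass;
+```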
+
+<a id="topic1__hp143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_shdescription</span>
+
+
+| column        | type | references     | description                                                   |
+|---------------|------|----------------|---------------------------------------------------------------|
+| `objoid`      | oid  | any OID column | The OID of the object this description pertains to.           |
+| `classoid`    | oid  | pg\_class.oid  | The OID of the system catalog this object appears in          |
+| `description` | text | �              | Arbitrary text that serves as the description of this object. |
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_stat_activity.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_stat_activity.html.md.erb b/markdown/reference/catalog/pg_stat_activity.html.md.erb
new file mode 100644
index 0000000..008ae8b
--- /dev/null
+++ b/markdown/reference/catalog/pg_stat_activity.html.md.erb
@@ -0,0 +1,30 @@
+---
+title: pg_stat_activity
+---
+
+The view `pg_stat_activity` shows one row per server process, with details about its associated user session and query. The columns that report data on the current query are available unless the parameter `stats_command_string` has been turned off. Furthermore, these columns are only visible if the user examining the view is a superuser or the same as the user owning the process being reported on.
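+
+For example, the following query (a minimal sketch) shows sessions that are currently blocked on a lock or waiting for resource allocation:
+
+```sql
+-- Sessions waiting on locks or on the resource manager.
+SELECT procpid, sess_id, usename, datname,
+       waiting, waiting_resource, query_start, current_query
+FROM   pg_stat_activity
+WHERE  waiting OR waiting_resource;
+```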
+
+<a id="topic1__hq141982"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_stat\_activity</span>
+
+| column             | type                     | references       | description                                                   |
+|--------------------|--------------------------|------------------|---------------------------------------------------------------|
+| `datid`            | oid                      | pg\_database.oid | Database OID                                                  |
+| `datname`          | name                     | �                | Database name                                                 |
+| `procpid`          | integer                  | �                | Process ID of the server process                              |
+| `sess_id`          | integer                  | �                | Session ID                                                    |
+| `usesysid`         | oid                      | pg\_authid.oid   | Role OID                                                      |
+| `usename`          | name                     | �                | Role name                                                     |
+| `current_query`    | text                     | �                | Current query that process is running                         |
+| `waiting`          | boolean                  | �                | True if waiting on a lock, false if not waiting               |
+| `query_start`      | timestamp with time zone | �                | Time query began execution                                    |
+| `backend_start`    | timestamp with time zone | �                | Time backend process was started                              |
+| `client_addr`      | inet                     | �                | Client address                                                |
+| `client_port`      | integer                  | �                | Client port                                                   |
+| `application_name` | text                     | �                | Client application name                                       |
+| `xact_start`       | timestamp with time zone | �                | Transaction start time                                        |
+| `waiting_resource` | boolean                  | �                | True if waiting for resource allocation, false if not waiting |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_stat_last_operation.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_stat_last_operation.html.md.erb b/markdown/reference/catalog/pg_stat_last_operation.html.md.erb
new file mode 100644
index 0000000..b7f812b
--- /dev/null
+++ b/markdown/reference/catalog/pg_stat_last_operation.html.md.erb
@@ -0,0 +1,21 @@
+---
+title: pg_stat_last_operation
+---
+
+The `pg_stat_last_operation` table contains metadata tracking information about database objects (tables, views, etc.).
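+
+For example, the following query (a minimal sketch; `public.sales` is a placeholder table name) shows the operations recorded for one table, newest first:
+
+```sql
+-- Last recorded operations on a hypothetical table public.sales.
+SELECT staactionname, stasubtype, stausename, statime
+FROM   pg_stat_last_operation
+WHERE  classid = 'pg_class'::regclass
+  AND  objid   = 'public.sales'::regclass
+ORDER  BY statime DESC;
+```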
+
+<a id="topic1__hr138428"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_stat\_last\_operation</span>
+
+| column          | type                    | references     | description                                                                                                                                                                                    |
+|-----------------|-------------------------|----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `classid`       | oid                     | pg\_class.oid  | OID of the system catalog containing the object.                                                                                                                                               |
+| `objid`         | oid                     | any OID column | OID of the object within its system catalog.                                                                                                                                                   |
+| `staactionname` | name                    | �              | The action that was taken on the object.                                                                                                                                                       |
+| `stasysid`      | oid                     | pg\_authid.oid | A foreign key to pg\_authid.oid.                                                                                                                                                               |
+| `stausename`    | name                    | �              | The name of the role that performed the operation on this object.                                                                                                                              |
+| `stasubtype`    | text                    | �              | The type of object operated on or the subclass of operation performed.                                                                                                                         |
+| `statime`       | timestamp with timezone | �              | The timestamp of the operation. This is the same timestamp that is written to the HAWQ server log files in case you need to look up more detailed information about the operation in the logs. |
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_stat_last_shoperation.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_stat_last_shoperation.html.md.erb b/markdown/reference/catalog/pg_stat_last_shoperation.html.md.erb
new file mode 100644
index 0000000..0dc5a03
--- /dev/null
+++ b/markdown/reference/catalog/pg_stat_last_shoperation.html.md.erb
@@ -0,0 +1,23 @@
+---
+title: pg_stat_last_shoperation
+---
+
+The `pg_stat_last_shoperation` table contains metadata tracking information about global objects (roles, tablespaces, etc.).
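+
+For example, the following query (a minimal sketch) shows the most recent operations recorded for roles:
+
+```sql
+-- Last recorded operations on global objects stored in pg_authid (roles).
+SELECT staactionname, stasubtype, stausename, statime
+FROM   pg_stat_last_shoperation
+WHERE  classid = 'pg_authid'::regclass
+ORDER  BY statime DESC;
+```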
+
+<a id="topic1__hs138428"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_stat\_last\_shoperation</span>
+
+
+| column          | type                    | references     | description                                                                                                                                                                                    |
+|-----------------|-------------------------|----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `classid`       | oid                     | pg\_class.oid  | OID of the system catalog containing the object.                                                                                                                                               |
+| `objid`         | oid                     | any OID column | OID of the object within its system catalog.                                                                                                                                                   |
+| `staactionname` | name                    | �              | The action that was taken on the object.                                                                                                                                                       |
+| `stasysid`      | oid                     | �              | �                                                                                                                                                                                              |
+| `stausename`    | name                    | �              | The name of the role that performed the operation on this object.                                                                                                                              |
+| `stasubtype`    | text                    | �              | The type of object operated on or the subclass of operation performed.                                                                                                                         |
+| `statime`       | timestamp with timezone | �              | The timestamp of the operation. This is the same timestamp that is written to the HAWQ server log files in case you need to look up more detailed information about the operation in the logs. |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_stat_operations.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_stat_operations.html.md.erb b/markdown/reference/catalog/pg_stat_operations.html.md.erb
new file mode 100644
index 0000000..65833f8
--- /dev/null
+++ b/markdown/reference/catalog/pg_stat_operations.html.md.erb
@@ -0,0 +1,87 @@
+---
+title: pg_stat_operations
+---
+
+The view `pg_stat_operations` shows details about the last operation performed on a database object (such as a table, index, view or database) or a global object (such as a role).
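+
+For example, the following query (a minimal sketch; `sales` is a placeholder object name) shows the recorded operations for one object:
+
+```sql
+-- Operation history for a hypothetical object named sales.
+SELECT classname, objname, schemaname, usename, actionname, subtype, statime
+FROM   pg_stat_operations
+WHERE  objname = 'sales'
+ORDER  BY statime;
+```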
+
+<a id="topic1__ht141982"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_stat\_operations</span>
+
+
+<table>
+<colgroup>
+<col width="25%" />
+<col width="25%" />
+<col width="10%" />
+<col width="40%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>column</th>
+<th>type</th>
+<th>references</th>
+<th>description</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><code class="ph codeph">classname</code></td>
+<td>text</td>
+<td>�</td>
+<td>The name of the system table in the <code class="ph codeph">pg_catalog</code> schema where the record about this object is stored (<code class="ph codeph">pg_class</code>=relations, <code class="ph codeph">pg_database</code>=databases, <code class="ph codeph">pg_namespace</code>=schemas, <code class="ph codeph">pg_authid</code>=roles).</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">objname</code></td>
+<td>name</td>
+<td>�</td>
+<td>The name of the object.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">objid</code></td>
+<td>oid</td>
+<td>�</td>
+<td>The OID of the object.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">schemaname</code></td>
+<td>name</td>
+<td>�</td>
+<td>The name of the schema where the object resides.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">usestatus</code></td>
+<td>text</td>
+<td>�</td>
+<td>The status of the role who performed the last operation on the object (<code class="ph codeph">CURRENT</code>=a currently active role in the system, <code class="ph codeph">DROPPED</code>=a role that no longer exists in the system, <code class="ph codeph">CHANGED</code>=a role name that exists in the system, but has changed since the last operation was performed).</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">usename</code></td>
+<td>name</td>
+<td>�</td>
+<td>The name of the role that performed the operation on this object.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">actionname</code></td>
+<td>name</td>
+<td>�</td>
+<td>The action that was taken on the object.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">subtype</code></td>
+<td>text</td>
+<td>�</td>
+<td>The type of object operated on or the subclass of operation performed.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">statime</code></td>
+<td>timestamp with time zone</td>
+<td>�</td>
+<td>The timestamp of the operation. This is the same timestamp that is written to the HAWQ server log files in case you need to look up more detailed information about the operation in the logs.</td>
+</tr>
+</tbody>
+</table>
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_stat_partition_operations.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_stat_partition_operations.html.md.erb b/markdown/reference/catalog/pg_stat_partition_operations.html.md.erb
new file mode 100644
index 0000000..2d2fb17
--- /dev/null
+++ b/markdown/reference/catalog/pg_stat_partition_operations.html.md.erb
@@ -0,0 +1,28 @@
+---
+title: pg_stat_partition_operations
+---
+
+The `pg_stat_partition_operations` view shows details about the last operation performed on a partitioned table.
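+
+For example, the following query (a minimal sketch; the `public` schema is a placeholder) shows recent operations on partitions, newest first:
+
+```sql
+-- Recent operations on partitions of tables in a given schema.
+SELECT objname, parenttablename, partitionlevel, actionname, statime
+FROM   pg_stat_partition_operations
+WHERE  schemaname = 'public'
+ORDER  BY statime DESC;
+```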
+
+<a id="topic1__hu141982"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_stat\_partition\_operations</span>
+
+| column             | type                     | references | description                                                                                                                                                                                                                                                                                            |
+|--------------------|--------------------------|------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `classname`        | text                     | �          | The name of the system table in the `pg_catalog` schema where the record about this object is stored (always `pg_class` for tables and partitions).                                                                                                                                                    |
+| `objname`          | name                     | �          | The name of the object.                                                                                                                                                                                                                                                                                |
+| `objid`            | oid                      | �          | The OID of the object.                                                                                                                                                                                                                                                                                 |
+| `schemaname`       | name                     | �          | The name of the schema where the object resides.                                                                                                                                                                                                                                                       |
+| `usestatus`        | text                     | �          | The status of the role who performed the last operation on the object (`CURRENT`=a currently active role in the system, `DROPPED`=a role that no longer exists in the system, `CHANGED`=a role name that exists in the system, but its definition has changed since the last operation was performed). |
+| `usename`          | name                     | �          | The name of the role that performed the operation on this object.                                                                                                                                                                                                                                      |
+| `actionname`       | name                     | �          | The action that was taken on the object.                                                                                                                                                                                                                                                               |
+| `subtype`          | text                     | �          | The type of object operated on or the subclass of operation performed.                                                                                                                                                                                                                                 |
+| `statime`          | timestamp with time zone | �          | The timestamp of the operation. This is the same timestamp that is written to the HAWQ server log files in case you need to look up more detailed information about the operation in the logs.                                                                                                         |
+| `partitionlevel`   | smallint                 | �          | The level of this partition in the hierarchy.                                                                                                                                                                                                                                                          |
+| `parenttablename`  | name                     | �          | The relation name of the parent table one level up from this partition.                                                                                                                                                                                                                                |
+| `parentschemaname` | name                     | �          | The name of the schema where the parent table resides.                                                                                                                                                                                                                                                 |
+| `parent_relid`     | oid                      | �          | The OID of the parent table one level up from this partition.                                                                                                                                                                                                                                          |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_statistic.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_statistic.html.md.erb b/markdown/reference/catalog/pg_statistic.html.md.erb
new file mode 100644
index 0000000..b784da1
--- /dev/null
+++ b/markdown/reference/catalog/pg_statistic.html.md.erb
@@ -0,0 +1,30 @@
+---
+title: pg_statistic
+---
+
+The `pg_statistic` system catalog table stores statistical data about the contents of the database. Entries are created by `ANALYZE` and subsequently used by the query optimizer. There is one entry for each table column that has been analyzed. Note that all the statistical data is inherently approximate, even assuming that it is up-to-date.
+
+`pg_statistic` also stores statistical data about the values of index expressions. These are described as if they were actual data columns; in particular, `starelid` references the index. No entry is made for an ordinary non-expression index column, however, since it would be redundant with the entry for the underlying table column.
+
+Since different kinds of statistics may be appropriate for different kinds of data, `pg_statistic` is designed not to assume very much about what sort of statistics it stores. Only extremely general statistics (such as nullness) are given dedicated columns in `pg_statistic`. Everything else is stored in slots, which are groups of associated columns whose content is identified by a code number in one of the slot's columns.
+
+`pg_statistic` should not be readable by the public, since even statistical information about a table's contents may be considered sensitive (for example: minimum and maximum values of a salary column). `pg_stats` is a publicly readable view on `pg_statistic` that only exposes information about those tables that are readable by the current user. See [pg\_stats](pg_stats.html#topic1), for more information on this view.
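+
+For example, the following query against the `pg_stats` view (a minimal sketch; `public.sales` is a placeholder table) shows the general per-column statistics gathered by `ANALYZE`:
+
+```sql
+-- Per-column statistics for a hypothetical table public.sales.
+SELECT attname, null_frac, avg_width, n_distinct
+FROM   pg_stats
+WHERE  schemaname = 'public'
+  AND  tablename  = 'sales';
+```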
+
+<a id="topic1__hv156260"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_statistic</span>
+
+| column        | type     | references           | description                                                                                                                                                                                                                                                                                                                                                                                               |
+|---------------|----------|----------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `starelid`    | oid      | pg\_class.oid        | The table or index that the described column belongs to.                                                                                                                                                                                                                                                                                                                                                  |
+| `staattnum`   | smallint | pg\_attribute.attnum | The number of the described column.                                                                                                                                                                                                                                                                                                                                                                       |
+| `stanullfrac` | real     | �                    | The fraction of the column's entries that are null.                                                                                                                                                                                                                                                                                                                                                       |
+| `stawidth`    | integer  | �                    | The average stored width, in bytes, of nonnull entries.                                                                                                                                                                                                                                                                                                                                                   |
+| `stadistinct` | real     | �                    | The number of distinct nonnull data values in the column. A value greater than zero is the actual number of distinct values. A value less than zero is the negative of a fraction of the number of rows in the table (for example, a column in which values appear about twice on the average could be represented by `stadistinct` = -0.5). A zero value means the number of distinct values is unknown. |
+| `stakindN`    | smallint | �                    | A code number indicating the kind of statistics stored in the `N`th slot of the `pg_statistic` row.                                                                                                                                                                                                                                                                                                       |
+| `staopN`      | oid      | pg\_operator.oid     | An operator used to derive the statistics stored in the `N`th slot. For example, a histogram slot would show the `<` operator that defines the sort order of the data.                                                                                                                                                                                                                                    |
+| `stanumbersN` | real\[\] | �                    | Numerical statistics of the appropriate kind for the `N`th slot, or NULL if the slot kind does not involve numerical values.                                                                                                                                                                                                                                                                              |
+| `stavaluesN`  | anyarray | �                    | Column data values of the appropriate kind for the `N`th slot, or NULL if the slot kind does not store any data values. Each array's element values are actually of the specific column's data type, so there is no way to define these columns' type more specifically than `anyarray`.                                                                                                                  |
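+
+For example, the following query sketch joins `pg_statistic` to `pg_attribute` to show the general statistics and the first statistics slot for every analyzed column of one table. The table name `films` is a placeholder for any table that has already been analyzed, and the query must be run by a superuser because `pg_statistic` is not publicly readable.
+
+``` pre
+-- Illustrative only: show per-column statistics and the first statistics slot.
+SELECT a.attname, s.stanullfrac, s.stawidth, s.stadistinct,
+       s.stakind1, s.staop1, s.stanumbers1
+FROM pg_statistic s
+     JOIN pg_attribute a ON a.attrelid = s.starelid
+                        AND a.attnum   = s.staattnum
+WHERE s.starelid = 'films'::regclass;
+```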
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_stats.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_stats.html.md.erb b/markdown/reference/catalog/pg_stats.html.md.erb
new file mode 100644
index 0000000..f7cb0f4
--- /dev/null
+++ b/markdown/reference/catalog/pg_stats.html.md.erb
@@ -0,0 +1,27 @@
+---
+title: pg_stats
+---
+
+`pg_stats` is a publicly readable view on `pg_statistic` that only exposes information about those tables that are readable by the current user. The `pg_stats` view presents the contents of `pg_statistic` in a friendlier format.
+
+All the statistical data is inherently approximate, even assuming that it is up-to-date. The `pg_stats` schema must be extended whenever new slot types are defined.
+
+<a id="topic1__table_ckx_t2w_jv"></a>
+<span class="tablecap">Table 1. pg\_stats</span>
+
+| Name                | Type     | References                                                                 | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |
+|---------------------|----------|----------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| schemaname          | name     | [pg\_namespace](pg_namespace.html#topic1).nspname. | The name of the schema containing the table.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
+| tablename           | name     | [pg\_class](pg_class.html#topic1).relname          | The name of the table.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
+| attname             | name     | [pg\_attribute](pg_attribute.html#topic1).attname  | The name of the column this row describes.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
+| null\_frac          | real     | �                                                                          | The fraction of column entries that are null.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
+| avg\_width          | integer  | �                                                                          | The average storage width in bytes of the column's entries, calculated as `avg(pg_column_size(column_name))`.                                                                                                                                                                                                                                                                                                                                                                                                                          |
+| n\_distinct         | real     | �                                                                          | A positive number is an estimate of the number of distinct values in the column; the number is not expected to vary with the number of rows. A negative value is the number of distinct values divided by the number of rows, that is, the ratio of rows with distinct values for the column, negated. This form is used when the number of distinct values increases with the number of rows. A unique column, for example, has an `n_distinct` value of -1.0. Columns with an average width greater than 1024 are considered unique. |
+| most\_common\_vals  | anyarray | �                                                                          | An array containing the most common values in the column, or null if no values seem to be more common. If the `n_distinct` column is -1, `most_common_vals` is null. The length of the array is the lesser of the number of actual distinct column values or the value of the `default_statistics_target` configuration parameter. The number of values can be overridden for a column using `ALTER TABLE <table> ALTER COLUMN <column> SET STATISTICS <N>`. |
+| most\_common\_freqs | real\[\] | �                                                                          | An array containing the frequencies of the values in the `most_common_vals` array. This is the number of occurrences of the value divided by the total number of rows. The array is the same length as the `most_common_vals` array. It is null if `most_common_vals` is null.                                                                                                                                                                                                                                                         |
+| histogram\_bounds   | anyarray | �                                                                          | An array of values that divide the column values into groups of approximately the same size. A histogram can be defined only if there is a `max()` aggregate function for the column. The number of groups in the histogram is the same as the `most_common_vals` array size.                                                                                                                                                                                                                                                          |
+| correlation         | real     | �                                                                          | HAWQ does not calculate the correlation statistic.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
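+
+For example, the following query sketch lists the planner statistics that the view exposes for one table. The schema and table names are placeholders for any analyzed table that the current user is allowed to read.
+
+``` pre
+-- Illustrative only: view statistics for the columns of one table.
+SELECT attname, null_frac, avg_width, n_distinct,
+       most_common_vals, most_common_freqs
+FROM pg_stats
+WHERE schemaname = 'public' AND tablename = 'films';
+```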
+
+
+
+



[14/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-TABLE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-TABLE.html.md.erb b/markdown/reference/sql/CREATE-TABLE.html.md.erb
new file mode 100644
index 0000000..162a438
--- /dev/null
+++ b/markdown/reference/sql/CREATE-TABLE.html.md.erb
@@ -0,0 +1,455 @@
+---
+title: CREATE TABLE
+---
+
+Defines a new table.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE [[GLOBAL | LOCAL] {TEMPORARY | TEMP}] TABLE <table_name> (
+[ { <column_name> <data_type> [ DEFAULT <default_expr> ]
+   [<column_constraint> [ ... ]
+[ ENCODING ( <storage_directive> [,...] ) ]
+]
+   | <table_constraint>
+   | LIKE <other_table> [{INCLUDING | EXCLUDING}
+                      {DEFAULTS | CONSTRAINTS}] ...} ]
+   [, ... ] ]
+   [<column_reference_storage_directive> [, ...] ]
+   )
+   [ INHERITS ( <parent_table> [, ... ] ) ]
+   [ WITH ( <storage_parameter>=<value> [, ... ] ) ]
+   [ ON COMMIT {PRESERVE ROWS | DELETE ROWS | DROP} ]
+   [ TABLESPACE <tablespace> ]
+   [ DISTRIBUTED BY (<column>, [ ... ] ) | DISTRIBUTED RANDOMLY ]
+   [ PARTITION BY <partition_type> (<column>)
+       [ SUBPARTITION BY <partition_type> (<column>) ]
+          [ SUBPARTITION TEMPLATE ( <template_spec> ) ]
+       [...]
+    ( <partition_spec> )
+        | [ SUBPARTITION BY <partition_type> (<column>) ]
+          [...]
+    ( <partition_spec>
+      [ ( <subpartition_spec>
+           [(...)]
+         ) ]
+    )
+```
+
+where \<column\_constraint\> is:
+
+``` pre
+   [CONSTRAINT <constraint_name>]
+   NOT NULL | NULL
+   | CHECK ( <expression> )
+```
+
+where \<storage\_directive\> for a column is:
+
+``` pre
+   COMPRESSTYPE={ZLIB | SNAPPY | GZIP | NONE}
+ | COMPRESSLEVEL={0-9}
+ | BLOCKSIZE={8192-2097152}
+```
+
+where \<storage\_parameter\> for a table is:
+
+``` pre
+   APPENDONLY={TRUE}
+   BLOCKSIZE={8192-2097152}
+   bucketnum={<x>}
+   ORIENTATION={ROW | PARQUET}
+   COMPRESSTYPE={ZLIB | SNAPPY | GZIP | NONE}
+   COMPRESSLEVEL={0-9}
+   FILLFACTOR={10-100}
+   OIDS=[TRUE|FALSE]
+   PAGESIZE={1024-1073741823}
+   ROWGROUPSIZE={1024-1073741823}
+```
+
+and \<table\_constraint\> is:
+
+``` pre
+   [CONSTRAINT <constraint_name>]
+   | CHECK ( <expression> )
+```
+
+where \<partition\_type\> is:
+
+``` pre
+    LIST | RANGE
+```
+
+where \<partition\_specification\> is:
+
+``` pre
+            <partition_element> [, ...]
+```
+
+and \<partition\_element\> is:
+
+``` pre
+   DEFAULT PARTITION <name>
+  | [PARTITION <name>] VALUES (<list_value> [,...] )
+  | [PARTITION <name>]
+     START ([<datatype>] '<start_value>') [INCLUSIVE | EXCLUSIVE]
+     [ END ([<datatype>] '<end_value>') [INCLUSIVE | EXCLUSIVE] ]
+     [ EVERY ([<datatype>] [<number> | INTERVAL] '<interval_value>') ]
+  | [PARTITION <name>]
+     END ([<datatype>] '<end_value>') [INCLUSIVE | EXCLUSIVE]
+     [ EVERY ([<datatype>] [<number> | INTERVAL] '<interval_value>') ]
+[ WITH ( <partition_storage_parameter>=<value> [, ... ] ) ]
+[<column_reference_storage_directive> [, ...] ]
+[ TABLESPACE <tablespace> ]
+```
+
+where \<subpartition\_spec\> or \<template\_spec\> is:
+
+``` pre
+            <subpartition_element> [, ...]
+```
+
+and \<subpartition\_element\> is:
+
+``` pre
+   DEFAULT SUBPARTITION <name>
+  | [SUBPARTITION <name>] VALUES (<list_value> [,...] )
+  | [SUBPARTITION <name>]
+     START ([<datatype>] '<start_value>') [INCLUSIVE | EXCLUSIVE]
+     [ END ([<datatype>] '<end_value>') [INCLUSIVE | EXCLUSIVE] ]
+     [ EVERY ([<datatype>] [<number> | INTERVAL] '<interval_value>') ]
+  | [SUBPARTITION <name>]
+     END ([<datatype>] '<end_value>') [INCLUSIVE | EXCLUSIVE]
+     [ EVERY ([<datatype>] [<number> | INTERVAL] '<interval_value>') ]
+[ WITH ( <partition_storage_parameter>=<value> [, ... ] ) ]
+[<column_reference_storage_directive> [, ...] ]
+[ TABLESPACE <tablespace> ]
+```
+
+where \<storage\_directive\> is:
+
+``` pre
+   COMPRESSTYPE={ZLIB | SNAPPY | GZIP | NONE}
+ | COMPRESSLEVEL={0-9}
+ | BLOCKSIZE={8192-2097152}
+```
+
+where \<column\_reference\_storage\_directive\> is:
+
+``` pre
+   COLUMN column_name ENCODING (<storage_directive> [, ... ] ), ...
+ |
+   DEFAULT COLUMN ENCODING (<storage_directive> [, ... ] )
+```
+
+where \<storage\_parameter\> for a partition is:
+
+``` pre
+   APPENDONLY={TRUE}
+   BLOCKSIZE={8192-2097152}
+   ORIENTATION={ROW | PARQUET}
+   COMPRESSTYPE={ZLIB | SNAPPY | GZIP | NONE}
+   COMPRESSLEVEL={0-9}
+   FILLFACTOR={10-100}
+   OIDS=[TRUE|FALSE]
+   PAGESIZE={1024-1073741823}
+   ROWGROUPSIZE={1024-1073741823}
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE TABLE` creates a new, initially empty table in the current database. The table is owned by the user issuing the command. If a schema name is given then the table is created in the specified schema. Otherwise it is created in the current schema. Temporary tables exist in a special schema, so a schema name may not be given when creating a temporary table. The name of the table must be distinct from the name of any other table, external table, sequence, or view in the same schema.
+
+The optional constraint clauses specify conditions that new rows must satisfy for an insert operation to succeed. A constraint is an SQL object that helps define the set of valid values in the table in various ways. Constraints apply to tables, not to partitions. You cannot add a constraint to a partition or subpartition.
+
+There are two ways to define constraints: table constraints and column constraints. A column constraint is defined as part of a column definition. A table constraint definition is not tied to a particular column, and it can encompass more than one column. Every column constraint can also be written as a table constraint; a column constraint is only a notational convenience for use when the constraint only affects one column.
+
+When creating a table, there is an additional clause to declare the HAWQ distribution policy. If a `DISTRIBUTED BY` clause is not supplied, HAWQ assigns a `RANDOM` distribution policy to the table, where the rows are distributed based on a round-robin or random distribution. You can also choose to distribute data with a hash-based policy, where the `bucketnum` attribute sets the number of hash buckets used by a hash-distributed table. Columns of geometric or user-defined data types are not eligible as HAWQ distribution key columns. The number of buckets affects how many virtual segments will be used in processing.
+
+By default, a HASH-distributed table is created with the number of hash buckets specified by the parameter \<default\_hash\_table\_bucket\_number\>. This can be changed at the session level, or in the `CREATE TABLE` DDL with the `bucketnum` storage parameter.
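+
+For example, the following sketch shows both approaches; the table name is illustrative, and the values assume that they do not exceed what your cluster configuration allows:
+
+``` pre
+-- Session level: change the default bucket count for hash tables created in this session.
+SET default_hash_table_bucket_number = 16;
+
+-- DDL level: set an explicit bucket count for one table.
+CREATE TABLE orders_by_id (order_id int, amount numeric)
+WITH (bucketnum=8)
+DISTRIBUTED BY (order_id);
+```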
+
+**Note:** Column-oriented tables are no longer supported. Use Parquet tables for HAWQ internal tables.
+
+The `PARTITION BY` clause allows you to divide the table into multiple sub-tables (or parts) that, taken together, make up the parent table and share its schema. Though the sub-tables exist as independent tables, HAWQ restricts their use in important ways. Internally, partitioning is implemented as a special form of inheritance. Each child table partition is created with a distinct `CHECK` constraint which limits the data the table can contain, based on some defining criteria. The `CHECK` constraints are also used by the query planner to determine which table partitions to scan in order to satisfy a given query predicate. These partition constraints are managed automatically by HAWQ.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>GLOBAL | LOCAL  </dt>
+<dd>These keywords are present for SQL standard compatibility, but have no effect in HAWQ.</dd>
+
+<dt>TEMPORARY | TEMP  </dt>
+<dd>If specified, the table is created as a temporary table. Temporary tables are automatically dropped at the end of a session, or optionally at the end of the current transaction (see `ON COMMIT`). Existing permanent tables with the same name are not visible to the current session while the temporary table exists, unless they are referenced with schema-qualified names. Any indexes created on a temporary table are automatically temporary as well.</dd>
+
+<dt> \<table\_name\>  </dt>
+<dd>The name (optionally schema-qualified) of the table to be created.</dd>
+
+<dt> \<column\_name\>  </dt>
+<dd>The name of a column to be created in the new table.</dd>
+
+<dt> \<data\_type\>  </dt>
+<dd>The data type of the column. This may include array specifiers.</dd>
+
+<dt>DEFAULT \<default\_expr\>  </dt>
+<dd>The `DEFAULT` clause assigns a default data value for the column whose column definition it appears within. The value is any variable-free expression (subqueries and cross-references to other columns in the current table are not allowed). The data type of the default expression must match the data type of the column. The default expression will be used in any insert operation that does not specify a value for the column. If there is no default for a column, then the default is null.</dd>
+
+<dt>INHERITS  </dt>
+<dd>The optional `INHERITS` clause specifies a list of tables from which the new table automatically inherits all columns. Use of `INHERITS` creates a persistent relationship between the new child table and its parent table(s). Schema modifications to the parent(s) normally propagate to children as well, and by default the data of the child table is included in scans of the parent(s).
+
+In HAWQ, the `INHERITS` clause is not used when creating partitioned tables. Although the concept of inheritance is used in partition hierarchies, the inheritance structure of a partitioned table is created using the PARTITION BY clause.
+
+If the same column name exists in more than one parent table, an error is reported unless the data types of the columns match in each of the parent tables. If there is no conflict, then the duplicate columns are merged to form a single column in the new table. If the column name list of the new table contains a column name that is also inherited, the data type must likewise match the inherited column(s), and the column definitions are merged into one. However, inherited and new column declarations of the same name need not specify identical constraints: all constraints provided from any declaration are merged together and all are applied to the new table. If the new table explicitly specifies a default value for the column, this default overrides any defaults from inherited declarations of the column. Otherwise, any parents that specify default values for the column must all specify the same default, or an error will be reported.</dd>
+
+<dt>LIKE \<other\_table\> \[{INCLUDING | EXCLUDING} {DEFAULTS | CONSTRAINTS}\]  </dt>
+<dd>The `LIKE` clause specifies a table from which the new table automatically copies all column names, data types, not-null constraints, and distribution policy. Storage properties like append-only or partition structure are not copied. Unlike `INHERITS`, the new table and original table are completely decoupled after creation is complete.
+
+Default expressions for the copied column definitions will only be copied if `INCLUDING DEFAULTS` is specified. The default behavior is to exclude default expressions, resulting in the copied columns in the new table having null defaults.
+
+Not-null constraints are always copied to the new table. `CHECK` constraints will only be copied if `INCLUDING CONSTRAINTS` is specified; other types of constraints will *never* be copied. Also, no distinction is made between column constraints and table constraints: when constraints are requested, all check constraints are copied.
+
+Note also that unlike `INHERITS`, copied columns and constraints are not merged with similarly named columns and constraints. If the same name is specified explicitly or in another `LIKE` clause an error is signalled.</dd>
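+
+For illustration, a minimal sketch that copies the column definitions of the `films` table shown in the examples later on this page, including its column defaults:
+
+``` pre
+CREATE TABLE films_recent (LIKE films INCLUDING DEFAULTS);
+```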
+
+<dt>NULL | NOT NULL  </dt>
+<dd>Specifies if the column is or is not allowed to contain null values. `NULL` is the default.</dd>
+
+<dt>CHECK ( \<expression\> )  </dt>
+<dd>The `CHECK` clause specifies an expression producing a Boolean result which new rows must satisfy for an insert operation to succeed. Expressions evaluating to `TRUE` or `UNKNOWN` succeed. Should any row of an insert operation produce a `FALSE` result an error exception is raised and the insert does not alter the database. A check constraint specified as a column constraint should reference that column's value only, while an expression appearing in a table constraint may reference multiple columns. `CHECK` expressions cannot contain subqueries nor refer to variables other than columns of the current row.</dd>
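+
+For illustration, a sketch with one column constraint and one table constraint; the table and column names are hypothetical:
+
+``` pre
+CREATE TABLE ticket_sales (
+    id        int,
+    qty       int CHECK (qty > 0),      -- column constraint: references one column
+    sold      date,
+    delivered date,
+    CHECK (delivered >= sold)           -- table constraint: may reference multiple columns
+) DISTRIBUTED BY (id);
+```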
+
+<dt>WITH ( \<storage\_option\>=\<value\> )  </dt>
+<dd>The `WITH` clause can be used to set storage options for the table or its indexes. Note that you can also set storage parameters on a particular partition or subpartition by declaring the `WITH` clause in the partition specification.
+
+Note: You cannot create a table with both column encodings and compression parameters in a WITH clause.
+
+The following storage options are available:
+
+**APPENDONLY** - Set to `TRUE` to create the table as an append-only table. If `FALSE` is specified, an error message displays stating that heap tables are not supported.
+
+**BLOCKSIZE** - Set to the size, in bytes, of each block in a table. The `BLOCKSIZE` must be between 8192 and 2097152 bytes, and be a multiple of 8192. The default is 32768.
+
+**bucketnum** - Set to the number of hash buckets to be used in creating a hash-distributed table, specified as an integer greater than 0 and no more than the value of `default_hash_table_bucket_number`. The default when the table is created is 6 times the segment count. However, explicitly setting the bucket number when creating a hash table is recommended.
+
+**ORIENTATION** - Set to `row` (the default) for row-oriented storage, or `parquet`. The parquet column-oriented format can be more efficient for large-scale queries. This option is only valid if `APPENDONLY=TRUE`.
+
+**COMPRESSTYPE** - Set to `ZLIB`, `SNAPPY`, or `GZIP` to specify the type of compression used. `ZLIB` provides more compact compression ratios at lower speeds. Parquet tables support `SNAPPY` and `GZIP` compression. Append-only tables support `SNAPPY` and `ZLIB` compression. This option is valid only if `APPENDONLY=TRUE`.
+
+**COMPRESSLEVEL** - Set to an integer value from 1 (fastest compression) to 9 (highest compression ratio). If not specified, the default is 1. This option is valid only if `APPENDONLY=TRUE` and `COMPRESSTYPE=[ZLIB|GZIP]`.
+
+**OIDS** - Set to `OIDS=FALSE` (the default) so that rows do not have object identifiers assigned to them. Do not enable OIDS when creating a table. On large tables, such as those in a typical HAWQ system, using OIDs for table rows can cause wrap-around of the 32-bit OID counter. Once the counter wraps around, OIDs can no longer be assumed to be unique, which not only makes them useless to user applications, but can also cause problems in the HAWQ system catalog tables. In addition, excluding OIDs from a table reduces the space required to store the table on disk by 4 bytes per row, slightly improving performance. OIDS are not allowed on partitioned tables.</dd>
+
+<dt>ON COMMIT  </dt>
+<dd>The behavior of temporary tables at the end of a transaction block can be controlled using `ON COMMIT`. The three options are:
+
+**PRESERVE ROWS** - No special action is taken at the ends of transactions for temporary tables. This is the default behavior.
+
+**DELETE ROWS** - All rows in the temporary table will be deleted at the end of each transaction block. Essentially, an automatic `TRUNCATE` is done at each commit.
+
+**DROP** - The temporary table will be dropped at the end of the current transaction block.</dd>
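+
+For illustration, a sketch of a temporary table that is emptied at the end of each transaction block; the table name is hypothetical:
+
+``` pre
+CREATE TEMPORARY TABLE load_stage (id int, payload text)
+ON COMMIT DELETE ROWS;
+```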
+
+<dt>TABLESPACE \<tablespace\>  </dt>
+<dd>The name of the tablespace in which the new table is to be created. If not specified, the database's default tablespace, `dfs_default`, is used. Creating a table in the tablespace `pg_default` is not allowed.</dd>
+
+<dt>DISTRIBUTED BY (\<column\>, \[ ... \] )  
+DISTRIBUTED RANDOMLY  </dt>
+<dd>Used to declare the HAWQ distribution policy for the table. The default is RANDOM distribution. `DISTRIBUTED BY` can use hash distribution with one or more columns declared as the distribution key. If hash distribution is desired, it must be specified using the first eligible column of the table as the distribution key.</dd>
+
+<dt>PARTITION BY  </dt>
+<dd>Declares one or more columns by which to partition the table.</dd>
+
+<dt> \<partition\_type\>  </dt>
+<dd>Declares partition type: `LIST` (list of values) or `RANGE` (a numeric or date range).</dd>
+
+<dt> \<partition\_specification\>  </dt>
+<dd>Declares the individual partitions to create. Each partition can be defined individually or, for range partitions, you can use the `EVERY` clause (with a `START` and optional `END` clause) to define an increment pattern to use to create the individual partitions.
+
+**`DEFAULT PARTITION \<name\>`** - Declares a default partition. When data does not match an existing partition, it is inserted into the default partition. Partition designs that do not have a default partition will reject incoming rows that do not match an existing partition.
+
+**`PARTITION \<name\>`** - Declares a name to use for the partition. Partitions are created using the following naming convention: `parentname_level#_prt_givenname`.
+
+**`VALUES`** - For list partitions, defines the value(s) that the partition will contain.
+
+**`START`** - For range partitions, defines the starting range value for the partition. By default, start values are `INCLUSIVE`. For example, if you declared a start date of '`2008-01-01`', then the partition would contain all dates greater than or equal to '`2008-01-01`'. Typically the data type of the `START` expression is the same type as the partition key column. If that is not the case, then you must explicitly cast to the intended data type.
+
+**`END`** - For range partitions, defines the ending range value for the partition. By default, end values are `EXCLUSIVE`. For example, if you declared an end date of '`2008-02-01`', then the partition would contain all dates less than but not equal to '`2008-02-01`'. Typically the data type of the `END` expression is the same type as the partition key column. If that is not the case, then you must explicitly cast to the intended data type.
+
+**`EVERY`** - For range partitions, defines how to increment the values from `START` to `END` to create individual partitions. Typically the data type of the `EVERY` expression is the same type as the partition key column. If that is not the case, then you must explicitly cast to the intended data type.
+
+**`WITH`** - Sets the table storage options for a partition. For example, you may want older partitions to be append-only tables and newer partitions to be regular heap tables.
+
+**`TABLESPACE`** - The name of the tablespace in which the partition is to be created.</dd>
+
+<dt>SUBPARTITION BY  </dt>
+<dd>Declares one or more columns by which to subpartition the first-level partitions of the table. The format of the subpartition specification is similar to that of a partition specification described above.</dd>
+
+<dt>SUBPARTITION TEMPLATE  </dt>
+<dd>Instead of declaring each subpartition definition individually for each partition, you can optionally declare a subpartition template to be used to create the subpartitions. This subpartition specification would then apply to all parent partitions.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Using OIDs in new applications is not recommended. Avoid assuming that OIDs are unique across tables; if you need a database-wide unique identifier, use the combination of table OID and row OID for the purpose.
+
+Primary key and foreign key constraints are not supported in HAWQ. For inherited tables, table privileges *are not* inherited in the current implementation.
+
+HAWQ also supports the parquet columnar storage format. Parquet tables can improve performance for large-scale queries.
+
+## <a id="parquetset"></a>Setting Parameters for Parquet Tables
+
+You can set three kinds of parameters for a parquet table.
+
+1.  Set the parquet orientation parameter:
+
+    ``` pre
+    with (appendonly=true, orientation=parquet);
+    ```
+
+2.  Set the compression type parameter. Parquet tables can be compressed using either `SNAPPY` or `GZIP`. `GZIP` supports compression level values between 1 and 9. `SNAPPY` does not support compression levels; providing a compression level when using `SNAPPY` will cause the `CREATE TABLE` operation to fail. Specifying a compression level but no compression type when creating a parquet table will default to `GZIP` compression.
+
+    **Note:** For best performance with parquet storage, use `SNAPPY` compression.
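+
+    For example, a sketch of both compression types; the table names are illustrative:
+
+    ``` pre
+    -- SNAPPY: no compression level is accepted.
+    CREATE TABLE p1 (id int)
+    WITH (appendonly=true, orientation=parquet, compresstype=snappy);
+
+    -- GZIP: an optional compression level from 1 to 9 may be supplied.
+    CREATE TABLE p2 (id int)
+    WITH (appendonly=true, orientation=parquet, compresstype=gzip, compresslevel=5);
+    ```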
+
+3.  Set the data storage parameters. By default, the parameters `PAGESIZE` and `ROWGROUPSIZE` are set to 1MB and 8MB, respectively, for both common and partitioned tables.
+
+    **Note:** The page size should be less than the rowgroup size, because a rowgroup includes the metadata of a single page even for a single-column table. The parameters `PAGESIZE` and `ROWGROUPSIZE` are valid for parquet tables, while `BLOCKSIZE` is valid for append-only tables.
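+
+    For example, a sketch that sets both parameters explicitly; the table name and byte values are illustrative and match the stated defaults of 1MB and 8MB:
+
+    ``` pre
+    -- PAGESIZE and ROWGROUPSIZE are given in bytes; PAGESIZE must be smaller than ROWGROUPSIZE.
+    CREATE TABLE p3 (id int, payload text)
+    WITH (appendonly=true, orientation=parquet, pagesize=1048576, rowgroupsize=8388608);
+    ```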
+
+## <a id="aboutparquet"></a>About Parquet Storage
+
+**DDL and DML**: Most DDL and DML operations are valid for a parquet table. The usage for DDL and DML operations is similar to that for append-only tables. Valid operations on parquet tables include:
+
+-   Parquet table creation (with/without partition, with/without compression type)
+-   Insert and Select
+
+**Compression type and level**: You can only set the compression type at the table level. HAWQ does not support setting column-level compression. The specified compression type is propagated to the columns. All the columns must have the same compression type and level.
+
+Using `SNAPPY` compression with parquet files is recommended for best performance.
+
+**Data type**: HAWQ supports all data types except arrays and user-defined types.
+
+**Alter table**: HAWQ does not support adding a new column to an existing parquet table or dropping a column. You can use `ALTER TABLE` for a partition operation.
+
+**FillFactor/OIDS/Checksum**: HAWQ does not support these options when creating parquet tables. The default value of checksum for a parquet table is `false`. You cannot set this value or specify fillfactor and oids.
+
+**Memory occupation**: When inserting or loading data into a parquet table, the whole rowgroup is stored in physical memory until its size exceeds the threshold or the `INSERT` operation ends. Once either occurs, the entire rowgroup is flushed to disk. Also, at the beginning of the `INSERT` operation, each column is pre-allocated a page buffer. The pre-allocated page buffer size for a column should be `min(pageSizeLimit, rowgroupSizeLimit/estimatedColumnWidth/estimatedRecordWidth)` for the first rowgroup. For the following rowgroups, it should be `min(pageSizeLimit, actualColumnChunkSize in last rowgroup * 1.05)`, where 1.05 is the estimated scaling factor. When reading data from a parquet table, the requested columns of the rowgroup are loaded into memory. Memory is allocated 8 MB by default. Ensure that memory occupation does not exceed physical memory when setting `ROWGROUPSIZE` or `PAGESIZE`; otherwise you may encounter an out-of-memory error.
+
+**Bulk vs. trickle loads**
+Only bulk loads are recommended for use with parquet tables. Trickle loads can result in bloated footers and larger data files.
+
+## <a id="parquetexamples"></a>Parquet Examples
+
+**Parquet Example 1**
+
+Create an append-only table using the parquet format:
+
+``` pre
+CREATE TABLE customer ( id integer, fname text, lname text,
+    address text, city text, state text, zip text )
+WITH (APPENDONLY=true, ORIENTATION=parquet, OIDS=FALSE)
+DISTRIBUTED BY (id);
+```
+
+**Parquet Example 2**
+
+Create a parquet table with twelve monthly partitions:
+
+``` pre
+CREATE TABLE sales (id int, date date, amt decimal(10,2))
+WITH (APPENDONLY=true, ORIENTATION=parquet, OIDS=FALSE)
+DISTRIBUTED BY (id)
+PARTITION BY RANGE (date)
+  ( START (date '2016-01-01') INCLUSIVE
+    END   (date '2017-01-01') EXCLUSIVE
+    EVERY (INTERVAL '1 month')
+  );
+```
+
+**Parquet Example 3**
+
+Add a new partition to the sales table:
+
+``` pre
+ALTER TABLE sales ADD PARTITION
+    START (date '2017-01-01') INCLUSIVE
+    END (date '2017-02-01') EXCLUSIVE;
+```
+
+## <a id="aoexamples"></a>AO Examples
+
+Append-only tables support `ZLIB` and `SNAPPY` compression types.
+
+**AO Example 1**: Create a table named rank in the schema named baby and distribute the data using the columns rank, gender, and year:
+
+``` pre
+CREATE TABLE baby.rank ( id int, rank int, year smallint, gender char(1), count int )
+DISTRIBUTED BY (rank, gender, year);
+```
+
+**AO Example 2**: Create table films and table distributors. The first column will be used as the HAWQ distribution key by default:
+
+``` pre
+CREATE TABLE films (
+    code char(5), title varchar(40) NOT NULL, did integer NOT NULL,
+    date_prod date, kind varchar(10), len interval hour to minute
+);
+
+CREATE TABLE distributors (
+    did integer,
+    name varchar(40) NOT NULL CHECK (name <> '')
+);
+```
+
+**AO Example 3**: Create a snappy-compressed, append-only table:
+
+``` pre
+CREATE TABLE sales (txn_id int, qty int, date date)
+WITH (appendonly=true, compresstype=snappy)
+DISTRIBUTED BY (txn_id);
+```
+
+**AO Example 4**: Create a three-level partitioned table using subpartition templates and default partitions at each level:
+
+``` pre
+CREATE TABLE sales (id int, year int, month int, day int,
+region text)
+DISTRIBUTED BY (id)
+PARTITION BY RANGE (year)
+SUBPARTITION BY RANGE (month)
+SUBPARTITION TEMPLATE (
+START (1) END (13) EVERY (1),
+DEFAULT SUBPARTITION other_months )
+SUBPARTITION BY LIST (region)
+SUBPARTITION TEMPLATE (
+SUBPARTITION usa VALUES ('usa'),
+SUBPARTITION europe VALUES ('europe'),
+SUBPARTITION asia VALUES ('asia'),
+DEFAULT SUBPARTITION other_regions)
+( START (2002) END (2010) EVERY (1),
+DEFAULT PARTITION outlying_years);
+```
+
+**AO Example 5**: Create a hash-distributed table named "sales" with 100 buckets.
+
+``` pre
+CREATE TABLE sales(id int, profit float)
+WITH (bucketnum=100)
+DISTRIBUTED BY (id);
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+The `CREATE TABLE` command conforms to the SQL standard, with the following exceptions:
+
+-   **Temporary Tables** - In the SQL standard, temporary tables are defined just once and automatically exist (starting with empty contents) in every session that needs them. HAWQ instead requires each session to issue its own `CREATE TEMPORARY TABLE` command for each temporary table to be used. This allows different sessions to use the same temporary table name for different purposes, whereas the standard's approach constrains all instances of a given temporary table name to have the same table structure.
+
+    The standard's distinction between global and local temporary tables does not exist in HAWQ. HAWQ will accept the `GLOBAL` and `LOCAL` keywords in a temporary table declaration, but they have no effect.
+
+    If the `ON COMMIT` clause is omitted, the SQL standard specifies that the default behavior is `ON COMMIT DELETE ROWS`. However, the default behavior in HAWQ is `ON COMMIT PRESERVE ROWS`. The `ON COMMIT DROP` option does not exist in the SQL standard.
+
+-   **Column Check Constraints** - The SQL standard says that `CHECK` column constraints may only refer to the column they apply to; only `CHECK` table constraints may refer to multiple columns. HAWQ does not enforce this restriction; it treats column and table check constraints alike.
+-   **NULL Constraint** - The `NULL` constraint is a HAWQ extension to the SQL standard that is included for compatibility with some other database systems (and for symmetry with the `NOT NULL` constraint). Since it is the default for any column, its presence is not required.
+-   **Inheritance** - Multiple inheritance via the `INHERITS` clause is a HAWQ language extension. SQL:1999 and later define single inheritance using a different syntax and different semantics. SQL:1999-style inheritance is not yet supported by HAWQ.
+-   **Partitioning** - Table partitioning via the `PARTITION BY` clause is a HAWQ language extension.
+-   **Zero-column tables** - HAWQ allows a table of no columns to be created (for example, `CREATE TABLE foo();`). This is an extension from the SQL standard, which does not allow zero-column tables. Zero-column tables are not in themselves very useful, but disallowing them creates odd special cases for `ALTER TABLE DROP COLUMN`, so this spec restriction is ignored.
+-   **WITH clause** - The `WITH` clause is an extension; neither storage parameters nor OIDs are in the standard.
+-   **Tablespaces** - The HAWQ concept of tablespaces is not part of the SQL standard. The clauses `TABLESPACE` and `USING INDEX TABLESPACE` are extensions.
+-   **Data Distribution** - The HAWQ concept of a parallel or distributed database is not part of the SQL standard. The `DISTRIBUTED` clauses are extensions.
+
+## <a id="topic1__section8"></a>See Also
+
+[ALTER TABLE](ALTER-TABLE.html), [DROP TABLE](DROP-TABLE.html), [CREATE EXTERNAL TABLE](CREATE-EXTERNAL-TABLE.html), [CREATE TABLE AS](CREATE-TABLE-AS.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-TABLESPACE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-TABLESPACE.html.md.erb b/markdown/reference/sql/CREATE-TABLESPACE.html.md.erb
new file mode 100644
index 0000000..2d20107
--- /dev/null
+++ b/markdown/reference/sql/CREATE-TABLESPACE.html.md.erb
@@ -0,0 +1,58 @@
+---
+title: CREATE TABLESPACE
+---
+
+Defines a new tablespace.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE TABLESPACE <tablespace_name> [OWNER <username>]
+       FILESPACE <filespace_name>
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE TABLESPACE` registers a new tablespace for your HAWQ system. The tablespace name must be distinct from the name of any existing tablespace in the system.
+
+A tablespace allows superusers to define an alternative location on the file system where the data files containing database objects (such as tables) may reside.
+
+A user with appropriate privileges can pass a tablespace name to [CREATE DATABASE](CREATE-DATABASE.html) or [CREATE TABLE](CREATE-TABLE.html) to have the data files for these objects stored within the specified tablespace.
+
+In HAWQ, there must be a file system location defined for the master and each segment in order for the tablespace to have a location to store its objects across an entire HAWQ system. This collection of file system locations is defined in a filespace object. A filespace must be defined before you can create a tablespace. See [hawq filespace](../cli/admin_utilities/hawqfilespace.html#topic1) for more information.
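+
+For example, a sketch of the end-to-end flow, assuming a filespace named `myfilespace` has already been created with the `hawq filespace` utility; the tablespace and table names are illustrative:
+
+``` pre
+CREATE TABLESPACE mytblspace FILESPACE myfilespace;
+
+CREATE TABLE foo (i int) TABLESPACE mytblspace;
+```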
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<tablespace\_name\>   </dt>
+<dd>The name of a tablespace to be created. The name cannot begin with `pg_`, as such names are reserved for system tablespaces.</dd>
+
+<dt>OWNER \<username\>   </dt>
+<dd>The name of the user who will own the tablespace. If omitted, defaults to the user executing the command. Only superusers may create tablespaces, but they can assign ownership of tablespaces to non-superusers.</dd>
+
+<dt>FILESPACE \<filespace\_name\>   </dt>
+<dd>The name of a HAWQ filespace that was defined using the `hawq filespace` management utility.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+You must first create a filespace to be used by the tablespace. See [hawq filespace](../cli/admin_utilities/hawqfilespace.html#topic1) for more information.
+
+Tablespaces are only supported on systems that support symbolic links.
+
+`CREATE TABLESPACE` cannot be executed inside a transaction block.
+
+## <a id="topic1__section6"></a>Examples
+
+Create a new tablespace by specifying the corresponding filespace to use:
+
+``` pre
+CREATE TABLESPACE mytblspace FILESPACE myfilespace;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+`CREATE TABLESPACE` is a HAWQ extension.
+
+## <a id="topic1__section8"></a>See Also
+
+[CREATE DATABASE](CREATE-DATABASE.html), [CREATE TABLE](CREATE-TABLE.html), [DROP TABLESPACE](DROP-TABLESPACE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-TYPE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-TYPE.html.md.erb b/markdown/reference/sql/CREATE-TYPE.html.md.erb
new file mode 100644
index 0000000..9e7b59f
--- /dev/null
+++ b/markdown/reference/sql/CREATE-TYPE.html.md.erb
@@ -0,0 +1,185 @@
+---
+title: CREATE TYPE
+---
+
+Defines a new data type.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE TYPE <name> AS ( <attribute_name> <data_type> [, ... ] )
+
+CREATE TYPE <name> (
+    INPUT = <input_function>,
+    OUTPUT = <output_function>
+    [, RECEIVE = <receive_function>]
+    [, SEND = <send_function>]
+    [, INTERNALLENGTH = {<internallength> | VARIABLE}]
+    [, PASSEDBYVALUE]
+    [, ALIGNMENT = <alignment>]
+    [, STORAGE = <storage>]
+    [, DEFAULT = <default>]
+    [, ELEMENT = <element>]
+    [, DELIMITER = <delimiter>] )
+
+CREATE TYPE <name>
+
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE TYPE` registers a new data type for use in the current database. The user who defines a type becomes its owner.
+
+If a schema name is given then the type is created in the specified schema. Otherwise it is created in the current schema. The type name must be distinct from the name of any existing type or domain in the same schema. The type name must also be distinct from the name of any existing table in the same schema.
+
+**Composite Types**
+
+The first form of `CREATE TYPE` creates a composite type. This is the only form currently supported by HAWQ. The composite type is specified by a list of attribute names and data types. This is essentially the same as the row type of a table, but using `CREATE TYPE` avoids the need to create an actual table when all that is wanted is to define a type. A stand-alone composite type is useful as the argument or return type of a function.
+
+**Base Types**
+
+The second form of `CREATE TYPE` creates a new base type (scalar type). The parameters may appear in any order, not only that shown in the syntax, and most are optional. You must register two or more functions (using `CREATE FUNCTION`) before defining the type. The support functions \<input\_function\> and \<output\_function\> are required, while the functions \<receive\_function\>, \<send\_function\> and \<analyze\_function\> are optional. Generally these functions have to be coded in C or another low-level language. In HAWQ, any function used to implement a data type must be defined as `IMMUTABLE`.
+
+The \<input\_function\> converts the type's external textual representation to the internal representation used by the operators and functions defined for the type. \<output\_function\> performs the reverse transformation. The input function may be declared as taking one argument of type `cstring`, or as taking three arguments of types `cstring`, `oid`, `integer`. The first argument is the input text as a C string, the second argument is the type's own OID (except for array types, which instead receive their element type's OID), and the third is the `typmod` of the destination column, if known (`-1` will be passed if not). The input function must return a value of the data type itself. Usually, an input function should be declared `STRICT`; if it is not, it will be called with a `NULL` first parameter when reading a `NULL` input value. The function must still return `NULL` in this case, unless it raises an error. (This case is mainly meant to support domain input functions, which may need to reject `NULL` inputs.) The output function must be declared as taking one argument of the new data type. The output function must return type `cstring`. Output functions are not invoked for `NULL` values.
+
+The optional \<receive\_function\> converts the type's external binary representation to the internal representation. If this function is not supplied, the type cannot participate in binary input. The binary representation should be chosen to be cheap to convert to internal form, while being reasonably portable. (For example, the standard integer data types use network byte order as the external binary representation, while the internal representation is in the machine's native byte order.) The receive function should perform adequate checking to ensure that the value is valid. The receive function may be declared as taking one argument of type `internal`, or as taking three arguments of types `internal`, `oid`, `integer`. The first argument is a pointer to a `StringInfo` buffer holding the received byte string; the optional arguments are the same as for the text input function. The receive function must return a value of the data type itself. Usually, a receive function should be declared `STRICT`; if it is not, it will be called with a `NULL` first parameter when reading a NULL input value. The function must still return `NULL` in this case, unless it raises an error. (This case is mainly meant to support domain receive functions, which may need to reject `NULL` inputs.) Similarly, the optional \<send\_function\> converts from the internal representation to the external binary representation. If this function is not supplied, the type cannot participate in binary output. The send function must be declared as taking one argument of the new data type. The send function must return type `bytea`. Send functions are not invoked for `NULL` values.
+
+You should at this point be wondering how the input and output functions can be declared to have results or arguments of the new type, when they have to be created before the new type can be created. The answer is that the type should first be defined as a shell type, which is a placeholder type that has no properties except a name and an owner. This is done by issuing the command `CREATE TYPE <name>`, with no additional parameters. Then the I/O functions can be defined referencing the shell type. Finally, `CREATE TYPE` with a full definition replaces the shell entry with a complete, valid type definition, after which the new type can be used normally.
+
+While the details of the new type's internal representation are only known to the I/O functions and other functions you create to work with the type, there are several properties of the internal representation that must be declared to HAWQ. Foremost of these is \<internallength\>. Base data types can be fixed-length, in which case \<internallength\> is a positive integer, or variable length, indicated by setting \<internallength\> to `VARIABLE`. (Internally, this is represented by setting `typlen` to `-1`.) The internal representation of all variable-length types must start with a 4-byte integer giving the total length of this value of the type.
+
+The optional flag `PASSEDBYVALUE` indicates that values of this data type are passed by value, rather than by reference. You may not pass by value types whose internal representation is larger than the size of the `Datum` type (4 bytes on most machines, 8 bytes on a few).
+
+The \<alignment\> parameter specifies the storage alignment required for the data type. The allowed values equate to alignment on 1, 2, 4, or 8 byte boundaries. Note that variable-length types must have an alignment of at least 4, since they necessarily contain an `int4` as their first component.
+
+The \<storage\> parameter allows selection of storage strategies for variable-length data types. (Only `plain` is allowed for fixed-length types.) `plain` specifies that data of the type will always be stored in-line and not compressed. `extended` specifies that the system will first try to compress a long data value, and will move the value out of the main table row if it's still too long. `external` allows the value to be moved out of the main table, but the system will not try to compress it. `main` allows compression, but discourages moving the value out of the main table. (Data items with this storage strategy may still be moved out of the main table if there is no other way to make a row fit, but they will be kept in the main table preferentially over `extended` and `external` items.)
+
+A default value may be specified, in case a user wants columns of the data type to default to something other than the null value. Specify the default with the `DEFAULT` key word. (Such a default may be overridden by an explicit `DEFAULT` clause attached to a particular column.)
+
+To indicate that a type is an array, specify the type of the array elements using the `ELEMENT` key word. For example, to define an array of 4-byte integers (int4), specify `ELEMENT = int4`. More details about array types appear below.
+
+To indicate the delimiter to be used between values in the external representation of arrays of this type, `delimiter` can be set to a specific character. The default delimiter is the comma (,). Note that the delimiter is associated with the array element type, not the array type itself.
+
+**Array Types**
+
+Whenever a user-defined base data type is created, HAWQ automatically creates an associated array type, whose name consists of the base type's name prepended with an underscore. The parser understands this naming convention, and translates requests for columns of type `foo[]` into requests for type `_foo`. The implicitly-created array type is variable length and uses the built-in input and output functions `array_in` and `array_out`.
+
+You might reasonably ask why there is an `ELEMENT` option, if the system makes the correct array type automatically. The only case where it's useful to use `ELEMENT` is when you are making a fixed-length type that happens to be internally an array of a number of identical things, and you want to allow these things to be accessed directly by subscripting, in addition to whatever operations you plan to provide for the type as a whole. For example, type `name` allows its constituent `char` elements to be accessed this way. A 2-D point type could allow its two component numbers to be accessed like point\[0\] and point\[1\]. Note that this facility only works for fixed-length types whose internal form is exactly a sequence of identical fixed-length fields. A subscriptable variable-length type must have the generalized internal representation used by `array_in` and `array_out`. For historical reasons, subscripting of fixed-length array types starts from zero, rather than from one as for variable-length arrays.
+
+## <a id="topic1__section7"></a>Parameters
+
+<dt> \<name\>  </dt>
+<dd>The name (optionally schema-qualified) of a type to be created.</dd>
+
+<dt> \<attribute\_name\>  </dt>
+<dd>The name of an attribute (column) for the composite type.</dd>
+
+<dt> \<data\_type\>  </dt>
+<dd>The name of an existing data type to become a column of the composite type.</dd>
+
+<dt> \<input\_function\>  </dt>
+<dd>The name of a function that converts data from the type's external textual form to its internal form.</dd>
+
+<dt> \<output\_function\>  </dt>
+<dd>The name of a function that converts data from the type's internal form to its external textual form.</dd>
+
+<dt> \<receive\_function\>  </dt>
+<dd>The name of a function that converts data from the type's external binary form to its internal form.</dd>
+
+<dt> \<send\_function\>  </dt>
+<dd>The name of a function that converts data from the type's internal form to its external binary form.</dd>
+
+<dt> \<internallength\>  </dt>
+<dd>A numeric constant that specifies the length in bytes of the new type's internal representation. The default assumption is that it is variable-length.</dd>
+
+<dt> \<alignment\>  </dt>
+<dd>The storage alignment requirement of the data type. Must be one of `char`, `int2`, `int4`, or `double`. The default is `int4`.</dd>
+
+<dt> \<storage\>  </dt>
+<dd>The storage strategy for the data type. Must be one of `plain`, `external`, `extended`, or `main`. The default is `plain`.</dd>
+
+<dt> \<default\>  </dt>
+<dd>The default value for the data type. If this is omitted, the default is null.</dd>
+
+<dt> \<element\>  </dt>
+<dd>The type being created is an array; this specifies the type of the array elements.</dd>
+
+<dt> \<delimiter\>  </dt>
+<dd>The delimiter character to be used between values in arrays made of this type.</dd>
+
+## <a id="topic1__section8"></a>Notes
+
+User-defined type names cannot begin with the underscore character (\_) and can only be 62 characters long (or in general `NAMEDATALEN - 2`, rather than the `NAMEDATALEN - 1` characters allowed for other names). Type names beginning with underscore are reserved for internally-created array type names.
+
+Because there are no restrictions on use of a data type once it's been created, creating a base type is tantamount to granting public execute permission on the functions mentioned in the type definition. (The creator of the type is therefore required to own these functions.) This is usually not an issue for the sorts of functions that are useful in a type definition. But you might want to think twice before designing a type in a way that would require 'secret' information to be used while converting it to or from external form.
+
+## <a id="topic1__section9"></a>Examples
+
+This example creates a composite type and uses it in a function definition:
+
+``` pre
+CREATE TYPE compfoo AS (f1 int, f2 text);
+
+CREATE FUNCTION getfoo() RETURNS SETOF compfoo AS $$
+    SELECT fooid, fooname FROM foo
+$$ LANGUAGE SQL;
+```
+
+This example creates the base data type `box` and then uses the type in a table definition:
+
+``` pre
+CREATE TYPE box;
+
+CREATE FUNCTION my_box_in_function(cstring) RETURNS box AS
+... ;
+
+CREATE FUNCTION my_box_out_function(box) RETURNS cstring AS
+... ;
+
+CREATE TYPE box (
+    INTERNALLENGTH = 16,
+    INPUT = my_box_in_function,
+    OUTPUT = my_box_out_function
+);
+
+CREATE TABLE myboxes (
+    id integer,
+    description box
+);
+```
+
+If the internal structure of `box` were an array of four `float4` elements, we might instead use:
+
+``` pre
+CREATE TYPE box (
+    INTERNALLENGTH = 16,
+    INPUT = my_box_in_function,
+    OUTPUT = my_box_out_function,
+    ELEMENT = float4
+);
+```
+
+which would allow a box value's component numbers to be accessed by subscripting. Otherwise the type behaves the same as before.
+
+This example creates a large object type and uses it in a table definition:
+
+``` pre
+CREATE TYPE bigobj (
+    INPUT = lo_filein, OUTPUT = lo_fileout,
+    INTERNALLENGTH = VARIABLE
+);
+
+CREATE TABLE big_objs (
+    id integer,
+    obj bigobj
+);
+```
+
+## <a id="topic1__section10"></a>Compatibility
+
+The `CREATE TYPE` command is a HAWQ extension. There is a `CREATE TYPE` statement in the SQL standard that is rather different in detail.
+
+## <a id="topic1__section11"></a>See Also
+
+[CREATE FUNCTION](CREATE-FUNCTION.html), [ALTER TYPE](ALTER-TYPE.html), [DROP TYPE](DROP-TYPE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-USER.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-USER.html.md.erb b/markdown/reference/sql/CREATE-USER.html.md.erb
new file mode 100644
index 0000000..738c645
--- /dev/null
+++ b/markdown/reference/sql/CREATE-USER.html.md.erb
@@ -0,0 +1,46 @@
+---
+title: CREATE USER
+---
+
+Defines a new database role with the `LOGIN` privilege by default.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE USER <name> [ [WITH] <option> [ ... ] ]
+```
+
+where \<option\> can be:
+
+``` pre
+      SUPERUSER | NOSUPERUSER
+    | CREATEDB | NOCREATEDB
+    | CREATEROLE | NOCREATEROLE
+    | CREATEUSER | NOCREATEUSER
+    | INHERIT | NOINHERIT
+    | LOGIN | NOLOGIN
+    | [ ENCRYPTED | UNENCRYPTED ] PASSWORD '<password>'
+    | VALID UNTIL '<timestamp>'
+    | IN ROLE <rolename> [, ...]
+    | IN GROUP <rolename> [, ...]
+    | ROLE <rolename> [, ...]
+    | ADMIN <rolename> [, ...]
+    | USER <rolename> [, ...]
+    | SYSID <uid>
+    | RESOURCE QUEUE <queue_name>
+
+```
+
+## <a id="topic1__section3"></a>Description
+
+HAWQ does not support `CREATE USER`. This command has been replaced by [CREATE ROLE](CREATE-ROLE.html).
+
+The only difference between `CREATE ROLE` and `CREATE USER` is that `LOGIN` is assumed by default with `CREATE USER`, whereas `NOLOGIN` is assumed by default with `CREATE ROLE`.
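+
+For example, the following two statements create equivalent login roles (the role name `jsmith` and the password are placeholders):
+
+``` pre
+CREATE USER jsmith WITH PASSWORD 'changeme';
+CREATE ROLE jsmith WITH LOGIN PASSWORD 'changeme';
+```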
+
+## <a id="topic1__section4"></a>Compatibility
+
+There is no `CREATE USER` statement in the SQL standard.
+
+## <a id="topic1__section5"></a>See Also
+
+[CREATE ROLE](CREATE-ROLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-VIEW.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-VIEW.html.md.erb b/markdown/reference/sql/CREATE-VIEW.html.md.erb
new file mode 100644
index 0000000..e39d8d3
--- /dev/null
+++ b/markdown/reference/sql/CREATE-VIEW.html.md.erb
@@ -0,0 +1,88 @@
+---
+title: CREATE VIEW
+---
+
+Defines a new view.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE [OR REPLACE] [TEMP | TEMPORARY] VIEW <name>
+       [ ( <column_name> [, ...] ) ]
+       AS <query>
+         
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE VIEW` defines a view of a query. The view is not physically materialized. Instead, the query is run every time the view is referenced in a query.
+
+`CREATE OR REPLACE VIEW` is similar, but if a view of the same name already exists, it is replaced. You can only replace a view with a new query that generates the identical set of columns (same column names and data types).
+
+If a schema name is given then the view is created in the specified schema. Otherwise it is created in the current schema. Temporary views exist in a special schema, so a schema name may not be given when creating a temporary view. The name of the view must be distinct from the name of any other view, table, sequence, or index in the same schema.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>TEMPORARY | TEMP  </dt>
+<dd>If specified, the view is created as a temporary view. Temporary views are automatically dropped at the end of the current session. Existing permanent relations with the same name are not visible to the current session while the temporary view exists, unless they are referenced with schema-qualified names. If any of the tables referenced by the view are temporary, the view is created as a temporary view (whether `TEMPORARY` is specified or not).</dd>
+
+<dt> \<name\>   </dt>
+<dd>The name (optionally schema-qualified) of a view to be created.</dd>
+
+<dt> \<column\_name\>   </dt>
+<dd>An optional list of names to be used for columns of the view. If not given, the column names are deduced from the query.</dd>
+
+<dt> \<query\>   </dt>
+<dd>A [SELECT](SELECT.html) command which will provide the columns and rows of the view.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Views in HAWQ are read only. The system will not allow an insert, update, or delete on a view. You can get the effect of an updatable view by creating rewrite rules on the view into appropriate actions on other tables. For more information see `CREATE RULE`.
+
+Be careful that the names and data types of the view's columns will be assigned the way you want. For example, if you run the following command:
+
+``` pre
+CREATE VIEW vista AS SELECT 'Hello World';
+```
+
+The result is poor: the column name defaults to `?column?`, and the column data type defaults to `unknown`. If you want a string literal in a view's result, use the following command:
+
+``` pre
+CREATE VIEW vista AS SELECT text 'Hello World' AS hello;
+```
+
+Check that you have permission to access the tables referenced in the view. View ownership determines permissions, not your status as current user. This is true, even if you are a superuser. This concept is unusual, since superusers typically have access to all objects. In the case of views, even superusers must be explicitly granted access to tables referenced if they do not own the view.
+
+However, functions called in the view are treated the same as if they had been called directly from the query using the view. Therefore the user of a view must have permissions to call any functions used by the view.
+
+If you create a view with an `ORDER BY` clause, the `ORDER BY` clause is ignored when you do a `SELECT` from the view.
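+
+To return ordered results, apply `ORDER BY` in the query that selects from the view instead. A sketch using the `comedies` view created in the Examples section below (the `title` column is illustrative):
+
+``` pre
+SELECT * FROM comedies ORDER BY title;
+```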
+
+## <a id="topic1__section6"></a>Examples
+
+Create a view consisting of all comedy films:
+
+``` pre
+CREATE VIEW comedies AS SELECT * FROM films WHERE kind = 'comedy';
+```
+
+Create a view that gets the top ten ranked baby names:
+
+``` pre
+CREATE VIEW topten AS SELECT name, rank, gender, year FROM names, rank WHERE rank < '11' AND names.id=rank.id;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+The SQL standard specifies some additional capabilities for the `CREATE VIEW` statement that are not in HAWQ. The optional clauses for the full SQL command in the standard are:
+
+-   **CHECK OPTION** — This option has to do with updatable views. All `INSERT` commands on the view will be checked to ensure data satisfy the view-defining condition (that is, the new data would be visible through the view). If they do not, the insert will be rejected.
+-   **LOCAL** — Check for integrity on this view.
+-   **CASCADED** — Check for integrity on this view and on any dependent view. `CASCADED` is assumed if neither `CASCADED` nor `LOCAL` is specified.
+
+`CREATE OR REPLACE VIEW` is a HAWQ language extension. So is the concept of a temporary view.
+
+## <a id="topic1__section8"></a>See Also
+
+[SELECT](SELECT.html), [DROP VIEW](DROP-VIEW.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DEALLOCATE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DEALLOCATE.html.md.erb b/markdown/reference/sql/DEALLOCATE.html.md.erb
new file mode 100644
index 0000000..846f282
--- /dev/null
+++ b/markdown/reference/sql/DEALLOCATE.html.md.erb
@@ -0,0 +1,42 @@
+---
+title: DEALLOCATE
+---
+
+Deallocates a prepared statement.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DEALLOCATE [PREPARE] <name>
+         
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DEALLOCATE` is used to deallocate a previously prepared SQL statement. If you do not explicitly deallocate a prepared statement, it is deallocated when the session ends.
+
+For more information on prepared statements, see [PREPARE](PREPARE.html).
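+
+A minimal end-to-end sketch (the `names` table and its single `text` column are assumed here for illustration):
+
+``` pre
+PREPARE insert_names (text) AS INSERT INTO names VALUES ($1);
+EXECUTE insert_names('Anne');
+DEALLOCATE insert_names;
+```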
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>PREPARE  </dt>
+<dd>Optional key word which is ignored.</dd>
+
+<dt>\<name\>  </dt>
+<dd>The name of the prepared statement to deallocate.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Deallocate the previously prepared statement named `insert_names`:
+
+``` pre
+DEALLOCATE insert_names;
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+The SQL standard includes a `DEALLOCATE` statement, but it is only for use in embedded SQL.
+
+## <a id="topic1__section7"></a>See Also
+
+[EXECUTE](EXECUTE.html), [PREPARE](PREPARE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DECLARE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DECLARE.html.md.erb b/markdown/reference/sql/DECLARE.html.md.erb
new file mode 100644
index 0000000..d6fed83
--- /dev/null
+++ b/markdown/reference/sql/DECLARE.html.md.erb
@@ -0,0 +1,84 @@
+---
+title: DECLARE
+---
+
+Defines a cursor.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DECLARE <name> [BINARY] [INSENSITIVE] [NO SCROLL] CURSOR
+     [{WITH | WITHOUT} HOLD]
+     FOR <query> [FOR READ ONLY]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DECLARE` allows a user to create cursors, which can be used to retrieve a small number of rows at a time out of a larger query. Cursors can return data either in text or in binary format using [FETCH](FETCH.html).
+
+Normal cursors return data in text format, the same as a `SELECT` would produce. Since data is stored natively in binary format, the system must do a conversion to produce the text format. Once the information comes back in text form, the client application may need to convert it to a binary format to manipulate it. In addition, data in the text format is often larger in size than in the binary format. Binary cursors return the data in a binary representation that may be more easily manipulated. Nevertheless, if you intend to display the data as text anyway, retrieving it in text form will save you some effort on the client side.
+
+As an example, if a query returns a value of one from an integer column, you would get a string of 1 with a default cursor whereas with a binary cursor you would get a 4-byte field containing the internal representation of the value (in big-endian byte order).
+
+Binary cursors should be used carefully. Many applications, including psql, are not prepared to handle binary cursors and expect data to come back in the text format.
+
+**Note:**
+When the client application uses the 'extended query' protocol to issue a `FETCH` command, the Bind protocol message specifies whether data is to be retrieved in text or binary format. This choice overrides the way that the cursor is defined. The concept of a binary cursor as such is thus obsolete when using extended query protocol — any cursor can be treated as either text or binary.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>\<name\>  </dt>
+<dd>The name of the cursor to be created.</dd>
+
+<dt>BINARY  </dt>
+<dd>Causes the cursor to return data in binary rather than in text format.</dd>
+
+<dt>INSENSITIVE  </dt>
+<dd>Indicates that data retrieved from the cursor should be unaffected by updates to the tables underlying the cursor while the cursor exists. In HAWQ, all cursors are insensitive. This key word currently has no effect and is present for compatibility with the SQL standard.</dd>
+
+<dt>NO SCROLL  </dt>
+<dd>A cursor cannot be used to retrieve rows in a nonsequential fashion. This is the default behavior in HAWQ, since scrollable cursors (`SCROLL`) are not supported.</dd>
+
+<dt>WITH HOLD  
+WITHOUT HOLD  </dt>
+<dd>`WITH HOLD` specifies that the cursor may continue to be used after the transaction that created it successfully commits. `WITHOUT HOLD` specifies that the cursor cannot be used outside of the transaction that created it. `WITHOUT HOLD` is the default.</dd>
+
+<dt>\<query\> </dt>
+<dd>A [SELECT](SELECT.html) command which will provide the rows to be returned by the cursor.</dd>
+
+<!-- -->
+
+<dt>FOR READ ONLY  </dt>
+<dd>`FOR READ ONLY` indicates that the cursor is used in a read-only mode. Cursors can only be used in a read-only mode in HAWQ. HAWQ does not support updatable cursors (FOR UPDATE), so this is the default behavior.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Unless `WITH HOLD` is specified, the cursor created by this command can only be used within the current transaction. Thus, `DECLARE` without `WITH HOLD` is useless outside a transaction block: the cursor would survive only to the completion of the statement. Therefore HAWQ reports an error if this command is used outside a transaction block. Use `BEGIN`, `COMMIT` and `ROLLBACK` to define a transaction block.
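+
+A minimal sketch of cursor use inside a transaction block (the `mytable` table matches the example below; the fetch count is arbitrary):
+
+``` pre
+BEGIN;
+DECLARE mycursor CURSOR FOR SELECT * FROM mytable;
+FETCH FORWARD 10 FROM mycursor;
+CLOSE mycursor;
+COMMIT;
+```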
+
+If `WITH HOLD` is specified and the transaction that created the cursor successfully commits, the cursor can continue to be accessed by subsequent transactions in the same session. (But if the creating transaction is aborted, the cursor is removed.) A cursor created with `WITH HOLD` is closed when an explicit `CLOSE` command is issued on it, or the session ends. In the current implementation, the rows represented by a held cursor are copied into a temporary file or memory area so that they remain available for subsequent transactions.
+
+Scrollable cursors are not currently supported in HAWQ. You can only use `FETCH` to move the cursor position forward, not backwards.
+
+You can see all available cursors by querying the `pg_cursors` system view.
+
+## <a id="topic1__section6"></a>Examples
+
+Declare a cursor:
+
+``` pre
+DECLARE mycursor CURSOR FOR SELECT * FROM mytable;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+SQL standard allows cursors only in embedded SQL and in modules. HAWQ permits cursors to be used interactively.
+
+HAWQ does not implement an `OPEN` statement for cursors. A cursor is considered to be open when it is declared.
+
+The SQL standard allows cursors to move both forward and backward. All HAWQ cursors are forward moving only (not scrollable).
+
+Binary cursors are a HAWQ extension.
+
+## <a id="topic1__section8"></a>See Also
+
+[CLOSE](CLOSE.html), [FETCH](FETCH.html), [SELECT](SELECT.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-AGGREGATE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-AGGREGATE.html.md.erb b/markdown/reference/sql/DROP-AGGREGATE.html.md.erb
new file mode 100644
index 0000000..f40ca5f
--- /dev/null
+++ b/markdown/reference/sql/DROP-AGGREGATE.html.md.erb
@@ -0,0 +1,48 @@
+---
+title: DROP AGGREGATE
+---
+
+Removes an aggregate function.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP AGGREGATE [IF EXISTS] <name> ( <type> [, ...] ) [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP AGGREGATE` will delete an existing aggregate function. To execute this command the current user must be the owner of the aggregate function.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the aggregate does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing aggregate function.</dd>
+
+<dt>\<type\>   </dt>
+<dd>An input data type on which the aggregate function operates. To reference a zero-argument aggregate function, write `*` in place of the list of input data types.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the aggregate function.</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the aggregate function if any objects depend on it. This is the default.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+To remove the aggregate function `myavg` for type `integer`:
+
+``` pre
+DROP AGGREGATE myavg(integer);
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+There is no `DROP AGGREGATE` statement in the SQL standard.
+
+## <a id="topic1__section7"></a>See Also
+
+[ALTER AGGREGATE](ALTER-AGGREGATE.html), [CREATE AGGREGATE](CREATE-AGGREGATE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-DATABASE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-DATABASE.html.md.erb b/markdown/reference/sql/DROP-DATABASE.html.md.erb
new file mode 100644
index 0000000..d8ae296
--- /dev/null
+++ b/markdown/reference/sql/DROP-DATABASE.html.md.erb
@@ -0,0 +1,48 @@
+---
+title: DROP DATABASE
+---
+
+Removes a database.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP DATABASE [IF EXISTS] <name>
+         
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP DATABASE` drops a database. It removes the catalog entries for the database and deletes the directory containing the data. It can only be executed by the database owner. Also, it cannot be executed while you or anyone else are connected to the target database. (Connect to `template1` or any other database to issue this command.)
+
+**Warning:** `DROP DATABASE` cannot be undone. Use it with care!
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the database does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name of the database to remove.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+`DROP DATABASE` cannot be executed inside a transaction block.
+
+This command cannot be executed while connected to the target database. Thus, it might be more convenient to use the program `dropdb` instead, which is a wrapper around this command.
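+
+For example, from the command line (assuming the `dropdb` client program is installed and on your `PATH`):
+
+``` pre
+dropdb testdb
+```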
+
+## <a id="topic1__section6"></a>Examples
+
+Drop the database named `testdb`:
+
+``` pre
+DROP DATABASE testdb;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+There is no `DROP DATABASE` statement in the SQL standard.
+
+## <a id="topic1__section8"></a>See Also
+
+[CREATE DATABASE](CREATE-DATABASE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-EXTERNAL-TABLE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-EXTERNAL-TABLE.html.md.erb b/markdown/reference/sql/DROP-EXTERNAL-TABLE.html.md.erb
new file mode 100644
index 0000000..01d0fb1
--- /dev/null
+++ b/markdown/reference/sql/DROP-EXTERNAL-TABLE.html.md.erb
@@ -0,0 +1,48 @@
+---
+title: DROP EXTERNAL TABLE
+---
+
+Removes an external table definition.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP EXTERNAL [WEB] TABLE [IF EXISTS] <name> [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP EXTERNAL TABLE` drops an existing external table definition from the database system. The external data sources or files are not deleted. To execute this command you must be the owner of the external table.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>WEB  </dt>
+<dd>Optional keyword for dropping external web tables.</dd>
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the external table does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing external table.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the external table (such as views).</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the external table if any objects depend on it. This is the default.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the external table named `staging` if it exists:
+
+``` pre
+DROP EXTERNAL TABLE IF EXISTS staging;
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+There is no `DROP EXTERNAL TABLE` statement in the SQL standard.
+
+## <a id="topic1__section7"></a>See Also
+
+[CREATE EXTERNAL TABLE](CREATE-EXTERNAL-TABLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-FILESPACE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-FILESPACE.html.md.erb b/markdown/reference/sql/DROP-FILESPACE.html.md.erb
new file mode 100644
index 0000000..afae3fe
--- /dev/null
+++ b/markdown/reference/sql/DROP-FILESPACE.html.md.erb
@@ -0,0 +1,42 @@
+---
+title: DROP FILESPACE
+---
+
+Removes a filespace.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP FILESPACE [IF EXISTS]  <filespacename>
+         
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP FILESPACE` removes a filespace definition and its system-generated data directories from the system.
+
+A filespace can only be dropped by its owner or a superuser. The filespace must be empty of all tablespace objects before it can be dropped. It is possible that tablespaces in other databases may still be using a filespace even if no tablespaces in the current database are using the filespace.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the filespace does not exist. A notice is issued in this case.</dd>
+
+<dt>\<filespacename\>   </dt>
+<dd>The name of the filespace to remove.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the filespace `myfs`:
+
+``` pre
+DROP FILESPACE myfs;
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+There is no `DROP FILESPACE` statement in the SQL standard or in PostgreSQL.
+
+## <a id="topic1__section7"></a>See Also
+
+[DROP TABLESPACE](DROP-TABLESPACE.html), [hawq filespace](../cli/admin_utilities/hawqfilespace.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-FUNCTION.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-FUNCTION.html.md.erb b/markdown/reference/sql/DROP-FUNCTION.html.md.erb
new file mode 100644
index 0000000..5ebd4e5
--- /dev/null
+++ b/markdown/reference/sql/DROP-FUNCTION.html.md.erb
@@ -0,0 +1,55 @@
+---
+title: DROP FUNCTION
+---
+
+Removes a function.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP FUNCTION [IF EXISTS] <name> ( [ [<argmode>] [<argname>] <argtype> 
+    [, ...] ] ) [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP FUNCTION` removes the definition of an existing function. To execute this command the user must be the owner of the function. The argument types to the function must be specified, since several different functions may exist with the same name and different argument lists.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the function does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing function.</dd>
+
+<dt>\<argmode\>   </dt>
+<dd>The mode of an argument: either `IN`, `OUT`, or `INOUT`. If omitted, the default is `IN`. Note that `DROP FUNCTION` does not actually pay any attention to `OUT` arguments, since only the input arguments are needed to determine the function's identity. So it is sufficient to list the `IN` and `INOUT` arguments.</dd>
+
+<dt>\<argname\>   </dt>
+<dd>The name of an argument. Note that `DROP FUNCTION` does not actually pay any attention to argument names, since only the argument data types are needed to determine the function's identity.</dd>
+
+<dt>\<argtype\>   </dt>
+<dd>The data type(s) of the function's arguments (optionally schema-qualified), if any.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the function such as operators.</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the function if any objects depend on it. This is the default.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Drop the square root function:
+
+``` pre
+DROP FUNCTION sqrt(integer);
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+A `DROP FUNCTION` statement is defined in the SQL standard, but it is not compatible with this command.
+
+## <a id="topic1__section7"></a>See Also
+
+[ALTER FUNCTION](ALTER-FUNCTION.html), [CREATE FUNCTION](CREATE-FUNCTION.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-GROUP.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-GROUP.html.md.erb b/markdown/reference/sql/DROP-GROUP.html.md.erb
new file mode 100644
index 0000000..5fce3ae
--- /dev/null
+++ b/markdown/reference/sql/DROP-GROUP.html.md.erb
@@ -0,0 +1,31 @@
+---
+title: DROP GROUP
+---
+
+Removes a database role.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP GROUP [IF EXISTS] <name> [, ...]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP GROUP` is an obsolete command, though still accepted for backwards compatibility. Groups (and users) have been superseded by the more general concept of roles. See [DROP ROLE](DROP-ROLE.html) for more information.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the role does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name of an existing role.</dd>
+
+## <a id="topic1__section5"></a>Compatibility
+
+There is no `DROP GROUP` statement in the SQL standard.
+
+## <a id="topic1__section6"></a>See Also
+
+[DROP ROLE](DROP-ROLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-LANGUAGE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-LANGUAGE.html.md.erb b/markdown/reference/sql/DROP-LANGUAGE.html.md.erb
new file mode 100644
index 0000000..efb95f8
--- /dev/null
+++ b/markdown/reference/sql/DROP-LANGUAGE.html.md.erb
@@ -0,0 +1,49 @@
+---
+title: DROP LANGUAGE
+---
+
+Removes a procedural language.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP [PROCEDURAL] LANGUAGE [IF EXISTS] <name> [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP LANGUAGE` will remove the definition of the previously registered procedural language \<name\>. You must have superuser privileges to drop a language.
+
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>PROCEDURAL  </dt>
+<dd>(Optional, no effect) Indicates that this is a procedural language.</dd>
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the language does not exist. A notice message is issued in this circumstance.</dd>
+
+<dt> \<name\>   </dt>
+<dd>The name of an existing procedural language. The name may be enclosed by single quotes.</dd>
+
+<dt>CASCADE </dt>
+<dd>Automatically drop objects that depend on the language (for example, functions written in that language).</dd>
+
+<dt>RESTRICT</dt>
+<dd>Refuse to drop the language if any objects depend on it. This is the default behavior.</dd>
+
+## <a id="topic1__section6"></a>Examples
+
+Remove the procedural language `plsample`:
+
+``` pre
+DROP LANGUAGE plsample;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+`DROP LANGUAGE` is a HAWQ extension; there is no `DROP LANGUAGE` statement in the SQL standard.
+
+## <a id="topic1__section8"></a>See Also
+
+[CREATE LANGUAGE](CREATE-LANGUAGE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-OPERATOR-CLASS.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-OPERATOR-CLASS.html.md.erb b/markdown/reference/sql/DROP-OPERATOR-CLASS.html.md.erb
new file mode 100644
index 0000000..da22425
--- /dev/null
+++ b/markdown/reference/sql/DROP-OPERATOR-CLASS.html.md.erb
@@ -0,0 +1,54 @@
+---
+title: DROP OPERATOR CLASS
+---
+
+Removes an operator class.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP OPERATOR CLASS [IF EXISTS] <name> USING <index_method> [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP OPERATOR CLASS` drops an existing operator class. To execute this command you must be the owner of the operator class.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the operator class does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing operator class.</dd>
+
+<dt>\<index\_method\>   </dt>
+<dd>The name of the index access method the operator class is for.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the operator class.</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the operator class if any objects depend on it. This is the default.</dd>
+
+## Notes
+
+This command will not succeed if there are any existing indexes that use the operator class. Add `CASCADE` to drop such indexes along with the operator class.
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the B-tree operator class `widget_ops`:
+
+``` pre
+DROP OPERATOR CLASS widget_ops USING btree;
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+There is no `DROP OPERATOR CLASS` statement in the SQL standard.
+
+## <a id="topic1__section7"></a>See Also
+
+[ALTER OPERATOR](ALTER-OPERATOR.html), [CREATE OPERATOR](CREATE-OPERATOR.html) [CREATE OPERATOR CLASS](CREATE-OPERATOR-CLASS.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-OPERATOR.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-OPERATOR.html.md.erb b/markdown/reference/sql/DROP-OPERATOR.html.md.erb
new file mode 100644
index 0000000..b59fde4
--- /dev/null
+++ b/markdown/reference/sql/DROP-OPERATOR.html.md.erb
@@ -0,0 +1,64 @@
+---
+title: DROP OPERATOR
+---
+
+Removes an operator.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP OPERATOR [IF EXISTS] <name> ( {<lefttype> | NONE} , 
+    {<righttype> | NONE} ) [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP OPERATOR` drops an existing operator from the database system. To execute this command you must be the owner of the operator.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the operator does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing operator.</dd>
+
+<dt>\<lefttype\>  </dt>
+<dd>The data type of the operator's left operand; write `NONE` if the operator has no left operand.</dd>
+
+<dt>\<righttype\> </dt>
+<dd>The data type of the operator's right operand; write `NONE` if the operator has no right operand.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the operator.</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the operator if any objects depend on it. This is the default.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the power operator `a^b` for type `integer`:
+
+``` pre
+DROP OPERATOR ^ (integer, integer);
+```
+
+Remove the left unary bitwise complement operator `~b` for type `bit`:
+
+``` pre
+DROP OPERATOR ~ (none, bit);
+```
+
+Remove the right unary factorial operator `x!` for type `bigint`:
+
+``` pre
+DROP OPERATOR ! (bigint, none);
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+There is no `DROP OPERATOR` statement in the SQL standard.
+
+## <a id="topic1__section7"></a>See Also
+
+[ALTER OPERATOR](ALTER-OPERATOR.html), [CREATE OPERATOR](CREATE-OPERATOR.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-OWNED.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-OWNED.html.md.erb b/markdown/reference/sql/DROP-OWNED.html.md.erb
new file mode 100644
index 0000000..50c5272
--- /dev/null
+++ b/markdown/reference/sql/DROP-OWNED.html.md.erb
@@ -0,0 +1,50 @@
+---
+title: DROP OWNED
+---
+
+Removes database objects owned by a database role.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP OWNED BY <name> [, ...] [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP OWNED` drops all the objects in the current database that are owned by one of the specified roles. Any privileges granted to the given roles on objects in the current database will also be revoked.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>\<name\>   </dt>
+<dd>The name of a role whose objects will be dropped, and whose privileges will be revoked.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the affected objects.</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the objects owned by a role if any other database objects depend on one of the affected objects. This is the default.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+`DROP OWNED` is often used to prepare for the removal of one or more roles. Because `DROP OWNED` only affects the objects in the current database, it is usually necessary to execute this command in each database that contains objects owned by a role that is to be removed.
+
+Using the `CASCADE` option may make the command recurse to objects owned by other users.
+
+The `REASSIGN OWNED` command is an alternative that reassigns the ownership of all the database objects owned by one or more roles.
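+
+A sketch of the typical sequence for removing a role entirely (the role names are placeholders; `REASSIGN OWNED` and `DROP OWNED` operate only on the current database, so repeat them in each affected database before dropping the role):
+
+``` pre
+REASSIGN OWNED BY sally TO admin;
+DROP OWNED BY sally;
+DROP ROLE sally;
+```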
+
+## <a id="topic1__section6"></a>Examples
+
+Remove any database objects owned by the role named `sally`:
+
+``` pre
+DROP OWNED BY sally;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+The `DROP OWNED` statement is a HAWQ extension.
+
+## <a id="topic1__section8"></a>See Also
+
+[REASSIGN OWNED](REASSIGN-OWNED.html), [DROP ROLE](DROP-ROLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-RESOURCE-QUEUE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-RESOURCE-QUEUE.html.md.erb b/markdown/reference/sql/DROP-RESOURCE-QUEUE.html.md.erb
new file mode 100644
index 0000000..473923f
--- /dev/null
+++ b/markdown/reference/sql/DROP-RESOURCE-QUEUE.html.md.erb
@@ -0,0 +1,65 @@
+---
+title: DROP RESOURCE QUEUE
+---
+
+Removes a resource queue.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP RESOURCE QUEUE <queue_name>
+         
+```
+
+## <a id="topic1__section3"></a>Description
+
+This command removes a resource queue from HAWQ. To drop a resource queue, the queue cannot have any roles assigned to it, nor can it have any statements waiting in the queue or have any children resource queues. Only a superuser can drop a resource queue.
+
+**Note:** The `pg_root` and `pg_default` resource queues cannot be dropped.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<queue\_name\>   </dt>
+<dd>The name of a resource queue to remove.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Use [ALTER ROLE](ALTER-ROLE.html) to remove a user from a resource queue.
+
+To see all the currently active queries for all resource queues, perform the following query of the `pg_locks` table joined with the `pg_roles` and `pg_resqueue` tables:
+
+``` pre
+SELECT rolname, rsqname, locktype, objid, transaction, pid, 
+mode, granted FROM pg_roles, pg_resqueue, pg_locks WHERE 
+pg_roles.rolresqueue=pg_locks.objid AND 
+pg_locks.objid=pg_resqueue.oid;
+```
+
+To see the roles assigned to a resource queue, perform the following query of the `pg_roles` and `pg_resqueue` system catalog tables:
+
+``` pre
+SELECT rolname, rsqname FROM pg_roles, pg_resqueue WHERE 
+pg_roles.rolresqueue=pg_resqueue.oid;
+```
+
+## <a id="topic1__section6"></a>Examples
+
+Remove a role from a resource queue (and move the role to the default resource queue, `pg_default`):
+
+``` pre
+ALTER ROLE bob RESOURCE QUEUE NONE;
+```
+
+Remove the resource queue named `adhoc`:
+
+``` pre
+DROP RESOURCE QUEUE adhoc;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+The `DROP RESOURCE QUEUE` statement is a HAWQ extension.
+
+## <a id="topic1__section8"></a>See Also
+
+[CREATE RESOURCE QUEUE](CREATE-RESOURCE-QUEUE.html), [ALTER ROLE](ALTER-ROLE.html), [ALTER RESOURCE QUEUE](ALTER-RESOURCE-QUEUE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-ROLE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-ROLE.html.md.erb b/markdown/reference/sql/DROP-ROLE.html.md.erb
new file mode 100644
index 0000000..b1d305b
--- /dev/null
+++ b/markdown/reference/sql/DROP-ROLE.html.md.erb
@@ -0,0 +1,43 @@
+---
+title: DROP ROLE
+---
+
+Removes a database role.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP ROLE [IF EXISTS] <name> [, ...]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP ROLE` removes the specified role(s). To drop a superuser role, you must be a superuser yourself. To drop non-superuser roles, you must have `CREATEROLE` privilege.
+
+A role cannot be removed if it is still referenced in any database; an error will be raised if so. Before dropping the role, you must drop all the objects it owns (or reassign their ownership) and revoke any privileges the role has been granted. The `REASSIGN OWNED` and `DROP OWNED` commands can be useful for this purpose.
+
+However, it is not necessary to remove role memberships involving the role; `DROP ROLE` automatically revokes any memberships of the target role in other roles, and of other roles in the target role. The other roles are not dropped nor otherwise affected.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the role does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name of the role to remove.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the roles named `sally` and `bob`:
+
+``` pre
+DROP ROLE sally, bob;
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+The SQL standard defines `DROP ROLE`, but it allows only one role to be dropped at a time, and it specifies different privilege requirements than HAWQ uses.
+
+## <a id="topic1__section7"></a>See Also
+
+[ALTER ROLE](ALTER-ROLE.html), [CREATE ROLE](CREATE-ROLE.html), [DROP OWNED](DROP-OWNED.html), [REASSIGN OWNED](REASSIGN-OWNED.html), [SET ROLE](SET-ROLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-SCHEMA.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-SCHEMA.html.md.erb b/markdown/reference/sql/DROP-SCHEMA.html.md.erb
new file mode 100644
index 0000000..8d7846f
--- /dev/null
+++ b/markdown/reference/sql/DROP-SCHEMA.html.md.erb
@@ -0,0 +1,45 @@
+---
+title: DROP SCHEMA
+---
+
+Removes a schema.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP SCHEMA [IF EXISTS] <name> [, ...] [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP SCHEMA` removes schemas from the database. A schema can only be dropped by its owner or a superuser. Note that the owner can drop the schema (and thereby all contained objects) even if they do not own some of the objects within the schema.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the schema does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name of the schema to remove.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drops any objects contained in the schema (tables, functions, etc.).</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the schema if it contains any objects. This is the default.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the schema `mystuff` from the database, along with everything it contains:
+
+``` pre
+DROP SCHEMA mystuff CASCADE;
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+`DROP SCHEMA` is fully conforming with the SQL standard, except that the standard only allows one schema to be dropped per command. Also, the `IF EXISTS` option is a HAWQ extension.
+
+## <a id="topic1__section7"></a>See Also
+
+[CREATE SCHEMA](CREATE-SCHEMA.html)



[36/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/ddl/ddl-table.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/ddl/ddl-table.html.md.erb b/markdown/ddl/ddl-table.html.md.erb
new file mode 100644
index 0000000..bc4f0c4
--- /dev/null
+++ b/markdown/ddl/ddl-table.html.md.erb
@@ -0,0 +1,149 @@
+---
+title: Creating and Managing Tables
+---
+
+HAWQ tables are similar to tables in any relational database, except that table rows are distributed across the different segments in the system. When you create a table, you specify the table's distribution policy.
+
+## <a id="topic26"></a>Creating a Table 
+
+The `CREATE TABLE` command creates a table and defines its structure. When you create a table, you define:
+
+-   The columns of the table and their associated data types. See [Choosing Column Data Types](#topic27).
+-   Any table constraints to limit the data that a column or table can contain. See [Setting Table Constraints](#topic28).
+-   The distribution policy of the table, which determines how HAWQ divides the data across the segments. See [Choosing the Table Distribution Policy](#topic34).
+-   The way the table is stored on disk.
+-   The table partitioning strategy for large tables, which specifies how the data should be divided. See [Creating and Managing Databases](../ddl/ddl-database.html).
+
+### <a id="topic27"></a>Choosing Column Data Types 
+
+The data type of a column determines the types of data values the column can contain. Choose the data type that uses the least possible space but can still accommodate your data and that best constrains the data. For example, use character data types for strings, date or timestamp data types for dates, and numeric data types for numbers.
+
+There are no performance differences among the character data types `CHAR`, `VARCHAR`, and `TEXT` apart from the increased storage size when you use the blank-padded type. In most situations, use `TEXT` or `VARCHAR` rather than `CHAR`.
+
+Use the smallest numeric data type that will accommodate your numeric data and allow for future expansion. For example, using `BIGINT` for data that fits in `INT` or `SMALLINT` wastes storage space. If you expect that your data values will expand over time, consider that changing from a smaller datatype to a larger datatype after loading large amounts of data is costly. For example, if your current data values fit in a `SMALLINT` but it is likely that the values will expand, `INT` is the better long-term choice.
+
+Use the same data types for columns that you plan to use in cross-table joins. When the data types are different, the database must convert one of them so that the data values can be compared correctly, which adds unnecessary overhead.
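+
+For example, a sketch in which the join key `customer_id` uses the same data type in both tables (the table and column names are illustrative only):
+
+``` sql
+=> CREATE TABLE customers (customer_id int, customer_name text);
+=> CREATE TABLE orders (order_id bigint, customer_id int, order_date date, amount numeric(10,2));
+```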
+
+HAWQ supports the parquet columnar storage format, which can increase performance on large queries. Use parquet tables for HAWQ internal tables.
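+
+For example, a hedged sketch of a parquet table definition (the column names are illustrative and compression options are omitted):
+
+``` sql
+=> CREATE TABLE events (event_id bigint, payload text)
+     WITH (appendonly=true, orientation=parquet);
+```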
+
+### <a id="topic28"></a>Setting Table Constraints 
+
+You can define constraints to restrict the data in your tables. HAWQ support for constraints is the same as PostgreSQL with some limitations, including:
+
+-   `CHECK` constraints can refer only to the table on which they are defined.
+-   `FOREIGN KEY` constraints are allowed, but not enforced.
+-   Constraints that you define on partitioned tables apply to the partitioned table as a whole. You cannot define constraints on the individual parts of the table.
+
+#### <a id="topic29"></a>Check Constraints 
+
+Check constraints allow you to specify that the value in a certain column must satisfy a Boolean \(truth-value\) expression. For example, to require positive product prices:
+
+``` sql
+=> CREATE TABLE products
+     ( product_no integer,
+       name text,
+       price numeric CHECK (price > 0) );
+```
+
+#### <a id="topic30"></a>Not-Null Constraints 
+
+Not-null constraints specify that a column must not assume the null value. A not-null constraint is always written as a column constraint. For example:
+
+``` sql
+=> CREATE TABLE products
+     ( product_no integer NOT NULL,
+       name text NOT NULL,
+       price numeric );
+```
+
+#### <a id="topic33"></a>Foreign Keys 
+
+Foreign keys are not supported. You can declare them, but referential integrity is not enforced.
+
+Foreign key constraints specify that the values in a column or a group of columns must match the values appearing in some row of another table to maintain referential integrity between two related tables. Referential integrity checks cannot be enforced between the distributed table segments of a HAWQ database.
+
+### <a id="topic34"></a>Choosing the Table Distribution Policy 
+
+All HAWQ tables are distributed. The default policy is `DISTRIBUTED RANDOMLY` \(round-robin distribution\). However, when you create or alter a table, you can optionally specify `DISTRIBUTED BY` to distribute data according to a hash-based policy. In this case, the `bucketnum` attribute sets the number of hash buckets used by the hash-distributed table. Columns of geometric or user-defined data types are not eligible as HAWQ distribution key columns.
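+
+For example (the column names are illustrative; see the guidelines later in this section before choosing a `bucketnum`):
+
+``` sql
+=> CREATE TABLE events_random (id int, payload text) DISTRIBUTED RANDOMLY;
+=> CREATE TABLE events_hash (id int, payload text) WITH (bucketnum=16) DISTRIBUTED BY (id);
+```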
+
+Randomly distributed tables have benefits over hash distributed tables. For example, after expansion, HAWQ's elasticity feature lets it automatically use more resources without needing to redistribute the data. For extremely large tables, redistribution is very expensive. Also, data locality for randomly distributed tables is better, especially after the underlying HDFS redistributes its data during rebalancing or because of DataNode failures. This is quite common when the cluster is large.
+
+However, hash-distributed tables can be faster than randomly distributed tables. For example, hash-distributed tables show performance benefits for many TPC-H queries. Choose a distribution policy that best suits your application scenario. When you `CREATE TABLE`, you can also specify the `bucketnum` option. The `bucketnum` determines the number of hash buckets used in creating a hash-distributed table or for PXF external table intermediate processing. The number of buckets also affects how many virtual segments are created when processing this data. The bucket number of a gpfdist external table is the number of gpfdist locations, and the bucket number of a command-based external table is set by `ON #num`. PXF external tables use the `default_hash_table_bucket_number` parameter to control virtual segments.
+
+HAWQ's elastic execution runtime is based on virtual segments, which are allocated on demand, based on the cost of the query. Each node uses one physical segment and a number of dynamically allocated virtual segments distributed to different hosts, thus simplifying performance tuning. Large queries use large numbers of virtual segments, while smaller queries use fewer virtual segments. Tables do not need to be redistributed when nodes are added or removed.
+
+In general, the more virtual segments are used, the faster the query will be executed. You can tune the parameters for `default_hash_table_bucket_number` and `hawq_rm_nvseg_perquery_limit` to adjust performance by controlling the number of virtual segments used for a query. However, be aware that if the value of `default_hash_table_bucket_number` is changed, data must be redistributed, which can be costly. Therefore, it is better to set the `default_hash_table_bucket_number` up front, if you expect to need a larger number of virtual segments. However, you might need to adjust the value in `default_hash_table_bucket_number` after cluster expansion, but should take care not to exceed the number of virtual segments per query set in `hawq_rm_nvseg_perquery_limit`. Refer to the recommended guidelines for setting the value of `default_hash_table_bucket_number`, later in this section.
+
+For random or gpfdist external tables, as well as user-defined functions, the value set in the `hawq_rm_nvseg_perquery_perseg_limit` parameter limits the number of virtual segments that are used for one segment for one query, to optimize query resources. Resetting this parameter is not recommended.
+
+Consider the following points when deciding on a table distribution policy.
+
+-   **Even Data Distribution** — For the best possible performance, all segments should contain equal portions of data. If the data is unbalanced or skewed, the segments with more data must work harder to perform their portion of the query processing.
+-   **Local and Distributed Operations** — Local operations are faster than distributed operations. Query processing is fastest if the work associated with join, sort, or aggregation operations is done locally, at the segment level. Work done at the system level requires distributing tuples across the segments, which is less efficient. When tables share a common distribution key, the work of joining or sorting on their shared distribution key columns is done locally. With a random distribution policy, local join operations are not an option.
+-   **Even Query Processing** — For best performance, all segments should handle an equal share of the query workload. Query workload can be skewed if a table's data distribution policy and the query predicates are not well matched. For example, suppose that a sales transactions table is distributed based on a column that contains corporate names \(the distribution key\), and the hashing algorithm distributes the data based on those values. If a predicate in a query references a single value from the distribution key, query processing runs on only one segment. This works if your query predicates usually select data on criteria other than corporation name. For queries that use corporation name in their predicates, it's possible that only one segment instance will handle the query workload.
+
+HAWQ utilizes dynamic parallelism, which can affect the performance of a query execution significantly. Performance depends on the following factors:
+
+-   The size of a randomly distributed table.
+-   The `bucketnum` of a hash distributed table.
+-   Data locality.
+-   The values of `default_hash_table_bucket_number`, and `hawq_rm_nvseg_perquery_limit` \(including defaults and user-defined values\).
+
+For any specific query, the first four factors are fixed values, while the configuration parameters in the last item can be used to tune performance of the query execution. In querying a random table, the query resource load is related to the data size of the table, usually one virtual segment for one HDFS block. As a result, querying a large table could use a large number of resources.
+
+The `bucketnum` for a hash table specifies the number of hash buckets to be used in creating virtual segments. A HASH distributed table is created with `default_hash_table_bucket_number` buckets. The default bucket value can be changed at the session level, or in the `CREATE TABLE` DDL by using the `bucketnum` storage parameter.
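+
+For example, to change the default at the session level before creating hash tables (16 is an arbitrary value chosen for illustration):
+
+``` sql
+=> SET default_hash_table_bucket_number = 16;
+```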
+
+In an Ambari-managed HAWQ cluster, the default bucket number \(`default_hash_table_bucket_number`\) is derived from the number of segment nodes. In command-line-managed HAWQ environments, you can use the `--bucket_number` option of `hawq init` to explicitly set `default_hash_table_bucket_number` during cluster initialization.
+
+**Note:** For best performance with large tables, the number of buckets should not exceed the value of the `default_hash_table_bucket_number` parameter. Small tables can use one segment node, `WITH bucketnum=1`. For larger tables, the `bucketnum` is set to a multiple of the number of segment nodes, for the best load balancing on different segment nodes. The elastic runtime will attempt to find the optimal number of buckets for the number of nodes being processed. Larger tables need more virtual segments, and hence use larger numbers of buckets.
+
+The following statement creates a table "sales" with 8 buckets, which would be similar to a hash-distributed table on 8 segments.
+
+``` sql
+=> CREATE TABLE sales(id int, profit float)  WITH (bucketnum=8) DISTRIBUTED BY (id);
+```
+
+There are four ways to create a new table from an existing \(origin\) table. The syntax for each is listed below.
+
+<table>
+  <tr>
+    <th></th>
+    <th>Syntax</th>
+  </tr>
+  <tr><td>INHERITS</td><td><pre><code>CREATE TABLE new_table INHERITS (origintable) [WITH(bucketnum=x)] <br/>[DISTRIBUTED BY col]</code></pre></td></tr>
+  <tr><td>LIKE</td><td><pre><code>CREATE TABLE new_table (LIKE origintable) [WITH(bucketnum=x)] <br/>[DISTRIBUTED BY col]</code></pre></td></tr>
+  <tr><td>AS</td><td><pre><code>CREATE TABLE new_table [WITH(bucketnum=x)] AS SUBQUERY [DISTRIBUTED BY col]</code></pre></td></tr>
+  <tr><td>SELECT INTO</td><td><pre><code>CREATE TABLE origintable [WITH(bucketnum=x)] [DISTRIBUTED BY col]; SELECT * <br/>INTO new_table FROM origintable;</code></pre></td></tr>
+</table>
+
+The optional `INHERITS` clause specifies a list of tables from which the new table automatically inherits all columns. Hash tables inherit the bucket number from their origin table if it is not otherwise specified. If `WITH` specifies a `bucketnum` when creating a hash-distributed table, that value is copied. If distribution is specified by column, the table inherits it. Otherwise, the table uses the default distribution from `default_hash_table_bucket_number`.
+
+The `LIKE` clause specifies a table from which the new table automatically copies all column names, data types, not-null constraints, and distribution policy. If a `bucketnum` is specified, it will be copied. Otherwise, the table will use default distribution.
+
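+For example, a minimal sketch of the `LIKE` form that reuses the `sales` table created earlier; the new table name is illustrative only:
+
+``` sql
+-- Column names, data types, not-null constraints, and the distribution policy
+-- are copied from the origin table; a bucketnum, if present, is copied as well.
+CREATE TABLE sales_like (LIKE sales);
+```
+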
+For hash tables, the `SELECT INTO` form always uses random distribution.
+
+#### <a id="topic_kjg_tqm_gv"></a>Declaring Distribution Keys 
+
+`CREATE TABLE`'s optional clause `DISTRIBUTED BY` specifies the distribution policy for a table. The default is a random distribution policy. You can also choose to distribute data as a hash-based policy, where the `bucketnum` attribute sets the number of hash buckets used by a hash-distributed table. HASH distributed tables are created with the number of hash buckets specified by the `default_hash_table_bucket_number` parameter.
+
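+The following sketch contrasts the two policies; the table and column names are illustrative only:
+
+``` sql
+-- Random (round-robin) distribution; no distribution key is declared.
+CREATE TABLE events_random (event_id int, payload text) DISTRIBUTED RANDOMLY;
+
+-- Hash distribution on a declared key, with an explicit bucket count.
+CREATE TABLE events_hash (event_id int, payload text)
+WITH (bucketnum=8) DISTRIBUTED BY (event_id);
+```
+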
+Policies for different application scenarios can be specified to optimize performance. The number of virtual segments used for query execution can now be tuned using the `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit` parameters, in connection with the `default_hash_table_bucket_number` parameter, which sets the default `bucketnum`. For more information, see the guidelines for Virtual Segments in the next section and in [Query Performance](../query/query-performance.html#topic38).
+
+#### <a id="topic_wff_mqm_gv"></a>Performance Tuning 
+
+Adjusting the values of the configuration parameters `default_hash_table_bucket_number` and `hawq_rm_nvseg_perquery_limit` can tune performance by controlling the number of virtual segments being used. In most circumstances, HAWQ's elastic runtime will dynamically allocate virtual segments to optimize performance, so further tuning should not be needed.
+
+Hash tables are created using the value specified in `default_hash_table_bucket_number`. Queries for hash tables use a fixed number of buckets, regardless of the amount of data present. Explicitly setting `default_hash_table_bucket_number` can be useful in managing resources. If you desire a larger or smaller number of hash buckets, set this value before you create tables. Resources are dynamically allocated to a multiple of the number of nodes. If you use `hawq init --bucket_number` to set the value of `default_hash_table_bucket_number` during cluster initialization or expansion, the value should not exceed the value of `hawq_rm_nvseg_perquery_limit`. This server parameter defines the maximum number of virtual segments that can be used for a query \(default = 512, with a maximum of 65535\). Modifying the value to greater than 1000 segments is not recommended.
+
+The following per-node guidelines apply to values for `default_hash_table_bucket_number`.
+
+|Number of Nodes|default\_hash\_table\_bucket\_number value|
+|---------------|------------------------------------------|
+|<= 85|6 \* \#nodes|
+|\> 85 and <= 102|5 \* \#nodes|
+|\> 102 and <= 128|4 \* \#nodes|
+|\> 128 and <= 170|3 \* \#nodes|
+|\> 170 and <= 256|2 \* \#nodes|
+|\> 256 and <= 512|1 \* \#nodes|
+|\> 512|512|
+
+Reducing the value of `hawq_rm_nvseg_perquery_perseg_limit` can improve concurrency, and increasing the value of `hawq_rm_nvseg_perquery_perseg_limit` could possibly increase the degree of parallelism. However, for some queries, increasing the degree of parallelism will not improve performance if the query has reached the limits set by the hardware. Therefore, increasing the value of `hawq_rm_nvseg_perquery_perseg_limit` above the default value is not recommended. Also, changing the value of `default_hash_table_bucket_number` after initializing a cluster means the hash table data must be redistributed. If you are expanding a cluster, you might wish to change this value, but be aware that retuning could adversely affect performance.
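+
+As a rough sketch of inspecting these settings before changing them, assuming `hawq_rm_nvseg_perquery_perseg_limit` may be adjusted at the session level; the value 4 is an example only:
+
+``` sql
+-- Inspect the current settings before tuning.
+SHOW default_hash_table_bucket_number;
+SHOW hawq_rm_nvseg_perquery_perseg_limit;
+
+-- Example only: lower the per-segment virtual segment limit for this session
+-- to favor concurrency over per-query parallelism.
+SET hawq_rm_nvseg_perquery_perseg_limit = 4;
+```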

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/ddl/ddl-tablespace.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/ddl/ddl-tablespace.html.md.erb b/markdown/ddl/ddl-tablespace.html.md.erb
new file mode 100644
index 0000000..8720665
--- /dev/null
+++ b/markdown/ddl/ddl-tablespace.html.md.erb
@@ -0,0 +1,154 @@
+---
+title: Creating and Managing Tablespaces
+---
+
+Tablespaces allow database administrators to have multiple file systems per machine and decide how to best use physical storage to store database objects. They are named locations within a filespace in which you can create objects. Tablespaces allow you to assign different storage for frequently and infrequently used database objects or to control the I/O performance on certain database objects. For example, place frequently-used tables on file systems that use high performance solid-state drives \(SSD\), and place other tables on standard hard drives.
+
+A tablespace requires a file system location to store its database files. In HAWQ, the master and each segment require a distinct storage location. The collection of file system locations for all components in a HAWQ system is a *filespace*. Filespaces can be used by one or more tablespaces.
+
+## <a id="topic10"></a>Creating a Filespace 
+
+A filespace sets aside storage for your HAWQ system. A filespace is a symbolic storage identifier that maps onto a set of locations in your HAWQ hosts' file systems. To create a filespace, prepare the logical file systems on all of your HAWQ hosts, then use the `hawq filespace` utility to define the filespace. You must be a database superuser to create a filespace.
+
+**Note:** HAWQ is not directly aware of the file system boundaries on your underlying systems. It stores files in the directories that you tell it to use. You cannot control the location on disk of individual files within a logical file system.
+
+### <a id="im178954"></a>To create a filespace using hawq filespace 
+
+1.  Log in to the HAWQ master as the `gpadmin` user.
+
+    ``` shell
+    $ su - gpadmin
+    ```
+
+2.  Create a filespace configuration file:
+
+    ``` shell
+    $ hawq filespace -o hawqfilespace_config
+    ```
+
+3.  At the prompt, enter a name for the filespace, a master file system location, and the primary segment file system locations. For example:
+
+    ``` shell
+    $ hawq filespace -o hawqfilespace_config
+    ```
+    ``` pre
+    Enter a name for this filespace
+    > testfs
+    Enter replica num for filespace. If 0, default replica num is used (default=3)
+    > 
+
+    Please specify the DFS location for the filespace (for example: localhost:9000/fs)
+    location> localhost:8020/fs        
+    20160409:16:53:25:028082 hawqfilespace:gpadmin:gpadmin-[INFO]:-[created]
+    20160409:16:53:25:028082 hawqfilespace:gpadmin:gpadmin-[INFO]:-
+    To add this filespace to the database please run the command:
+       hawqfilespace --config /Users/gpadmin/curwork/git/hawq/hawqfilespace_config
+    ```
+       
+    ``` shell
+    $ cat /Users/gpadmin/curwork/git/hawq/hawqfilespace_config
+    ```
+    ``` pre
+    filespace:testfs
+    fsreplica:3
+    dfs_url::localhost:8020/fs
+    ```
+    ``` shell
+    $ hawq filespace --config /Users/gpadmin/curwork/git/hawq/hawqfilespace_config
+    ```
+    ``` pre
+    Reading Configuration file: '/Users/gpadmin/curwork/git/hawq/hawqfilespace_config'
+
+    CREATE FILESPACE testfs ON hdfs 
+    ('localhost:8020/fs/testfs') WITH (NUMREPLICA = 3);
+    20160409:16:57:56:028104 hawqfilespace:gpadmin:gpadmin-[INFO]:-Connecting to database
+    20160409:16:57:56:028104 hawqfilespace:gpadmin:gpadmin-[INFO]:-Filespace "testfs" successfully created
+
+    ```
+
+
+4.  `hawq filespace` creates a configuration file. Examine the file to verify that the filespace configuration is correct. The following is a sample configuration file:
+
+    ```
+    filespace:fastdisk
+    fsreplica:3
+    dfs_url::localhost:8020/fs
+    ```
+
+5.  Run hawq filespace again to create the filespace based on the configuration file:
+
+    ``` shell
+    $ hawq filespace -c hawqfilespace_config
+    ```
+
+
+## <a id="topic13"></a>Creating a Tablespace 
+
+After you create a filespace, use the `CREATE TABLESPACE` command to define a tablespace that uses that filespace. For example:
+
+``` sql
+=# CREATE TABLESPACE fastspace FILESPACE fastdisk;
+```
+
+Database superusers define tablespaces and grant access to database users with the `GRANT CREATE` command. For example:
+
+``` sql
+=# GRANT CREATE ON TABLESPACE fastspace TO admin;
+```
+
+## <a id="topic14"></a>Using a Tablespace to Store Database Objects 
+
+Users with the `CREATE` privilege on a tablespace can create database objects in that tablespace, such as tables, indexes, and databases. The command is:
+
+``` sql
+CREATE TABLE tablename(options) TABLESPACE spacename
+```
+
+For example, the following command creates a table in the tablespace *space1*:
+
+``` sql
+CREATE TABLE foo(i int) TABLESPACE space1;
+```
+
+You can also use the `default_tablespace` parameter to specify the default tablespace for `CREATE TABLE` and `CREATE INDEX` commands that do not specify a tablespace:
+
+``` sql
+SET default_tablespace = space1;
+CREATE TABLE foo(i int);
+```
+
+The tablespace associated with a database stores that database's system catalogs and any temporary files created by server processes using that database. It is also the default tablespace for tables and indexes created within the database when no `TABLESPACE` is specified at object creation time. If you do not specify a tablespace when you create a database, the database uses the same tablespace as its template database.
+
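+For example, a sketch of assigning a tablespace at database creation time, assuming `CREATE DATABASE` accepts a `TABLESPACE` clause and reusing the `fastspace` tablespace from the earlier example; the database name is illustrative:
+
+``` sql
+-- Objects created in mydb without an explicit TABLESPACE default to fastspace.
+CREATE DATABASE mydb TABLESPACE fastspace;
+```
+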
+You can use a tablespace from any database if you have appropriate privileges.
+
+## <a id="topic15"></a>Viewing Existing Tablespaces and Filespaces 
+
+Every HAWQ system has the following default tablespaces.
+
+-   `pg_global` for shared system catalogs.
+-   `pg_default`, the default tablespace. Used by the *template1* and *template0* databases.
+
+These tablespaces use the system default filespace, `pg_system`, the data directory location created at system initialization.
+
+To see filespace information, look in the *pg\_filespace* and *pg\_filespace\_entry* catalog tables. You can join these tables with *pg\_tablespace* to see the full definition of a tablespace. For example:
+
+``` sql
+=# SELECT spcname AS tblspc, fsname AS filespc,
+          fsedbid AS seg_dbid, fselocation AS datadir
+   FROM   pg_tablespace pgts, pg_filespace pgfs,
+          pg_filespace_entry pgfse
+   WHERE  pgts.spcfsoid=pgfse.fsefsoid
+          AND pgfse.fsefsoid=pgfs.oid
+   ORDER BY tblspc, seg_dbid;
+```
+
+## <a id="topic16"></a>Dropping Tablespaces and Filespaces 
+
+To drop a tablespace, you must be the tablespace owner or a superuser. You cannot drop a tablespace until all objects in all databases using the tablespace are removed.
+
+Only a superuser can drop a filespace. A filespace cannot be dropped until all tablespaces using that filespace are removed.
+
+The `DROP TABLESPACE` command removes an empty tablespace.
+
+The `DROP FILESPACE` command removes an empty filespace.
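+
+For example, removing the tablespace and filespace used earlier in this section; the tablespace must be dropped before the filespace that backs it:
+
+``` sql
+DROP TABLESPACE fastspace;
+DROP FILESPACE fastdisk;
+```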

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/ddl/ddl-view.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/ddl/ddl-view.html.md.erb b/markdown/ddl/ddl-view.html.md.erb
new file mode 100644
index 0000000..35da41e
--- /dev/null
+++ b/markdown/ddl/ddl-view.html.md.erb
@@ -0,0 +1,25 @@
+---
+title: Creating and Managing Views
+---
+
+Views enable you to save frequently used or complex queries, then access them in a `SELECT` statement as if they were a table. A view is not physically materialized on disk: the query runs as a subquery when you access the view.
+
+If a subquery is associated with a single query, consider using the `WITH` clause of the `SELECT` command instead of creating a seldom-used view.
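+
+For example, a single-use query of the kind described above can use a `WITH` clause instead of a view; this sketch uses the same `films` example as the view below:
+
+``` sql
+-- A one-off query; no view object is created.
+WITH comedies AS (
+  SELECT * FROM films WHERE kind = 'comedy'
+)
+SELECT * FROM comedies;
+```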
+
+## <a id="topic101"></a>Creating Views 
+
+The `CREATE VIEW` command defines a view of a query. For example:
+
+``` sql
+CREATE VIEW comedies AS SELECT * FROM films WHERE kind = 'comedy';
+```
+
+`ORDER BY` and sort operations stored in a view definition are ignored when you query the view.
+
+## <a id="topic102"></a>Dropping Views 
+
+The `DROP VIEW` command removes a view. For example:
+
+``` sql
+DROP VIEW topten;
+```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/ddl/ddl.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/ddl/ddl.html.md.erb b/markdown/ddl/ddl.html.md.erb
new file mode 100644
index 0000000..7873fe7
--- /dev/null
+++ b/markdown/ddl/ddl.html.md.erb
@@ -0,0 +1,19 @@
+---
+title: Defining Database Objects
+---
+
+This section covers data definition language \(DDL\) in HAWQ and how to create and manage database objects.
+
+Creating objects in a HAWQ database involves making up-front choices about data distribution, storage options, data loading, and other HAWQ features that will affect the ongoing performance of your database system. Understanding the options that are available and how the database will be used will help you make the right decisions.
+
+Most of the advanced HAWQ features are enabled with extensions to the SQL `CREATE` DDL statements.
+
+This section contains the topics:
+
+*  <a class="subnav" href="./ddl-database.html">Creating and Managing Databases</a>
+*  <a class="subnav" href="./ddl-tablespace.html">Creating and Managing Tablespaces</a>
+*  <a class="subnav" href="./ddl-schema.html">Creating and Managing Schemas</a>
+*  <a class="subnav" href="./ddl-table.html">Creating and Managing Tables</a>
+*  <a class="subnav" href="./ddl-storage.html">Table Storage Model and Distribution Policy</a>
+*  <a class="subnav" href="./ddl-partition.html">Partitioning Large Tables</a>
+*  <a class="subnav" href="./ddl-view.html">Creating and Managing Views</a>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/02-pipeline.png
----------------------------------------------------------------------
diff --git a/markdown/images/02-pipeline.png b/markdown/images/02-pipeline.png
new file mode 100644
index 0000000..26fec1b
Binary files /dev/null and b/markdown/images/02-pipeline.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/03-gpload-files.jpg
----------------------------------------------------------------------
diff --git a/markdown/images/03-gpload-files.jpg b/markdown/images/03-gpload-files.jpg
new file mode 100644
index 0000000..d50435f
Binary files /dev/null and b/markdown/images/03-gpload-files.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/basic_query_flow.png
----------------------------------------------------------------------
diff --git a/markdown/images/basic_query_flow.png b/markdown/images/basic_query_flow.png
new file mode 100644
index 0000000..59172a2
Binary files /dev/null and b/markdown/images/basic_query_flow.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/ext-tables-xml.png
----------------------------------------------------------------------
diff --git a/markdown/images/ext-tables-xml.png b/markdown/images/ext-tables-xml.png
new file mode 100644
index 0000000..f208828
Binary files /dev/null and b/markdown/images/ext-tables-xml.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/ext_tables.jpg
----------------------------------------------------------------------
diff --git a/markdown/images/ext_tables.jpg b/markdown/images/ext_tables.jpg
new file mode 100644
index 0000000..d5a0940
Binary files /dev/null and b/markdown/images/ext_tables.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/ext_tables_multinic.jpg
----------------------------------------------------------------------
diff --git a/markdown/images/ext_tables_multinic.jpg b/markdown/images/ext_tables_multinic.jpg
new file mode 100644
index 0000000..fcf09c4
Binary files /dev/null and b/markdown/images/ext_tables_multinic.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/gangs.jpg
----------------------------------------------------------------------
diff --git a/markdown/images/gangs.jpg b/markdown/images/gangs.jpg
new file mode 100644
index 0000000..0d14585
Binary files /dev/null and b/markdown/images/gangs.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/gporca.png
----------------------------------------------------------------------
diff --git a/markdown/images/gporca.png b/markdown/images/gporca.png
new file mode 100644
index 0000000..2909443
Binary files /dev/null and b/markdown/images/gporca.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/hawq_hcatalog.png
----------------------------------------------------------------------
diff --git a/markdown/images/hawq_hcatalog.png b/markdown/images/hawq_hcatalog.png
new file mode 100644
index 0000000..35b74c3
Binary files /dev/null and b/markdown/images/hawq_hcatalog.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/slice_plan.jpg
----------------------------------------------------------------------
diff --git a/markdown/images/slice_plan.jpg b/markdown/images/slice_plan.jpg
new file mode 100644
index 0000000..ad8da83
Binary files /dev/null and b/markdown/images/slice_plan.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/install/aws-config.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/install/aws-config.html.md.erb b/markdown/install/aws-config.html.md.erb
new file mode 100644
index 0000000..21cadf5
--- /dev/null
+++ b/markdown/install/aws-config.html.md.erb
@@ -0,0 +1,123 @@
+---
+title: Amazon EC2 Configuration
+---
+
+Amazon Elastic Compute Cloud (EC2) is a service provided by Amazon Web Services (AWS).  You can install and configure HAWQ on virtual servers provided by Amazon EC2. The following information describes some considerations when deploying a HAWQ cluster in an Amazon EC2 environment.
+
+## <a id="topic_wqv_yfx_y5"></a>About Amazon EC2 
+
+Amazon EC2 can be used to launch as many virtual servers as you need, configure security and networking, and manage storage. An EC2 *instance* is a virtual server in the AWS cloud virtual computing environment.
+
+EC2 instances are managed by AWS. AWS isolates your EC2 instances from other users in a virtual private cloud (VPC) and lets you control access to the instances. You can configure instance features such as operating system, network connectivity (network ports and protocols, IP addresses), access to the Internet, and size and type of disk storage. 
+
+For information about Amazon EC2, see the [EC2 User Guide](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html).
+
+## <a id="topic_nhk_df4_2v"></a>Create and Launch HAWQ Instances
+
+Use the *Amazon EC2 Console* to launch instances and configure, start, stop, and terminate (delete) virtual servers. When you launch a HAWQ instance, you select and configure key attributes via the EC2 Console.
+
+
+### <a id="topic_amitype"></a>Choose AMI Type
+
+An Amazon Machine Image (AMI) is a template that contains a software configuration including the operating system, application server, and applications that best suit your purpose. When configuring a HAWQ virtual instance, we recommend you use a *hardware virtualized* AMI running 64-bit Red Hat Enterprise Linux version 6.4 or 6.5 or 64-bit CentOS 6.4 or 6.5.  Obtain the licenses and instances directly from the OS provider.
+
+### <a id="topic_selcfgstorage"></a>Consider Storage
+EC2 instances can be launched as either Elastic Block Store (EBS)-backed or instance store-backed.  
+
+Instance store-backed storage generally performs better than EBS storage and is recommended for HAWQ's large data workloads. SSD (solid state) instance store is preferred over magnetic drives.
+
+**Note:** EC2 *instance store* provides temporary block-level storage. This storage is located on disks that are physically attached to the host computer. While instance store provides high performance, powering off the instance causes data loss. Soft reboots preserve instance store data.
+     
+Virtual devices for instance store volumes on HAWQ EC2 instances are named `ephemeralN` (where *N* varies based on instance type). CentOS instance store block devices are named `/dev/xvdletter` (where *letter* is a lower-case letter of the alphabet).
+
+### <a id="topic_cfgplacegrp"></a>Configure Placement Group 
+
+A placement group is a logical grouping of instances within a single availability zone that together participate in a low-latency, 10 Gbps network.  Your HAWQ master and segment cluster instances should support enhanced networking and reside in a single placement group (and subnet) for optimal network performance.  
+
+If your Ambari node is not a DataNode, locating the Ambari node instance in a subnet separate from the HAWQ master/segment placement group enables you to manage multiple HAWQ clusters from the single Ambari instance.
+
+Amazon recommends that you use the same instance type for all instances in the placement group and that you launch all instances within the placement group at the same time.
+
+Membership in a placement group has some implications for your HAWQ cluster. Specifically, growing the cluster over capacity may require shutting down all HAWQ instances in the current placement group and restarting the instances in a new placement group. Instance store volumes are lost in this scenario.
+
+### <a id="topic_selinsttype"></a>Select EC2 Instance Type
+
+An EC2 instance type is a specific combination of CPU, memory, default storage, and networking capacity.  
+
+Several instance store-backed EC2 instance types have shown acceptable performance for HAWQ nodes in development and production environments: 
+
+| Instance Type  | Env | vCPUs | Memory (GB) | Disk Capacity (GB) | Storage Type |
+|-------|-----|------|--------|----------|--------|
+| cc2.8xlarge  | Dev | 32 | 60.5 | 4 x 840 | HDD |
+| d2.2xlarge  | Dev | 8 | 60 | 6 x 2000 | HDD |
+| d2.4xlarge  | Dev/QA | 16 | 122 | 12 x 2000 | HDD |
+| i2.8xlarge  | Prod | 32 | 244 | 8 x 800 | SSD |
+| hs1.8xlarge  | Prod | 16 | 117 | 24 x 2000 | HDD |
+| d2.8xlarge  | Prod | 36 | 244 | 24 x 2000 | HDD |
+ 
+For optimal network performance, the chosen HAWQ instance type should support EC2 enhanced networking. Enhanced networking results in higher performance, lower latency, and lower jitter. Refer to [Enhanced Networking on Linux Instances](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html) for detailed information on enabling enhanced networking in your instances.
+
+All instance types identified in the table above support enhanced networking.
+
+### <a id="topic_cfgnetw"></a>Configure Networking 
+
+Your HAWQ cluster instances should be in a single VPC and on the same subnet. Instances are always assigned a VPC internal IP address. This internal IP address should be used for HAWQ communication between hosts. You can also use the internal IP address to access an instance from another instance within the HAWQ VPC.
+
+You may choose to locate your Ambari node on a separate subnet in the VPC. Both a public IP address for the instance and an Internet gateway configured for the EC2 VPC are required to access the Ambari instance from an external source and for the instance to access the Internet. 
+
+Ensure your Ambari and HAWQ master instances are each assigned a public IP address for external and internet access. We recommend you also assign an Elastic IP Address to the HAWQ master instance.
+
+
+### Configure Security Groups<a id="topic_cfgsecgrp"></a>
+
+A security group is a set of rules that control network traffic to and from your HAWQ instance.  One or more rules may be associated with a security group, and one or more security groups may be associated with an instance.
+
+To configure HAWQ communication between nodes in the HAWQ cluster, include and open the following ports in the appropriate security group for the HAWQ master and segment nodes:
+
+| Port  | Application |
+|-------|-------------------------------------|
+| 22    | ssh - secure connect to other hosts |
+
+To allow access to/from a source external to the Ambari management node, include and open the following ports in an appropriate security group for your Ambari node:
+
+| Port  | Application |
+|-------|-------------------------------------|
+| 22    | ssh - secure connect to other hosts |
+| 8080  | Ambari - HAWQ admin/config web console |  
+
+
+### Generate Key Pair<a id="topic_cfgkeypair"></a>
+AWS uses public-key cryptography to secure the login information for your instance. You use the EC2 console to generate and name a key pair when you launch your instance.  
+
+A key pair for an EC2 instance consists of a *public key* that AWS stores, and a *private key file* that you maintain. Together, they allow you to connect to your instance securely. The private key file name typically has a `.pem` suffix.
+
+This example logs in to an EC2 instance from an external location with the private key file `my-test.pem` as user `user1`. In this example, the instance is configured with the public IP address `192.0.2.0` and the private key file resides in the current directory.
+
+```shell
+$ ssh -i my-test.pem user1@192.0.2.0
+```
+
+## Additional HAWQ Considerations <a id="topic_mj4_524_2v"></a>
+
+After launching your HAWQ instance, you will connect to and configure the instance. The  *Instances* page of the EC2 Console lists the running instances and their associated network access information.
+
+Before installing HAWQ, set up the EC2 instances as you would local host server machines. Configure the host operating system, configure host network information (for example, update the `/etc/hosts` file), set operating system parameters, and install operating system packages. For information about how to prepare your operating system environment for HAWQ, see [Apache HAWQ System Requirements](../requirements/system-requirements.html) and [Select HAWQ Host Machines](../install/select-hosts.html).
+
+### Passwordless SSH Configuration<a id="topic_pwdlessssh_cc"></a>
+
+HAWQ hosts will be configured during the installation process to use passwordless SSH for intra-cluster communications. Temporary password-based authentication must be enabled on each HAWQ host in preparation for this configuration. Password authentication is typically disabled by default in cloud images. Update the cloud configuration in `/etc/cloud/cloud.cfg` to enable password authentication in your AMI(s). Set `ssh_pwauth: True` in this file. If desired, disable password authentication after HAWQ installation by setting the property back to `False`.
+  
+## References<a id="topic_hgz_zwy_bv"></a>
+
+Links to related Amazon Web Services and EC2 features and information.
+
+- [Amazon Web Services](https://aws.amazon.com)
+- [Amazon Machine Image \(AMI\)](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html)
+- [EC2 Instance Store](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html)
+- [Elastic Block Store](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html)
+- [EC2 Key Pairs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html)
+- [Elastic IP Address](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html)
+- [Enhanced Networking on Linux Instances](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html)
+- [Internet Gateways](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html)
+- [Subnet Public IP Addressing](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-ip-addressing.html#subnet-public-ip)
+- [Virtual Private Cloud](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/install/select-hosts.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/install/select-hosts.html.md.erb b/markdown/install/select-hosts.html.md.erb
new file mode 100644
index 0000000..ecbe0b5
--- /dev/null
+++ b/markdown/install/select-hosts.html.md.erb
@@ -0,0 +1,19 @@
+---
+title: Select HAWQ Host Machines
+---
+
+Before you begin to install HAWQ, follow these steps to select and prepare the host machines.
+
+Complete this procedure for all HAWQ deployments:
+
+1.  **Choose the host machines that will host a HAWQ segment.** Keep in mind these restrictions and requirements:
+    -   Each host must meet the system requirements for the version of HAWQ you are installing.
+    -   Each HAWQ segment must be co-located on a host that runs an HDFS DataNode.
+    -   The HAWQ master segment and standby master segment must be hosted on separate machines.
+2.  **Choose the host machines that will run PXF.** Keep in mind these restrictions and requirements:
+    -   PXF must be installed on the HDFS NameNode *and* on all HDFS DataNodes.
+    -   If you have configured Hadoop with high availability, PXF must also be installed on all HDFS nodes, including all nodes running NameNode services.
+    -   If you want to use PXF with HBase or Hive, you must first install the HBase client \(hbase-client\) and/or Hive client \(hive-client\) on each machine where you intend to install PXF. See the [HDP installation documentation](https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/index.html) for more information.
+3.  **Verify that required ports on all machines are unused.** By default, a HAWQ master or standby master service configuration uses port 5432. Hosts that run other PostgreSQL instances cannot be used to run a default HAWQ master or standby service configuration because the default PostgreSQL port \(5432\) conflicts with the default HAWQ port. You must either change the default port configuration of the running PostgreSQL instance or change the HAWQ master port setting during the HAWQ service installation to avoid port conflicts.
+    
+    **Note:** The Ambari server node uses PostgreSQL as the default metadata database. The Hive Metastore uses MySQL as the default metadata database.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/02-pipeline.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/02-pipeline.png b/markdown/mdimages/02-pipeline.png
new file mode 100644
index 0000000..26fec1b
Binary files /dev/null and b/markdown/mdimages/02-pipeline.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/03-gpload-files.jpg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/03-gpload-files.jpg b/markdown/mdimages/03-gpload-files.jpg
new file mode 100644
index 0000000..d50435f
Binary files /dev/null and b/markdown/mdimages/03-gpload-files.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/1-assign-masters.tiff
----------------------------------------------------------------------
diff --git a/markdown/mdimages/1-assign-masters.tiff b/markdown/mdimages/1-assign-masters.tiff
new file mode 100644
index 0000000..b5c4cb4
Binary files /dev/null and b/markdown/mdimages/1-assign-masters.tiff differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/1-choose-services.tiff
----------------------------------------------------------------------
diff --git a/markdown/mdimages/1-choose-services.tiff b/markdown/mdimages/1-choose-services.tiff
new file mode 100644
index 0000000..d21b706
Binary files /dev/null and b/markdown/mdimages/1-choose-services.tiff differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/3-assign-slaves-and-clients.tiff
----------------------------------------------------------------------
diff --git a/markdown/mdimages/3-assign-slaves-and-clients.tiff b/markdown/mdimages/3-assign-slaves-and-clients.tiff
new file mode 100644
index 0000000..93ea3bd
Binary files /dev/null and b/markdown/mdimages/3-assign-slaves-and-clients.tiff differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/4-customize-services-hawq.tiff
----------------------------------------------------------------------
diff --git a/markdown/mdimages/4-customize-services-hawq.tiff b/markdown/mdimages/4-customize-services-hawq.tiff
new file mode 100644
index 0000000..c6bfee8
Binary files /dev/null and b/markdown/mdimages/4-customize-services-hawq.tiff differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/5-customize-services-pxf.tiff
----------------------------------------------------------------------
diff --git a/markdown/mdimages/5-customize-services-pxf.tiff b/markdown/mdimages/5-customize-services-pxf.tiff
new file mode 100644
index 0000000..3812aa1
Binary files /dev/null and b/markdown/mdimages/5-customize-services-pxf.tiff differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/6-review.tiff
----------------------------------------------------------------------
diff --git a/markdown/mdimages/6-review.tiff b/markdown/mdimages/6-review.tiff
new file mode 100644
index 0000000..be7debb
Binary files /dev/null and b/markdown/mdimages/6-review.tiff differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/7-install-start-test.tiff
----------------------------------------------------------------------
diff --git a/markdown/mdimages/7-install-start-test.tiff b/markdown/mdimages/7-install-start-test.tiff
new file mode 100644
index 0000000..b556e9a
Binary files /dev/null and b/markdown/mdimages/7-install-start-test.tiff differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/ext-tables-xml.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/ext-tables-xml.png b/markdown/mdimages/ext-tables-xml.png
new file mode 100644
index 0000000..f208828
Binary files /dev/null and b/markdown/mdimages/ext-tables-xml.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/ext_tables.jpg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/ext_tables.jpg b/markdown/mdimages/ext_tables.jpg
new file mode 100644
index 0000000..d5a0940
Binary files /dev/null and b/markdown/mdimages/ext_tables.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/ext_tables_multinic.jpg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/ext_tables_multinic.jpg b/markdown/mdimages/ext_tables_multinic.jpg
new file mode 100644
index 0000000..fcf09c4
Binary files /dev/null and b/markdown/mdimages/ext_tables_multinic.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/gangs.jpg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/gangs.jpg b/markdown/mdimages/gangs.jpg
new file mode 100644
index 0000000..0d14585
Binary files /dev/null and b/markdown/mdimages/gangs.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/gp_orca_fallback.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/gp_orca_fallback.png b/markdown/mdimages/gp_orca_fallback.png
new file mode 100644
index 0000000..000a6af
Binary files /dev/null and b/markdown/mdimages/gp_orca_fallback.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/gpfdist_instances.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/gpfdist_instances.png b/markdown/mdimages/gpfdist_instances.png
new file mode 100644
index 0000000..6fae2d4
Binary files /dev/null and b/markdown/mdimages/gpfdist_instances.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/gpfdist_instances_backup.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/gpfdist_instances_backup.png b/markdown/mdimages/gpfdist_instances_backup.png
new file mode 100644
index 0000000..7cd3e1a
Binary files /dev/null and b/markdown/mdimages/gpfdist_instances_backup.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/gporca.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/gporca.png b/markdown/mdimages/gporca.png
new file mode 100644
index 0000000..2909443
Binary files /dev/null and b/markdown/mdimages/gporca.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/hawq_architecture_components.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/hawq_architecture_components.png b/markdown/mdimages/hawq_architecture_components.png
new file mode 100644
index 0000000..cea50b0
Binary files /dev/null and b/markdown/mdimages/hawq_architecture_components.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/hawq_hcatalog.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/hawq_hcatalog.png b/markdown/mdimages/hawq_hcatalog.png
new file mode 100644
index 0000000..35b74c3
Binary files /dev/null and b/markdown/mdimages/hawq_hcatalog.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/hawq_high_level_architecture.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/hawq_high_level_architecture.png b/markdown/mdimages/hawq_high_level_architecture.png
new file mode 100644
index 0000000..d88bf7a
Binary files /dev/null and b/markdown/mdimages/hawq_high_level_architecture.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/partitions.jpg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/partitions.jpg b/markdown/mdimages/partitions.jpg
new file mode 100644
index 0000000..d366e21
Binary files /dev/null and b/markdown/mdimages/partitions.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/piv-opt.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/piv-opt.png b/markdown/mdimages/piv-opt.png
new file mode 100644
index 0000000..f8f192b
Binary files /dev/null and b/markdown/mdimages/piv-opt.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/resource_queues.jpg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/resource_queues.jpg b/markdown/mdimages/resource_queues.jpg
new file mode 100644
index 0000000..7f5a54c
Binary files /dev/null and b/markdown/mdimages/resource_queues.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/slice_plan.jpg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/slice_plan.jpg b/markdown/mdimages/slice_plan.jpg
new file mode 100644
index 0000000..ad8da83
Binary files /dev/null and b/markdown/mdimages/slice_plan.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/source/gporca.graffle
----------------------------------------------------------------------
diff --git a/markdown/mdimages/source/gporca.graffle b/markdown/mdimages/source/gporca.graffle
new file mode 100644
index 0000000..fb835d5
Binary files /dev/null and b/markdown/mdimages/source/gporca.graffle differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/source/hawq_hcatalog.graffle
----------------------------------------------------------------------
diff --git a/markdown/mdimages/source/hawq_hcatalog.graffle b/markdown/mdimages/source/hawq_hcatalog.graffle
new file mode 100644
index 0000000..f46bfb2
Binary files /dev/null and b/markdown/mdimages/source/hawq_hcatalog.graffle differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/standby_master.jpg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/standby_master.jpg b/markdown/mdimages/standby_master.jpg
new file mode 100644
index 0000000..ef195ab
Binary files /dev/null and b/markdown/mdimages/standby_master.jpg differ


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/pxf/JsonPXF.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/pxf/JsonPXF.html.md.erb b/markdown/pxf/JsonPXF.html.md.erb
new file mode 100644
index 0000000..97195ad
--- /dev/null
+++ b/markdown/pxf/JsonPXF.html.md.erb
@@ -0,0 +1,197 @@
+---
+title: Accessing JSON File Data
+---
+
+The PXF JSON plug-in reads native JSON stored in HDFS.  The plug-in supports common data types, as well as basic (N-level) projection and arrays.
+
+To access JSON file data with HAWQ, the data must be stored in HDFS and an external table created from the HDFS data store.
+
+## Prerequisites<a id="jsonplugprereq"></a>
+
+Before working with JSON file data using HAWQ and PXF, ensure that:
+
+-   The PXF HDFS plug-in is installed on all cluster nodes.
+-   The PXF JSON plug-in is installed on all cluster nodes.
+-   You have tested PXF on HDFS.
+
+
+## Working with JSON Files<a id="topic_workwjson"></a>
+
+JSON is a text-based data-interchange format.  JSON data is typically stored in a file with a `.json` suffix. A `.json` file will contain a collection of objects.  A JSON object is a collection of unordered name/value pairs.  A value can be a string, a number, true, false, null, or an object or array. Objects and arrays can be nested.
+
+Refer to [Introducing JSON](http://www.json.org/) for specific information on JSON syntax.
+
+Sample JSON data file content:
+
+``` json
+  {
+    "created_at":"MonSep3004:04:53+00002013",
+    "id_str":"384529256681725952",
+    "user": {
+      "id":31424214,
+       "location":"COLUMBUS"
+    },
+    "coordinates":null
+  }
+```
+
+### JSON to HAWQ Data Type Mapping<a id="topic_workwjson"></a>
+
+To represent JSON data in HAWQ, map data values that use a primitive data type to HAWQ columns of the same type. JSON also supports complex data types such as objects and arrays. Use N-level projection to map members of nested objects and arrays to primitive data types.
+
+The following table summarizes external mapping rules for JSON data.
+
+<caption><span class="tablecap">Table 1. JSON Mapping</span></caption>
+
+<a id="topic_table_jsondatamap"></a>
+
+| JSON Data Type                                                    | HAWQ Data Type                                                                                                                                                                                            |
+|-------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Primitive type (integer, float, string, boolean, null) | Use the corresponding HAWQ built-in data type; see [Data Types](../reference/HAWQDataTypes.html). |
+| Array                         | Use `[]` brackets to identify a specific array index to a member of primitive type.                                                                                            |
+| Object                | Use dot `.` notation to specify each level of projection (nesting) to a member of a primitive type.                                                                                         |
+
+
+### JSON File Read Modes<a id="topic_jsonreadmodes"></a>
+
+
+The PXF JSON plug-in reads data in one of two modes. The default mode expects one full JSON record per line.  The JSON plug-in also supports a read mode operating on multi-line JSON records.
+
+In the following discussion, a data set defined by a sample schema will be represented using each read mode of the PXF JSON plug-in.  The sample schema contains data fields with the following names and data types:
+
+   - "created_at" - text
+   - "id_str" - text
+   - "user" - object
+      - "id" - integer
+      - "location" - text
+   - "coordinates" - object (optional)
+      - "type" - text
+      - "values" - array
+         - [0] - integer
+         - [1] - integer
+
+
+Example 1 - Data Set for Single-JSON-Record-Per-Line Read Mode:
+
+``` pre
+{"created_at":"FriJun0722:45:03+00002013","id_str":"343136551322136576","user":{
+"id":395504494,"location":"NearCornwall"},"coordinates":{"type":"Point","values"
+: [ 6, 50 ]}},
+{"created_at":"FriJun0722:45:02+00002013","id_str":"343136547115253761","user":{
+"id":26643566,"location":"Austin,Texas"}, "coordinates": null},
+{"created_at":"FriJun0722:45:02+00002013","id_str":"343136547136233472","user":{
+"id":287819058,"location":""}, "coordinates": null}
+```  
+
+Example 2 - Data Set for Multi-Line JSON Record Read Mode:
+
+``` json
+{
+  "root":[
+    {
+      "record_obj":{
+        "created_at":"MonSep3004:04:53+00002013",
+        "id_str":"384529256681725952",
+        "user":{
+          "id":31424214,
+          "location":"COLUMBUS"
+        },
+        "coordinates":null
+      },
+      "record_obj":{
+        "created_at":"MonSep3004:04:54+00002013",
+        "id_str":"384529260872228864",
+        "user":{
+          "id":67600981,
+          "location":"KryberWorld"
+        },
+        "coordinates":{
+          "type":"Point",
+          "values":[
+             8,
+             52
+          ]
+        }
+      }
+    }
+  ]
+}
+```
+
+## Loading JSON Data to HDFS<a id="jsontohdfs"></a>
+
+The PXF JSON plug-in reads native JSON stored in HDFS. Before JSON data can be queried via HAWQ, it must first be loaded to an HDFS data store.
+
+Copy and paste the single line JSON record data set to a file named `singleline.json`.  Similarly, copy and paste the multi-line JSON record data set to `multiline.json`.
+
+**Note**:  Ensure there are **no** blank lines in your JSON files.
+
+Add the data set files to the HDFS data store:
+
+``` shell
+$ hdfs dfs -mkdir /user/data
+$ hdfs dfs -put singleline.json /user/data
+$ hdfs dfs -put multiline.json /user/data
+```
+
+Once loaded to HDFS, JSON data may be queried and analyzed via HAWQ.
+
+## Querying External JSON Data<a id="jsoncetsyntax1"></a>
+
+Use the following syntax to create an external table representing JSON data:
+
+``` sql
+CREATE EXTERNAL TABLE <table_name> 
+    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+LOCATION ( 'pxf://<host>[:<port>]/<path-to-data>?PROFILE=Json[&IDENTIFIER=<value>]' )
+      FORMAT 'CUSTOM' ( FORMATTER='pxfwritable_import' );
+```
+JSON-plug-in-specific keywords and values used in the `CREATE EXTERNAL TABLE` call are described below.
+
+| Keyword  | Value |
+|-------|-------------------------------------|
+| \<host\>    | Specify the HDFS NameNode in the \<host\> field. |
+| PROFILE    | The `PROFILE` keyword must specify the value `Json`. |
+| IDENTIFIER  | Include the `IDENTIFIER` keyword and \<value\> in the `LOCATION` string only when accessing a JSON file with multi-line records. \<value\> should identify the member name used to determine the encapsulating JSON object to return.  (If the JSON file is the multi-line record Example 2 above, `&IDENTIFIER=created_at` would be specified.) |  
+| FORMAT    | The `FORMAT` clause must specify `CUSTOM`. |
+| FORMATTER    | The JSON `CUSTOM` format supports only the built-in `pxfwritable_import` `FORMATTER`. |
+
+
+### Example 1 <a id="jsonexample1"></a>
+
+The following [CREATE EXTERNAL TABLE](../reference/sql/CREATE-EXTERNAL-TABLE.html) SQL call creates a queryable external table based on the data in the single-line-per-record JSON example.
+
+``` sql 
+CREATE EXTERNAL TABLE sample_json_singleline_tbl(
+  created_at TEXT,
+  id_str TEXT,
+  text TEXT,
+  "user.id" INTEGER,
+  "user.location" TEXT,
+  "coordinates.values[0]" INTEGER,
+  "coordinates.values[1]" INTEGER
+)
+LOCATION('pxf://namenode:51200/user/data/singleline.json?PROFILE=Json')
+FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
+SELECT * FROM sample_json_singleline_tbl;
+```
+
+Notice the use of `.` projection to access the nested fields in the `user` and `coordinates` objects.  Also notice the use of `[]` to access the specific elements of the `coordinates.values` array.
+
+### Example 2 <a id="jsonexample2"></a>
+
+A `CREATE EXTERNAL TABLE` SQL call to create a queryable external table based on the multi-line-per-record JSON data set would be very similar to that of the single-line data set above. You might specify a different table name, for example `sample_json_multiline_tbl`.
+
+The `LOCATION` clause would differ.  The `IDENTIFIER` keyword and an associated value must be specified when reading from multi-line JSON records:
+
+``` sql
+LOCATION('pxf://namenode:51200/user/data/multiline.json?PROFILE=Json&IDENTIFIER=created_at')
+```
+
+`created_at` identifies the member name used to determine the encapsulating JSON object, `record_obj` in this case.
+
+To query this external table populated with JSON data:
+
+``` sql
+SELECT * FROM sample_json_multiline_tbl;
+```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/pxf/PXFExternalTableandAPIReference.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/pxf/PXFExternalTableandAPIReference.html.md.erb b/markdown/pxf/PXFExternalTableandAPIReference.html.md.erb
new file mode 100644
index 0000000..292616b
--- /dev/null
+++ b/markdown/pxf/PXFExternalTableandAPIReference.html.md.erb
@@ -0,0 +1,1311 @@
+---
+title: PXF External Tables and API
+---
+
+You can use the PXF API to create your own connectors to access any other type of parallel data store or processing engine.
+
+The PXF Java API lets you extend PXF functionality and add new services and formats without changing HAWQ. The API includes three classes that are extended to allow HAWQ to access an external data source: Fragmenter, Accessor, and Resolver.
+
+The Fragmenter produces a list of data fragments that can be read in parallel from the data source. The Accessor produces a list of records from a single fragment, and the Resolver both deserializes and serializes records.
+
+Together, the Fragmenter, Accessor, and Resolver classes implement a connector. PXF includes plug-ins for tables in HDFS, HBase, and Hive.
+
+## <a id="creatinganexternaltable"></a>Creating an External Table
+
+The syntax for a readable `EXTERNAL TABLE` that uses the PXF protocol is as follows:
+
+``` sql
+CREATE [READABLE|WRITABLE] EXTERNAL TABLE table_name
+        ( column_name data_type [, ...] | LIKE other_table )
+LOCATION('pxf://host[:port]/path-to-data<pxf parameters>[&custom-option=value...]')
+FORMAT 'custom' (formatter='pxfwritable_import|pxfwritable_export');
+```
+
+where *&lt;pxf parameters&gt;* is:
+
+``` pre
+   ?FRAGMENTER=fragmenter_class&ACCESSOR=accessor_class&RESOLVER=resolver_class
+ | ?PROFILE=profile-name
+```
+<caption><span class="tablecap">Table 1. Parameter values and description</span></caption>
+
+<a id="creatinganexternaltable__table_pfy_htz_4p"></a>
+
+| Parameter               | Value and description                                                                                                                                                                                                                                                          |
+|-------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| host                    | The current host of the PXF service.                                                                                                                                                                                                                                           |
+| port                    | Connection port for the PXF service. If the port is omitted, PXF assumes that High Availability (HA) is enabled and connects to the HA name service port, 51200 by default. The HA name service port can be changed by setting the `pxf_service_port` configuration parameter. |
+| *path\_to\_data*        | A directory, file name, wildcard pattern, table name, etc.                                                                                                                                                                                                                     |
+| FRAGMENTER              | The plug-in (Java class) to use for fragmenting data. Used for READABLE external tables only.                                                                                                                                                                                   |
+| ACCESSOR                | The plug-in (Java class) to use for accessing the data. Used for READABLE and WRITABLE tables.                                                                                                                                                                                  |
+| RESOLVER                | The plug-in (Java class) to use for serializing and deserializing the data. Used for READABLE and WRITABLE tables.                                                                              |
+| *custom-option*=*value* | Additional values to pass to the plug-in class. The parameters are passed at runtime to the plug-ins indicated above. The plug-ins can look up custom options with `org.apache.hawq.pxf.api.utilities.InputData`.                                                                 |
+
+**Note:** When creating PXF external tables, you cannot use the `HEADER` option in your `FORMAT` specification.
+
+For more information about the parameters and options in this syntax, see [About the Java Class Services and Formats](#aboutthejavaclassservicesandformats).
+
+## <a id="aboutthejavaclassservicesandformats"></a>About the Java Class Services and Formats
+
+The `LOCATION` string in a PXF `CREATE EXTERNAL TABLE` statement is a URI that specifies the host and port of an external data source and the path to the data in the external data source. The query portion of the URI, introduced by the question mark (?), must include the required parameters `FRAGMENTER` (readable tables only), `ACCESSOR`, and `RESOLVER`, which specify Java class names that extend the base PXF API plug-in classes. Alternatively, the required parameters can be replaced with a `PROFILE` parameter that names a profile defined in `/etc/pxf/conf/pxf-profiles.xml`; the profile definition supplies the required classes.
+
+The parameters in the PXF URI are passed from HAWQ as headers to the PXF Java service. You can pass custom information to user-implemented PXF plug-ins by adding optional parameters to the LOCATION string.
+
+The Java PXF service retrieves the source data from the external data source and converts it to a HAWQ-readable table format.
+
+The Accessor, Resolver, and Fragmenter Java classes extend the `org.apache.hawq.pxf.api.utilities.Plugin` class:
+
+``` java
+package org.apache.hawq.pxf.api.utilities;
+/**
+ * Base class for all plug-in types (Accessor, Resolver, Fragmenter, ...).
+ * Manages the meta data.
+ */
+public class Plugin {
+    protected InputData inputData;
+    /**
+     * Constructs a plug-in.
+     *
+     * @param input the input data
+     */
+    public Plugin(InputData input) {
+        this.inputData = input;
+    }
+    /**
+     * Checks if the plug-in is thread safe or not, based on inputData.
+     *
+     * @return true if plug-in is thread safe
+     */
+    public boolean isThreadSafe() {
+        return true;
+    }
+}
+```
+
+The parameters in the `LOCATION` string are available to the plug-ins through methods in the `org.apache.hawq.pxf.api.utilities.InputData` class. Custom parameters added to the location string can be looked up with the `getUserProperty()` method.
+
+``` java
+/**
+ * Common configuration available to all PXF plug-ins. Represents input data
+ * coming from client applications, such as HAWQ.
+ */
+public class InputData {
+
+    /**
+     * Constructs an InputData from a copy.
+     * Used to create from an extending class.
+     *
+     * @param copy the input data to copy
+     */
+    public InputData(InputData copy);
+
+    /**
+     * Returns value of a user defined property.
+     *
+     * @param userProp the lookup user property
+     * @return property value as a String
+     */
+    public String getUserProperty(String userProp);
+
+    /**
+     * Sets the byte serialization of a fragment meta data
+     * @param location start, len, and location of the fragment
+     */
+    public void setFragmentMetadata(byte[] location);
+
+    /** Returns the byte serialization of a data fragment */
+    public byte[] getFragmentMetadata();
+
+    /**
+     * Gets any custom user data that may have been passed from the
+     * fragmenter. Will mostly be used by the accessor or resolver.
+     */
+    public byte[] getFragmentUserData();
+
+    /**
+     * Sets any custom user data that needs to be shared across plug-ins.
+     * Will mostly be set by the fragmenter.
+     */
+    public void setFragmentUserData(byte[] userData);
+
+    /** Returns the number of segments in GP. */
+    public int getTotalSegments();
+
+    /** Returns the current segment ID. */
+    public int getSegmentId();
+
+    /** Returns true if there is a filter string to parse. */
+    public boolean hasFilter();
+
+    /** Returns the filter string, <tt>null</tt> if #hasFilter is <tt>false</tt> */
+    public String getFilterString();
+
+    /** Returns tuple description. */
+    public ArrayList<ColumnDescriptor> getTupleDescription();
+
+    /** Returns the number of columns in tuple description. */
+    public int getColumns();
+
+    /** Returns column index from tuple description. */
+    public ColumnDescriptor getColumn(int index);
+
+    /**
+     * Returns the column descriptor of the recordkey column. If the recordkey
+     * column was not specified by the user in the create table statement will
+     * return null.
+     */
+    public ColumnDescriptor getRecordkeyColumn();
+
+    /** Returns the data source of the required resource (i.e a file path or a table name). */
+    public String getDataSource();
+
+    /** Sets the data source for the required resource */
+    public void setDataSource(String dataSource);
+
+    /** Returns the ClassName for the java class that was defined as Accessor */
+    public String getAccessor();
+
+    /** Returns the ClassName for the java class that was defined as Resolver */
+    public String getResolver();
+
+    /**
+     * Returns the ClassName for the java class that was defined as Fragmenter
+     * or null if no fragmenter was defined
+     */
+    public String getFragmenter();
+
+    /**
+     * Returns the contents of pxf_remote_service_login set in Hawq.
+     * Should the user set it to an empty string this function will return null.
+     *
+     * @return remote login details if set, null otherwise
+     */
+    public String getLogin();
+
+    /**
+     * Returns the contents of pxf_remote_service_secret set in Hawq.
+     * Should the user set it to an empty string this function will return null.
+     *
+     * @return remote password if set, null otherwise
+     */
+    public String getSecret();
+
+    /**
+     * Returns true if the request is thread safe. Default true. Should be set
+     * by a user to false if the request contains non thread-safe plug-ins or
+     * components, such as BZip2 codec.
+     */
+    public boolean isThreadSafe();
+
+    /**
+     * Returns a data fragment index. plan to deprecate it in favor of using
+     * getFragmentMetadata().
+     */
+    public int getDataFragment();
+}
+```
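+
+For illustration, the following minimal sketch shows a plug-in constructor reading a hypothetical custom option named `MY_OPTION` from the `LOCATION` string with `getUserProperty()`. The package name, option name, and default value are assumptions, not part of the PXF API, and the exact case handling of custom option names can vary by PXF version.
+
+``` java
+package com.example.pxf;
+
+import org.apache.hawq.pxf.api.utilities.InputData;
+import org.apache.hawq.pxf.api.utilities.Plugin;
+
+public class MyCustomPlugin extends Plugin {
+    private final String myOption;
+
+    public MyCustomPlugin(InputData input) {
+        super(input);
+        // getUserProperty() returns the value of a custom LOCATION parameter,
+        // or null if the parameter was not supplied.
+        String value = input.getUserProperty("MY_OPTION");
+        myOption = (value != null) ? value : "default-value";
+    }
+}
+```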
+
+-   **[Fragmenter](../pxf/PXFExternalTableandAPIReference.html#fragmenter)**
+
+-   **[Accessor](../pxf/PXFExternalTableandAPIReference.html#accessor)**
+
+-   **[Resolver](../pxf/PXFExternalTableandAPIReference.html#resolver)**
+
+### <a id="fragmenter"></a>Fragmenter
+
+**Note:** The Fragmenter Plugin reads data into HAWQ readable external tables. The Fragmenter Plugin cannot write data out of HAWQ into writable external tables.
+
+The Fragmenter is responsible for passing data source metadata back to HAWQ. It also returns a list of data fragments to the Accessor or Resolver. Each data fragment describes some part of the requested data set. It contains the data source name, such as the file or table name, and the host name where it is located. For example, if the source is an HDFS file, the Fragmenter returns a list of data fragments, each containing an HDFS file block, and each fragment includes the location of the block. If the source data is an HBase table, the Fragmenter returns information about table regions, including their locations.
+
+The `ANALYZE` command retrieves advanced statistics for PXF readable tables by estimating the number of tuples in a table, creating a sample table from the external table, and running advanced statistics queries on the sample table, in the same way that statistics are collected for native HAWQ tables.
+
+The configuration parameter `pxf_enable_stat_collection` controls the collection of advanced statistics. If `pxf_enable_stat_collection` is set to `false`, no analysis is performed on PXF tables. An additional parameter, `pxf_stat_max_fragments`, controls the number of fragments sampled to build the sample table. By default, `pxf_stat_max_fragments` is set to 100, which means that even if there are more than 100 fragments, only this number of fragments is used by `ANALYZE` to sample the data. Increasing this number results in better sampling, but can also impact performance.
+
+If a PXF table is analyzed while `pxf_enable_stat_collection` is set to `off`, or if an error occurs because the table is not defined correctly, the PXF service is down, or `getFragmentsStats()` is not implemented, a warning message is shown and no statistics are gathered for that table. If `ANALYZE` is running over all tables in the database, the next table is processed; a failure while processing one table does not stop the command.
+
+For a detailed explanation about HAWQ statistical data gathering, see `ANALYZE` in the SQL Commands Reference.
+
+**Note:**
+
+-   Depending on the external table size, the time required to complete an `ANALYZE` operation can be lengthy. The boolean parameter `pxf_enable_stat_collection` enables statistics collection for PXF; the default value is `on`. Turning this parameter off (disabling PXF statistics collection) can help decrease the time needed for the `ANALYZE` operation.
+-   You can also use `pxf_stat_max_fragments` to limit the number of fragments to be sampled by decreasing it from the default (100). However, if the number is too low, the sample might not be uniform and the statistics might be skewed.
+-   You can also implement `getFragmentsStats()` to return an error. This causes `ANALYZE` on a table with this Fragmenter to fail immediately, and default statistics values are used for that table, as sketched below.
+
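+The following minimal sketch shows one way to do this, assuming a hypothetical custom Fragmenter class (the class and package names are illustrative only): it overrides `getFragmentsStats()` to throw, so `ANALYZE` fails immediately and default statistics are used.
+
+``` java
+package com.example.pxf;
+
+import java.util.List;
+
+import org.apache.hawq.pxf.api.Fragment;
+import org.apache.hawq.pxf.api.Fragmenter;
+import org.apache.hawq.pxf.api.FragmentsStats;
+import org.apache.hawq.pxf.api.utilities.InputData;
+
+public class NoStatsFragmenter extends Fragmenter {
+    public NoStatsFragmenter(InputData metaData) {
+        super(metaData);
+    }
+
+    @Override
+    public List<Fragment> getFragments() throws Exception {
+        // Real fragment discovery for the data source would go here.
+        return fragments;
+    }
+
+    @Override
+    public FragmentsStats getFragmentsStats() throws Exception {
+        // Causes ANALYZE on tables that use this Fragmenter to fail immediately,
+        // so HAWQ falls back to default statistics values.
+        throw new UnsupportedOperationException("ANALYZE is not supported for this data source");
+    }
+}
+```
+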
+The following table lists the Fragmenter plug-in implementations included with the PXF API.
+
+<a id="fragmenter__table_cgs_svp_3s"></a>
+
+<table>
+<caption><span class="tablecap">Table 2. Fragmenter base classes </span></caption>
+<colgroup>
+<col width="50%" />
+<col width="50%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th><p><code class="ph codeph">Fragmenter class</code></p></th>
+<th><p><code class="ph codeph">Description</code></p></th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter</td>
+<td>Fragmenter for Hdfs files</td>
+</tr>
+<tr class="even">
+<td>org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter</td>
+<td>Fragmenter for HBase tables</td>
+</tr>
+<tr class="odd">
+<td>org.apache.hawq.pxf.plugins.hive.HiveDataFragmenter</td>
+<td>Fragmenter for Hive tables</td>
+</tr>
+<tr class="even">
+<td>org.apache.hawq.pxf.plugins.hive.HiveInputFormatFragmenter</td>
+<td>Fragmenter for Hive tables with RC or text files</td>
+</tr>
+</tbody>
+</table>
+
+A Fragmenter class extends `org.apache.hawq.pxf.api.Fragmenter`:
+
+#### <a id="com.pivotal.pxf.api.fragmenter"></a>org.apache.hawq.pxf.api.Fragmenter
+
+``` java
+package org.apache.hawq.pxf.api;
+/**
+ * Abstract class that defines the splitting of a data resource into fragments
+ * that can be processed in parallel.
+ */
+public abstract class Fragmenter extends Plugin {
+        protected List<Fragment> fragments;
+
+    public Fragmenter(InputData metaData) {
+        super(metaData);
+        fragments = new LinkedList<Fragment>();
+    }
+
+       /**
+        * Gets the fragments of a given path (source name and location of each
+        * fragment). Used to get fragments of data that could be read in parallel
+        * from the different segments.
+        */
+    public abstract List<Fragment> getFragments() throws Exception;
+
+    /**
+        * Default implementation of statistics for fragments. The default is:
+        * <ul>
+        * <li>number of fragments - as gathered by {@link #getFragments()}</li>
+        * <li>first fragment size - 64MB</li>
+        * <li>total size - number of fragments times first fragment size</li>
+        * </ul>
+        * Each fragmenter implementation can override this method to better match
+        * its fragments stats.
+        *
+        * @return default statistics
+        * @throws Exception if statistics cannot be gathered
+        */
+       public FragmentsStats getFragmentsStats() throws Exception {
+        List<Fragment> fragments = getFragments();
+        long fragmentsNumber = fragments.size();
+        return new FragmentsStats(fragmentsNumber,
+                FragmentsStats.DEFAULT_FRAGMENT_SIZE, fragmentsNumber
+                        * FragmentsStats.DEFAULT_FRAGMENT_SIZE);
+    }
+}
+  
+```
+
+`getFragments()` returns the retrieved fragments, which are serialized in JSON format. For example, if the input path is an HDFS directory, the source name of each fragment should include the file name, including its path.
+
+#### <a id="classdescription"></a>Class Description
+
+The `Fragmenter.getFragments()` method returns a `List<Fragment>`:
+
+``` java
+package org.apache.hawq.pxf.api;
+/*
+ * Fragment holds a data fragment's information.
+ * Fragmenter.getFragments() returns a list of fragments.
+ */
+public class Fragment
+{
+    private String sourceName;    // File path+name, table name, etc.
+    private int index;            // Fragment index (incremented per sourceName)
+    private String[] replicas;    // Fragment replicas (1 or more)
+    private byte[]   metadata;    // Fragment metadata information (starting point + length, region location, etc.)
+    private byte[]   userData;    // ThirdParty data added to a fragment. Ignored if null
+    ...
+}
+```
+
+#### <a id="topic_fzd_tlv_c5"></a>org.apache.hawq.pxf.api.FragmentsStats
+
+The `Fragmenter.getFragmentsStats()` method returns a `FragmentsStats`:
+
+``` java
+package org.apache.hawq.pxf.api;
+/**
+ * FragmentsStats holds statistics for a given path.
+ */
+public class FragmentsStats {
+
+    // number of fragments
+    private long fragmentsNumber;
+    // first fragment size
+    private SizeAndUnit firstFragmentSize;
+    // total fragments size
+    private SizeAndUnit totalSize;
+
+   /**
+     * Enum to represent unit (Bytes/KB/MB/GB/TB)
+     */
+    public enum SizeUnit {
+        /**
+         * Byte
+         */
+        B,
+        /**
+         * KB
+         */
+        KB,
+        /**
+         * MB
+         */
+        MB,
+        /**
+         * GB
+         */
+        GB,
+        /**
+         * TB
+         */
+        TB;
+    };
+
+    /**
+     * Container for size and unit
+     */
+    public class SizeAndUnit {
+        long size;
+        SizeUnit unit;
+    ... 
+
+```
+
+`getFragmentsStats()` returns statistics for the data source, serialized in JSON format. For example, if the input path is an HDFS directory containing three files of one block each, the output is the number of fragments (3), the size of the first file, and the size of all files in that directory.
+
+### <a id="accessor"></a>Accessor
+
+The Accessor retrieves specific fragments and passes records back to the Resolver. For example, the HDFS plug-ins create a `org.apache.hadoop.mapred.FileInputFormat` and a `org.apache.hadoop.mapred.RecordReader` for an HDFS file and send these to the Resolver. In the case of HBase or Hive files, the Accessor returns single rows from an HBase or Hive table. PXF 1.x or higher contains the following Accessor implementations:
+
+<a id="accessor__table_ewm_ttz_4p"></a>
+
+<table>
+<caption><span class="tablecap">Table 3. Accessor base classes </span></caption>
+<colgroup>
+<col width="50%" />
+<col width="50%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th><p><code class="ph codeph">Accessor class</code></p></th>
+<th><p><code class="ph codeph">Description</code></p></th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>org.apache.hawq.pxf.plugins.hdfs.HdfsAtomicDataAccessor</td>
+<td>Base class for accessing datasources which cannot be split. These will be accessed by a single HAWQ segment</td>
+</tr>
+<tr class="even">
+<td>org.apache.hawq.pxf.plugins.hdfs.QuotedLineBreakAccessor</td>
+<td>Accessor for TEXT files that have records with embedded linebreaks</td>
+</tr>
+<tr class="odd">
+<td>org.apache.hawq.pxf.plugins.hdfs.HdfsSplittableDataAccessor</td>
+<td><p>Base class for accessing HDFS files using <code class="ph codeph">RecordReaders</code></p></td>
+</tr>
+<tr class="even">
+<td>org.apache.hawq.pxf.plugins.hdfs.LineBreakAccessor</td>
+<td>Accessor for TEXT files (replaced the deprecated <code class="ph codeph">TextFileAccessor</code>, <code class="ph codeph">LineReaderAccessor</code>)</td>
+</tr>
+<tr class="odd">
+<td>org.apache.hawq.pxf.plugins.hdfs.AvroFileAccessor</td>
+<td>Accessor for Avro files</td>
+</tr>
+<tr class="even">
+<td>org.apache.hawq.pxf.plugins.hdfs.SequenceFileAccessor</td>
+<td>Accessor for Sequence files</td>
+</tr>
+<tr class="odd">
+<td>org.apache.hawq.pxf.plugins.hbase.HBaseAccessor</td>
+<td>Accessor for HBase tables</td>
+</tr>
+<tr class="even">
+<td>org.apache.hawq.pxf.plugins.hive.HiveAccessor</td>
+<td>Accessor for Hive tables</td>
+</tr>
+<tr class="odd">
+<td>org.apache.hawq.pxf.plugins.hive.HiveLineBreakAccessor</td>
+<td>Accessor for Hive tables with text files</td>
+</tr>
+<tr class="even">
+<td>org.apache.hawq.pxf.plugins.hive.HiveRCFileAccessor</td>
+<td>Accessor for Hive tables with RC files</td>
+</tr>
+</tbody>
+</table>
+
+The class must extend the `org.apache.hawq.pxf.api.utilities.Plugin` class and implement one or both of the following interfaces:
+
+-   `org.apache.hawq.pxf.api.ReadAccessor`
+-   `org.apache.hawq.pxf.api.WriteAccessor`
+
+``` java
+package org.apache.hawq.pxf.api;
+/*
+ * Internal interface that defines the access to data on the source
+ * data store (e.g, a file on HDFS, a region of an HBase table, etc).
+ * All classes that implement actual access to such data sources must
+ * respect this interface
+ */
+public interface ReadAccessor {
+    boolean openForRead() throws Exception;
+    OneRow readNextObject() throws Exception;
+    void closeForRead() throws Exception;
+}
+```
+
+``` java
+package org.apache.hawq.pxf.api;
+/*
+ * An interface for writing data into a data store
+ * (e.g, a sequence file on HDFS).
+ * All classes that implement actual access to such data sources must
+ * respect this interface
+ */
+public interface WriteAccessor {
+    boolean openForWrite() throws Exception;
+    boolean writeNextObject(OneRow onerow) throws Exception;
+    void closeForWrite() throws Exception;
+}
+```
+
+The Accessor calls `openForRead()` to read existing data. After reading the data, it calls `closeForRead()`. `readNextObject()` returns one of the following:
+
+-   a single record, encapsulated in a `OneRow` object
+-   `null` if it reaches `EOF`
+
+To write data out, the Accessor calls `openForWrite()`, writes each `OneRow` object with `writeNextObject()`, and calls `closeForWrite()` when done. `OneRow` represents a key-value item.
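+
+The read call sequence can be pictured with the following sketch. It is an illustration of the interface contract only, not the actual PXF bridge code; the class name and package are assumptions, and the real service also invokes the Resolver on each row and streams the serialized fields back to HAWQ.
+
+``` java
+package com.example.pxf;
+
+import org.apache.hawq.pxf.api.OneRow;
+import org.apache.hawq.pxf.api.ReadAccessor;
+
+public class ReadLoopSketch {
+    /** Drives a ReadAccessor through its open/read/close lifecycle and counts rows. */
+    public static long countRows(ReadAccessor accessor) throws Exception {
+        long rows = 0;
+        if (!accessor.openForRead()) {
+            return rows;                  // nothing to read
+        }
+        try {
+            OneRow row;
+            while ((row = accessor.readNextObject()) != null) {   // null signals EOF
+                rows++;                   // a Resolver would turn each row into fields here
+            }
+        } finally {
+            accessor.closeForRead();
+        }
+        return rows;
+    }
+}
+```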
+
+#### <a id="com.pivotal.pxf.api.onerow"></a>org.apache.hawq.pxf.api.OneRow
+
+``` java
+package org.apache.hawq.pxf.api;
+/*
+ * Represents one row in the external system data store. Supports
+ * the general case where one row contains both a record and a
+ * separate key like in the HDFS key/value model for MapReduce
+ * (Example: HDFS sequence file)
+ */
+public class OneRow {
+    /*
+     * Default constructor
+     */
+    public OneRow();
+
+    /*
+     * Constructor sets key and data
+     */
+    public OneRow(Object inKey, Object inData);
+
+    /*
+     * Setter for key
+     */
+    public void setKey(Object inKey);
+    
+    /*
+     * Setter for data
+     */
+    public void setData(Object inData);
+
+    /*
+     * Accessor for key
+     */
+    public Object getKey();
+
+    /*
+     * Accessor for data
+     */
+    public Object getData();
+
+    /*
+     * Show content
+     */
+    public String toString();
+}
+```
+
+### <a id="resolver"></a>Resolver
+
+The Resolver deserializes records in the `OneRow` format and serializes them to a list of `OneField` objects. PXF converts a `OneField` object to a HAWQ-readable `GPDBWritable` format. PXF 1.x or higher contains the following implementations:
+
+<a id="resolver__table_nbd_d5z_4p"></a>
+
+<table>
+<caption><span class="tablecap">Table 4. Resolver base classes</span></caption>
+<colgroup>
+<col width="50%" />
+<col width="50%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th><p><code class="ph codeph">Resolver class</code></p></th>
+<th><p><code class="ph codeph">Description</code></p></th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><p><code class="ph codeph">org.apache.hawq.pxf.plugins.hdfs.StringPassResolver</code></p></td>
+<td><p><code class="ph codeph">StringPassResolver</code> replaced the deprecated <code class="ph codeph">TextResolver</code>. It passes whole records (composed of any data types) as strings without parsing them</p></td>
+</tr>
+<tr class="even">
+<td><p><code class="ph codeph">org.apache.hawq.pxf.plugins.hdfs.WritableResolver</code></p></td>
+<td><p>Resolver for custom Hadoop Writable implementations. Custom class can be specified with the schema in DATA-SCHEMA. Supports the following types:</p>
+<pre class="pre codeblock"><code>DataType.BOOLEAN
+DataType.INTEGER
+DataType.BIGINT
+DataType.REAL
+DataType.FLOAT8
+DataType.VARCHAR
+DataType.BYTEA</code></pre></td>
+</tr>
+<tr class="odd">
+<td><p><code class="ph codeph">org.apache.hawq.pxf.plugins.hdfs.AvroResolver</code></p></td>
+<td><p>Supports the same field objects as <code class="ph codeph">WritableResolver</code>.</p></td>
+</tr>
+<tr class="even">
+<td><p><code class="ph codeph">org.apache.hawq.pxf.plugins.hbase.HBaseResolver</code></p></td>
+<td><p>Supports the same field objects as <code class="ph codeph">WritableResolver</code> and also supports the following:</p>
+<pre class="pre codeblock"><code>DataType.SMALLINT
+DataType.NUMERIC
+DataType.TEXT
+DataType.BPCHAR
+DataType.TIMESTAMP</code></pre></td>
+</tr>
+<tr class="odd">
+<td><p><code class="ph codeph">org.apache.hawq.pxf.plugins.hive.HiveResolver</code></p></td>
+<td><p>Supports the same field objects as <code class="ph codeph">WritableResolver</code> and also supports the following:</p>
+<pre class="pre codeblock"><code>DataType.SMALLINT
+DataType.TEXT
+DataType.TIMESTAMP</code></pre></td>
+</tr>
+<tr class="even">
+<td><p><code class="ph codeph">org.apache.hawq.pxf.plugins.hive.HiveStringPassResolver</code></p></td>
+<td>Specialized <code class="ph codeph">HiveResolver</code> for a Hive table stored as Text files. Should be used together with <code class="ph codeph">HiveInputFormatFragmenter</code>/<code class="ph codeph">HiveLineBreakAccessor</code>.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">org.apache.hawq.pxf.plugins.hive.HiveColumnarSerdeResolver</code></td>
+<td>Specialized <code class="ph codeph">HiveResolver</code> for a Hive table stored as RC file. Should be used together with <code class="ph codeph">HiveInputFormatFragmenter</code>/<code class="ph codeph">HiveRCFileAccessor</code>.</td>
+</tr>
+</tbody>
+</table>
+
+The class must extend the `org.apache.hawq.pxf.api.utilities.Plugin` class and implement one or both of the following interfaces:
+
+-   `org.apache.hawq.pxf.api.ReadResolver`
+-   `org.apache.hawq.pxf.api.WriteResolver`
+
+``` java
+package org.apache.hawq.pxf.api;
+/*
+ * Interface that defines the deserialization of one record brought from
+ * the data Accessor. Every implementation of a deserialization method
+ * (e.g, Writable, Avro, ...) must implement this interface.
+ */
+public interface ReadResolver {
+    public List<OneField> getFields(OneRow row) throws Exception;
+}
+```
+
+``` java
+package org.apache.hawq.pxf.api;
+/*
+* Interface that defines the serialization of data read from the DB
+* into a OneRow object.
+* Every implementation of a serialization method
+* (e.g, Writable, Avro, ...) must implement this interface.
+*/
+public interface WriteResolver {
+    public OneRow setFields(List<OneField> record) throws Exception;
+}
+```
+
+**Note:**
+
+-   `getFields()` should return a `List<OneField>`, with each `OneField` representing a single field.
+-   `setFields()` should return a single `OneRow` object, given a `List<OneField>`.
+
+#### <a id="com.pivotal.pxf.api.onefield"></a>org.apache.hawq.pxf.api.OneField
+
+``` java
+package org.apache.hawq.pxf.api;
+/*
+ * Defines one field on a deserialized record.
+ * 'type' is in OID values recognized by GPDBWritable
+ * 'val' is the actual field value
+ */
+public class OneField {
+    public OneField() {}
+    public OneField(int type, Object val) {
+        this.type = type;
+        this.val = val;
+    }
+
+    public int type;
+    public Object val;
+}
+```
+
+The value of `type` should be one of the `org.apache.hawq.pxf.api.io.DataType` enum values, and `val` the corresponding Java class. Supported types are as follows:
+
+<a id="com.pivotal.pxf.api.onefield__table_f4x_35z_4p"></a>
+
+<table>
+<caption><span class="tablecap">Table 5. Resolver supported types</span></caption>
+<colgroup>
+<col width="50%" />
+<col width="50%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th><p>DataType recognized OID</p></th>
+<th><p>Field value</p></th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><p><code class="ph codeph">DataType.SMALLINT</code></p></td>
+<td><p><code class="ph codeph">Short</code></p></td>
+</tr>
+<tr class="even">
+<td><p><code class="ph codeph">DataType.INTEGER</code></p></td>
+<td><p><code class="ph codeph">Integer</code></p></td>
+</tr>
+<tr class="odd">
+<td><p><code class="ph codeph">DataType.BIGINT</code></p></td>
+<td><p><code class="ph codeph">Long</code></p></td>
+</tr>
+<tr class="even">
+<td><p><code class="ph codeph">DataType.REAL</code></p></td>
+<td><p><code class="ph codeph">Float</code></p></td>
+</tr>
+<tr class="odd">
+<td><p><code class="ph codeph">DataType.FLOAT8</code></p></td>
+<td><p><code class="ph codeph">Double</code></p></td>
+</tr>
+<tr class="even">
+<td><p><code class="ph codeph">DataType.NUMERIC</code></p></td>
+<td><p><code class="ph codeph">String (&quot;651687465135468432168421&quot;)</code></p></td>
+</tr>
+<tr class="odd">
+<td><p><code class="ph codeph">DataType.BOOLEAN</code></p></td>
+<td><p><code class="ph codeph">Boolean</code></p></td>
+</tr>
+<tr class="even">
+<td><p><code class="ph codeph">DataType.VARCHAR</code></p></td>
+<td><p><code class="ph codeph">String</code></p></td>
+</tr>
+<tr class="odd">
+<td><p><code class="ph codeph">DataType.BPCHAR</code></p></td>
+<td><p><code class="ph codeph">String</code></p></td>
+</tr>
+<tr class="even">
+<td><p><code class="ph codeph">DataType.TEXT</code></p></td>
+<td><p><code class="ph codeph">String</code></p></td>
+</tr>
+<tr class="odd">
+<td><p><code class="ph codeph">DataType.BYTEA</code></p></td>
+<td><p><code class="ph codeph">byte []</code></p></td>
+</tr>
+<tr class="even">
+<td><p><code class="ph codeph">DataType.TIMESTAMP</code></p></td>
+<td><p><code class="ph codeph">Timestamp</code></p></td>
+</tr>
+<tr class="odd">
+<td><p><code class="ph codeph">DataType.DATE</code></p></td>
+<td><p><code class="ph codeph">Date</code></p></td>
+</tr>
+</tbody>
+</table>
+
+### <a id="analyzer"></a>Analyzer
+
+The Analyzer has been deprecated. A new function in the Fragmenter API, `Fragmenter.getFragmentsStats()`, is used to gather initial statistics for the data source, and provides PXF statistical data for the HAWQ query optimizer. For a detailed explanation about HAWQ statistical data gathering, see `ANALYZE` in the SQL Commands Reference.
+
+Using the Analyzer API results in an error message. Use the Fragmenter and `getFragmentsStats()` to gather advanced statistics.
+
+## <a id="aboutcustomprofiles"></a>About Custom Profiles
+
+Administrators can add new profiles or edit the built-in profiles defined in the `/etc/pxf/conf/pxf-profiles.xml` file. See [Using Profiles to Read and Write Data](ReadWritePXF.html#readingandwritingdatawithpxf) for information on how to add custom profiles.
+
+## <a id="aboutqueryfilterpush-down"></a>About Query Filter Push-Down
+
+If a query includes a number of `WHERE` clause filters, HAWQ may push all or some of them down to PXF. If pushed down, the Accessor can use the filtering information when accessing the data source so that it fetches only records that pass the filter evaluation conditions. This reduces data processing and network traffic from the SQL engine.
+
+This topic includes the following information:
+
+-   Filter Availability and Ordering
+-   Creating a Filter Builder Class
+-   Filter Operations
+-   Sample Implementation
+-   Using Filters
+
+### <a id="filteravailabilityandordering"></a>Filter Availability and Ordering
+
+PXF�allows push-down filtering if the following rules are met:
+
+-   Uses only single expressions or a group of AND'ed expressions - no OR'ed expressions.
+-   Uses only expressions of supported data types and operators.
+
+`FilterParser` scans the pushed-down filter list and uses the user's `build()` implementation to build the filter.
+
+-   For simple expressions (e.g., `a >= 5`), `FilterParser` places column objects on the left of the expression and constants on the right.
+-   For compound expressions (e.g., &lt;expression&gt; AND &lt;expression&gt;) it handles three cases in the `build()` function:
+    1.  Simple Expression: &lt;Column Index&gt; &lt;Operation&gt; &lt;Constant&gt;
+    2.  Compound Expression: &lt;Filter Object&gt; AND &lt;Filter Object&gt;
+    3.  Compound Expression: &lt;List of Filter Objects&gt; AND &lt;Filter Object&gt;
+
+### <a id="creatingafilterbuilderclass"></a>Creating a Filter Builder Class
+
+To check whether a filter was pushed down to PXF, call the `InputData.hasFilter()` method:
+
+``` java
+/*
+ * Returns true if there is a filter string to parse
+ */
+public boolean hasFilter()
+{
+   return filterStringValid;
+}
+```
+
+If `hasFilter()` returns `false`, there is no filter information. If it returns `true`, PXF parses the serialized filter string into a meaningful filter object to use later. To do so, create a filter builder class that implements the `FilterParser.FilterBuilder` interface:
+
+``` java
+package org.apache.hawq.pxf.api;
+/*
+ * Interface a user of FilterParser should implement
+ * This is used to let the user build filter expressions in the manner she
+ * sees fit
+ *
+ * When an operator is parsed, this function is called to let the user decide
+ * what to do with its operands.
+ */
+interface FilterBuilder {
+   public Object build(Operation operation, Object left, Object right) throws Exception;
+}
+```
+
+While PXF parses the serialized filter string from the incoming HAWQ query, it calls the `build()` function. PXF calls this function for each condition or filter pushed down to PXF. Your implementation of this function returns a filter object or representation that the Fragmenter, Accessor, or Resolver uses at runtime to filter out records. The `build()` function accepts an Operation as input, together with left and right operands.
+
+### <a id="filteroperations"></a>Filter Operations
+
+``` java
+/*
+ * Operations supported by the parser
+ */
+public enum Operation
+{
+    HDOP_LT, //less than
+    HDOP_GT, //greater than
+    HDOP_LE, //less than or equal
+    HDOP_GE, //greater than or equal
+    HDOP_EQ, //equal
+    HDOP_NE, //not equal
+    HDOP_AND //AND'ed conditions
+};
+```
+
+#### <a id="filteroperands"></a>Filter Operands
+
+There are three types of operands:
+
+-   Column Index
+-   Constant
+-   Filter Object
+
+#### <a id="columnindex"></a>Column Index
+
+``` java
+/*
+ * Represents a column index
+ */
+public class ColumnIndex
+{
+   public ColumnIndex(int idx);
+
+   public int index();
+}
+```
+
+#### <a id="constant"></a>Constant
+
+``` java
+/*
+ * The class represents a constant object (String, Long, ...)
+ */
+public class Constant
+{
+    public Constant(Object obj);
+
+    public Object constant();
+}
+```
+
+#### <a id="filterobject"></a>Filter Object
+
+Filter objects can be internal, such as those you define, or external, those that the remote system uses. For example, for HBase you use the HBase `Filter` class (`org.apache.hadoop.hbase.filter.Filter`), while for Hive you use an internal default representation created by the PXF framework, called `BasicFilter`. You can decide which filter object to use, including writing a new one. `BasicFilter` is the most common:
+
+``` java
+/*
+ * Basic filter provided for cases where the target storage system does not provide its own filter
+ * For example: HBase storage provides its own filter but for a Writable based record in a SequenceFile
+ * there is no filter provided and so we need to have a default
+ */
+static public class BasicFilter
+{
+   /*
+    * Constructor
+    */
+   public BasicFilter(Operation inOper, ColumnIndex inColumn, Constant inConstant);
+
+   /*
+    * Returns oper field
+    */
+   public Operation getOperation();
+
+   /*
+    * Returns column field
+    */
+   public ColumnIndex getColumn();
+
+   /*
+    * Returns constant field
+    */
+   public Constant getConstant();
+}
+```
+
+### <a id="sampleimplementation"></a>Sample Implementation
+
+Let's look at the following sample implementation of the filter builder class and its `build()` function, which handles all three cases. Assume that `BasicFilter` is used to hold the filter operations.
+
+``` java
+import java.util.LinkedList;
+import java.util.List;
+
+import org.apache.hawq.pxf.api.FilterParser;
+import org.apache.hawq.pxf.api.utilities.InputData;
+
+public class MyDemoFilterBuilder implements FilterParser.FilterBuilder
+{
+    private InputData inputData;
+
+    public MyDemoFilterBuilder(InputData input)
+    {
+        inputData = input;
+    }
+
+    /*
+     * Translates a filterString into a FilterParser.BasicFilter or a list of such filters
+     */
+    public Object getFilterObject(String filterString) throws Exception
+    {
+        FilterParser parser = new FilterParser(this);
+        Object result = parser.parse(filterString);
+
+        if (!(result instanceof FilterParser.BasicFilter) && !(result instanceof List))
+            throw new Exception("String " + filterString + " resolved to no filter");
+
+        return result;
+    }
+ 
+    public Object build(FilterParser.Operation opId,
+                        Object leftOperand,
+                        Object rightOperand) throws Exception
+    {
+        if (leftOperand instanceof FilterParser.BasicFilter || leftOperand instanceof List)
+        {
+            //sanity check
+            if (opId != FilterParser.Operation.HDOP_AND || !(rightOperand instanceof FilterParser.BasicFilter))
+                throw new Exception("Only AND is allowed between compound expressions");
+
+            //case 3
+            if (leftOperand instanceof List)
+                return handleCompoundOperations((List<FilterParser.BasicFilter>)leftOperand, (FilterParser.BasicFilter)rightOperand);
+            //case 2
+            else
+                return handleCompoundOperations((FilterParser.BasicFilter)leftOperand, (FilterParser.BasicFilter)rightOperand);
+        }
+
+        //sanity check
+        if (!(rightOperand instanceof FilterParser.Constant))
+            throw new Exception("expressions of column-op-column are not supported");
+
+        //case 1 (assume column is on the left)
+        return handleSimpleOperations(opId, (FilterParser.ColumnIndex)leftOperand, (FilterParser.Constant)rightOperand);
+    }
+
+    private FilterParser.BasicFilter handleSimpleOperations(FilterParser.Operation opId,
+                                                            FilterParser.ColumnIndex column,
+                                                            FilterParser.Constant constant)
+    {
+        return new FilterParser.BasicFilter(opId, column, constant);
+    }
+
+    private  List handleCompoundOperations(List<FilterParser.BasicFilter> left,
+                                       FilterParser.BasicFilter right)
+    {
+        left.add(right);
+        return left;
+    }
+
+    private List handleCompoundOperations(FilterParser.BasicFilter left,
+                                          FilterParser.BasicFilter right)
+    {
+        List<FilterParser.BasicFilter> result = new LinkedList<FilterParser.BasicFilter>();
+
+        result.add(left);
+        result.add(right);
+        return result;
+    }
+}
+```
+
+Here is an example of how an Accessor or Resolver (or both) can use the filter builder class: after creating a class that implements the `FilterParser.FilterBuilder` interface and its `build()` function, call the `getFilterObject()` function to generate the filter object:
+
+``` java
+if (inputData.hasFilter())
+{
+    String filterStr = inputData.getFilterString();
+    MyDemoFilterBuilder demobuilder = new MyDemoFilterBuilder(inputData);
+    Object filter = demobuilder.getFilterObject(filterStr);
+    ...
+}
+```
+
+### <a id="usingfilters"></a>Using Filters
+
+Once you have built the filter object(s), you can use them to read data and filter out records that do not meet the filter conditions:
+
+1.  Check whether you have a single filter or a list of filters.
+2.  Evaluate each filter: iterate over the filters in the list and disqualify the record if a filter condition fails.
+
+``` java
+if (filter instanceof List)
+{
+    for (Object f : (List)filter)
+        <evaluate f>; //may want to break if evaluation results in negative answer for any filter.
+}
+else
+{
+    <evaluate filter>;
+}
+```
+
+Example of evaluating a single filter:
+
+``` java
+//Get our BasicFilter Object
+FilterParser.BasicFilter bFilter = (FilterParser.BasicFilter)filter;
+
+ 
+//Get operation and operator values
+FilterParser.Operation op = bFilter.getOperation();
+int colIdx = bFilter.getColumn().index();
+String val = bFilter.getConstant().constant().toString();
+
+//Get more info about the column if desired
+ColumnDescriptor col = inputData.getColumn(colIdx);
+String colName = col.columnName();
+ 
+//Now evaluate it against the actual column value in the record...
+```
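+
+A possible way to complete the evaluation is sketched below. It assumes the value of the filtered column has already been extracted from the current record as a `String` and that `FilterParser` is imported as in the sample builder above; the method name and the string-only comparison are illustrative assumptions, and numeric columns would need typed comparisons.
+
+``` java
+//Sketch only: evaluate one BasicFilter against the record's value for the
+//filtered column (colIdx). Returns true if the record passes the filter.
+private boolean passesFilter(FilterParser.BasicFilter bFilter, String recordValue)
+{
+    String constant = bFilter.getConstant().constant().toString();
+    int cmp = recordValue.compareTo(constant);
+
+    switch (bFilter.getOperation())
+    {
+        case HDOP_LT: return cmp < 0;
+        case HDOP_GT: return cmp > 0;
+        case HDOP_LE: return cmp <= 0;
+        case HDOP_GE: return cmp >= 0;
+        case HDOP_EQ: return cmp == 0;
+        case HDOP_NE: return cmp != 0;
+        default:      return true; //unsupported operation (e.g. HDOP_AND): do not filter out
+    }
+}
+```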
+
+## <a id="reference"></a>Examples
+
+This section contains the following information:
+
+-   [External Table Examples](#externaltableexamples)
+-   [Plug-in Examples](#pluginexamples)
+
+-   **[External Table Examples](../pxf/PXFExternalTableandAPIReference.html#externaltableexamples)**
+
+-   **[Plug-in Examples](../pxf/PXFExternalTableandAPIReference.html#pluginexamples)**
+
+### <a id="externaltableexamples"></a>External Table Examples
+
+#### <a id="example1"></a>Example 1
+
+Shows an external table that can analyze all SequenceFiles that are populated with `Writable` serialized records and exist inside the HDFS directory `sales/2012/01`. `SaleItem.class` is a Java class that implements the `Writable` interface and describes a Java record that includes three class members.
+
+**Note:** In this example, the class member names do not necessarily match the database attribute names, but the types match. `SaleItem.class` must exist in the classpath of every DataNode and NameNode.
+
+``` sql
+CREATE EXTERNAL TABLE jan_2012_sales (id int, total int, comments varchar)
+LOCATION ('pxf://10.76.72.26:51200/sales/2012/01/*.seq'
+          '?FRAGMENTER=org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter'
+          '&ACCESSOR=org.apache.hawq.pxf.plugins.hdfs.SequenceFileAccessor'
+          '&RESOLVER=org.apache.hawq.pxf.plugins.hdfs.WritableResolver'
+          '&DATA-SCHEMA=SaleItem')
+FORMAT 'custom' (formatter='pxfwritable_import');
+```
+
+#### <a id="example2"></a>Example 2
+
+Example 2 shows an external table that can analyze an HBase table called `sales`. It has 10 column families (`cf1` through `cf10`) and many qualifier names in each family. This example focuses on the `rowkey`, the qualifier `saleid` inside column family `cf1`, and the qualifier `comments` inside column family `cf8`, and uses direct mapping:
+
+``` sql
+CREATE EXTERNAL TABLE hbase_sales
+  (hbaserowkey text, "cf1:saleid" int, "cf8:comments" varchar)
+LOCATION ('pxf://10.76.72.26:51200/sales?PROFILE=HBase')
+FORMAT 'custom' (formatter='pxfwritable_import');
+```
+
+#### <a id="example3"></a>Example 3
+
+This example uses indirect mapping. Note how the attribute names change and how they correspond to the HBase lookup table. When you execute `SELECT * FROM my_hbase_sales`, the attribute names are automatically converted to their HBase counterparts.
+
+``` sql
+CREATE EXTERNAL TABLE my_hbase_sales (hbaserowkey text, id int, cmts varchar)
+LOCATION
+('pxf://10.76.72.26:51200/sales?PROFILE=HBase')
+FORMAT 'custom' (formatter='pxfwritable_import');
+```
+
+#### <a id="example4"></a>Example 4
+
+Shows an example for a writable table of compressed data.
+
+``` sql
+CREATE WRITABLE EXTERNAL TABLE sales_aggregated_2012
+    (id int, total int, comments varchar)
+LOCATION ('pxf://10.76.72.26:51200/sales/2012/aggregated'
+          '?PROFILE=HdfsTextSimple'
+          '&COMPRESSION_CODEC=org.apache.hadoop.io.compress.BZip2Codec')
+FORMAT 'TEXT';
+```
+
+#### <a id="example5"></a>Example 5
+
+Shows an example of a writable table that writes to a sequence file, using a schema file. For writable tables, the formatter is `pxfwritable_export`.
+
+``` sql
+CREATE WRITABLE EXTERNAL TABLE sales_max_2012
+    (id int, total int, comments varchar)
+LOCATION ('pxf://10.76.72.26:51200/sales/2012/max'
+          '?FRAGMENTER=org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter'
+          '&ACCESSOR=org.apache.hawq.pxf.plugins.hdfs.SequenceFileAccessor'
+          '&RESOLVER=org.apache.hawq.pxf.plugins.hdfs.WritableResolver'
+          '&DATA-SCHEMA=SaleItem')
+FORMAT 'custom' (formatter='pxfwritable_export');
+```
+
+### <a id="pluginexamples"></a>Plug-in Examples
+
+This section contains sample dummy implementations of all three plug-ins. It also contains a usage example.
+
+#### <a id="dummyfragmenter"></a>Dummy Fragmenter
+
+``` java
+import org.apache.hawq.pxf.api.Fragmenter;
+import org.apache.hawq.pxf.api.Fragment;
+import org.apache.hawq.pxf.api.utilities.InputData;
+import java.util.List;
+
+/*
+ * Class that defines the splitting of a data resource into fragments that can
+ * be processed in parallel
+ * getFragments() returns the fragments information of a given path (source name and location of each fragment).
+ * Used to get fragments of data that could be read in parallel from the different segments.
+ * Dummy implementation, for documentation
+ */
+public class DummyFragmenter extends Fragmenter {
+    public DummyFragmenter(InputData metaData) {
+        super(metaData);
+    }
+    /*
+     * path is a data source URI that can appear as a file name, a directory name or a wildcard
+     * returns the data fragments - identifiers of data and a list of available hosts
+     */
+    @Override
+    public List<Fragment> getFragments() throws Exception {
+        String localhostname = java.net.InetAddress.getLocalHost().getHostName();
+        String[] localHosts = new String[]{localhostname, localhostname};
+        fragments.add(new Fragment(inputData.getDataSource() + ".1" /* source name */,
+                localHosts /* available hosts list */,
+                "fragment1".getBytes()));
+        fragments.add(new Fragment(inputData.getDataSource() + ".2" /* source name */,
+                localHosts /* available hosts list */,
+                "fragment2".getBytes()));
+        fragments.add(new Fragment(inputData.getDataSource() + ".3" /* source name */,
+                localHosts /* available hosts list */,
+                "fragment3".getBytes()));
+        return fragments;
+    }
+}
+```
+
+#### <a id="dummyaccessor"></a>Dummy Accessor
+
+``` java
+import org.apache.hawq.pxf.api.ReadAccessor;
+import org.apache.hawq.pxf.api.WriteAccessor;
+import org.apache.hawq.pxf.api.OneRow;
+import org.apache.hawq.pxf.api.utilities.InputData;
+import org.apache.hawq.pxf.api.utilities.Plugin;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+/*
+ * Internal interface that defines the access to a file on HDFS.  All classes
+ * that implement actual access to an HDFS file (sequence file, avro file,...)
+ * must respect this interface
+ * Dummy implementation, for documentation
+ */
+public class DummyAccessor extends Plugin implements ReadAccessor, WriteAccessor {
+    private static final Log LOG = LogFactory.getLog(DummyAccessor.class);
+    private int rowNumber;
+    private int fragmentNumber;
+    public DummyAccessor(InputData metaData) {
+        super(metaData);
+    }
+    @Override
+    public boolean openForRead() throws Exception {
+        /* fopen or similar */
+        return true;
+    }
+    @Override
+    public OneRow readNextObject() throws Exception {
+        /* return next row , <key=fragmentNo.rowNo, val=rowNo,text,fragmentNo>*/
+        /* check for EOF */
+        if (fragmentNumber > 0)
+            return null; /* signal EOF, close will be called */
+        int fragment = inputData.getDataFragment();
+        String fragmentMetadata = new String(inputData.getFragmentMetadata());
+        /* generate row */
+        OneRow row = new OneRow(fragment + "." + rowNumber, /* key */
+                rowNumber + "," + fragmentMetadata + "," + fragment /* value */);
+        /* advance */
+        rowNumber += 1;
+        if (rowNumber == 2) {
+            rowNumber = 0;
+            fragmentNumber += 1;
+        }
+        /* return data */
+        return row;
+    }
+    @Override
+    public void closeForRead() throws Exception {
+        /* fclose or similar */
+    }
+    @Override
+    public boolean openForWrite() throws Exception {
+        /* fopen or similar */
+        return true;
+    }
+    @Override
+    public boolean writeNextObject(OneRow onerow) throws Exception {
+        LOG.info(onerow.getData());
+        return true;
+    }
+    @Override
+    public void closeForWrite() throws Exception {
+        /* fclose or similar */
+    }
+}
+```
+
+#### <a id="dummyresolver"></a>Dummy Resolver
+
+``` java
+import org.apache.hawq.pxf.api.OneField;
+import org.apache.hawq.pxf.api.OneRow;
+import org.apache.hawq.pxf.api.ReadResolver;
+import org.apache.hawq.pxf.api.WriteResolver;
+import org.apache.hawq.pxf.api.utilities.InputData;
+import org.apache.hawq.pxf.api.utilities.Plugin;
+import java.util.LinkedList;
+import java.util.List;
+import static org.apache.hawq.pxf.api.io.DataType.INTEGER;
+import static org.apache.hawq.pxf.api.io.DataType.VARCHAR;
+
+/*
+ * Class that defines the deserialization of one record brought from the external input data.
+ * Every implementation of a deserialization method (Writable, Avro, BP, Thrift, ...)
+ * must inherit this abstract class
+ * Dummy implementation, for documentation
+ */
+public class DummyResolver extends Plugin implements ReadResolver, WriteResolver {
+    private int rowNumber;
+    public DummyResolver(InputData metaData) {
+        super(metaData);
+        rowNumber = 0;
+    }
+    @Override
+    public List<OneField> getFields(OneRow row) throws Exception {
+        /* break up the row into fields */
+        List<OneField> output = new LinkedList<OneField>();
+        String[] fields = ((String) row.getData()).split(",");
+        output.add(new OneField(INTEGER.getOID() /* type */, Integer.parseInt(fields[0]) /* value */));
+        output.add(new OneField(VARCHAR.getOID(), fields[1]));
+        output.add(new OneField(INTEGER.getOID(), Integer.parseInt(fields[2])));
+        return output;
+    }
+    @Override
+    public OneRow setFields(List<OneField> record) throws Exception {
+        /* should read inputStream row by row */
+        return rowNumber > 5
+                ? null
+                : new OneRow(null, "row number " + rowNumber++);
+    }
+}
+```
+
+#### <a id="usageexample"></a>Usage Example
+
+``` sql
+psql=# CREATE EXTERNAL TABLE dummy_tbl
+    (int1 integer, word text, int2 integer)
+LOCATION ('pxf://localhost:51200/dummy_location'
+          '?FRAGMENTER=DummyFragmenter'
+          '&ACCESSOR=DummyAccessor'
+          '&RESOLVER=DummyResolver')
+FORMAT 'custom' (formatter = 'pxfwritable_import');
+ 
+CREATE EXTERNAL TABLE
+psql=# SELECT * FROM dummy_tbl;
+int1 | word | int2
+------+------+------
+0 | fragment1 | 0
+1 | fragment1 | 0
+0 | fragment2 | 0
+1 | fragment2 | 0
+0 | fragment3 | 0
+1 | fragment3 | 0
+(6 rows)
+
+psql=# CREATE WRITABLE EXTERNAL TABLE dummy_tbl_write
+    (int1 integer, word text, int2 integer)
+LOCATION ('pxf://localhost:51200/dummy_location'
+          '?ACCESSOR=DummyAccessor'
+          '&RESOLVER=DummyResolver')
+FORMAT 'custom' (formatter = 'pxfwritable_export');
+ 
+CREATE EXTERNAL TABLE
+psql=# INSERT INTO dummy_tbl_write VALUES (1, 'a', 11), (2, 'b', 22);
+INSERT 0 2
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/pxf/ReadWritePXF.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/pxf/ReadWritePXF.html.md.erb b/markdown/pxf/ReadWritePXF.html.md.erb
new file mode 100644
index 0000000..18f655d
--- /dev/null
+++ b/markdown/pxf/ReadWritePXF.html.md.erb
@@ -0,0 +1,123 @@
+---
+title: Using Profiles to Read and Write Data
+---
+
+PXF profiles are collections of common metadata attributes that can be used to simplify the reading and writing of data. You can use any of the built-in profiles that come with PXF or you can create your own.
+
+For example, if you are writing single line records to text files on HDFS, you could use the built-in HdfsTextSimple profile. You specify this profile when you create the PXF external table used to write the data to HDFS.
+
+## <a id="built-inprofiles"></a>Built-In Profiles
+
+PXF comes with a number of built-in profiles that group together a collection of metadata attributes. PXF built-in profiles simplify access to the following types of data storage systems:
+
+-   HDFS File Data (Read + Write)
+-   Hive (Read only)
+-   HBase (Read only)
+-   JSON (Read only)
+
+You can specify a built-in profile when you want to read data that exists inside HDFS files, Hive tables, HBase tables, and JSON files and for writing data into HDFS files.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Profile</th>
+<th>Description</th>
+<th>Fragmenter/Accessor/Resolver</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>HdfsTextSimple</td>
+<td>Read or write delimited single line records from or to plain text files on HDFS.</td>
+<td><ul>
+<li>org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter</li>
+<li>org.apache.hawq.pxf.plugins.hdfs.LineBreakAccessor</li>
+<li>org.apache.hawq.pxf.plugins.hdfs.StringPassResolver</li>
+</ul></td>
+</tr>
+<tr class="even">
+<td>HdfsTextMulti</td>
+<td>Read delimited single or multi-line records (with quoted linefeeds) from plain text files on HDFS. This profile is not splittable (non parallel); reading is slower than reading with HdfsTextSimple.</td>
+<td><ul>
+<li>org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter</li>
+<li>org.apache.hawq.pxf.plugins.hdfs.QuotedLineBreakAccessor</li>
+<li>org.apache.hawq.pxf.plugins.hdfs.StringPassResolver</li>
+</ul></td>
+</tr>
+<tr class="odd">
+<td>Hive</td>
+<td>Read a Hive table with any of the available storage formats: text, RC, ORC, Sequence, or Parquet.</td>
+<td><ul>
+<li>org.apache.hawq.pxf.plugins.hive.HiveDataFragmenter</li>
+<li>org.apache.hawq.pxf.plugins.hive.HiveAccessor</li>
+<li>org.apache.hawq.pxf.plugins.hive.HiveResolver</li>
+</ul></td>
+</tr>
+<tr class="even">
+<td>HiveRC</td>
+<td>Optimized read of a Hive table where each partition is stored as an RCFile. 
+<div class="note note">
+Note: The <code class="ph codeph">DELIMITER</code> parameter is mandatory.
+</div></td>
+<td><ul>
+<li>org.apache.hawq.pxf.plugins.hive.HiveInputFormatFragmenter</li>
+<li>org.apache.hawq.pxf.plugins.hive.HiveRCFileAccessor</li>
+<li>org.apache.hawq.pxf.plugins.hive.HiveColumnarSerdeResolver</li>
+</ul></td>
+</tr>
+<tr class="odd">
+<td>HiveText</td>
+<td>Optimized read of a Hive table where each partition is stored as a text file.
+<div class="note note">
+Note: The <code class="ph codeph">DELIMITER</code> parameter is mandatory.
+</div></td>
+<td><ul>
+<li>org.apache.hawq.pxf.plugins.hive.HiveInputFormatFragmenter</li>
+<li>org.apache.hawq.pxf.plugins.hive.HiveLineBreakAccessor</li>
+<li>org.apache.hawq.pxf.plugins.hive.HiveStringPassResolver</li>
+</ul></td>
+</tr>
+<tr class="even">
+<td>HBase</td>
+<td>Read an HBase data store engine.</td>
+<td><ul>
+<li>org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter</li>
+<li>org.apache.hawq.pxf.plugins.hbase.HBaseAccessor</li>
+<li>org.apache.hawq.pxf.plugins.hbase.HBaseResolver</li>
+</ul></td>
+</tr>
+<tr class="odd">
+<td>Avro</td>
+<td>Read Avro files (fileName.avro).</td>
+<td><ul>
+<li>org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter</li>
+<li>org.apache.hawq.pxf.plugins.hdfs.AvroFileAccessor</li>
+<li>org.apache.hawq.pxf.plugins.hdfs.AvroResolver</li>
+</ul></td>
+</tr>
+<tr class="even">
+<td>JSON</td>
+<td>Read JSON files (fileName.json) from HDFS.</td>
+<td><ul>
+<li>org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter</li>
+<li>org.apache.hawq.pxf.plugins.json.JsonAccessor</li>
+<li>org.apache.hawq.pxf.plugins.json.JsonResolver</li>
+</ul></td>
+</tr>
+</tbody>
+</table>
+
+## <a id="addingandupdatingprofiles"></a>Adding and Updating Profiles
+
+Each profile has a mandatory unique name and an optional description. In addition, each profile contains a set of plug-ins and other extensible metadata attributes. Administrators can add new profiles or edit the built-in profiles defined in `/etc/pxf/conf/pxf-profiles.xml`. 
+
+**Note:** Add the JAR files associated with custom PXF plug-ins to the `/etc/pxf/conf/pxf-public.classpath` configuration file.
+
+After you make changes in `pxf-profiles.xml` (or any other PXF configuration file), propagate the changes to all nodes with PXF installed, and then restart the PXF service on all nodes.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/pxf/TroubleshootingPXF.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/pxf/TroubleshootingPXF.html.md.erb b/markdown/pxf/TroubleshootingPXF.html.md.erb
new file mode 100644
index 0000000..9febe09
--- /dev/null
+++ b/markdown/pxf/TroubleshootingPXF.html.md.erb
@@ -0,0 +1,273 @@
+---
+title: Troubleshooting PXF
+---
+
+## <a id="pxerrortbl"></a>PXF Errors
+
+The following table lists some common errors encountered while using PXF:
+
+<table>
+<caption><span class="tablecap">Table 1. PXF Errors and Explanation</span></caption>
+<colgroup>
+<col width="50%" />
+<col width="50%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Error</th>
+<th>Common Explanation</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>ERROR: invalid URI pxf://localhost:51200/demo/file1: missing options section</td>
+<td><code class="ph codeph">LOCATION</code> does not include options after the file name: <code class="ph codeph">&lt;path&gt;?&lt;key&gt;=&lt;value&gt;&amp;&lt;key&gt;=&lt;value&gt;...</code></td>
+</tr>
+<tr class="even">
+<td>ERROR: protocol &quot;pxf&quot; does not exist</td>
+<td>HAWQ is not compiled with the PXF protocol. It requires the GPSQL version of HAWQ.</td>
+</tr>
+<tr class="odd">
+<td>ERROR: remote component error (0) from '&lt;x&gt;': There is no pxf servlet listening on the host and port specified in the external table url.</td>
+<td>Wrong server or port, or the service is not started</td>
+</tr>
+<tr class="even">
+<td>ERROR: Missing FRAGMENTER option in the pxf uri: pxf://localhost:51200/demo/file1?a=a</td>
+<td>No <code class="ph codeph">FRAGMENTER</code> option was specified in <code class="ph codeph">LOCATION</code>.</td>
+</tr>
+<tr class="odd">
+<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message org.apache.hadoop.mapred.InvalidInputException:
+<p>Input path does not exist: hdfs://0.0.0.0:8020/demo/file1</p></td>
+<td>File or pattern given in <code class="ph codeph">LOCATION</code> doesn't exist on specified path.</td>
+</tr>
+<tr class="even">
+<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message org.apache.hadoop.mapred.InvalidInputException: Input Pattern hdfs://0.0.0.0:8020/demo/file* matches 0 files</td>
+<td>File or pattern given in <code class="ph codeph">LOCATION</code> doesn't exist on specified path.</td>
+</tr>
+<tr class="odd">
+<td>ERROR: remote component error (500) from '&lt;x&gt;': PXF not correctly installed in CLASSPATH</td>
+<td>Cannot find PXF Jar</td>
+</tr>
+<tr class="even">
+<td>ERROR: PXF API encountered a HTTP 404 error. Either the PXF service (tomcat) on the DataNode was not started or the PXF webapp was not started.</td>
+<td>Either the required DataNode does not exist, or the PXF service (tcServer) on the DataNode is not started, or the PXF webapp was not started</td>
+</tr>
+<tr class="odd">
+<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/client/HTableInterface</td>
+<td>One of the classes required for running PXF or one of its plug-ins is missing. Check that all resources in the PXF classpath files exist on the cluster nodes</td>
+</tr>
+<tr class="even">
+<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message java.io.IOException: Can't get Master Kerberos principal for use as renewer</td>
+<td>Secure PXF: YARN isn't properly configured for secure (Kerberized) HDFS installs</td>
+</tr>
+<tr class="odd">
+<td>ERROR: fail to get filesystem credential for uri hdfs://&lt;namenode&gt;:8020/</td>
+<td>Secure PXF: The HDFS host is wrong, or the port is not 8020 (this is a limitation that will be removed in the next release)</td>
+</tr>
+<tr class="even">
+<td>ERROR: remote component error (413) from '&lt;x&gt;': HTTP status code is 413 but HTTP response string is empty</td>
+<td>The number of attributes in the PXF table, together with the size of their names, is too large for tcServer to accommodate in its request buffer. The solution is to increase the values of the maxHeaderCount and maxHttpHeaderSize parameters in server.xml on the tcServer instance on all nodes, and then restart PXF:
+<p>&lt;Connector acceptCount=&quot;100&quot; connectionTimeout=&quot;20000&quot; executor=&quot;tomcatThreadPool&quot; maxKeepAliveRequests=&quot;15&quot; maxHeaderCount=&quot;&lt;some larger value&gt;&quot; maxHttpHeaderSize=&quot;&lt;some larger value in bytes&gt;&quot; port=&quot;${bio.http.port}&quot; protocol=&quot;org.apache.coyote.http11.Http11Protocol&quot; redirectPort=&quot;${bio.https.port}&quot;/&gt;</p></td>
+</tr>
+<tr class="odd">
+<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message java.lang.Exception: Class com.pivotal.pxf.&lt;plugin name&gt; does not appear in classpath. Plugins provided by PXF must start with &quot;org.apache.hawq.pxf&quot;</td>
+<td>Querying a PXF table that still uses the old package name (&quot;com.pivotal.pxf.*&quot;) results in an error message that recommends moving to the new package name (&quot;org.apache.hawq.pxf&quot;). </td>
+</tr>
+<tr class="even">
+<td><strong>HBase Specific Errors</strong></td>
+<td>&nbsp;</td>
+</tr>
+<tr class="odd">
+<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for t1,,99999999999999 after 10 tries.</td>
+<td>HBase service is down, probably HRegionServer</td>
+</tr>
+<tr class="even">
+<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message org.apache.hadoop.hbase.TableNotFoundException: nosuch</td>
+<td>HBase cannot find the requested table</td>
+</tr>
+<tr class="odd">
+<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/client/HTableInterface</td>
+<td>PXF cannot find a required JAR file, probably HBase's</td>
+</tr>
+<tr class="even">
+<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message java.lang.NoClassDefFoundError: org/apache/zookeeper/KeeperException</td>
+<td>PXF cannot find ZooKeeper's JAR</td>
+</tr>
+<tr class="odd">
+<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message java.lang.Exception: java.lang.IllegalArgumentException: Illegal HBase column name a, missing :</td>
+<td>The PXF table has an illegal field name. Each field name must correspond to an HBase column in the syntax &lt;column family&gt;:&lt;field name&gt;.</td>
+</tr>
+<tr class="even">
+<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family a does not exist in region t1,,1405517248353.85f4977bfa88f4d54211cb8ac0f4e644. in table 't1', {NAME =&amp;gt; 'cf', DATA_BLOCK_ENCODING =&amp;gt; 'NONE', BLOOMFILTER =&amp;gt; 'ROW', REPLICATION_SCOPE =&amp;gt; '0', COMPRESSION =&amp;gt; 'NONE', VERSIONS =&amp;gt; '1', TTL =&amp;gt; '2147483647', MIN_VERSIONS =&amp;gt; '0', KEEP_DELETED_CELLS =&amp;gt; 'false', BLOCKSIZE =&amp;gt; '65536', ENCODE_ON_DISK =&amp;gt; 'true', IN_MEMORY =&amp;gt; 'false', BLOCKCACHE =&amp;gt; 'true'}</td>
+<td>Required HBase table does not contain the requested column</td>
+</tr>
+<tr class="odd">
+<td><strong>Hive-Specific Errors</strong></td>
+<td>&nbsp;</td>
+</tr>
+<tr class="even">
+<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message java.lang.RuntimeException: Failed to connect to Hive metastore: java.net.ConnectException: Connection refused</td>
+<td>Hive Metastore service is down</td>
+</tr>
+<tr class="odd">
+<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message
+<p>NoSuchObjectException(message:default.players table not found)</p></td>
+<td>Table doesn't exist in Hive</td>
+</tr>
+<tr class="even">
+<td><strong>JSON-Specific Errors</strong></td>
+<td>&nbsp;</td>
+</tr>
+<tr class="odd">
+<td>ERROR: No fields in record (seg0 slice1 host:&lt;n&gt; pid=&lt;n&gt;)
+<p>DETAIL: External table &lt;tablename&gt;</p></td>
+<td>Check your JSON file for empty lines; remove them and try again</td>
+</tr>
+<tr class="odd">
+<td>ERROR:  remote component error (500) from host:51200:  type  Exception report   message   &lttext&gt[0] is not an array node    description   The server encountered an internal error that prevented it from fulfilling this request.    exception   java.io.IOException: &lttext&gt[0] is not an array node (libchurl.c:878)  (seg4 host:40000 pid=&ltn&gt)  
+<p>DETAIL:  External table &lttablename&gt</p></td>
+<td>JSON field assumed to be an array, but it is a scalar field.
+</td>
+</tr>
+
+</tbody>
+</table>
+
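+Many of the errors above stem from the `LOCATION` clause of the external table definition. A quick first check is to display the table definition from `psql` and confirm the host, port, profile, and other options it records (a sketch only; `my_pxf_table` is a hypothetical table name, and the output format varies by HAWQ version):
+
+``` sql
+gpadmin=# \d+ my_pxf_table
+```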
+
+## <a id="pxflogging"></a>PXF Logging
+Enabling more verbose logging may aid PXF troubleshooting efforts.
+
+PXF provides two categories of message logging: service-level and database-level.
+
+### <a id="pxfsvclogmsg"></a>Service-Level Logging
+
+PXF utilizes `log4j` for service-level logging. PXF-service-related log messages are captured in a log file specified by PXF's `log4j` properties file, `/etc/pxf/conf/pxf-log4j.properties`. The default PXF logging configuration will write `INFO` and more severe level logs to `/var/log/pxf/pxf-service.log`.
+
+PXF provides more detailed logging when the `DEBUG` level is enabled.  To configure PXF `DEBUG` logging, uncomment the following line in `pxf-log4j.properties`:
+
+``` shell
+#log4j.logger.org.apache.hawq.pxf=DEBUG
+```
+
+and restart the PXF service:
+
+``` shell
+$ sudo service pxf-service restart
+```
+
+With `DEBUG` level logging now enabled, perform your PXF operations; for example, creating and querying an external table. (Make note of the time; this will direct you to the relevant log messages in `/var/log/pxf/pxf-service.log`.)
+
+``` shell
+$ psql
+```
+
+``` sql
+gpadmin=# CREATE EXTERNAL TABLE hivetest(id int, newid int)
+    LOCATION ('pxf://namenode:51200/pxf_hive1?PROFILE=Hive')
+    FORMAT 'CUSTOM' (formatter='pxfwritable_import');
+gpadmin=# select * from hivetest;
+<select output>
+```
+
+Examine/collect the log messages from `pxf-service.log`.
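+
+One way to narrow the output to the relevant entries is to filter the log by level or by the timestamp of the operation you just ran (a sketch only; adjust the pattern and line count to your situation):
+
+``` shell
+$ grep DEBUG /var/log/pxf/pxf-service.log | tail -n 50
+```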
+
+**Note**: `DEBUG` logging is verbose and has a performance impact.  Remember to turn off PXF service `DEBUG` logging after you have collected the desired information.
+ 
+
+### <a id="pxfdblogmsg"></a>Database-Level Logging
+
+Enable HAWQ and PXF debug message logging during operations on PXF external tables by setting the `client_min_messages` server configuration parameter to `DEBUG2` in your `psql` session.
+
+``` shell
+$ psql
+```
+
+``` sql
+gpadmin=# SET client_min_messages=DEBUG2;
+gpadmin=# SELECT * FROM hivetest;
+...
+DEBUG2:  churl http header: cell #19: X-GP-URL-HOST: localhost
+DEBUG2:  churl http header: cell #20: X-GP-URL-PORT: 51200
+DEBUG2:  churl http header: cell #21: X-GP-DATA-DIR: pxf_hive1
+DEBUG2:  churl http header: cell #22: X-GP-profile: Hive
+DEBUG2:  churl http header: cell #23: X-GP-URI: pxf://namenode:51200/pxf_hive1?profile=Hive
+...
+```
+
+Examine/collect the log messages from `stdout`.
+
+**Note**: `DEBUG2` database session logging has a performance impact.  Remember to turn off `DEBUG2` logging after you have collected the desired information.
+
+``` sql
+gpadmin=# SET client_min_messages=NOTICE;
+```
+
+
+## <a id="pxf-memcfg"></a>Addressing PXF Memory Issues
+
+The Java heap size can be a limiting factor in PXF's ability to serve many concurrent requests or to run queries against large tables.
+
+You may run into situations where a query will hang or fail with an Out of Memory exception (OOM). This typically occurs when many threads are reading different data fragments from an external table and insufficient heap space exists to open all fragments at the same time. To avert or remedy this situation, Pivotal recommends first increasing the Java maximum heap size or decreasing the Tomcat maximum number of threads, depending upon what works best for your system configuration.
+
+**Note**: The configuration changes described in this topic require modifying config files on *each* PXF node in your HAWQ cluster. After performing the updates, be sure to verify that the configuration on all PXF nodes is the same.
+
+You will need to re-apply these configuration changes after any PXF version upgrades.
+
+### <a id="pxf-heapcfg"></a>Increasing the Maximum Heap Size
+
+Each PXF node is configured with a default Java heap size of 512MB. If the nodes in your cluster have an ample amount of memory, increasing the amount allocated to the PXF agents is the best approach. Pivotal recommends a heap size of between 1GB and 2GB.
+
+Perform the following steps to increase the PXF agent heap size in your HAWQ deployment. **You must perform the configuration changes on each PXF node in your HAWQ cluster.**
+
+1. Open `/var/pxf/pxf-service/bin/setenv.sh` in a text editor.
+
+    ``` shell
+    root@pxf-node$ vi /var/pxf/pxf-service/bin/setenv.sh
+    ```
+
+2. Update the `-Xmx` option to the desired value in the `JVM_OPTS` setting:
+
+    ``` shell
+    JVM_OPTS="-Xmx1024M -Xss256K"
+    ```
+
+3. Restart PXF:
+
+    1. If you use Ambari to manage your cluster, restart the PXF service via the Ambari console.
+    2. If you do not use Ambari, restart the PXF service from the command line on each node:
+
+        ``` shell
+        root@pxf-node$ service pxf-service restart
+        ```
+
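+To confirm that the new heap setting is identical on every PXF node, you might compare the `JVM_OPTS` line across hosts (a sketch only; `pxf-hosts` is a hypothetical file listing your PXF node hostnames):
+
+``` shell
+$ for host in $(cat pxf-hosts); do
+    echo "== ${host} =="
+    # print the heap setting configured on that node
+    ssh "$host" "grep Xmx /var/pxf/pxf-service/bin/setenv.sh"
+  done
+```
+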
+### <a id="pxf-heapcfg"></a>Decreasing the Maximum Number of  Threads
+
+If increasing the maximum heap size is not suitable for your HAWQ cluster, try decreasing the number of concurrent working threads configured for the underlying Tomcat web application. A decrease in the number of running threads will prevent any PXF node from exhausting its memory, while ensuring that current queries run to completion (albeit a bit slower). As Tomcat's default behavior is to queue requests until a thread is free, decreasing this value will not result in denied requests.
+
+The Tomcat default maximum number of threads is 300. Pivotal recommends decreasing the maximum number of threads to under 6. (If you plan to run large workloads on a large number of files using a Hive profile, Pivotal recommends you pick an even lower value.)
+
+Perform the following steps to decrease the maximum number of Tomcat threads in your HAWQ PXF deployment. **You must perform the configuration changes on each PXF node in your HAWQ cluster.**
+
+1. Open the `/var/pxf/pxf-service/conf/server.xml` file in a text editor.
+
+    ``` shell
+    root@pxf-node$ vi /var/pxf/pxf-service/conf/server.xml
+    ```
+
+2. Update the `Catalina` `Executor` block to identify the desired `maxThreads` value:
+
+    ``` xml
+    <Executor maxThreads="2"
+              minSpareThreads="50"
+              name="tomcatThreadPool"
+              namePrefix="tomcat-http--"/>
+    ```
+
+3. Restart PXF:
+
+    1. If you use Ambari to manage your cluster, restart the PXF service via the Ambari console.
+    2. If you do not use Ambari, restart the PXF service from the command line on each node:
+
+        ``` shell
+        root@pxf-node$ service pxf-service restart
+        ```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/query/HAWQQueryProcessing.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/query/HAWQQueryProcessing.html.md.erb b/markdown/query/HAWQQueryProcessing.html.md.erb
new file mode 100644
index 0000000..1d221f4
--- /dev/null
+++ b/markdown/query/HAWQQueryProcessing.html.md.erb
@@ -0,0 +1,60 @@
+---
+title: About HAWQ Query Processing
+---
+
+This topic provides an overview of how HAWQ processes queries. Understanding this process can be useful when writing and tuning queries.
+
+Users issue queries to HAWQ as they would to any database management system. They connect to the database instance on the HAWQ master host using a client application such as `psql` and submit SQL statements.
+
+## <a id="topic2"></a>Understanding Query Planning and Dispatch
+
+After a query is accepted on the master, the master parses and analyzes the query. After completing its analysis, the master generates a query tree and provides the query tree to the query optimizer.
+
+The query optimizer generates a query plan. Given the cost information of the query plan, resources are requested from the HAWQ resource manager. After the resources are obtained, the dispatcher starts virtual segments and dispatches the query plan to virtual segments for execution.
+
+This diagram depicts basic query flow in HAWQ.
+
+<img src="../images/basic_query_flow.png" id="topic2__image_ezs_wbh_sv" class="image" width="672" />
+
+## <a id="topic3"></a>Understanding HAWQ Query Plans
+
+A query plan is the set of operations HAWQ will perform to produce the answer to a query. Each *node* or step in the plan represents a database operation such as a table scan, join, aggregation, or sort. Plans are read and executed from bottom to top.
+
+In addition to common database operations such as table scans, joins, and so on, HAWQ has an additional operation type called *motion*. A motion operation involves moving tuples between the segments during query processing. Note that not every query requires a motion. For example, a targeted query plan does not require data to move across the interconnect.
+
+To achieve maximum parallelism during query execution, HAWQ divides the work of the query plan into *slices*. A slice is a portion of the plan that segments can work on independently. A query plan is sliced wherever a *motion* operation occurs in the plan, with one slice on each side of the motion.
+
+For example, consider the following simple query involving a join between two tables:
+
+``` sql
+SELECT customer, amount
+FROM sales JOIN customer USING (cust_id)
+WHERE dateCol = '04-30-2008';
+```
+
+[Query Slice Plan](#topic3__iy140224) shows the query plan. Each segment receives a copy of the query plan and works on it in parallel.
+
+The query plan for this example has a *redistribute motion* that moves tuples between the segments to complete the join. The redistribute motion is necessary because the customer table is distributed across the segments by `cust_id`, but the sales table is distributed across the segments by `sale_id`. To perform the join, the `sales` tuples must be redistributed by `cust_id`. The plan is sliced on either side of the redistribute motion, creating *slice 1* and *slice 2*.
+
+This query plan has another type of motion operation called a *gather motion*. A gather motion is when the segments send results back up to the master for presentation to the client. Because a query plan is always sliced wherever a motion occurs, this plan also has an implicit slice at the very top of the plan (*slice 3*). Not all query plans involve a gather motion. For example, a `CREATE TABLE x AS SELECT...` statement would not have a gather motion because tuples are sent to the newly created table, not to the master.
+
+<a id="topic3__iy140224"></a>
+<span class="figtitleprefix">Figure: </span>Query Slice Plan
+
+<img src="../images/slice_plan.jpg" class="image" width="462" height="382" />
+
+## <a id="topic4"></a>Understanding Parallel Query Execution
+
+HAWQ creates a number of database processes to handle the work of a query. On the master, the query worker process is called the *query dispatcher* (QD). The QD is responsible for creating and dispatching the query plan. It also accumulates and presents the final results. On virtual segments, a query worker process is called a *query executor* (QE). A QE is responsible for completing its portion of work and communicating its intermediate results to the other worker processes.
+
+There is at least one worker process assigned to each *slice* of the query plan. A worker process works on its assigned portion of the query plan independently. During query execution, each virtual segment will have a number of processes working on the query in parallel.
+
+Related processes that are working on the same slice of the query plan but on different virtual segments are called *gangs*. As a portion of work is completed, tuples flow up the query plan from one gang of processes to the next. This inter-process communication between virtual segments is referred to as the *interconnect* component of HAWQ.
+
+[Query Worker Processes](#topic4__iy141495) shows the query worker processes on the master and two virtual segment instances for the query plan illustrated in [Query Slice Plan](#topic3__iy140224).
+
+<a id="topic4__iy141495"></a>
+<span class="figtitleprefix">Figure: </span>Query Worker Processes
+
+<img src="../images/gangs.jpg" class="image" width="318" height="288" />
+



[51/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
HAWQ-1254 Fix/remove book branching on incubator-hawq-docs


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/commit/de1e2e07
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/tree/de1e2e07
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/diff/de1e2e07

Branch: refs/heads/develop
Commit: de1e2e07e31b703d2353624b93f7437363a3ef1c
Parents: 2524285
Author: David Yozie <yo...@apache.org>
Authored: Fri Jan 6 09:31:57 2017 -0800
Committer: David Yozie <yo...@apache.org>
Committed: Fri Jan 6 09:31:57 2017 -0800

----------------------------------------------------------------------
 README.md                                       |   53 +-
 ...ckingUpandRestoringHAWQDatabases.html.md.erb |  373 --
 admin/ClusterExpansion.html.md.erb              |  226 --
 admin/ClusterShrink.html.md.erb                 |   55 -
 admin/FaultTolerance.html.md.erb                |   52 -
 ...esandHighAvailabilityEnabledHDFS.html.md.erb |  223 --
 admin/HighAvailability.html.md.erb              |   37 -
 admin/MasterMirroring.html.md.erb               |  144 -
 admin/RecommendedMonitoringTasks.html.md.erb    |  259 --
 admin/RunningHAWQ.html.md.erb                   |   37 -
 admin/ambari-admin.html.md.erb                  |  439 ---
 admin/ambari-rest-api.html.md.erb               |  163 -
 admin/maintain.html.md.erb                      |   31 -
 admin/monitor.html.md.erb                       |  444 ---
 admin/setuphawqopenv.html.md.erb                |   81 -
 admin/startstop.html.md.erb                     |  146 -
 .../HAWQBestPracticesOverview.html.md.erb       |   28 -
 bestpractices/general_bestpractices.html.md.erb |   26 -
 .../managing_data_bestpractices.html.md.erb     |   47 -
 ...managing_resources_bestpractices.html.md.erb |  144 -
 .../operating_hawq_bestpractices.html.md.erb    |  298 --
 .../querying_data_bestpractices.html.md.erb     |   43 -
 bestpractices/secure_bestpractices.html.md.erb  |   11 -
 book/Gemfile                                    |    5 +
 book/Gemfile.lock                               |  203 ++
 book/config.yml                                 |   21 +
 book/master_middleman/source/images/favicon.ico |  Bin 0 -> 1150 bytes
 .../master_middleman/source/javascripts/book.js |   16 +
 .../source/javascripts/waypoints/context.js     |  300 ++
 .../source/javascripts/waypoints/group.js       |  105 +
 .../javascripts/waypoints/noframeworkAdapter.js |  213 ++
 .../source/javascripts/waypoints/sticky.js      |   63 +
 .../source/javascripts/waypoints/waypoint.js    |  160 +
 book/master_middleman/source/layouts/_title.erb |    6 +
 .../patch/dynamic_variable_interpretation.py    |  192 ++
 .../source/stylesheets/book-styles.css.scss     |    3 +
 .../stylesheets/partials/_book-base-values.scss |    0
 .../source/stylesheets/partials/_book-vars.scss |   19 +
 .../source/subnavs/apache-hawq-nav.erb          |  894 +++++
 book/redirects.rb                               |    4 +
 clientaccess/client_auth.html.md.erb            |  193 --
 clientaccess/disable-kerberos.html.md.erb       |   85 -
 clientaccess/g-connecting-with-psql.html.md.erb |   35 -
 ...-database-application-interfaces.html.md.erb |   96 -
 ...-establishing-a-database-session.html.md.erb |   17 -
 ...awq-database-client-applications.html.md.erb |   23 -
 .../g-supported-client-applications.html.md.erb |    8 -
 ...ubleshooting-connection-problems.html.md.erb |   13 -
 clientaccess/index.md.erb                       |   17 -
 clientaccess/kerberos.html.md.erb               |  308 --
 clientaccess/ldap.html.md.erb                   |  116 -
 clientaccess/roles_privs.html.md.erb            |  285 --
 datamgmt/BasicDataOperations.html.md.erb        |   64 -
 datamgmt/ConcurrencyControl.html.md.erb         |   24 -
 .../HAWQInputFormatforMapReduce.html.md.erb     |  304 --
 datamgmt/Transactions.html.md.erb               |   54 -
 datamgmt/about_statistics.html.md.erb           |  209 --
 datamgmt/dml.html.md.erb                        |   35 -
 datamgmt/load/client-loadtools.html.md.erb      |  104 -
 ...reating-external-tables-examples.html.md.erb |  117 -
 ...ut-gpfdist-setup-and-performance.html.md.erb |   22 -
 datamgmt/load/g-character-encoding.html.md.erb  |   11 -
 ...ommand-based-web-external-tables.html.md.erb |   26 -
 .../g-configuration-file-format.html.md.erb     |   66 -
 ...-controlling-segment-parallelism.html.md.erb |   11 -
 ...table-and-declare-a-reject-limit.html.md.erb |   11 -
 ...ng-and-using-web-external-tables.html.md.erb |   13 -
 ...-with-single-row-error-isolation.html.md.erb |   24 -
 ...ased-writable-external-web-table.html.md.erb |   43 -
 ...le-based-writable-external-table.html.md.erb |   16 -
 ...ermine-the-transformation-schema.html.md.erb |   33 -
 ...-web-or-writable-external-tables.html.md.erb |   11 -
 ...-escaping-in-csv-formatted-files.html.md.erb |   29 -
 ...escaping-in-text-formatted-files.html.md.erb |   31 -
 datamgmt/load/g-escaping.html.md.erb            |   16 -
 ...e-publications-in-demo-directory.html.md.erb |   29 -
 ...example-hawq-file-server-gpfdist.html.md.erb |   13 -
 ...-mef-xml-files-in-demo-directory.html.md.erb |   54 -
 ...e-witsml-files-in-demo-directory.html.md.erb |   54 -
 ...g-examples-read-fixed-width-data.html.md.erb |   37 -
 datamgmt/load/g-external-tables.html.md.erb     |   44 -
 datamgmt/load/g-formatting-columns.html.md.erb  |   19 -
 .../load/g-formatting-data-files.html.md.erb    |   17 -
 datamgmt/load/g-formatting-rows.html.md.erb     |    7 -
 datamgmt/load/g-gpfdist-protocol.html.md.erb    |   15 -
 datamgmt/load/g-gpfdists-protocol.html.md.erb   |   37 -
 ...g-handling-errors-ext-table-data.html.md.erb |    9 -
 .../load/g-handling-load-errors.html.md.erb     |   28 -
 ...id-csv-files-in-error-table-data.html.md.erb |    7 -
 ...g-and-exporting-fixed-width-data.html.md.erb |   38 -
 datamgmt/load/g-installing-gpfdist.html.md.erb  |    7 -
 datamgmt/load/g-load-the-data.html.md.erb       |   17 -
 .../g-loading-and-unloading-data.html.md.erb    |   55 -
 ...and-writing-non-hdfs-custom-data.html.md.erb |    9 -
 ...ing-data-using-an-external-table.html.md.erb |   18 -
 .../load/g-loading-data-with-copy.html.md.erb   |   11 -
 .../g-loading-data-with-hawqload.html.md.erb    |   56 -
 .../g-moving-data-between-tables.html.md.erb    |   12 -
 ...-data-load-and-query-performance.html.md.erb |   10 -
 datamgmt/load/g-register_files.html.md.erb      |  217 --
 .../load/g-representing-null-values.html.md.erb |    7 -
 ...-single-row-error-isolation-mode.html.md.erb |   17 -
 .../g-starting-and-stopping-gpfdist.html.md.erb |   42 -
 .../g-transfer-and-store-the-data.html.md.erb   |   16 -
 .../load/g-transforming-with-gpload.html.md.erb |   30 -
 ...ing-with-insert-into-select-from.html.md.erb |   22 -
 .../load/g-transforming-xml-data.html.md.erb    |   34 -
 .../load/g-troubleshooting-gpfdist.html.md.erb  |   23 -
 ...nloading-data-from-hawq-database.html.md.erb |   17 -
 ...-using-a-writable-external-table.html.md.erb |   17 -
 .../g-unloading-data-using-copy.html.md.erb     |   12 -
 .../g-url-based-web-external-tables.html.md.erb |   24 -
 .../load/g-using-a-custom-format.html.md.erb    |   23 -
 ...g-the-hawq-file-server--gpfdist-.html.md.erb |   19 -
 ...rking-with-file-based-ext-tables.html.md.erb |   21 -
 datamgmt/load/g-write-a-transform.html.md.erb   |   48 -
 ...-write-the-gpfdist-configuration.html.md.erb |   61 -
 .../g-xml-transformation-examples.html.md.erb   |   13 -
 ddl/ddl-database.html.md.erb                    |   78 -
 ddl/ddl-partition.html.md.erb                   |  483 ---
 ddl/ddl-schema.html.md.erb                      |   88 -
 ddl/ddl-storage.html.md.erb                     |   71 -
 ddl/ddl-table.html.md.erb                       |  149 -
 ddl/ddl-tablespace.html.md.erb                  |  154 -
 ddl/ddl-view.html.md.erb                        |   25 -
 ddl/ddl.html.md.erb                             |   19 -
 images/02-pipeline.png                          |  Bin 40864 -> 0 bytes
 images/03-gpload-files.jpg                      |  Bin 38954 -> 0 bytes
 images/basic_query_flow.png                     |  Bin 74709 -> 0 bytes
 images/ext-tables-xml.png                       |  Bin 92048 -> 0 bytes
 images/ext_tables.jpg                           |  Bin 65371 -> 0 bytes
 images/ext_tables_multinic.jpg                  |  Bin 24394 -> 0 bytes
 images/gangs.jpg                                |  Bin 30405 -> 0 bytes
 images/gporca.png                               |  Bin 53323 -> 0 bytes
 images/hawq_hcatalog.png                        |  Bin 120047 -> 0 bytes
 images/slice_plan.jpg                           |  Bin 53086 -> 0 bytes
 install/aws-config.html.md.erb                  |  123 -
 install/select-hosts.html.md.erb                |   19 -
 ...ckingUpandRestoringHAWQDatabases.html.md.erb |  373 ++
 markdown/admin/ClusterExpansion.html.md.erb     |  226 ++
 markdown/admin/ClusterShrink.html.md.erb        |   55 +
 markdown/admin/FaultTolerance.html.md.erb       |   52 +
 ...esandHighAvailabilityEnabledHDFS.html.md.erb |  223 ++
 markdown/admin/HighAvailability.html.md.erb     |   37 +
 markdown/admin/MasterMirroring.html.md.erb      |  144 +
 .../RecommendedMonitoringTasks.html.md.erb      |  259 ++
 markdown/admin/RunningHAWQ.html.md.erb          |   37 +
 markdown/admin/ambari-admin.html.md.erb         |  439 +++
 markdown/admin/ambari-rest-api.html.md.erb      |  163 +
 markdown/admin/maintain.html.md.erb             |   31 +
 markdown/admin/monitor.html.md.erb              |  444 +++
 markdown/admin/setuphawqopenv.html.md.erb       |   81 +
 markdown/admin/startstop.html.md.erb            |  146 +
 .../HAWQBestPracticesOverview.html.md.erb       |   28 +
 .../general_bestpractices.html.md.erb           |   26 +
 .../managing_data_bestpractices.html.md.erb     |   47 +
 ...managing_resources_bestpractices.html.md.erb |  144 +
 .../operating_hawq_bestpractices.html.md.erb    |  298 ++
 .../querying_data_bestpractices.html.md.erb     |   43 +
 .../secure_bestpractices.html.md.erb            |   11 +
 markdown/clientaccess/client_auth.html.md.erb   |  193 ++
 .../clientaccess/disable-kerberos.html.md.erb   |   85 +
 .../g-connecting-with-psql.html.md.erb          |   35 +
 ...-database-application-interfaces.html.md.erb |   96 +
 ...-establishing-a-database-session.html.md.erb |   17 +
 ...awq-database-client-applications.html.md.erb |   23 +
 .../g-supported-client-applications.html.md.erb |    8 +
 ...ubleshooting-connection-problems.html.md.erb |   13 +
 markdown/clientaccess/index.md.erb              |   17 +
 markdown/clientaccess/kerberos.html.md.erb      |  308 ++
 markdown/clientaccess/ldap.html.md.erb          |  116 +
 markdown/clientaccess/roles_privs.html.md.erb   |  285 ++
 .../datamgmt/BasicDataOperations.html.md.erb    |   64 +
 .../datamgmt/ConcurrencyControl.html.md.erb     |   24 +
 .../HAWQInputFormatforMapReduce.html.md.erb     |  304 ++
 markdown/datamgmt/Transactions.html.md.erb      |   54 +
 markdown/datamgmt/about_statistics.html.md.erb  |  209 ++
 markdown/datamgmt/dml.html.md.erb               |   35 +
 .../datamgmt/load/client-loadtools.html.md.erb  |  104 +
 ...reating-external-tables-examples.html.md.erb |  117 +
 ...ut-gpfdist-setup-and-performance.html.md.erb |   22 +
 .../load/g-character-encoding.html.md.erb       |   11 +
 ...ommand-based-web-external-tables.html.md.erb |   26 +
 .../g-configuration-file-format.html.md.erb     |   66 +
 ...-controlling-segment-parallelism.html.md.erb |   11 +
 ...table-and-declare-a-reject-limit.html.md.erb |   11 +
 ...ng-and-using-web-external-tables.html.md.erb |   13 +
 ...-with-single-row-error-isolation.html.md.erb |   24 +
 ...ased-writable-external-web-table.html.md.erb |   43 +
 ...le-based-writable-external-table.html.md.erb |   16 +
 ...ermine-the-transformation-schema.html.md.erb |   33 +
 ...-web-or-writable-external-tables.html.md.erb |   11 +
 ...-escaping-in-csv-formatted-files.html.md.erb |   29 +
 ...escaping-in-text-formatted-files.html.md.erb |   31 +
 markdown/datamgmt/load/g-escaping.html.md.erb   |   16 +
 ...e-publications-in-demo-directory.html.md.erb |   29 +
 ...example-hawq-file-server-gpfdist.html.md.erb |   13 +
 ...-mef-xml-files-in-demo-directory.html.md.erb |   54 +
 ...e-witsml-files-in-demo-directory.html.md.erb |   54 +
 ...g-examples-read-fixed-width-data.html.md.erb |   37 +
 .../datamgmt/load/g-external-tables.html.md.erb |   44 +
 .../load/g-formatting-columns.html.md.erb       |   19 +
 .../load/g-formatting-data-files.html.md.erb    |   17 +
 .../datamgmt/load/g-formatting-rows.html.md.erb |    7 +
 .../load/g-gpfdist-protocol.html.md.erb         |   15 +
 .../load/g-gpfdists-protocol.html.md.erb        |   37 +
 ...g-handling-errors-ext-table-data.html.md.erb |    9 +
 .../load/g-handling-load-errors.html.md.erb     |   28 +
 ...id-csv-files-in-error-table-data.html.md.erb |    7 +
 ...g-and-exporting-fixed-width-data.html.md.erb |   38 +
 .../load/g-installing-gpfdist.html.md.erb       |    7 +
 .../datamgmt/load/g-load-the-data.html.md.erb   |   17 +
 .../g-loading-and-unloading-data.html.md.erb    |   55 +
 ...and-writing-non-hdfs-custom-data.html.md.erb |    9 +
 ...ing-data-using-an-external-table.html.md.erb |   18 +
 .../load/g-loading-data-with-copy.html.md.erb   |   11 +
 .../g-loading-data-with-hawqload.html.md.erb    |   56 +
 .../g-moving-data-between-tables.html.md.erb    |   12 +
 ...-data-load-and-query-performance.html.md.erb |   10 +
 .../datamgmt/load/g-register_files.html.md.erb  |  217 ++
 .../load/g-representing-null-values.html.md.erb |    7 +
 ...-single-row-error-isolation-mode.html.md.erb |   17 +
 .../g-starting-and-stopping-gpfdist.html.md.erb |   42 +
 .../g-transfer-and-store-the-data.html.md.erb   |   16 +
 .../load/g-transforming-with-gpload.html.md.erb |   30 +
 ...ing-with-insert-into-select-from.html.md.erb |   22 +
 .../load/g-transforming-xml-data.html.md.erb    |   34 +
 .../load/g-troubleshooting-gpfdist.html.md.erb  |   23 +
 ...nloading-data-from-hawq-database.html.md.erb |   17 +
 ...-using-a-writable-external-table.html.md.erb |   17 +
 .../g-unloading-data-using-copy.html.md.erb     |   12 +
 .../g-url-based-web-external-tables.html.md.erb |   24 +
 .../load/g-using-a-custom-format.html.md.erb    |   23 +
 ...g-the-hawq-file-server--gpfdist-.html.md.erb |   19 +
 ...rking-with-file-based-ext-tables.html.md.erb |   21 +
 .../load/g-write-a-transform.html.md.erb        |   48 +
 ...-write-the-gpfdist-configuration.html.md.erb |   61 +
 .../g-xml-transformation-examples.html.md.erb   |   13 +
 markdown/ddl/ddl-database.html.md.erb           |   78 +
 markdown/ddl/ddl-partition.html.md.erb          |  483 +++
 markdown/ddl/ddl-schema.html.md.erb             |   88 +
 markdown/ddl/ddl-storage.html.md.erb            |   71 +
 markdown/ddl/ddl-table.html.md.erb              |  149 +
 markdown/ddl/ddl-tablespace.html.md.erb         |  154 +
 markdown/ddl/ddl-view.html.md.erb               |   25 +
 markdown/ddl/ddl.html.md.erb                    |   19 +
 markdown/images/02-pipeline.png                 |  Bin 0 -> 40864 bytes
 markdown/images/03-gpload-files.jpg             |  Bin 0 -> 38954 bytes
 markdown/images/basic_query_flow.png            |  Bin 0 -> 74709 bytes
 markdown/images/ext-tables-xml.png              |  Bin 0 -> 92048 bytes
 markdown/images/ext_tables.jpg                  |  Bin 0 -> 65371 bytes
 markdown/images/ext_tables_multinic.jpg         |  Bin 0 -> 24394 bytes
 markdown/images/gangs.jpg                       |  Bin 0 -> 30405 bytes
 markdown/images/gporca.png                      |  Bin 0 -> 53323 bytes
 markdown/images/hawq_hcatalog.png               |  Bin 0 -> 120047 bytes
 markdown/images/slice_plan.jpg                  |  Bin 0 -> 53086 bytes
 markdown/install/aws-config.html.md.erb         |  123 +
 markdown/install/select-hosts.html.md.erb       |   19 +
 markdown/mdimages/02-pipeline.png               |  Bin 0 -> 40864 bytes
 markdown/mdimages/03-gpload-files.jpg           |  Bin 0 -> 38954 bytes
 markdown/mdimages/1-assign-masters.tiff         |  Bin 0 -> 248134 bytes
 markdown/mdimages/1-choose-services.tiff        |  Bin 0 -> 258298 bytes
 .../mdimages/3-assign-slaves-and-clients.tiff   |  Bin 0 -> 199176 bytes
 .../mdimages/4-customize-services-hawq.tiff     |  Bin 0 -> 241800 bytes
 markdown/mdimages/5-customize-services-pxf.tiff |  Bin 0 -> 192364 bytes
 markdown/mdimages/6-review.tiff                 |  Bin 0 -> 230890 bytes
 markdown/mdimages/7-install-start-test.tiff     |  Bin 0 -> 204112 bytes
 markdown/mdimages/ext-tables-xml.png            |  Bin 0 -> 92048 bytes
 markdown/mdimages/ext_tables.jpg                |  Bin 0 -> 65371 bytes
 markdown/mdimages/ext_tables_multinic.jpg       |  Bin 0 -> 24394 bytes
 markdown/mdimages/gangs.jpg                     |  Bin 0 -> 30405 bytes
 markdown/mdimages/gp_orca_fallback.png          |  Bin 0 -> 14683 bytes
 markdown/mdimages/gpfdist_instances.png         |  Bin 0 -> 26236 bytes
 markdown/mdimages/gpfdist_instances_backup.png  |  Bin 0 -> 48414 bytes
 markdown/mdimages/gporca.png                    |  Bin 0 -> 53323 bytes
 .../mdimages/hawq_architecture_components.png   |  Bin 0 -> 99650 bytes
 markdown/mdimages/hawq_hcatalog.png             |  Bin 0 -> 120047 bytes
 .../mdimages/hawq_high_level_architecture.png   |  Bin 0 -> 491840 bytes
 markdown/mdimages/partitions.jpg                |  Bin 0 -> 43514 bytes
 markdown/mdimages/piv-opt.png                   |  Bin 0 -> 4823 bytes
 markdown/mdimages/resource_queues.jpg           |  Bin 0 -> 18793 bytes
 markdown/mdimages/slice_plan.jpg                |  Bin 0 -> 53086 bytes
 markdown/mdimages/source/gporca.graffle         |  Bin 0 -> 2814 bytes
 markdown/mdimages/source/hawq_hcatalog.graffle  |  Bin 0 -> 2967 bytes
 markdown/mdimages/standby_master.jpg            |  Bin 0 -> 18180 bytes
 .../svg/hawq_architecture_components.svg        | 1083 ++++++
 markdown/mdimages/svg/hawq_hcatalog.svg         |    3 +
 .../mdimages/svg/hawq_resource_management.svg   |  621 ++++
 markdown/mdimages/svg/hawq_resource_queues.svg  |  340 ++
 markdown/overview/ElasticSegments.html.md.erb   |   31 +
 markdown/overview/HAWQArchitecture.html.md.erb  |   69 +
 markdown/overview/HAWQOverview.html.md.erb      |   43 +
 markdown/overview/HDFSCatalogCache.html.md.erb  |    7 +
 markdown/overview/ManagementTools.html.md.erb   |    9 +
 .../overview/RedundancyFailover.html.md.erb     |   29 +
 .../overview/ResourceManagement.html.md.erb     |   14 +
 .../TableDistributionStorage.html.md.erb        |   41 +
 markdown/overview/system-overview.html.md.erb   |   11 +
 .../plext/UsingProceduralLanguages.html.md.erb  |   23 +
 markdown/plext/builtin_langs.html.md.erb        |  110 +
 markdown/plext/using_pgcrypto.html.md.erb       |   32 +
 markdown/plext/using_pljava.html.md.erb         |  709 ++++
 markdown/plext/using_plperl.html.md.erb         |   27 +
 markdown/plext/using_plpgsql.html.md.erb        |  142 +
 markdown/plext/using_plpython.html.md.erb       |  789 +++++
 markdown/plext/using_plr.html.md.erb            |  229 ++
 markdown/pxf/ConfigurePXF.html.md.erb           |   69 +
 markdown/pxf/HBasePXF.html.md.erb               |  105 +
 markdown/pxf/HDFSFileDataPXF.html.md.erb        |  452 +++
 .../pxf/HawqExtensionFrameworkPXF.html.md.erb   |   45 +
 markdown/pxf/HivePXF.html.md.erb                |  700 ++++
 markdown/pxf/InstallPXFPlugins.html.md.erb      |   81 +
 markdown/pxf/JsonPXF.html.md.erb                |  197 ++
 .../PXFExternalTableandAPIReference.html.md.erb | 1311 +++++++
 markdown/pxf/ReadWritePXF.html.md.erb           |  123 +
 markdown/pxf/TroubleshootingPXF.html.md.erb     |  273 ++
 markdown/query/HAWQQueryProcessing.html.md.erb  |   60 +
 markdown/query/defining-queries.html.md.erb     |  528 +++
 markdown/query/functions-operators.html.md.erb  |  437 +++
 .../gporca/query-gporca-changed.html.md.erb     |   17 +
 .../gporca/query-gporca-enable.html.md.erb      |   95 +
 .../gporca/query-gporca-fallback.html.md.erb    |  142 +
 .../gporca/query-gporca-features.html.md.erb    |  215 ++
 .../gporca/query-gporca-limitations.html.md.erb |   37 +
 .../query/gporca/query-gporca-notes.html.md.erb |   28 +
 .../gporca/query-gporca-optimizer.html.md.erb   |   39 +
 .../gporca/query-gporca-overview.html.md.erb    |   23 +
 markdown/query/query-performance.html.md.erb    |  155 +
 markdown/query/query-profiling.html.md.erb      |  240 ++
 markdown/query/query.html.md.erb                |   37 +
 .../CharacterSetSupportReference.html.md.erb    |  439 +++
 markdown/reference/HAWQDataTypes.html.md.erb    |  139 +
 .../HAWQEnvironmentVariables.html.md.erb        |   97 +
 .../reference/HAWQSampleSiteConfig.html.md.erb  |  120 +
 markdown/reference/HAWQSiteConfig.html.md.erb   |   23 +
 ...SConfigurationParameterReference.html.md.erb |  257 ++
 .../reference/SQLCommandReference.html.md.erb   |  163 +
 .../catalog/catalog_ref-html.html.md.erb        |  143 +
 .../catalog/catalog_ref-tables.html.md.erb      |   68 +
 .../catalog/catalog_ref-views.html.md.erb       |   21 +
 .../reference/catalog/catalog_ref.html.md.erb   |   21 +
 .../gp_configuration_history.html.md.erb        |   23 +
 .../catalog/gp_distribution_policy.html.md.erb  |   18 +
 .../catalog/gp_global_sequence.html.md.erb      |   16 +
 .../catalog/gp_master_mirroring.html.md.erb     |   19 +
 .../gp_persistent_database_node.html.md.erb     |   71 +
 .../gp_persistent_filespace_node.html.md.erb    |   83 +
 .../gp_persistent_relation_node.html.md.erb     |   85 +
 .../gp_persistent_relfile_node.html.md.erb      |   96 +
 .../gp_persistent_tablespace_node.html.md.erb   |   72 +
 .../catalog/gp_relfile_node.html.md.erb         |   19 +
 .../gp_segment_configuration.html.md.erb        |   25 +
 .../catalog/gp_version_at_initdb.html.md.erb    |   17 +
 .../reference/catalog/pg_aggregate.html.md.erb  |   25 +
 markdown/reference/catalog/pg_am.html.md.erb    |   38 +
 markdown/reference/catalog/pg_amop.html.md.erb  |   20 +
 .../reference/catalog/pg_amproc.html.md.erb     |   19 +
 .../reference/catalog/pg_appendonly.html.md.erb |   29 +
 .../reference/catalog/pg_attrdef.html.md.erb    |   19 +
 .../reference/catalog/pg_attribute.html.md.erb  |   32 +
 .../catalog/pg_attribute_encoding.html.md.erb   |   18 +
 .../catalog/pg_auth_members.html.md.erb         |   19 +
 .../reference/catalog/pg_authid.html.md.erb     |   36 +
 markdown/reference/catalog/pg_cast.html.md.erb  |   23 +
 markdown/reference/catalog/pg_class.html.md.erb |  213 ++
 .../catalog/pg_compression.html.md.erb          |   22 +
 .../reference/catalog/pg_constraint.html.md.erb |   30 +
 .../reference/catalog/pg_conversion.html.md.erb |   22 +
 .../reference/catalog/pg_database.html.md.erb   |   26 +
 .../reference/catalog/pg_depend.html.md.erb     |   26 +
 .../catalog/pg_description.html.md.erb          |   17 +
 .../reference/catalog/pg_exttable.html.md.erb   |   23 +
 .../reference/catalog/pg_filespace.html.md.erb  |   19 +
 .../catalog/pg_filespace_entry.html.md.erb      |   18 +
 markdown/reference/catalog/pg_index.html.md.erb |   23 +
 .../reference/catalog/pg_inherits.html.md.erb   |   16 +
 .../reference/catalog/pg_language.html.md.erb   |   21 +
 .../catalog/pg_largeobject.html.md.erb          |   19 +
 .../reference/catalog/pg_listener.html.md.erb   |   20 +
 markdown/reference/catalog/pg_locks.html.md.erb |   35 +
 .../reference/catalog/pg_namespace.html.md.erb  |   18 +
 .../reference/catalog/pg_opclass.html.md.erb    |   22 +
 .../reference/catalog/pg_operator.html.md.erb   |   32 +
 .../reference/catalog/pg_partition.html.md.erb  |   20 +
 .../catalog/pg_partition_columns.html.md.erb    |   20 +
 .../catalog/pg_partition_encoding.html.md.erb   |   18 +
 .../catalog/pg_partition_rule.html.md.erb       |   28 +
 .../catalog/pg_partition_templates.html.md.erb  |   30 +
 .../reference/catalog/pg_partitions.html.md.erb |   30 +
 .../reference/catalog/pg_pltemplate.html.md.erb |   22 +
 markdown/reference/catalog/pg_proc.html.md.erb  |   36 +
 .../reference/catalog/pg_resqueue.html.md.erb   |   30 +
 .../catalog/pg_resqueue_status.html.md.erb      |   94 +
 .../reference/catalog/pg_rewrite.html.md.erb    |   20 +
 markdown/reference/catalog/pg_roles.html.md.erb |   31 +
 .../reference/catalog/pg_shdepend.html.md.erb   |   28 +
 .../catalog/pg_shdescription.html.md.erb        |   18 +
 .../catalog/pg_stat_activity.html.md.erb        |   30 +
 .../catalog/pg_stat_last_operation.html.md.erb  |   21 +
 .../pg_stat_last_shoperation.html.md.erb        |   23 +
 .../catalog/pg_stat_operations.html.md.erb      |   87 +
 .../pg_stat_partition_operations.html.md.erb    |   28 +
 .../reference/catalog/pg_statistic.html.md.erb  |   30 +
 markdown/reference/catalog/pg_stats.html.md.erb |   27 +
 .../reference/catalog/pg_tablespace.html.md.erb |   22 +
 .../reference/catalog/pg_trigger.html.md.erb    |  114 +
 markdown/reference/catalog/pg_type.html.md.erb  |  176 +
 .../catalog/pg_type_encoding.html.md.erb        |   15 +
 .../reference/catalog/pg_window.html.md.erb     |   97 +
 .../cli/admin_utilities/analyzedb.html.md.erb   |  160 +
 .../cli/admin_utilities/gpfdist.html.md.erb     |  157 +
 .../cli/admin_utilities/gplogfilter.html.md.erb |  180 +
 .../admin_utilities/hawqactivate.html.md.erb    |   87 +
 .../cli/admin_utilities/hawqcheck.html.md.erb   |  126 +
 .../admin_utilities/hawqcheckperf.html.md.erb   |  137 +
 .../cli/admin_utilities/hawqconfig.html.md.erb  |  134 +
 .../cli/admin_utilities/hawqextract.html.md.erb |  319 ++
 .../admin_utilities/hawqfilespace.html.md.erb   |  147 +
 .../cli/admin_utilities/hawqinit.html.md.erb    |  156 +
 .../cli/admin_utilities/hawqload.html.md.erb    |  420 +++
 .../admin_utilities/hawqregister.html.md.erb    |  254 ++
 .../cli/admin_utilities/hawqrestart.html.md.erb |  112 +
 .../cli/admin_utilities/hawqscp.html.md.erb     |   95 +
 .../admin_utilities/hawqssh-exkeys.html.md.erb  |  105 +
 .../cli/admin_utilities/hawqssh.html.md.erb     |  105 +
 .../cli/admin_utilities/hawqstart.html.md.erb   |  119 +
 .../cli/admin_utilities/hawqstate.html.md.erb   |   65 +
 .../cli/admin_utilities/hawqstop.html.md.erb    |  104 +
 .../cli/client_utilities/createdb.html.md.erb   |  105 +
 .../cli/client_utilities/createuser.html.md.erb |  158 +
 .../cli/client_utilities/dropdb.html.md.erb     |   86 +
 .../cli/client_utilities/dropuser.html.md.erb   |   78 +
 .../cli/client_utilities/pg_dump.html.md.erb    |  252 ++
 .../cli/client_utilities/pg_dumpall.html.md.erb |  180 +
 .../cli/client_utilities/pg_restore.html.md.erb |  256 ++
 .../cli/client_utilities/psql.html.md.erb       |  760 +++++
 .../cli/client_utilities/vacuumdb.html.md.erb   |  122 +
 .../reference/cli/management_tools.html.md.erb  |   63 +
 .../reference/guc/guc_category-list.html.md.erb |  418 +++
 markdown/reference/guc/guc_config.html.md.erb   |   77 +
 .../guc/parameter_definitions.html.md.erb       | 3196 ++++++++++++++++++
 markdown/reference/hawq-reference.html.md.erb   |   43 +
 markdown/reference/sql/ABORT.html.md.erb        |   37 +
 .../reference/sql/ALTER-AGGREGATE.html.md.erb   |   68 +
 .../reference/sql/ALTER-DATABASE.html.md.erb    |   52 +
 .../reference/sql/ALTER-FUNCTION.html.md.erb    |  108 +
 .../sql/ALTER-OPERATOR-CLASS.html.md.erb        |   43 +
 .../reference/sql/ALTER-OPERATOR.html.md.erb    |   50 +
 .../sql/ALTER-RESOURCE-QUEUE.html.md.erb        |  132 +
 markdown/reference/sql/ALTER-ROLE.html.md.erb   |  178 +
 markdown/reference/sql/ALTER-TABLE.html.md.erb  |  422 +++
 .../reference/sql/ALTER-TABLESPACE.html.md.erb  |   55 +
 markdown/reference/sql/ALTER-TYPE.html.md.erb   |   54 +
 markdown/reference/sql/ALTER-USER.html.md.erb   |   44 +
 markdown/reference/sql/ANALYZE.html.md.erb      |   75 +
 markdown/reference/sql/BEGIN.html.md.erb        |   58 +
 markdown/reference/sql/CHECKPOINT.html.md.erb   |   23 +
 markdown/reference/sql/CLOSE.html.md.erb        |   45 +
 markdown/reference/sql/COMMIT.html.md.erb       |   43 +
 markdown/reference/sql/COPY.html.md.erb         |  256 ++
 .../reference/sql/CREATE-AGGREGATE.html.md.erb  |  162 +
 .../reference/sql/CREATE-DATABASE.html.md.erb   |   86 +
 .../sql/CREATE-EXTERNAL-TABLE.html.md.erb       |  333 ++
 .../reference/sql/CREATE-FUNCTION.html.md.erb   |  190 ++
 markdown/reference/sql/CREATE-GROUP.html.md.erb |   43 +
 .../reference/sql/CREATE-LANGUAGE.html.md.erb   |   93 +
 .../sql/CREATE-OPERATOR-CLASS.html.md.erb       |  153 +
 .../reference/sql/CREATE-OPERATOR.html.md.erb   |  171 +
 .../sql/CREATE-RESOURCE-QUEUE.html.md.erb       |  139 +
 markdown/reference/sql/CREATE-ROLE.html.md.erb  |  196 ++
 .../reference/sql/CREATE-SCHEMA.html.md.erb     |   63 +
 .../reference/sql/CREATE-SEQUENCE.html.md.erb   |  135 +
 .../reference/sql/CREATE-TABLE-AS.html.md.erb   |  126 +
 markdown/reference/sql/CREATE-TABLE.html.md.erb |  455 +++
 .../reference/sql/CREATE-TABLESPACE.html.md.erb |   58 +
 markdown/reference/sql/CREATE-TYPE.html.md.erb  |  185 +
 markdown/reference/sql/CREATE-USER.html.md.erb  |   46 +
 markdown/reference/sql/CREATE-VIEW.html.md.erb  |   88 +
 markdown/reference/sql/DEALLOCATE.html.md.erb   |   42 +
 markdown/reference/sql/DECLARE.html.md.erb      |   84 +
 .../reference/sql/DROP-AGGREGATE.html.md.erb    |   48 +
 .../reference/sql/DROP-DATABASE.html.md.erb     |   48 +
 .../sql/DROP-EXTERNAL-TABLE.html.md.erb         |   48 +
 .../reference/sql/DROP-FILESPACE.html.md.erb    |   42 +
 .../reference/sql/DROP-FUNCTION.html.md.erb     |   55 +
 markdown/reference/sql/DROP-GROUP.html.md.erb   |   31 +
 .../reference/sql/DROP-LANGUAGE.html.md.erb     |   49 +
 .../sql/DROP-OPERATOR-CLASS.html.md.erb         |   54 +
 .../reference/sql/DROP-OPERATOR.html.md.erb     |   64 +
 markdown/reference/sql/DROP-OWNED.html.md.erb   |   50 +
 .../sql/DROP-RESOURCE-QUEUE.html.md.erb         |   65 +
 markdown/reference/sql/DROP-ROLE.html.md.erb    |   43 +
 markdown/reference/sql/DROP-SCHEMA.html.md.erb  |   45 +
 .../reference/sql/DROP-SEQUENCE.html.md.erb     |   45 +
 markdown/reference/sql/DROP-TABLE.html.md.erb   |   47 +
 .../reference/sql/DROP-TABLESPACE.html.md.erb   |   42 +
 markdown/reference/sql/DROP-TYPE.html.md.erb    |   45 +
 markdown/reference/sql/DROP-USER.html.md.erb    |   31 +
 markdown/reference/sql/DROP-VIEW.html.md.erb    |   45 +
 markdown/reference/sql/END.html.md.erb          |   37 +
 markdown/reference/sql/EXECUTE.html.md.erb      |   45 +
 markdown/reference/sql/EXPLAIN.html.md.erb      |   96 +
 markdown/reference/sql/FETCH.html.md.erb        |  146 +
 markdown/reference/sql/GRANT.html.md.erb        |  180 +
 markdown/reference/sql/INSERT.html.md.erb       |  111 +
 markdown/reference/sql/PREPARE.html.md.erb      |   67 +
 .../reference/sql/REASSIGN-OWNED.html.md.erb    |   48 +
 .../reference/sql/RELEASE-SAVEPOINT.html.md.erb |   48 +
 markdown/reference/sql/RESET.html.md.erb        |   45 +
 markdown/reference/sql/REVOKE.html.md.erb       |  101 +
 .../sql/ROLLBACK-TO-SAVEPOINT.html.md.erb       |   77 +
 markdown/reference/sql/ROLLBACK.html.md.erb     |   43 +
 markdown/reference/sql/SAVEPOINT.html.md.erb    |   66 +
 markdown/reference/sql/SELECT-INTO.html.md.erb  |   55 +
 markdown/reference/sql/SELECT.html.md.erb       |  507 +++
 markdown/reference/sql/SET-ROLE.html.md.erb     |   72 +
 .../sql/SET-SESSION-AUTHORIZATION.html.md.erb   |   66 +
 markdown/reference/sql/SET.html.md.erb          |   87 +
 markdown/reference/sql/SHOW.html.md.erb         |   47 +
 markdown/reference/sql/TRUNCATE.html.md.erb     |   52 +
 markdown/reference/sql/VACUUM.html.md.erb       |   96 +
 .../reference/toolkit/hawq_toolkit.html.md.erb  |  263 ++
 .../system-requirements.html.md.erb             |  239 ++
 .../ConfigureResourceManagement.html.md.erb     |  120 +
 .../HAWQResourceManagement.html.md.erb          |   69 +
 .../ResourceManagerStatus.html.md.erb           |  152 +
 .../resourcemgmt/ResourceQueues.html.md.erb     |  204 ++
 .../resourcemgmt/YARNIntegration.html.md.erb    |  252 ++
 .../resourcemgmt/best-practices.html.md.erb     |   15 +
 markdown/resourcemgmt/index.md.erb              |   12 +
 .../troubleshooting/Troubleshooting.html.md.erb |  101 +
 mdimages/02-pipeline.png                        |  Bin 40864 -> 0 bytes
 mdimages/03-gpload-files.jpg                    |  Bin 38954 -> 0 bytes
 mdimages/1-assign-masters.tiff                  |  Bin 248134 -> 0 bytes
 mdimages/1-choose-services.tiff                 |  Bin 258298 -> 0 bytes
 mdimages/3-assign-slaves-and-clients.tiff       |  Bin 199176 -> 0 bytes
 mdimages/4-customize-services-hawq.tiff         |  Bin 241800 -> 0 bytes
 mdimages/5-customize-services-pxf.tiff          |  Bin 192364 -> 0 bytes
 mdimages/6-review.tiff                          |  Bin 230890 -> 0 bytes
 mdimages/7-install-start-test.tiff              |  Bin 204112 -> 0 bytes
 mdimages/ext-tables-xml.png                     |  Bin 92048 -> 0 bytes
 mdimages/ext_tables.jpg                         |  Bin 65371 -> 0 bytes
 mdimages/ext_tables_multinic.jpg                |  Bin 24394 -> 0 bytes
 mdimages/gangs.jpg                              |  Bin 30405 -> 0 bytes
 mdimages/gp_orca_fallback.png                   |  Bin 14683 -> 0 bytes
 mdimages/gpfdist_instances.png                  |  Bin 26236 -> 0 bytes
 mdimages/gpfdist_instances_backup.png           |  Bin 48414 -> 0 bytes
 mdimages/gporca.png                             |  Bin 53323 -> 0 bytes
 mdimages/hawq_architecture_components.png       |  Bin 99650 -> 0 bytes
 mdimages/hawq_hcatalog.png                      |  Bin 120047 -> 0 bytes
 mdimages/hawq_high_level_architecture.png       |  Bin 491840 -> 0 bytes
 mdimages/partitions.jpg                         |  Bin 43514 -> 0 bytes
 mdimages/piv-opt.png                            |  Bin 4823 -> 0 bytes
 mdimages/resource_queues.jpg                    |  Bin 18793 -> 0 bytes
 mdimages/slice_plan.jpg                         |  Bin 53086 -> 0 bytes
 mdimages/source/gporca.graffle                  |  Bin 2814 -> 0 bytes
 mdimages/source/hawq_hcatalog.graffle           |  Bin 2967 -> 0 bytes
 mdimages/standby_master.jpg                     |  Bin 18180 -> 0 bytes
 mdimages/svg/hawq_architecture_components.svg   | 1083 ------
 mdimages/svg/hawq_hcatalog.svg                  |    3 -
 mdimages/svg/hawq_resource_management.svg       |  621 ----
 mdimages/svg/hawq_resource_queues.svg           |  340 --
 overview/ElasticSegments.html.md.erb            |   31 -
 overview/HAWQArchitecture.html.md.erb           |   69 -
 overview/HAWQOverview.html.md.erb               |   43 -
 overview/HDFSCatalogCache.html.md.erb           |    7 -
 overview/ManagementTools.html.md.erb            |    9 -
 overview/RedundancyFailover.html.md.erb         |   29 -
 overview/ResourceManagement.html.md.erb         |   14 -
 overview/TableDistributionStorage.html.md.erb   |   41 -
 overview/system-overview.html.md.erb            |   11 -
 plext/UsingProceduralLanguages.html.md.erb      |   23 -
 plext/builtin_langs.html.md.erb                 |  110 -
 plext/using_pgcrypto.html.md.erb                |   32 -
 plext/using_pljava.html.md.erb                  |  709 ----
 plext/using_plperl.html.md.erb                  |   27 -
 plext/using_plpgsql.html.md.erb                 |  142 -
 plext/using_plpython.html.md.erb                |  789 -----
 plext/using_plr.html.md.erb                     |  229 --
 pxf/ConfigurePXF.html.md.erb                    |   69 -
 pxf/HBasePXF.html.md.erb                        |  105 -
 pxf/HDFSFileDataPXF.html.md.erb                 |  452 ---
 pxf/HawqExtensionFrameworkPXF.html.md.erb       |   45 -
 pxf/HivePXF.html.md.erb                         |  700 ----
 pxf/InstallPXFPlugins.html.md.erb               |   81 -
 pxf/JsonPXF.html.md.erb                         |  197 --
 pxf/PXFExternalTableandAPIReference.html.md.erb | 1311 -------
 pxf/ReadWritePXF.html.md.erb                    |  123 -
 pxf/TroubleshootingPXF.html.md.erb              |  273 --
 query/HAWQQueryProcessing.html.md.erb           |   60 -
 query/defining-queries.html.md.erb              |  528 ---
 query/functions-operators.html.md.erb           |  437 ---
 query/gporca/query-gporca-changed.html.md.erb   |   17 -
 query/gporca/query-gporca-enable.html.md.erb    |   95 -
 query/gporca/query-gporca-fallback.html.md.erb  |  142 -
 query/gporca/query-gporca-features.html.md.erb  |  215 --
 .../gporca/query-gporca-limitations.html.md.erb |   37 -
 query/gporca/query-gporca-notes.html.md.erb     |   28 -
 query/gporca/query-gporca-optimizer.html.md.erb |   39 -
 query/gporca/query-gporca-overview.html.md.erb  |   23 -
 query/query-performance.html.md.erb             |  155 -
 query/query-profiling.html.md.erb               |  240 --
 query/query.html.md.erb                         |   37 -
 .../CharacterSetSupportReference.html.md.erb    |  439 ---
 reference/HAWQDataTypes.html.md.erb             |  139 -
 reference/HAWQEnvironmentVariables.html.md.erb  |   97 -
 reference/HAWQSampleSiteConfig.html.md.erb      |  120 -
 reference/HAWQSiteConfig.html.md.erb            |   23 -
 ...SConfigurationParameterReference.html.md.erb |  257 --
 reference/SQLCommandReference.html.md.erb       |  163 -
 reference/catalog/catalog_ref-html.html.md.erb  |  143 -
 .../catalog/catalog_ref-tables.html.md.erb      |   68 -
 reference/catalog/catalog_ref-views.html.md.erb |   21 -
 reference/catalog/catalog_ref.html.md.erb       |   21 -
 .../gp_configuration_history.html.md.erb        |   23 -
 .../catalog/gp_distribution_policy.html.md.erb  |   18 -
 .../catalog/gp_global_sequence.html.md.erb      |   16 -
 .../catalog/gp_master_mirroring.html.md.erb     |   19 -
 .../gp_persistent_database_node.html.md.erb     |   71 -
 .../gp_persistent_filespace_node.html.md.erb    |   83 -
 .../gp_persistent_relation_node.html.md.erb     |   85 -
 .../gp_persistent_relfile_node.html.md.erb      |   96 -
 .../gp_persistent_tablespace_node.html.md.erb   |   72 -
 reference/catalog/gp_relfile_node.html.md.erb   |   19 -
 .../gp_segment_configuration.html.md.erb        |   25 -
 .../catalog/gp_version_at_initdb.html.md.erb    |   17 -
 reference/catalog/pg_aggregate.html.md.erb      |   25 -
 reference/catalog/pg_am.html.md.erb             |   38 -
 reference/catalog/pg_amop.html.md.erb           |   20 -
 reference/catalog/pg_amproc.html.md.erb         |   19 -
 reference/catalog/pg_appendonly.html.md.erb     |   29 -
 reference/catalog/pg_attrdef.html.md.erb        |   19 -
 reference/catalog/pg_attribute.html.md.erb      |   32 -
 .../catalog/pg_attribute_encoding.html.md.erb   |   18 -
 reference/catalog/pg_auth_members.html.md.erb   |   19 -
 reference/catalog/pg_authid.html.md.erb         |   36 -
 reference/catalog/pg_cast.html.md.erb           |   23 -
 reference/catalog/pg_class.html.md.erb          |  213 --
 reference/catalog/pg_compression.html.md.erb    |   22 -
 reference/catalog/pg_constraint.html.md.erb     |   30 -
 reference/catalog/pg_conversion.html.md.erb     |   22 -
 reference/catalog/pg_database.html.md.erb       |   26 -
 reference/catalog/pg_depend.html.md.erb         |   26 -
 reference/catalog/pg_description.html.md.erb    |   17 -
 reference/catalog/pg_exttable.html.md.erb       |   23 -
 reference/catalog/pg_filespace.html.md.erb      |   19 -
 .../catalog/pg_filespace_entry.html.md.erb      |   18 -
 reference/catalog/pg_index.html.md.erb          |   23 -
 reference/catalog/pg_inherits.html.md.erb       |   16 -
 reference/catalog/pg_language.html.md.erb       |   21 -
 reference/catalog/pg_largeobject.html.md.erb    |   19 -
 reference/catalog/pg_listener.html.md.erb       |   20 -
 reference/catalog/pg_locks.html.md.erb          |   35 -
 reference/catalog/pg_namespace.html.md.erb      |   18 -
 reference/catalog/pg_opclass.html.md.erb        |   22 -
 reference/catalog/pg_operator.html.md.erb       |   32 -
 reference/catalog/pg_partition.html.md.erb      |   20 -
 .../catalog/pg_partition_columns.html.md.erb    |   20 -
 .../catalog/pg_partition_encoding.html.md.erb   |   18 -
 reference/catalog/pg_partition_rule.html.md.erb |   28 -
 .../catalog/pg_partition_templates.html.md.erb  |   30 -
 reference/catalog/pg_partitions.html.md.erb     |   30 -
 reference/catalog/pg_pltemplate.html.md.erb     |   22 -
 reference/catalog/pg_proc.html.md.erb           |   36 -
 reference/catalog/pg_resqueue.html.md.erb       |   30 -
 .../catalog/pg_resqueue_status.html.md.erb      |   94 -
 reference/catalog/pg_rewrite.html.md.erb        |   20 -
 reference/catalog/pg_roles.html.md.erb          |   31 -
 reference/catalog/pg_shdepend.html.md.erb       |   28 -
 reference/catalog/pg_shdescription.html.md.erb  |   18 -
 reference/catalog/pg_stat_activity.html.md.erb  |   30 -
 .../catalog/pg_stat_last_operation.html.md.erb  |   21 -
 .../pg_stat_last_shoperation.html.md.erb        |   23 -
 .../catalog/pg_stat_operations.html.md.erb      |   87 -
 .../pg_stat_partition_operations.html.md.erb    |   28 -
 reference/catalog/pg_statistic.html.md.erb      |   30 -
 reference/catalog/pg_stats.html.md.erb          |   27 -
 reference/catalog/pg_tablespace.html.md.erb     |   22 -
 reference/catalog/pg_trigger.html.md.erb        |  114 -
 reference/catalog/pg_type.html.md.erb           |  176 -
 reference/catalog/pg_type_encoding.html.md.erb  |   15 -
 reference/catalog/pg_window.html.md.erb         |   97 -
 .../cli/admin_utilities/analyzedb.html.md.erb   |  160 -
 .../cli/admin_utilities/gpfdist.html.md.erb     |  157 -
 .../cli/admin_utilities/gplogfilter.html.md.erb |  180 -
 .../admin_utilities/hawqactivate.html.md.erb    |   87 -
 .../cli/admin_utilities/hawqcheck.html.md.erb   |  126 -
 .../admin_utilities/hawqcheckperf.html.md.erb   |  137 -
 .../cli/admin_utilities/hawqconfig.html.md.erb  |  134 -
 .../cli/admin_utilities/hawqextract.html.md.erb |  319 --
 .../admin_utilities/hawqfilespace.html.md.erb   |  147 -
 .../cli/admin_utilities/hawqinit.html.md.erb    |  156 -
 .../cli/admin_utilities/hawqload.html.md.erb    |  420 ---
 .../admin_utilities/hawqregister.html.md.erb    |  254 --
 .../cli/admin_utilities/hawqrestart.html.md.erb |  112 -
 .../cli/admin_utilities/hawqscp.html.md.erb     |   95 -
 .../admin_utilities/hawqssh-exkeys.html.md.erb  |  105 -
 .../cli/admin_utilities/hawqssh.html.md.erb     |  105 -
 .../cli/admin_utilities/hawqstart.html.md.erb   |  119 -
 .../cli/admin_utilities/hawqstate.html.md.erb   |   65 -
 .../cli/admin_utilities/hawqstop.html.md.erb    |  104 -
 .../cli/client_utilities/createdb.html.md.erb   |  105 -
 .../cli/client_utilities/createuser.html.md.erb |  158 -
 .../cli/client_utilities/dropdb.html.md.erb     |   86 -
 .../cli/client_utilities/dropuser.html.md.erb   |   78 -
 .../cli/client_utilities/pg_dump.html.md.erb    |  252 --
 .../cli/client_utilities/pg_dumpall.html.md.erb |  180 -
 .../cli/client_utilities/pg_restore.html.md.erb |  256 --
 reference/cli/client_utilities/psql.html.md.erb |  760 -----
 .../cli/client_utilities/vacuumdb.html.md.erb   |  122 -
 reference/cli/management_tools.html.md.erb      |   63 -
 reference/guc/guc_category-list.html.md.erb     |  418 ---
 reference/guc/guc_config.html.md.erb            |   77 -
 reference/guc/parameter_definitions.html.md.erb | 3196 ------------------
 reference/hawq-reference.html.md.erb            |   43 -
 reference/sql/ABORT.html.md.erb                 |   37 -
 reference/sql/ALTER-AGGREGATE.html.md.erb       |   68 -
 reference/sql/ALTER-DATABASE.html.md.erb        |   52 -
 reference/sql/ALTER-FUNCTION.html.md.erb        |  108 -
 reference/sql/ALTER-OPERATOR-CLASS.html.md.erb  |   43 -
 reference/sql/ALTER-OPERATOR.html.md.erb        |   50 -
 reference/sql/ALTER-RESOURCE-QUEUE.html.md.erb  |  132 -
 reference/sql/ALTER-ROLE.html.md.erb            |  178 -
 reference/sql/ALTER-TABLE.html.md.erb           |  422 ---
 reference/sql/ALTER-TABLESPACE.html.md.erb      |   55 -
 reference/sql/ALTER-TYPE.html.md.erb            |   54 -
 reference/sql/ALTER-USER.html.md.erb            |   44 -
 reference/sql/ANALYZE.html.md.erb               |   75 -
 reference/sql/BEGIN.html.md.erb                 |   58 -
 reference/sql/CHECKPOINT.html.md.erb            |   23 -
 reference/sql/CLOSE.html.md.erb                 |   45 -
 reference/sql/COMMIT.html.md.erb                |   43 -
 reference/sql/COPY.html.md.erb                  |  256 --
 reference/sql/CREATE-AGGREGATE.html.md.erb      |  162 -
 reference/sql/CREATE-DATABASE.html.md.erb       |   86 -
 reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb |  333 --
 reference/sql/CREATE-FUNCTION.html.md.erb       |  190 --
 reference/sql/CREATE-GROUP.html.md.erb          |   43 -
 reference/sql/CREATE-LANGUAGE.html.md.erb       |   93 -
 reference/sql/CREATE-OPERATOR-CLASS.html.md.erb |  153 -
 reference/sql/CREATE-OPERATOR.html.md.erb       |  171 -
 reference/sql/CREATE-RESOURCE-QUEUE.html.md.erb |  139 -
 reference/sql/CREATE-ROLE.html.md.erb           |  196 --
 reference/sql/CREATE-SCHEMA.html.md.erb         |   63 -
 reference/sql/CREATE-SEQUENCE.html.md.erb       |  135 -
 reference/sql/CREATE-TABLE-AS.html.md.erb       |  126 -
 reference/sql/CREATE-TABLE.html.md.erb          |  455 ---
 reference/sql/CREATE-TABLESPACE.html.md.erb     |   58 -
 reference/sql/CREATE-TYPE.html.md.erb           |  185 -
 reference/sql/CREATE-USER.html.md.erb           |   46 -
 reference/sql/CREATE-VIEW.html.md.erb           |   88 -
 reference/sql/DEALLOCATE.html.md.erb            |   42 -
 reference/sql/DECLARE.html.md.erb               |   84 -
 reference/sql/DROP-AGGREGATE.html.md.erb        |   48 -
 reference/sql/DROP-DATABASE.html.md.erb         |   48 -
 reference/sql/DROP-EXTERNAL-TABLE.html.md.erb   |   48 -
 reference/sql/DROP-FILESPACE.html.md.erb        |   42 -
 reference/sql/DROP-FUNCTION.html.md.erb         |   55 -
 reference/sql/DROP-GROUP.html.md.erb            |   31 -
 reference/sql/DROP-LANGUAGE.html.md.erb         |   49 -
 reference/sql/DROP-OPERATOR-CLASS.html.md.erb   |   54 -
 reference/sql/DROP-OPERATOR.html.md.erb         |   64 -
 reference/sql/DROP-OWNED.html.md.erb            |   50 -
 reference/sql/DROP-RESOURCE-QUEUE.html.md.erb   |   65 -
 reference/sql/DROP-ROLE.html.md.erb             |   43 -
 reference/sql/DROP-SCHEMA.html.md.erb           |   45 -
 reference/sql/DROP-SEQUENCE.html.md.erb         |   45 -
 reference/sql/DROP-TABLE.html.md.erb            |   47 -
 reference/sql/DROP-TABLESPACE.html.md.erb       |   42 -
 reference/sql/DROP-TYPE.html.md.erb             |   45 -
 reference/sql/DROP-USER.html.md.erb             |   31 -
 reference/sql/DROP-VIEW.html.md.erb             |   45 -
 reference/sql/END.html.md.erb                   |   37 -
 reference/sql/EXECUTE.html.md.erb               |   45 -
 reference/sql/EXPLAIN.html.md.erb               |   96 -
 reference/sql/FETCH.html.md.erb                 |  146 -
 reference/sql/GRANT.html.md.erb                 |  180 -
 reference/sql/INSERT.html.md.erb                |  111 -
 reference/sql/PREPARE.html.md.erb               |   67 -
 reference/sql/REASSIGN-OWNED.html.md.erb        |   48 -
 reference/sql/RELEASE-SAVEPOINT.html.md.erb     |   48 -
 reference/sql/RESET.html.md.erb                 |   45 -
 reference/sql/REVOKE.html.md.erb                |  101 -
 reference/sql/ROLLBACK-TO-SAVEPOINT.html.md.erb |   77 -
 reference/sql/ROLLBACK.html.md.erb              |   43 -
 reference/sql/SAVEPOINT.html.md.erb             |   66 -
 reference/sql/SELECT-INTO.html.md.erb           |   55 -
 reference/sql/SELECT.html.md.erb                |  507 ---
 reference/sql/SET-ROLE.html.md.erb              |   72 -
 .../sql/SET-SESSION-AUTHORIZATION.html.md.erb   |   66 -
 reference/sql/SET.html.md.erb                   |   87 -
 reference/sql/SHOW.html.md.erb                  |   47 -
 reference/sql/TRUNCATE.html.md.erb              |   52 -
 reference/sql/VACUUM.html.md.erb                |   96 -
 reference/toolkit/hawq_toolkit.html.md.erb      |  263 --
 requirements/system-requirements.html.md.erb    |  239 --
 .../ConfigureResourceManagement.html.md.erb     |  120 -
 resourcemgmt/HAWQResourceManagement.html.md.erb |   69 -
 resourcemgmt/ResourceManagerStatus.html.md.erb  |  152 -
 resourcemgmt/ResourceQueues.html.md.erb         |  204 --
 resourcemgmt/YARNIntegration.html.md.erb        |  252 --
 resourcemgmt/best-practices.html.md.erb         |   15 -
 resourcemgmt/index.md.erb                       |   12 -
 troubleshooting/Troubleshooting.html.md.erb     |  101 -
 804 files changed, 42071 insertions(+), 39858 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/README.md
----------------------------------------------------------------------
diff --git a/README.md b/README.md
index ed629f3..331d272 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,8 @@
 # Apache HAWQ (incubating) End-User Documentation
 
-This repository provides the full source for Apache HAWQ (incubating) end-user documentation in MarkDown format. The source files can be built into HTML output using [Bookbinder](https://github.com/cloudfoundry-incubator/bookbinder) or other MarkDown tools.
+This repository provides the full source for Apache HAWQ (incubating) end-user documentation in MarkDown format. You can build the source files into HTML by using [Bookbinder](https://github.com/cloudfoundry-incubator/bookbinder) or other MarkDown tools.
 
-Bookbinder is a gem that binds together a unified documentation web application from markdown, html, and/or DITA source material. The source material for bookbinder must be stored either in local directories or in GitHub repositories. Bookbinder runs [middleman](http://middlemanapp.com/) to produce a Rackup app that can be deployed locally or as a Web application.
+Bookbinder is a Ruby gem that binds together a unified documentation web application from markdown, html, and/or DITA source material. The source material for bookbinder must be stored either in local directories or in GitHub repositories. Bookbinder runs [middleman](http://middlemanapp.com/) to produce a Rackup app that can be deployed locally or as a Web application.
 
 This document contains instructions for building the local Apache HAWQ (incubating) documentation. It contains the sections:
 
@@ -15,38 +15,47 @@ This document contains instructions for building the local Apache HAWQ (incubati
 <a name="usage"></a>
 ## Bookbinder Usage
 
-Bookbinder is meant to be used from within a project called a **book**. The book includes a configuration file that describes which documentation repositories to use as source materials. Bookbinder provides a set of scripts to aggregate those repositories and publish them to various locations.
+Bookbinder is meant to be used from within a project called a **book**. The book includes a configuration file that describes which documentation repositories to use as source materials. Bookbinder provides a set of scripts to aggregate those repositories and publish them to various locations in your final web application.
 
-For Apache HAWQ (incubating), a preconfigured **book** is provided in a separate branch named `book`.  You can use this configuration to build HTML for Apache HAWQ (incubating) on your local system.
+For Apache HAWQ (incubating), a preconfigured **book** is provided in the `/book` directory of this repo.  You can use this configuration to build the HTML for HAWQ on your local system.
 
 <a name="prereq"></a>
 ## Prerequisites
 
-* Bookbinder requires Ruby version 2.0.0-p195 or higher.
+* Ruby version 2.3.0 or higher.
+* Ruby [bundler](http://bundler.io/) installed for gem package management.
 
 <a name="building"></a>
 ## Building the Documentation
 
-1. Begin by cloning the `book` branch of this repository to a new directory that is parallel to `asf/incubator-hawq-docs`. For example:
+1. Change to the `book` directory of this repo.
 
-        $ cd /repos/asf/incubator-hawq-docs
-        $ git clone --branch book  http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs.git ../hawq-book
-        $ cd ../hawq-book
+2. Install bookbinder and its dependent gems. Make sure you are in the `book` directory and enter:
 
-2. The GemFile in the book directory already defines the `gem "bookbindery"` dependency. Make sure you are in the relocated book directory and enter:
+    ``` bash
+    $ bundle install
+    ```
 
-        $ bundle install
-     
-3. The installed `config.yml` file configures the Apache HAWQ (incubating) book for building locally.  Build the files with the command:
+3. The installed `config.yml` file configures the book for building from your local HAWQ source files.  Build the output HTML files by executing the command:
 
-        $ bundle exec bookbinder bind local
-    
-  Bookbinder converts the XML source into HTML, putting the final output in the `final_app` directory.
+    ``` bash
+    $ bundle exec bookbinder bind local
+    ```
+
+   Bookbinder converts the Markdown source into HTML and puts the final output in the `final_app` directory.
   
-5. Because the `final_app` directory contains the full output of the HTML conversion process, you can easily publish this directory as a hosted Web application. `final_app` contains a default configuration to serve the local files using the Rack web server:
+4. The `final_app` directory stages the HTML into a web application that you can view using the rack gem. To view the documentation build:
+
+    ``` bash
+    $ cd final_app
+    $ bundle install
+    $ rackup
+    ```
+
+   Your local documentation is now available for viewing at [http://localhost:9292](http://localhost:9292).
+
+<a name="moreinfo"></a>  
+## Getting More Information
+
+Bookbinder provides additional functionality to construct books from multiple Github repos, to perform variable substitution, and also to automatically build documentation in a continuous integration pipeline.  For more information, see [https://github.com/cloudfoundry-incubator/bookbinder](https://github.com/cloudfoundry-incubator/bookbinder).
 
-        $ cd final_app
-        $ bundle install
-        $ rackup
-    
-  You can now view the local documentation at [http://localhost:9292](http://localhost:9292)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/BackingUpandRestoringHAWQDatabases.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/BackingUpandRestoringHAWQDatabases.html.md.erb b/admin/BackingUpandRestoringHAWQDatabases.html.md.erb
deleted file mode 100644
index 78b0dec..0000000
--- a/admin/BackingUpandRestoringHAWQDatabases.html.md.erb
+++ /dev/null
@@ -1,373 +0,0 @@
----
-title: Backing Up and Restoring HAWQ
----
-
-This chapter provides information on backing up and restoring databases in a HAWQ system.
-
-As an administrator, you will need to back up and restore your database. HAWQ provides three utilities to help you back up your data:
-
--   `gpfdist`
--   PXF
--   `pg_dump`
-
-`gpfdist` and PXF are parallel loading and unloading tools that provide the best performance. You can use `pg_dump`, a non-parallel utility inherited from PostgreSQL.
-
-In addition, in some situations you should back up your raw data from ETL processes.
-
-This section describes these three utilities, as well as raw data backup, to help you decide what fits your needs.
-
-## <a id="usinggpfdistorpxf"></a>About gpfdist and PXF 
-
-You can perform a parallel backup in HAWQ using `gpfdist` or PXF to unload all data to external tables. Backup files can reside on a local file system or HDFS. To recover tables, you can load data back from external tables to the database.
-
-### <a id="performingaparallelbackup"></a>Performing a Parallel Backup 
-
-1.  Check the database size to ensure that the file system has enough space to save the backed up files.
-2.  Use the�`pg_dump` utility to dump the schema of the target database.
-3.  Create a writable external table for each table to back up to that database.
-4.  Load table data into the newly created external tables.
-
->    **Note:** Put the insert statements in a single transaction to prevent problems if you perform any update operations during the backup.
-
-
-### <a id="restoringfromabackup"></a>Restoring from a Backup 
-
-1.  Create a database to recover to.
-2.  Recreate the schema from the schema file \(created during the `pg_dump` process\).
-3.  Create a readable external table for each table in the database.
-4.  Load data from the external table to the actual table.
-5.  Run the `ANALYZE` command once loading is complete. This ensures that the query planner generates an optimal plan based on up-to-date table statistics.
-
-### <a id="differencesbetweengpfdistandpxf"></a>Differences between gpfdist and PXF 
-
-`gpfdist` and PXF differ in the following ways:
-
--   `gpfdist` stores backup files on the local file system, while PXF stores files on HDFS.
--   `gpfdist` supports only plain text format, while PXF also supports binary formats such as Avro and custom formats.
--   `gpfdist` does not support generating compressed files, while PXF supports compression \(you can specify a compression codec used in Hadoop, such as `org.apache.hadoop.io.compress.GzipCodec`\).
--   Both `gpfdist` and PXF have fast loading performance, but `gpfdist` is much faster than PXF.
-
-## <a id="usingpg_dumpandpg_restore"></a>About pg\_dump and pg\_restore 
-
-HAWQ supports the PostgreSQL backup and restore utilities, `pg_dump` and `pg_restore`. The `pg_dump` utility creates a single, large dump file on the master host containing the data from all active segments. The `pg_restore` utility restores a HAWQ database from the archive created by `pg_dump`. In most cases this is not practical, because the master host is unlikely to have enough disk space to hold a single backup file of an entire distributed database. HAWQ supports these utilities primarily for migrating data from PostgreSQL to HAWQ.
-
-To create a backup archive for database `mydb`:
-
-```shell
-$ pg_dump -Ft -f mydb.tar mydb
-```
-
-To create a compressed backup using custom format and compression level 3:
-
-```shell
-$ pg_dump -Fc -Z3 -f mydb.dump mydb
-```
-
-To restore from an archive using `pg_restore`:
-
-```shell
-$ pg_restore -d new_db mydb.dump
-```
-
-## <a id="aboutbackinguprawdata"></a>About Backing Up Raw Data 
-
-Parallel backup using `gpfdist` or PXF works fine in most cases. There are a couple of situations where you cannot perform parallel backup and restore operations:
-
--   Performing periodic incremental backups.
--   Dumping a large data volume to external tables, which can take a long time.
-
-In such situations, you can back up raw data generated during ETL processes and reload it into HAWQ. This provides the flexibility to choose where you store backup files.
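-
-For example \(a sketch; the paths are hypothetical\), if your ETL output is already staged as files on HDFS, an incremental backup can be as simple as copying the newly generated files to a dated backup location:
-
-```shell
-$ hdfs dfs -mkdir -p /backup/raw/2014-06-27
-$ hdfs dfs -cp /etl/output/2014-06-27/*.csv /backup/raw/2014-06-27/
-```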
-
-## <a id="estimatingthebestpractice"></a>Selecting a Backup Strategy/Utility 
-
-The table below summarizes the differences between the four approaches discussed above.
-
-<table>
-  <tr>
-    <th></th>
-    <th><code>gpfdist</code></th>
-    <th>PXF</th>
-    <th><code>pg_dump</code></th>
-    <th>Raw Data Backup</th>
-  </tr>
-  <tr>
-    <td><b>Parallel</b></td>
-    <td>Yes</td>
-    <td>Yes</td>
-    <td>No</td>
-    <td>No</td>
-  </tr>
-  <tr>
-    <td><b>Incremental Backup</b></td>
-    <td>No</td>
-    <td>No</td>
-    <td>No</td>
-    <td>Yes</td>
-  </tr>
-  <tr>
-    <td><b>Backup Location</b></td>
-    <td>Local FS</td>
-    <td>HDFS</td>
-    <td>Local FS</td>
-    <td>Local FS, HDFS</td>
-  </tr>
-  <tr>
-    <td><b>Format</b></td>
-    <td>Text, CSV</td>
-    <td>Text, CSV, Custom</td>
-    <td>Text, Tar, Custom</td>
-    <td>Depends on the format of the raw data</td>
-  </tr>
-  <tr>
-<td><b>Compression</b></td><td>No</td><td>Yes</td><td>Only support custom format</td><td>Optional</td></tr>
-<tr><td><b>Scalability</b></td><td>Good</td><td>Good</td><td>---</td><td>Good</td></tr>
-<tr><td><b>Performance</b></td><td>Fast loading, Fast unloading</td><td>Fast loading, Normal unloading</td><td>---</td><td>Fast (Just file copy)</td><tr>
-</table>
-
-## <a id="estimatingspacerequirements"></a>Estimating Space Requirements 
-
-Before you back up your database, ensure that you have enough space to store backup files. This section describes how to get the database size and estimate space requirements.
-
--   Use `hawq_toolkit` to query the size of the database you want to back up.
-
-    ```
-    mydb=# SELECT sodddatsize FROM hawq_toolkit.hawq_size_of_database WHERE sodddatname='mydb';
-    ```
-
-    If tables in your database are compressed, this query shows the compressed size of the database.
-
--   Estimate the total size of the backup files.
-    -   If your database tables and backup files are both compressed, you can use the value `sodddatsize` as an estimate value.
-    -   If your database tables are compressed and your backup files are not, you need to multiply `sodddatsize` by the compression ratio. Although this depends on the compression algorithm used, you can use an empirical value such as 300%.
-    -   If your backup files are compressed and your database tables are not, you need to divide `sodddatsize` by the compression ratio.
--   Get space requirement.
-    -   If you use HDFS with PXF, the space requirement is `size_of_backup_files * replication_factor`.
-
-    -   If you use `gpfdist`, the space requirement for each `gpfdist` instance is `size_of_backup_files / num_gpfdist_instances` since table data will be evenly distributed to all `gpfdist` instances.
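-
-For example \(an illustrative estimate only\): if `sodddatsize` reports roughly 1 TB and your backup files end up about the same size, backing up through PXF to HDFS with a replication factor of 3 requires about 3 TB of HDFS capacity, while unloading the same data through two `gpfdist` instances requires roughly 500 GB in each instance's backup directory.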
-
-
-## <a id="usinggpfdist"></a>Using gpfdist 
-
-This section discusses `gpfdist` and shows an example of how to back up and restore a HAWQ database.
-
-`gpfdist` is HAWQ's parallel file distribution program. It is used by readable external tables and `hawq load` to serve external table files to all HAWQ segments in parallel. It is used by writable external tables to accept output streams from HAWQ segments in parallel and write them out to a file.
-
-To use `gpfdist`, start the `gpfdist` server program on the host where you want to store backup files. You can start multiple `gpfdist` instances on the same host or on different hosts. For each `gpfdist` instance, you specify a directory from which `gpfdist` will serve files for readable external tables or create output files for writable external tables. For example, if you have a dedicated machine for backup with two disks, you can start two `gpfdist` instances, each using one disk:
-
-![](../mdimages/gpfdist_instances_backup.png "Deploying multiple gpfdist instances on a backup host")
-
-You can also run `gpfdist` instances on each segment host. During backup, table data will be evenly distributed to all `gpfdist` instances specified in the `LOCATION` clause in the `CREATE EXTERNAL TABLE` definition.
-
-![](../mdimages/gpfdist_instances.png "Deploying gpfdist instances on each segment host")
-
-### <a id="example"></a>Example 
-
-This example of using `gpfdist` backs up and restores a 1TB `tpch` database. To do so, start two `gpfdist` instances on the backup host `sdw1`, which has two 1TB disks \(one disk mounted at `/data1`, the other at `/data2`\).
-
-#### <a id="usinggpfdisttobackupthetpchdatabase"></a>Using gpfdist to Back Up the tpch Database 
-
-1.  Create backup locations and start the `gpfdist` instances.
-
-    In this example, issuing the first command creates two folders on two different disks with the same postfix `backup/tpch_20140627`. These folders are labeled as backups of the `tpch` database on 2014-06-27. In the next two commands, the example shows two `gpfdist` instances, one using port 8080, and another using port 8081:
-
-    ```shell
-    sdw1$ mkdir -p /data1/gpadmin/backup/tpch_20140627 /data2/gpadmin/backup/tpch_20140627
-    sdw1$ gpfdist -d /data1/gpadmin/backup/tpch_20140627 -p 8080 &
-    sdw1$ gpfdist -d /data2/gpadmin/backup/tpch_20140627 -p 8081 &
-    ```
-
-2.  Save the schema for the database:
-
-    ```shell
-    master_host$ pg_dump --schema-only -f tpch.schema tpch
-    master_host$ scp tpch.schema sdw1:/data1/gpadmin/backup/tpch_20140627
-    ```
-
-    On the HAWQ master host, use the `pg_dump` utility to save the schema of the `tpch` database to the file `tpch.schema`. Copy the schema file to the backup location so that you can later restore the database schema.
-
-3.  Create a writable external table for each table in the database:
-
-    ```shell
-    master_host$ psql tpch
-    ```
-    ```sql
-    tpch=# CREATE WRITABLE EXTERNAL TABLE wext_orders (LIKE orders)
-    tpch-# LOCATION('gpfdist://sdw1:8080/orders1.csv', 'gpfdist://sdw1:8081/orders2.csv') FORMAT 'CSV';
-    tpch=# CREATE WRITABLE EXTERNAL TABLE wext_lineitem (LIKE lineitem)
-    tpch-# LOCATION('gpfdist://sdw1:8080/lineitem1.csv', 'gpfdist://sdw1:8081/lineitem2.csv') FORMAT 'CSV';
-    ```
-
-    The sample shows two tables in the `tpch` database, `orders` and `lineitem`, and creates two corresponding writable external tables. Specify a location for each `gpfdist` instance in the `LOCATION` clause. This sample uses the CSV text format, but you can also choose other delimited text formats. For more information, see the `CREATE EXTERNAL TABLE` SQL command.
-
-4.  Unload data to the external tables:
-
-    ```sql
-    tpch=# BEGIN;
-    tpch=# INSERT INTO wext_orders SELECT * FROM orders;
-    tpch=# INSERT INTO wext_lineitem SELECT * FROM lineitem;
-    tpch=# COMMIT;
-    ```
-
-5.  **\(Optional\)** Stop `gpfdist` servers to free ports for other processes:
-
-    Find the process IDs and kill the processes:
-
-    ```shell
-    sdw1$ ps -ef | grep gpfdist
-    sdw1$ kill 612368; kill 612369
-    ```
-
-
-#### <a id="torecoverusinggpfdist"></a>Recovering Using gpfdist 
-
-1.  Restart `gpfdist` instances if they aren't running:
-
-    ```shell
-    sdw1$ gpfdist -d /data1/gpadmin/backup/tpch_20140627 -p 8080 &
-    sdw1$ gpfdist -d /data2/gpadmin/backup/tpch_20140627 -p 8081 &
-    ```
-
-2.  Create a new database and restore the schema:
-
-    ```shell
-    master_host$ createdb tpch2
-    master_host$ scp sdw1:/data1/gpadmin/backup/tpch_20140627/tpch.schema .
-    master_host$ psql -f tpch.schema -d tpch2
-    ```
-
-3.  Create a readable external table for each table:
-
-    ```shell
-    master_host$ psql tpch2
-    ```
-    
-    ```sql
-    tpch2=# CREATE EXTERNAL TABLE rext_orders (LIKE orders) LOCATION('gpfdist://sdw1:8080/orders1.csv', 'gpfdist://sdw1:8081/orders2.csv') FORMAT 'CSV';
-    tpch2=# CREATE EXTERNAL TABLE rext_lineitem (LIKE lineitem) LOCATION('gpfdist://sdw1:8080/lineitem1.csv', 'gpfdist://sdw1:8081/lineitem2.csv') FORMAT 'CSV';
-    ```
-
-    **Note:** The `LOCATION` clause is the same as for the writable external tables above.
-
-4.  Load data back from external tables:
-
-    ```sql
-    tpch2=# INSERT INTO orders SELECT * FROM rext_orders;
-    tpch2=# INSERT INTO lineitem SELECT * FROM rext_lineitem;
-    ```
-
-5.  Run the `ANALYZE` command after data loading:
-
-    ```sql
-    tpch2=# analyze;
-    ```
-
-
-### <a id="troubleshootinggpfdist"></a>Troubleshooting gpfdist 
-
-Keep in mind that `gpfdist` is accessed at runtime by the segment instances. Therefore, you must ensure that the HAWQ segment hosts have network access to `gpfdist`. Since the `gpfdist` program is a web server, to test connectivity you can run the following command from each host in your HAWQ array \(segments and master\):
-
-```shell
-$ wget http://gpfdist_hostname:port/filename
-```
-
-Also, make sure that your `CREATE EXTERNAL TABLE` definition has the correct host name, port, and file names for `gpfdist`. The file names and paths specified should be relative to the directory where `gpfdist` is serving files \(the directory path used when you started the `gpfdist` program\). See "Defining External Tables - Examples".
-
-## <a id="usingpxf"></a>Using PXF 
-
-HAWQ Extension Framework \(PXF\) is an extensible framework that allows HAWQ to query external system data. The details of how to install and use PXF can be found in [Using PXF with Unmanaged Data](../pxf/HawqExtensionFrameworkPXF.html).
-
-### <a id="usingpxftobackupthetpchdatabase"></a>Using PXF to Back Up the tpch Database 
-
-1.  Create a folder on HDFS for this backup:
-
-    ```shell
-    master_host$ hdfs dfs -mkdir -p /backup/tpch-2014-06-27
-    ```
-
-2.  Dump the database schema using `pg_dump` and store the schema file in a backup folder:
-
-    ```shell
-    master_host$ pg_dump --schema-only -f tpch.schema tpch
-    master_host$ hdfs dfs -copyFromLocal tpch.schema /backup/tpch-2014-06-27
-    ```
-
-3.  Create a writable external table for each table in the database:
-
-    ```shell
-    master_host$ psql tpch
-    ```
-    
-    ```sql
-    tpch=# CREATE WRITABLE EXTERNAL TABLE wext_orders (LIKE orders)
-    tpch-# LOCATION('pxf://namenode_host:51200/backup/tpch-2014-06-27/orders'
-    tpch-#           '?Profile=HdfsTextSimple'
-    tpch-#           '&COMPRESSION_CODEC=org.apache.hadoop.io.compress.SnappyCodec'
-    tpch-#          )
-    tpch-# FORMAT 'TEXT';
-
-    tpch=# CREATE WRITABLE EXTERNAL TABLE wext_lineitem (LIKE lineitem)
-    tpch-# LOCATION('pxf://namenode_host:51200/backup/tpch-2014-06-27/lineitem'
-    tpch-#           '?Profile=HdfsTextSimple'
-    tpch-#           '&COMPRESSION_CODEC=org.apache.hadoop.io.compress.SnappyCodec')
-    tpch-# FORMAT 'TEXT';
-    ```
-
-    Here, all backup files for the `orders` table go in the `/backup/tpch-2014-06-27/orders` folder, and all backup files for the `lineitem` table go in the `/backup/tpch-2014-06-27/lineitem` folder. This example uses Snappy compression to save disk space.
-
-4.  Unload the data to external tables:
-
-    ```sql
-    tpch=# BEGIN;
-    tpch=# INSERT INTO wext_orders SELECT * FROM orders;
-    tpch=# INSERT INTO wext_lineitem SELECT * FROM lineitem;
-    tpch=# COMMIT;
-    ```
-
-5.  **\(Optional\)** Change the HDFS file replication factor for the backup folder. HDFS keeps three replicas of each block by default for reliability. You can decrease this number for your backup files if you need to:
-
-    ```shell
-    master_host$ hdfs dfs -setrep 2 /backup/tpch-2014-06-27
-    ```
-
-    **Note:** This only changes the replication factor for existing files; new files will still use the default replication factor.
-
-
-### <a id="torecoverfromapxfbackup"></a>Recovering a PXF Backup 
-
-1.  Create a new database and restore the schema:
-
-    ```shell
-    master_host$ createdb tpch2
-    master_host$ hdfs dfs -copyToLocal /backup/tpch-2014-06-27/tpch.schema .
-    master_host$ psql -f tpch.schema -d tpch2
-    ```
-
-2.  Create a readable external table for each table to restore:
-
-    ```shell
-    master_host$ psql tpch2
-    ```
-    
-    ```sql
-    tpch2=# CREATE EXTERNAL TABLE rext_orders (LIKE orders)
-    tpch2-# LOCATION('pxf://namenode_host:51200/backup/tpch-2014-06-27/orders?Profile=HdfsTextSimple')
-    tpch2-# FORMAT 'TEXT';
-    tpch2=# CREATE EXTERNAL TABLE rext_lineitem (LIKE lineitem)
-    tpch2-# LOCATION('pxf://namenode_host:51200/backup/tpch-2014-06-27/lineitem?Profile=HdfsTextSimple')
-    tpch2-# FORMAT 'TEXT';
-    ```
-
-    The `LOCATION` clause is almost the same as above, except you don't have to specify the `COMPRESSION_CODEC` because PXF will automatically detect it.
-
-3.  Load data back from external tables:
-
-    ```sql
-    tpch2=# INSERT INTO ORDERS SELECT * FROM rext_orders;
-    tpch2=# INSERT INTO LINEITEM SELECT * FROM rext_lineitem;
-    ```
-
-4.  Run `ANALYZE` after data loading:
-
-    ```sql
-    tpch2=# ANALYZE;
-    ```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/ClusterExpansion.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/ClusterExpansion.html.md.erb b/admin/ClusterExpansion.html.md.erb
deleted file mode 100644
index d3d921b..0000000
--- a/admin/ClusterExpansion.html.md.erb
+++ /dev/null
@@ -1,226 +0,0 @@
----
-title: Expanding a Cluster
----
-
-Apache HAWQ supports dynamic node expansion. You can add segment nodes while HAWQ is running without having to suspend or terminate cluster operations.
-
-**Note:** This topic describes how to expand a cluster using the command-line interface. If you are using Ambari to manage your HAWQ cluster, see [Expanding the HAWQ Cluster](../admin/ambari-admin.html#amb-expand) in [Managing HAWQ Using Ambari](../admin/ambari-admin.html).
-
-## <a id="topic_kkc_tgb_h5"></a>Guidelines for Cluster Expansion 
-
-This topic provides some guidelines around expanding your HAWQ cluster.
-
-There are several recommendations to keep in mind when modifying the size of your running HAWQ cluster:
-
--   When you add a new node, install both a DataNode and a physical segment on the new node. If you are using YARN to manage HAWQ resources, you must also configure a YARN NodeManager on the new node.
--   After adding a new node, you should always rebalance HDFS data to maintain cluster performance.
--   Adding or removing a node also necessitates an update to the HDFS metadata cache. This update will happen eventually, but can take some time. To speed the update of the metadata cache, execute **`select gp_metadata_cache_clear();`**.
--   Note that for hash distributed tables, expanding the cluster will not immediately improve performance since hash distributed tables use a fixed number of virtual segments. In order to obtain better performance with hash distributed tables, you must redistribute the table to the updated cluster by either the [ALTER TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE AS](../reference/sql/CREATE-TABLE-AS.html) command.
--   If you are using hash tables, consider updating the `default_hash_table_bucket_number` server configuration parameter to a larger value after expanding the cluster but before redistributing the hash tables.
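-
-For hash-distributed tables, a minimal redistribution sketch \(the table name `sales` and distribution column `customer_id` are hypothetical; this uses the `CREATE TABLE AS` option mentioned above\) looks like the following:
-
-```sql
-CREATE TABLE sales_new AS SELECT * FROM sales DISTRIBUTED BY (customer_id);  -- rebuilt with the new cluster layout
-ALTER TABLE sales RENAME TO sales_old;
-ALTER TABLE sales_new RENAME TO sales;  -- swap the rebuilt table into place
-DROP TABLE sales_old;
-```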
-
-## <a id="task_hawq_expand"></a>Adding a New Node to an Existing HAWQ Cluster 
-
-The following procedure describes the steps required to add a node to an existing HAWQ cluster.  First ensure that the new node has been configured per the instructions found in [Apache HAWQ System Requirements](../requirements/system-requirements.html) and [Select HAWQ Host Machines](../install/select-hosts.html).
-
-For example purposes in this procedure, we are adding a new node named `sdw4`.
-
-1.  Prepare the target machine by checking operating system configurations and passwordless ssh. HAWQ requires passwordless ssh access to all cluster nodes. To set up passwordless ssh on the new node, perform the following steps:
-    1.  Log in to the master HAWQ node as gpadmin. If you are logged in as a different user, switch to the gpadmin user and source the `greenplum_path.sh` file.
-
-        ```shell
-        $ su - gpadmin
-        $ source /usr/local/hawq/greenplum_path.sh
-        ```
-
-    2.  On the HAWQ master node, change directories to /usr/local/hawq/etc. In this location, create a file called `new_hosts` and add the hostname\(s\) of the node\(s\) you wish to add to the existing HAWQ cluster, one per line. For example:
-
-        ```
-        sdw4
-        ```
-
-    3.  Log in to the master HAWQ node as root and source the `greenplum_path.sh` file.
-
-        ```shell
-        $ su - root
-        $ source /usr/local/hawq/greenplum_path.sh
-        ```
-
-    4.  Execute the following hawq command to set up passwordless ssh for root on the new host machine:
-
-        ```shell
-        $ hawq ssh-exkeys -e hawq_hosts -x new_hosts
-        ```
-
-    5.  Create the gpadmin user on the new host\(s\).
-
-        ```shell
-        $ hawq ssh -f new_hosts -e '/usr/sbin/useradd gpadmin'
-        $ hawq ssh -f new_hosts -e 'echo -e "changeme\nchangeme" | passwd gpadmin'
-        ```
-
-    6.  Switch to the gpadmin user and source the `greenplum_path.sh` file again.
-
-        ```shell
-        $ su - gpadmin
-        $ source /usr/local/hawq/greenplum_path.sh
-        ```
-
-    7.  Execute the following hawq command a second time to set up passwordless ssh for the gpadmin user:
-
-        ```shell
-        $ hawq ssh-exkeys -e hawq_hosts -x new_hosts
-        ```
-
-    8.  (Optional) If you enabled temporary password-based authentication while preparing/configuring your new HAWQ host system, turn off password-based authentication as described in [Apache HAWQ System Requirements](../requirements/system-requirements.html#topic_pwdlessssh).
-
-    8.  After setting up passwordless ssh, you can execute the following hawq command to check the target machine's configuration.
-
-        ```shell
-        $ hawq check -f new_hosts
-        ```
-
-        Configure operating system parameters as needed on the host machine. See the HAWQ installation documentation for a list of specific operating system parameters to configure.
-
-2.  Log in to the target host machine `sdw4` as the root user. If you are logged in as a different user, switch to the root account:
-
-    ```shell
-    $ su - root
-    ```
-
-3.  If not already installed, install an HDFS DataNode on the target machine \(`sdw4`\).
-4.  If you have any user-defined function (UDF) libraries installed in your existing HAWQ cluster, install them on the new node.
-4.  Download and install HAWQ on the target machine \(`sdw4`\) as described in the [software build instructions](https://cwiki.apache.org/confluence/display/HAWQ/Build+and+Install) or in the distribution installation documentation.
-5.  On the HAWQ master node, check current cluster and host information using `psql`.
-
-    ```shell
-    $ psql -d postgres
-    ```
-    
-    ```sql
-    postgres=# SELECT * FROM gp_segment_configuration;
-    ```
-    
-    ```
-     registration_order | role | status | port  | hostname |    address    
-    --------------------+------+--------+-------+----------+---------------
-                     -1 | s    | u      |  5432 | sdw1     | 192.0.2.0
-                      0 | m    | u      |  5432 | mdw      | rhel64-1
-                      1 | p    | u      | 40000 | sdw3     | 192.0.2.2
-                      2 | p    | u      | 40000 | sdw2     | 192.0.2.1
-    (4 rows)
-    ```
-
-    At this point the new node does not appear in the cluster.
-
-6.  Execute the following command to confirm that HAWQ was installed on the new host:
-
-    ```shell
-    $ hawq ssh -f new_hosts -e "ls -l $GPHOME"
-    ```
-
-7.  On the master node, use a text editor to add hostname `sdw4` into the `hawq_hosts` file you created during HAWQ installation. \(If you do not already have this file, create it first and list all of the nodes in your cluster.\)
-
-    ```
-    mdw
-    smdw
-    sdw1
-    sdw2
-    sdw3
-    sdw4
-    ```
-
-8.  On the master node, use a text editor to add hostname `sdw4` to the `$GPHOME/etc/slaves` file. This file lists all the segment host names for your cluster. For example:
-
-    ```
-    sdw1
-    sdw2
-    sdw3
-    sdw4
-    ```
-
-9.  Sync the `hawq-site.xml` and `slaves` configuration files to all nodes in the cluster \(as listed in hawq\_hosts\).
-
-    ```shell
-    $ hawq scp -f hawq_hosts hawq-site.xml slaves =:$GPHOME/etc/
-    ```
-
-10. Make sure that the HDFS DataNode service has started on the new node.
-11. On `sdw4`, create directories based on the values assigned to the following properties in `hawq-site.xml`. These new directories must be owned by the same database user \(for example, `gpadmin`\) who will execute the `hawq init segment` command in the next step.
-    -   `hawq_segment_directory`
-    -   `hawq_segment_temp_directory`
-    **Note:** The `hawq_segment_directory` must be empty.
-
-12. On `sdw4`, switch to the database user \(for example, `gpadmin`\), and initialize the segment.
-
-    ```shell
-    $ su - gpadmin
-    $ hawq init segment
-    ```
-
-13. On the master node, check current cluster and host information using `psql` to verify that the new `sdw4` node has initialized successfully.
-
-    ```shell
-    $ psql -d postgres
-    ```
-    
-    ```sql
-    postgres=# SELECT * FROM gp_segment_configuration ;
-    ```
-    
-    ```
-     registration_order | role | status | port  | hostname |    address    
-    --------------------+------+--------+-------+----------+---------------
-                     -1 | s    | u      |  5432 | sdw1     | 192.0.2.0
-                      0 | m    | u      |  5432 | mdw      | rhel64-1
-                      1 | p    | u      | 40000 | sdw3     | 192.0.2.2
-                      2 | p    | u      | 40000 | sdw2     | 192.0.2.1
-                      3 | p    | u      | 40000 | sdw4     | 192.0.2.3
-    (5 rows)
-    ```
-
-14. To maintain optimal cluster performance, rebalance HDFS data by running the following command:
-
-    ```shell
-    $ sudo -u hdfs hdfs balancer -threshold threshold_value
-    ```
-    
-    where *threshold\_value* represents how much a DataNode's disk usage, in percentage, can differ from overall disk usage in the cluster. Adjust the threshold value according to the needs of your production data and disk. The smaller the value, the longer the rebalance time.
-
-    **Note:** If you do not specify a threshold, then a default value of 20 is used. If the balancer detects that a DataNode is using less than a 20% difference of the cluster's overall disk usage, then data on that node will not be rebalanced. For example, if disk usage across all DataNodes in the cluster is 40% of the cluster's total disk-storage capacity, then the balancer script ensures that a DataNode's disk usage is between 20% and 60% of that DataNode's disk-storage capacity. DataNodes whose disk usage falls within that percentage range will not be rebalanced.
-
-    Rebalance time is also affected by network bandwidth. You can adjust network bandwidth used by the balancer by using the following command:
-    
-    ```shell
-    $ sudo -u hdfs hdfs dfsadmin -setBalancerBandwidth network_bandwidth
-    ```
-    
-    The default value is 1MB/s. Adjust the value according to your network.
-
-15. Speed up the clearing of the metadata cache by using the following command:
-
-    ```shell
-    $ psql -d postgres
-    ```
-    
-    ```sql
-    postgres=# SELECT gp_metadata_cache_clear();
-    ```
-
-16. After expansion, if the new size of your cluster is greater than or equal to 4 nodes \(#nodes >= 4\), change the value of the `output.replace-datanode-on-failure` HDFS parameter in `hdfs-client.xml` to `false`.
-
-17. (Optional) If you are using hash tables, adjust the `default_hash_table_bucket_number` server configuration property to reflect the cluster's new size. Update this configuration's value by multiplying the new number of nodes in the cluster by the appropriate amount indicated in the following table \(see also the configuration sketch after this procedure\).
-
-	|Number of Nodes After Expansion|Suggested default\_hash\_table\_bucket\_number value|
-	|---------------|------------------------------------------|
-	|<= 85|6 \* \#nodes|
-	|\> 85 and <= 102|5 \* \#nodes|
-	|\> 102 and <= 128|4 \* \#nodes|
-	|\> 128 and <= 170|3 \* \#nodes|
-	|\> 170 and <= 256|2 \* \#nodes|
-	|\> 256 and <= 512|1 \* \#nodes|
-	|\> 512|512| 
-   
-18. If you are using hash distributed tables and wish to take advantage of the performance benefits of using a larger cluster, redistribute the data in all hash-distributed tables by using either the [ALTER TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE AS](../reference/sql/CREATE-TABLE-AS.html) command. You should redistribute the table data if you modified the `default_hash_table_bucket_number` configuration parameter. 
-
-
-	**Note:** The redistribution of table data can take a significant amount of time.
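-
-As a rough illustration of the last two steps above \(a sketch only; the node count and the resulting value are hypothetical\), you might update `default_hash_table_bucket_number` with `hawq config`, restart the cluster, and then redistribute your hash-distributed tables:
-
-```shell
-$ # example: a cluster expanded to 4 nodes, so 4 * 6 = 24
-$ hawq config -c default_hash_table_bucket_number -v 24
-$ hawq restart cluster -a
-```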

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/ClusterShrink.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/ClusterShrink.html.md.erb b/admin/ClusterShrink.html.md.erb
deleted file mode 100644
index 33c5cc2..0000000
--- a/admin/ClusterShrink.html.md.erb
+++ /dev/null
@@ -1,55 +0,0 @@
----
-title: Removing a Node
----
-
-This topic outlines the proper procedure for removing a node from a HAWQ cluster.
-
-In general, you should not need to remove nodes manually from running HAWQ clusters. HAWQ isolates any nodes that it detects as failing due to hardware or other types of errors.
-
-## <a id="topic_p53_ct3_kv"></a>Guidelines for Removing a Node 
-
-If you do need to remove a node from a HAWQ cluster, keep in mind the following guidelines around removing nodes:
-
--   Never remove more than two nodes at a time since the risk of data loss is high.
--   Only remove nodes during system maintenance windows when the cluster is not busy or running queries.
-
-## <a id="task_oy5_ct3_kv"></a>Removing a Node from a Running HAWQ Cluster 
-
-The following is a typical procedure to remove a node from a running HAWQ cluster:
-
-1.  Log in as gpadmin to the node that you wish to remove and source `greenplum_path.sh`.
-
-    ```shell
-    $ su - gpadmin
-    $ source /usr/local/hawq/greenplum_path.sh
-    ```
-
-2.  Make sure that there are no running query executors \(QEs\) on the segment. Execute the following command to check for running QE processes:
-
-    ```shell
-    $ ps -ef | grep postgres
-    ```
-
-    In the output, look for processes that contain SQL commands such as INSERT or SELECT. For example:
-
-    ```shell
-    [gpadmin@rhel64-3 ~]$ ps -ef | grep postgres
-    gpadmin 3000 2999 0 Mar21 ? 00:00:08 postgres: port 40000, logger process
-    gpadmin 3003 2999 0 Mar21 ? 00:00:03 postgres: port 40000, stats collector process
-    gpadmin 3004 2999 0 Mar21 ? 00:00:50 postgres: port 40000, writer process
-    gpadmin 3005 2999 0 Mar21 ? 00:00:06 postgres: port 40000, checkpoint process
-    gpadmin 3006 2999 0 Mar21 ? 00:01:25 postgres: port 40000, segment resource manager
-    gpadmin 7880 2999 0 02:08 ? 00:00:00 postgres: port 40000, gpadmin postgres 192.0.2.0(33874) con11 seg0 cmd18 MPPEXEC INSERT
-    ```
-
-3.  Stop HAWQ on this segment by executing the following command:
-
-    ```shell
-    $ hawq stop segment
-    ```
-
-4.  On the HAWQ master, remove the hostname of the segment from the `slaves` file. Then sync the `slaves` file to all nodes in the cluster by executing the following command:
-
-    ```shell
-    $ hawq scp -f hostfile slaves =:$GPHOME/etc/slaves
-    ```


[44/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-external-tables.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-external-tables.html.md.erb b/datamgmt/load/g-external-tables.html.md.erb
deleted file mode 100644
index 4142a07..0000000
--- a/datamgmt/load/g-external-tables.html.md.erb
+++ /dev/null
@@ -1,44 +0,0 @@
----
-title: Accessing File-Based External Tables
----
-
-External tables enable accessing external files as if they are regular database tables. They are often used to move data into and out of a HAWQ database.
-
-To create an external table definition, you specify the format of your input files and the location of your external data sources. For information about input file formats, see [Formatting Data Files](g-formatting-data-files.html#topic95).
-
-Use one of the following protocols to access external table data sources. You cannot mix protocols in `CREATE EXTERNAL TABLE` statements:
-
--   `gpfdist://` points to a directory on the file host and serves external data files to all HAWQ segments in parallel. See [gpfdist Protocol](g-gpfdist-protocol.html#topic_sny_yph_kr).
--   `gpfdists://` is the secure version of `gpfdist`. See [gpfdists Protocol](g-gpfdists-protocol.html#topic_sny_yph_kr).
--   `pxf://` specifies data accessed through the HAWQ Extensions Framework (PXF). PXF is a service that uses plug-in Java classes to read and write data in external data sources. PXF includes plug-ins to access data in HDFS, HBase, and Hive. Custom plug-ins can be written to access other external data sources.
-
-External tables allow you to access external files from within the database as if they are regular database tables. Used with `gpfdist`, the HAWQ parallel file distribution program, or HAWQ Extensions Framework (PXF), external tables provide full parallelism by using the resources of all HAWQ segments to load or unload data.
-
-You can query, join, and sort external table data directly and in parallel using SQL commands, and you can create views for external tables.
-
-The steps for using external tables are:
-
-1.  Define the external table.
-2.  Start the gpfdist file server(s) if you plan to use the `gpfdist` or `gpfdists` protocols.
-3.  Place the data files in the correct locations.
-4.  Query the external table with SQL commands.
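-
-The following minimal sketch shows the flow end to end \(the table definition, host name, port, and file names are hypothetical; a `gpfdist` instance must already be serving the directory that contains the files\):
-
-``` sql
-CREATE EXTERNAL TABLE ext_expenses (name text, amount float8)
-LOCATION ('gpfdist://etlhost:8081/expenses*.csv')
-FORMAT 'CSV';
-
-SELECT name, amount FROM ext_expenses WHERE amount > 100.00;
-```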
-
-HAWQ provides readable and writable external tables:
-
--   Readable external tables for data loading. Readable external tables support basic extraction, transformation, and loading (ETL) tasks common in data warehousing. HAWQ segment instances read external table data in parallel to optimize large load operations. You cannot modify readable external tables.
--   Writable external tables for data unloading. Writable external tables support:
-
-    -   Selecting data from database tables to insert into the writable external table.
-    -   Sending data to an application as a stream of data. For example, unload data from HAWQ and send it to an application that connects to another database or ETL tool to load the data elsewhere.
-    -   Receiving output from HAWQ parallel MapReduce calculations.
-
-    Writable external tables allow only `INSERT` operations.
-
-External tables can be file-based or web-based.
-
--   Regular (file-based) external tables access static flat files. Regular external tables are rescannable: the data is static while the query runs.
--   Web (web-based) external tables access dynamic data sources, either on a web server with the `http://` protocol or by executing OS commands or scripts. Web external tables are not rescannable: the data can change while the query runs.
-
-Dump and restore operate only on external and web external table *definitions*, not on the data sources.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-formatting-columns.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-formatting-columns.html.md.erb b/datamgmt/load/g-formatting-columns.html.md.erb
deleted file mode 100644
index b828212..0000000
--- a/datamgmt/load/g-formatting-columns.html.md.erb
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: Formatting Columns
----
-
-The default column or field delimiter is the horizontal `TAB` character (`0x09`) for text files and the comma character (`0x2C`) for CSV files. You can declare a single-character delimiter using the `DELIMITER` clause of `COPY`, `CREATE EXTERNAL TABLE`, or the `hawq load` configuration table when you define your data format. The delimiter character must appear between any two data value fields. Do not place a delimiter at the beginning or end of a row. For example, if the pipe character ( | ) is your delimiter:
-
-``` pre
-data value 1|data value 2|data value 3
-```
-
-The following command shows the use of the pipe character as a column delimiter:
-
-``` sql
-=# CREATE EXTERNAL TABLE ext_table (name text, date date)
-LOCATION ('gpfdist://host:port/filename.txt')
-FORMAT 'TEXT' (DELIMITER '|');
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-formatting-data-files.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-formatting-data-files.html.md.erb b/datamgmt/load/g-formatting-data-files.html.md.erb
deleted file mode 100644
index 6c929ad..0000000
--- a/datamgmt/load/g-formatting-data-files.html.md.erb
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Formatting Data Files
----
-
-When you use the HAWQ tools for loading and unloading data, you must specify how your data is formatted. `COPY`, `CREATE EXTERNAL TABLE`, and `hawq load` have clauses that allow you to specify how your data is formatted. Data can be in delimited text (`TEXT`) or comma-separated values (`CSV`) format. External data must be formatted correctly to be read by HAWQ. This topic explains the format of data files expected by HAWQ.
-
--   **[Formatting Rows](../../datamgmt/load/g-formatting-rows.html)**
-
--   **[Formatting Columns](../../datamgmt/load/g-formatting-columns.html)**
-
--   **[Representing NULL Values](../../datamgmt/load/g-representing-null-values.html)**
-
--   **[Escaping](../../datamgmt/load/g-escaping.html)**
-
--   **[Character Encoding](../../datamgmt/load/g-character-encoding.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-formatting-rows.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-formatting-rows.html.md.erb b/datamgmt/load/g-formatting-rows.html.md.erb
deleted file mode 100644
index ea9b416..0000000
--- a/datamgmt/load/g-formatting-rows.html.md.erb
+++ /dev/null
@@ -1,7 +0,0 @@
----
-title: Formatting Rows
----
-
-HAWQ expects rows of data to be separated by the `LF` character (Line feed, `0x0A`), `CR` (Carriage return, `0x0D`), or `CR` followed by `LF` (`CR+LF`, `0x0D 0x0A`). `LF` is the standard newline representation on UNIX or UNIX-like operating systems. Operating systems such as Windows or Mac OS X use `CR` or `CR+LF`. All of these representations of a newline are supported by HAWQ as a row delimiter. For more information, see [Importing and Exporting Fixed Width Data](g-importing-and-exporting-fixed-width-data.html#topic37).
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-gpfdist-protocol.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-gpfdist-protocol.html.md.erb b/datamgmt/load/g-gpfdist-protocol.html.md.erb
deleted file mode 100644
index ee98609..0000000
--- a/datamgmt/load/g-gpfdist-protocol.html.md.erb
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: gpfdist Protocol
----
-
-The `gpfdist://` protocol is used in a URI to reference a running `gpfdist` instance. The `gpfdist` utility serves external data files from a directory on a file host to all HAWQ segments in parallel.
-
-`gpfdist` is located in the `$GPHOME/bin` directory on your HAWQ master host and on each segment host.
-
-Run `gpfdist` on the host where the external data files reside. `gpfdist` uncompresses `gzip` (`.gz`) and `bzip2` (`.bz2`) files automatically. You can use the wildcard character (\*) or other C-style pattern matching to denote multiple files to read. The files specified are assumed to be relative to the directory that you specified when you started the `gpfdist` instance.
-
-All virtual segments access the external file(s) in parallel, subject to the number of segments set in the `gp_external_max_segments` parameter, the length of the `gpfdist` location list, and the limits specified by the `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit` parameters. Use multiple `gpfdist` data sources in a `CREATE EXTERNAL TABLE` statement to scale the external table's scan performance. For more information about configuring `gpfdist`, see [Using the HAWQ File Server (gpfdist)](g-using-the-hawq-file-server--gpfdist-.html#topic13).
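-
-For example \(a sketch with hypothetical hosts, ports, and file names\), the following readable external table spreads its scan across two `gpfdist` instances, each serving files matched by a wildcard:
-
-``` sql
-CREATE EXTERNAL TABLE ext_sales (txn_id int, amount float8)
-LOCATION ('gpfdist://etlhost1:8081/sales_*.csv',
-          'gpfdist://etlhost2:8081/sales_*.csv')
-FORMAT 'CSV';
-```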
-
-See the `gpfdist` reference documentation for more information about using `gpfdist` with external tables.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-gpfdists-protocol.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-gpfdists-protocol.html.md.erb b/datamgmt/load/g-gpfdists-protocol.html.md.erb
deleted file mode 100644
index 2f5641d..0000000
--- a/datamgmt/load/g-gpfdists-protocol.html.md.erb
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: gpfdists Protocol
----
-
-The `gpfdists://` protocol is a secure version of the `gpfdist://` protocol. To use it, run the `gpfdist` utility with the `--ssl` option. When specified in a URI, the `gpfdists://` protocol enables encrypted communication and secure identification of the file server and HAWQ to protect against attacks such as eavesdropping and man-in-the-middle attacks.
-
-`gpfdists` implements SSL security in a client/server scheme with the following attributes and limitations:
-
--   Client certificates are required.
--   Multilingual certificates are not supported.
--   A Certificate Revocation List (CRL) is not supported.
--   The `TLSv1` protocol is used with the `TLS_RSA_WITH_AES_128_CBC_SHA` encryption algorithm.
--   SSL parameters cannot be changed.
--   SSL renegotiation is supported.
--   The SSL ignore host mismatch parameter is set to `false`.
--   Private keys containing a passphrase are not supported for the `gpfdist` file server (`server.key`) or for HAWQ (`client.key`).
--   Issuing certificates that are appropriate for the operating system in use is the user's responsibility. Generally, converting certificates as shown in [https://www.sslshopper.com/ssl-converter.html](https://www.sslshopper.com/ssl-converter.html) is supported.
-
-    **Note:** A server started with the `gpfdist --ssl` option can only communicate with the `gpfdists` protocol. A server that was started with `gpfdist` without the `--ssl` option can only communicate with the `gpfdist` protocol.
-
-Use one of the following methods to invoke the `gpfdists` protocol.
-
--   Run `gpfdist` with the `--ssl` option and then use the `gpfdists` protocol in the `LOCATION` clause of a `CREATE EXTERNAL TABLE` statement.
--   Use a `hawq load` YAML control file with the `SSL` option set to true.
-
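-For example, a minimal sketch of the first method, after `gpfdist` has been started with the `--ssl` option on the file host (the host, port, and file names are assumptions):
-
-``` sql
-CREATE EXTERNAL TABLE ext_expenses_secure (name text, date date, amount float4,
-  category text, description text)
-LOCATION ('gpfdists://etlhost-1:8081/*.txt')
-FORMAT 'TEXT' (DELIMITER '|');
-```
-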
-Using `gpfdists` requires that the following client certificates reside in the `$PGDATA/gpfdists` directory on each segment.
-
--   The client certificate file, `client.crt`
--   The client private key file, `client.key`
--   The trusted certificate authorities, `root.crt`
-
-For an example of securely loading data into an external table, see [Example 3 - Multiple gpfdists instances](creating-external-tables-examples.html#topic47).
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-handling-errors-ext-table-data.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-handling-errors-ext-table-data.html.md.erb b/datamgmt/load/g-handling-errors-ext-table-data.html.md.erb
deleted file mode 100644
index 2b8dc78..0000000
--- a/datamgmt/load/g-handling-errors-ext-table-data.html.md.erb
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: Handling Errors in External Table Data
----
-
-By default, if external table data contains an error, the command fails and no data loads into the target database table. Define the external table with single row error handling to enable loading correctly formatted rows and to isolate data errors in external table data. See [Handling Load Errors](g-handling-load-errors.html#topic55).
-
-The `gpfdist` file server uses the `HTTP` protocol. External table queries that use `LIMIT` end the connection after retrieving the rows, causing an HTTP socket error. If you use `LIMIT` in queries of external tables that use the `gpfdist://` or `http://` protocols, ignore these errors; data is returned to the database as expected.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-handling-load-errors.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-handling-load-errors.html.md.erb b/datamgmt/load/g-handling-load-errors.html.md.erb
deleted file mode 100644
index 6faf7a5..0000000
--- a/datamgmt/load/g-handling-load-errors.html.md.erb
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: Handling Load Errors
----
-
-Readable external tables are most commonly used to select data to load into regular database tables. You use the `CREATE TABLE AS SELECT` or `INSERT INTO` commands to query the external table data. By default, if the data contains an error, the entire command fails and the data is not loaded into the target database table.
-
-The `SEGMENT REJECT LIMIT` clause allows you to isolate format errors in external table data and to continue loading correctly formatted rows. Use `SEGMENT REJECT LIMIT` to set an error threshold, specifying the reject limit `count` as a number of `ROWS` (the default) or as a `PERCENT` of total rows (1-100).
-
-If the number of error rows reaches the `SEGMENT REJECT LIMIT`, the entire external table operation is aborted and no rows are processed. The limit of error rows is per-segment, not per operation. If the number of error rows does not reach the `SEGMENT REJECT LIMIT`, the operation processes all good rows and discards, and optionally logs, the formatting errors for erroneous rows.
-
-The `LOG ERRORS` clause allows you to keep error rows for further examination. For information about the `LOG ERRORS` clause, see the `CREATE EXTERNAL TABLE` command.
-
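-For example, the following sketch defines an external table with single row error isolation; the table, location, and error table names are illustrative. A load from this table processes correctly formatted rows, keeps format errors in `err_expenses`, and aborts if the error rows on any segment reach 10:
-
-``` sql
-CREATE EXTERNAL TABLE ext_expenses (name text, date date, amount float4,
-  category text, description text)
-LOCATION ('gpfdist://etlhost-1:8081/expenses/*.txt')
-FORMAT 'TEXT' (DELIMITER '|')
-LOG ERRORS INTO err_expenses SEGMENT REJECT LIMIT 10 ROWS;
-```
-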
-When you set `SEGMENT REJECT LIMIT`, HAWQ scans the external data in single row error isolation mode. Single row error isolation mode applies to external data rows with format errors such as extra or missing attributes, attributes of a wrong data type, or invalid client encoding sequences. HAWQ does not check constraint errors, but you can filter constraint errors by limiting the `SELECT` from an external table at runtime. For example, to eliminate duplicate key errors:
-
-``` sql
-=# INSERT INTO table_with_pkeys 
-SELECT DISTINCT * FROM external_table;
-```
-
--   **[Define an External Table with Single Row Error Isolation](../../datamgmt/load/g-define-an-external-table-with-single-row-error-isolation.html)**
-
--   **[Capture Row Formatting Errors and Declare a Reject Limit](../../datamgmt/load/g-create-an-error-table-and-declare-a-reject-limit.html)**
-
--   **[Identifying Invalid CSV Files in Error Table Data](../../datamgmt/load/g-identifying-invalid-csv-files-in-error-table-data.html)**
-
--   **[Moving Data between Tables](../../datamgmt/load/g-moving-data-between-tables.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-identifying-invalid-csv-files-in-error-table-data.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-identifying-invalid-csv-files-in-error-table-data.html.md.erb b/datamgmt/load/g-identifying-invalid-csv-files-in-error-table-data.html.md.erb
deleted file mode 100644
index 534d530..0000000
--- a/datamgmt/load/g-identifying-invalid-csv-files-in-error-table-data.html.md.erb
+++ /dev/null
@@ -1,7 +0,0 @@
----
-title: Identifying Invalid CSV Files in Error Table Data
----
-
-If a CSV file contains invalid formatting, the *rawdata* field in the error table can contain several combined rows. For example, if a closing quote for a specific field is missing, all the following newlines are treated as embedded newlines. When this happens, HAWQ stops parsing a row when it reaches 64K, puts that 64K of data into the error table as a single row, resets the quote flag, and continues. If this happens three times during load processing, the load file is considered invalid and the entire load fails with the message `rejected N or more rows`. See [Escaping in CSV Formatted Files](g-escaping-in-csv-formatted-files.html#topic101) for more information on the correct use of quotes in CSV files.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-importing-and-exporting-fixed-width-data.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-importing-and-exporting-fixed-width-data.html.md.erb b/datamgmt/load/g-importing-and-exporting-fixed-width-data.html.md.erb
deleted file mode 100644
index f49cae0..0000000
--- a/datamgmt/load/g-importing-and-exporting-fixed-width-data.html.md.erb
+++ /dev/null
@@ -1,38 +0,0 @@
----
-title: Importing and Exporting Fixed Width Data
----
-
-Specify custom formats for fixed-width data with the HAWQ functions `fixedwidth_in` and `fixedwidth_out`. These functions already exist in the file `$GPHOME/share/postgresql/cdb_external_extensions.sql`. The following example declares a custom format, then calls the `fixedwidth_in` function to format the data.
-
-``` sql
-CREATE READABLE EXTERNAL TABLE students (
-  name varchar(20), address varchar(30), age int)
-LOCATION ('gpfdist://mdw:8081/students.txt')
-FORMAT 'CUSTOM' (formatter=fixedwidth_in, name='20', address='30', age='4');
-```
-
-The following options specify how to import fixed width data.
-
--   Read all the data.
-
-    To load all the fields on a line of fixed-width data, you must load them in their physical order. You must specify the field length, but you cannot specify a starting and ending position. The field names in the fixed-width arguments must match the order of the field list at the beginning of the `CREATE TABLE` command.
-
--   Set options for blank and null characters.
-
-    Trailing blanks are trimmed by default. To keep trailing blanks, use the `preserve_blanks=on` option. You can reset the trailing blanks option to the default with the `preserve_blanks=off` option.
-
-    Use the `null='null_string_value'` option to specify a value for null characters.
-
--   If you specify `preserve_blanks=on`, you must also define a value for null characters.
--   If you specify `preserve_blanks=off` and null is not defined, HAWQ writes a null to the table when a field contains only blanks. If null is defined, HAWQ writes an empty string to the table.
-
-    Use the `line_delim='line_ending'` parameter to specify the line ending character. The following examples cover most cases. The `E` specifies an escape string constant.
-
-    ``` pre
-    line_delim=E'\n'
-    line_delim=E'\r'
-    line_delim=E'\r\n'
-    line_delim='abc'
-    ```
-
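-A sketch that combines these options is shown below; the column widths, null string, line ending, and location are assumptions, and the option values are quoted in the same style as the column-width arguments:
-
-``` sql
-CREATE READABLE EXTERNAL TABLE students (
-  name varchar(20), address varchar(30), age int)
-LOCATION ('gpfdist://mdw:8081/students.txt')
-FORMAT 'CUSTOM' (formatter=fixedwidth_in, name='20', address='30', age='4',
-                 preserve_blanks='on', null='NULL', line_delim=E'\r\n');
-```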
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-installing-gpfdist.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-installing-gpfdist.html.md.erb b/datamgmt/load/g-installing-gpfdist.html.md.erb
deleted file mode 100644
index 85549df..0000000
--- a/datamgmt/load/g-installing-gpfdist.html.md.erb
+++ /dev/null
@@ -1,7 +0,0 @@
----
-title: Installing gpfdist
----
-
-You may choose to run `gpfdist` from a machine other than the HAWQ master, such as on a machine devoted to ETL processing. To install `gpfdist` on your ETL server, refer to [Client-Based HAWQ Load Tools](client-loadtools.html) for information related to Linux and Windows load tools installation and configuration.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-load-the-data.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-load-the-data.html.md.erb b/datamgmt/load/g-load-the-data.html.md.erb
deleted file mode 100644
index 4c88c9f..0000000
--- a/datamgmt/load/g-load-the-data.html.md.erb
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Load the Data
----
-
-Create the tables with SQL statements based on the appropriate schema.
-
-There are no special requirements for the HAWQ tables that hold loaded data. In the prices example, the following command creates the appropriate table.
-
-``` sql
-CREATE TABLE prices (
-  itemnumber integer,       
-  price       decimal        
-) 
-DISTRIBUTED BY (itemnumber);
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-loading-and-unloading-data.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-loading-and-unloading-data.html.md.erb b/datamgmt/load/g-loading-and-unloading-data.html.md.erb
deleted file mode 100644
index 8ea43d5..0000000
--- a/datamgmt/load/g-loading-and-unloading-data.html.md.erb
+++ /dev/null
@@ -1,55 +0,0 @@
----
-title: Loading and Unloading Data
----
-
-The topics in this section describe methods for loading and writing data into and out of HAWQ, and how to format data files. This section also covers registering HDFS files and folders directly into HAWQ internal tables.
-
-HAWQ supports high-performance parallel data loading and unloading, and for smaller amounts of data, single file, non-parallel data import and export.
-
-HAWQ can read from and write to several types of external data sources, including text files, Hadoop file systems, and web servers.
-
--   The `COPY` SQL command transfers data between an external text file on the master host and a HAWQ database table.
--   External tables allow you to query data outside of the database directly and in parallel using SQL commands such as `SELECT`, `JOIN`, or `SORT`, and you can create views for external tables. External tables are often used to load external data into a regular database table using a command such as `CREATE TABLE table AS SELECT * FROM ext_table`.
--   External web tables provide access to dynamic data. They can be backed with data from URLs accessed using the HTTP protocol or by the output of an OS script running on one or more segments.
--   The `gpfdist` utility is the HAWQ parallel file distribution program. It is an HTTP server that is used with external tables to allow HAWQ segments to load external data in parallel, from multiple file systems. You can run multiple instances of `gpfdist` on different hosts and network interfaces and access them in parallel.
--   The `hawq load` utility automates the steps of a load task using a YAML-formatted control file.
-
-The method you choose to load data depends on the characteristics of the source data: its location, size, format, and any transformations required.
-
-In the simplest case, the `COPY` SQL command loads data into a table from a text file that is accessible to the HAWQ master instance. This requires no setup and provides good performance for smaller amounts of data. With the `COPY` command, the data copied into or out of the database passes between a single file on the master host and the database. This limits the total size of the dataset to the capacity of the file system where the external file resides and limits the data transfer to a single file write stream.
-
-More efficient data loading options for large datasets take advantage of the HAWQ MPP architecture, using the HAWQ segments to load data in parallel. These methods allow data to load simultaneously from multiple file systems, through multiple NICs, on multiple hosts, achieving very high data transfer rates. External tables allow you to access external files from within the database as if they are regular database tables. When used with `gpfdist`, the HAWQ parallel file distribution program, external tables provide full parallelism by using the resources of all HAWQ segments to load or unload data.
-
-HAWQ leverages the parallel architecture of the Hadoop Distributed File System to access files on that system.
-
--   **[Working with File-Based External Tables](../../datamgmt/load/g-working-with-file-based-ext-tables.html)**
-
--   **[Using the HAWQ File Server (gpfdist)](../../datamgmt/load/g-using-the-hawq-file-server--gpfdist-.html)**
-
--   **[Creating and Using Web External Tables](../../datamgmt/load/g-creating-and-using-web-external-tables.html)**
-
--   **[Loading Data Using an External Table](../../datamgmt/load/g-loading-data-using-an-external-table.html)**
-
--   **[Registering Files into HAWQ Internal Tables](../../datamgmt/load/g-register_files.html)**
-
--   **[Loading and Writing Non-HDFS Custom Data](../../datamgmt/load/g-loading-and-writing-non-hdfs-custom-data.html)**
-
--   **[Creating External Tables - Examples](../../datamgmt/load/creating-external-tables-examples.html#topic44)**
-
--   **[Handling Load Errors](../../datamgmt/load/g-handling-load-errors.html)**
-
--   **[Loading Data with hawq load](../../datamgmt/load/g-loading-data-with-hawqload.html)**
-
--   **[Loading Data with COPY](../../datamgmt/load/g-loading-data-with-copy.html)**
-
--   **[Running COPY in Single Row Error Isolation Mode](../../datamgmt/load/g-running-copy-in-single-row-error-isolation-mode.html)**
-
--   **[Optimizing Data Load and Query Performance](../../datamgmt/load/g-optimizing-data-load-and-query-performance.html)**
-
--   **[Unloading Data from HAWQ](../../datamgmt/load/g-unloading-data-from-hawq-database.html)**
-
--   **[Transforming XML Data](../../datamgmt/load/g-transforming-xml-data.html)**
-
--   **[Formatting Data Files](../../datamgmt/load/g-formatting-data-files.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-loading-and-writing-non-hdfs-custom-data.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-loading-and-writing-non-hdfs-custom-data.html.md.erb b/datamgmt/load/g-loading-and-writing-non-hdfs-custom-data.html.md.erb
deleted file mode 100644
index e826963..0000000
--- a/datamgmt/load/g-loading-and-writing-non-hdfs-custom-data.html.md.erb
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: Loading and Writing Non-HDFS Custom Data
----
-
-HAWQ supports `TEXT` and `CSV` formats for importing and exporting data. You can load and write the data in other formats by defining and using a custom format or custom protocol.
-
--   **[Using a Custom Format](../../datamgmt/load/g-using-a-custom-format.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-loading-data-using-an-external-table.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-loading-data-using-an-external-table.html.md.erb b/datamgmt/load/g-loading-data-using-an-external-table.html.md.erb
deleted file mode 100644
index 32a741a..0000000
--- a/datamgmt/load/g-loading-data-using-an-external-table.html.md.erb
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title: Loading Data Using an External Table
----
-
-Use SQL commands such as `INSERT` and `SELECT` to query a readable external table, the same way that you query a regular database table. For example, to load travel expense data from an external table, `ext_expenses`, into a database table, `expenses_travel`:
-
-``` sql
-=# INSERT INTO expenses_travel 
-SELECT * FROM ext_expenses WHERE category='travel';
-```
-
-To load all data into a new database table:
-
-``` sql
-=# CREATE TABLE expenses AS SELECT * FROM ext_expenses;
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-loading-data-with-copy.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-loading-data-with-copy.html.md.erb b/datamgmt/load/g-loading-data-with-copy.html.md.erb
deleted file mode 100644
index 72e5ac6..0000000
--- a/datamgmt/load/g-loading-data-with-copy.html.md.erb
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Loading Data with COPY
----
-
-`COPY FROM` copies data from a file or standard input into a table and appends the data to the table contents. `COPY` is non-parallel: data is loaded in a single process using the HAWQ master instance. Using `COPY` is only recommended for very small data files.
-
-The `COPY` source file must be accessible to the master host. Specify the `COPY` source file name relative to the master host location.
-
-HAWQ copies data from `STDIN` or `STDOUT` using the connection between the client and the master server.
-
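-For example, a minimal sketch (the file path and table name are assumptions):
-
-``` sql
-COPY country FROM '/data/country_data' WITH DELIMITER '|';
-```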
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-loading-data-with-hawqload.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-loading-data-with-hawqload.html.md.erb b/datamgmt/load/g-loading-data-with-hawqload.html.md.erb
deleted file mode 100644
index 68e4459..0000000
--- a/datamgmt/load/g-loading-data-with-hawqload.html.md.erb
+++ /dev/null
@@ -1,56 +0,0 @@
----
-title: Loading Data with hawq load
----
-
-The HAWQ `hawq load` utility loads data using readable external tables and the HAWQ parallel file server (`gpfdist` or `gpfdists`). It handles parallel file-based external table setup and allows users to configure their data format, external table definition, and `gpfdist` or `gpfdists` setup in a single configuration file.
-
-## <a id="topic62__du168147"></a>To use hawq load
-
-1.  Ensure that your environment is set up to run `hawq load`. Some dependent files from your HAWQ installation are required, such as `gpfdist` and Python, as well as network access to the HAWQ segment hosts.
-2.  Create your load control file. This is a YAML-formatted file that specifies the HAWQ connection information, `gpfdist` configuration information, external table options, and data format.
-
-    For example:
-
-    ``` pre
-    ---
-    VERSION: 1.0.0.1
-    DATABASE: ops
-    USER: gpadmin
-    HOST: mdw-1
-    PORT: 5432
-    GPLOAD:
-       INPUT:
-        - SOURCE:
-             LOCAL_HOSTNAME:
-               - etl1-1
-               - etl1-2
-               - etl1-3
-               - etl1-4
-             PORT: 8081
-             FILE: 
-               - /var/load/data/*
-        - COLUMNS:
-               - name: text
-               - amount: float4
-               - category: text
-               - description: text
-               - date: date
-        - FORMAT: text
-        - DELIMITER: '|'
-        - ERROR_LIMIT: 25
-        - ERROR_TABLE: payables.err_expenses
-       OUTPUT:
-        - TABLE: payables.expenses
-        - MODE: INSERT
-    SQL:
-       - BEFORE: "INSERT INTO audit VALUES('start', current_timestamp)"
-       - AFTER: "INSERT INTO audit VALUES('end', current_timestamp)"
-    ```
-
-3.  Run `hawq load`, passing in the load control file. For example:
-
-    ``` shell
-    $ hawq load -f my_load.yml
-    ```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-moving-data-between-tables.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-moving-data-between-tables.html.md.erb b/datamgmt/load/g-moving-data-between-tables.html.md.erb
deleted file mode 100644
index 2603ae4..0000000
--- a/datamgmt/load/g-moving-data-between-tables.html.md.erb
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Moving Data between Tables
----
-
-You can use `CREATE TABLE AS` or `INSERT...SELECT` to load external and web external table data into another (non-external) database table, and the data will be loaded in parallel according to the external or web external table definition.
-
-If an external table file or web external table data source has an error, one of the following will happen, depending on the isolation mode used:
-
--   **Tables without error isolation mode**: any operation that reads from that table fails. Loading from external and web external tables without error isolation mode is an all or nothing operation.
--   **Tables with error isolation mode**: the entire file will be loaded, except for the problematic rows (subject to the configured REJECT\_LIMIT)
-
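-For example, the following sketch (table names are illustrative) creates a new table from an external table and appends a filtered subset to an existing table:
-
-``` sql
-CREATE TABLE expenses AS SELECT * FROM ext_expenses;
-INSERT INTO expenses_archive SELECT * FROM ext_expenses WHERE category = 'travel';
-```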
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-optimizing-data-load-and-query-performance.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-optimizing-data-load-and-query-performance.html.md.erb b/datamgmt/load/g-optimizing-data-load-and-query-performance.html.md.erb
deleted file mode 100644
index ff1c230..0000000
--- a/datamgmt/load/g-optimizing-data-load-and-query-performance.html.md.erb
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Optimizing Data Load and Query Performance
----
-
-Use the following tip to help optimize your data load and subsequent query performance.
-
--   Run `ANALYZE` after loading data. If you significantly altered the data in a table, run `ANALYZE` or `VACUUM                     ANALYZE` (system catalog tables only) to update table statistics for the query optimizer. Current statistics ensure that the optimizer makes the best decisions during query planning and avoids poor performance due to inaccurate or nonexistent statistics.
-
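-For example, a minimal sketch with a hypothetical table name:
-
-``` sql
-ANALYZE expenses;
-```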
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-register_files.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-register_files.html.md.erb b/datamgmt/load/g-register_files.html.md.erb
deleted file mode 100644
index 25c24ca..0000000
--- a/datamgmt/load/g-register_files.html.md.erb
+++ /dev/null
@@ -1,217 +0,0 @@
----
-title: Registering Files into HAWQ Internal Tables
----
-
-The `hawq register` utility loads and registers HDFS data files or folders into HAWQ internal tables. Files can be read directly, rather than having to be copied or loaded, resulting in higher performance and more efficient transaction processing.
-
-Data from the file or directory specified by \<hdfsfilepath\> is loaded into the appropriate HAWQ table directory in HDFS and the utility updates the corresponding HAWQ metadata for the files. Either AO or Parquet-formatted tables in HDFS can be loaded into a corresponding table in HAWQ.
-
-You can use `hawq register` either to:
-
--  Load and register external Parquet-formatted file data generated by an external system such as Hive or Spark.
--  Recover cluster data from a backup cluster for disaster recovery. 
-
-Requirements for running `hawq register` on the server are:
-
--   All hosts in your HAWQ cluster (master and segments) must have network access between them and the hosts containing the data to be loaded.
--   The Hadoop client must be configured and the hdfs filepath specified.
--   The files to be registered and the HAWQ table must be located in the same HDFS cluster.
--   The target table DDL is configured with the correct data type mapping.
-
-##<a id="topic1__section2"></a>Registering Externally Generated HDFS File Data to an Existing Table
-
-Files or folders in HDFS can be registered into an existing table, allowing them to be managed as a HAWQ internal table. When registering files, you can optionally specify the maximum amount of data to be loaded, in bytes, using the `--eof` option. If registering a folder, the actual file sizes are used. 
-
-Only HAWQ or Hive-generated Parquet tables are supported. Partitioned tables are not supported. Attempting to register these tables will result in an error. 
-
-Metadata for the Parquet file(s) and the destination table must be consistent. Different data types are used by HAWQ tables and Parquet files, so data must be mapped. You must verify that the structure of the Parquet files and the HAWQ table are compatible before running `hawq register`. Not all HIVE data types can be mapped to HAWQ equivalents. The currently-supported HIVE data types are: boolean, int, smallint, tinyint, bigint, float, double, string, binary, char, and varchar.
-
-As a best practice, create a copy of the Parquet file to be registered before running `hawq register`. You can then run `hawq register` on the copy, leaving the original file available for additional Hive queries or for use if a data mapping error is encountered.
-
-###Limitations for Registering Hive Tables to HAWQ 
-
-The following HIVE data types cannot be converted to HAWQ equivalents: timestamp, decimal, array, struct, map, and union.   
-
-###Example: Registering a Hive-Generated Parquet File
-
-This example shows how to register a HIVE-generated parquet file in HDFS into the table `parquet_table` in HAWQ, which is in the database named `postgres`. The file path of the HIVE-generated file is `hdfs://localhost:8020/temp/hive.paq`.
-
-In this example, the location of the database is `hdfs://localhost:8020/hawq_default`, the tablespace id is 16385, the database id is 16387, the table filenode id is 77160, and the last file under the filenode is numbered 7.
-
-Run the `hawq register` command for the file location  `hdfs://localhost:8020/temp/hive.paq`:
-
-``` pre
-$ hawq register -d postgres -f hdfs://localhost:8020/temp/hive.paq parquet_table
-```
-
-After running the `hawq register` command, the corresponding new location of the file in HDFS is:  `hdfs://localhost:8020/hawq_default/16385/16387/77160/8`. 
-
-The command updates the metadata of the table `parquet_table` in HAWQ, which is contained in the table `pg_aoseg.pg_paqseg_77160`. The pg\_aoseg table is a fixed schema for row-oriented and Parquet AO tables. For row-oriented tables, the table name prefix is pg\_aoseg. For Parquet tables, the table name prefix is pg\_paqseg. 77160 is the relation id of the table.
-
-You can locate the table by one of two methods, either  by relation ID or by table name. 
-
-To find the relation ID, run the following command on the catalog table pg\_class: 
-
-```
-SELECT oid FROM pg_class WHERE relname=$relname
-```
-To find the table name, run the command: 
-
-```
-SELECT segrelid FROM pg_appendonly WHERE relid = $relid
-```
-then run: 
-
-```
-SELECT relname FROM pg_class WHERE oid = segrelid
-```
-
-## <a id="topic1__section3"></a>Registering Data Using Information from a YAML Configuration File
- 
-The `hawq register` command can register HDFS files by using metadata loaded from a YAML configuration file, specified with the `--config <yaml_config>` option. Both AO and Parquet tables can be registered. Tables need not exist in HAWQ before being registered. In disaster recovery, information in a YAML-format file created by the `hawq extract` command can re-create HAWQ tables by using metadata from a backup checkpoint.
-
-You can also use a YAML configuration file to append HDFS files to an existing HAWQ table or create a table and register it into HAWQ.
-
-For disaster recovery, tables can be re-registered using the HDFS files and a YAML file. The clusters are assumed to have data periodically imported from Cluster A to Cluster B. 
-
-Data is registered according to the following conditions: 
-
--  Existing tables have files appended to the existing HAWQ table.
--  If a table does not exist, it is created and registered into HAWQ. The catalog table will be updated with the file size specified by the YAML file.
--  If the `--force` option is used, the data in existing catalog tables is erased and re-registered. All HDFS-related catalog contents in `pg_aoseg.pg_paqseg_$relid` are cleared. The original files on HDFS are retained.
-
-Tables using random distribution are preferred for registering into HAWQ.
-
-There are additional restrictions when registering hash tables. When registering hash-distributed tables using a YAML file, the distribution policy in the YAML file must match that of the table being registered into and the order of the files in the YAML file should reflect the hash distribution. The size of the registered file should be identical to or a multiple of the hash table bucket number. 
-
-Only single-level partitioned tables can be registered into HAWQ.
-
-
-###Example: Registration using a YAML Configuration File
-
-This example shows how to use `hawq register` to register HDFS data using a YAML configuration file generated by hawq extract. 
-
-First, create a table in SQL and insert some data into it.  
-
-```
-=> CREATE TABLE paq1 (a int, b varchar(10)) with (appendonly=true, orientation=parquet);
-=> INSERT INTO paq1 VALUES(generate_series(1,1000), 'abcde');
-```
-
-Extract the table metadata by using the `hawq extract` utility.
-
-```
-hawq extract -o paq1.yml paq1
-```
-
-Register the data into a new table, `paq2`, using the `--config` option to identify the YAML file.
-
-```
-hawq register --config paq1.yml paq2
-```
-Select from the new table to verify that the content has been registered.
-
-```
-=> SELECT count(*) FROM paq2;
-```
-
-
-## <a id="topic1__section4"></a>Data Type Mapping
-
-HIVE and Parquet tables use different data types than HAWQ tables and must be mapped for metadata compatibility. You are responsible for making sure your implementation is mapped to the appropriate data type before running `hawq register`. The tables below show equivalent data types, if available.
-
-<span class="tablecap">Table 1. HAWQ to Parquet Mapping</span>
-
-|HAWQ Data Type   | Parquet Data Type  |
-| :------------| :---------------|
-| bool        | boolean       |
-| int2/int4/date        | int32       |
-| int8/money       | int64      |
-| time/timestamptz/timestamp       | int64      |
-| float4        | float       |
-|float8        | double       |
-|bit/varbit/bytea/numeric       | Byte array       |
-|char/bpchar/varchar/name| Byte array |
-| text/xml/interval/timetz  | Byte array  |
-| macaddr/inet/cidr  | Byte array  |
-
-**Additional HAWQ-to-Parquet Mapping**
-
-**point**:  
-
-``` 
-group {
-    required int x;
-    required int y;
-}
-```
-
-**circle:** 
-
-```
-group {
-    required int x;
-    required int y;
-    required int r;
-}
-```
-
-**box:**  
-
-```
-group {
-    required int x1;
-    required int y1;
-    required int x2;
-    required int y2;
-}
-```
-
-**iseg:** 
-
-
-```
-group {
-    required int x1;
-    required int y1;
-    required int x2;
-    required int y2;
-}
-``` 
-
-**path**:
-  
-```
-group {
-    repeated group {
-        required int x;
-        required int y;
-    }
-}
-```
-
-
-<span class="tablecap">Table 2. HIVE to HAWQ Mapping</span>
-
-|HIVE Data Type   | HAWQ Data Type  |
-| :------------| :---------------|
-| boolean        | bool       |
-| tinyint        | int2       |
-| smallint       | int2/smallint      |
-| int            | int4 / int |
-| bigint         | int8 / bigint      |
-| float        | float4       |
-| double	| float8 |
-| string        | varchar       |
-| binary      | bytea       |
-| char | char |
-| varchar  | varchar  |
-
-
-### Extracting Metadata
-
-For more information on extracting metadata to a YAML file and the output content of the YAML file, refer to the reference page for [hawq extract](../../reference/cli/admin_utilities/hawqextract.html#topic1).
-
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-representing-null-values.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-representing-null-values.html.md.erb b/datamgmt/load/g-representing-null-values.html.md.erb
deleted file mode 100644
index 4d4ffdd..0000000
--- a/datamgmt/load/g-representing-null-values.html.md.erb
+++ /dev/null
@@ -1,7 +0,0 @@
----
-title: Representing NULL Values
----
-
-`NULL` represents an unknown piece of data in a column or field. Within your data files you can designate a string to represent null values. The default string is `\N` (backslash-N) in `TEXT` mode, or an empty value with no quotations in `CSV` mode. You can also declare a different string using the `NULL` clause of `COPY`, `CREATE EXTERNAL TABLE`, or the `hawq load` control file when defining your data format. For example, you can use an empty string if you do not want to distinguish nulls from empty strings. When using the HAWQ loading tools, any data item that matches the designated null string is considered a null value.
-
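-For example, the following sketches (the table, host, and file names are assumptions) treat an empty string as `NULL` in `TEXT`-formatted data:
-
-``` sql
--- External table: empty fields are read as NULL
-CREATE EXTERNAL TABLE ext_expenses (name text, date date, amount float4,
-  category text, description text)
-LOCATION ('gpfdist://etlhost-1:8081/*.txt')
-FORMAT 'TEXT' (DELIMITER '|' NULL '');
-
--- COPY: empty fields in the input file are loaded as NULL
-COPY expenses FROM '/data/expenses.txt' WITH DELIMITER '|' NULL '';
-```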
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-running-copy-in-single-row-error-isolation-mode.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-running-copy-in-single-row-error-isolation-mode.html.md.erb b/datamgmt/load/g-running-copy-in-single-row-error-isolation-mode.html.md.erb
deleted file mode 100644
index ba0603c..0000000
--- a/datamgmt/load/g-running-copy-in-single-row-error-isolation-mode.html.md.erb
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Running COPY in Single Row Error Isolation Mode
----
-
-By default, `COPY` stops an operation at the first error: if the data contains an error, the operation fails and no data loads. If you run `COPY FROM` in *single row error isolation mode*, HAWQ skips rows that contain format errors and loads properly formatted rows. Single row error isolation mode applies only to rows in the input file that contain format errors. If the data contains a constraint error such as violation of a `NOT NULL` or `CHECK` constraint, the operation fails and no data loads.
-
-Specifying `SEGMENT REJECT LIMIT` runs the `COPY` operation in single row error isolation mode. Specify the acceptable number of error rows on each segment, after which the entire `COPY FROM` operation fails and no rows load. The error row count is for each HAWQ segment, not for the entire load operation.
-
-If the `COPY` operation does not reach the error limit, HAWQ loads all correctly-formatted rows and discards the error rows. The `LOG ERRORS INTO` clause allows you to keep error rows for further examination. Use `LOG ERRORS` to capture data formatting errors internally in HAWQ. For example:
-
-``` sql
-=> COPY country FROM '/data/gpdb/country_data'
-   WITH DELIMITER '|' LOG ERRORS INTO errtable
-   SEGMENT REJECT LIMIT 10 ROWS;
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-starting-and-stopping-gpfdist.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-starting-and-stopping-gpfdist.html.md.erb b/datamgmt/load/g-starting-and-stopping-gpfdist.html.md.erb
deleted file mode 100644
index 7e2cca9..0000000
--- a/datamgmt/load/g-starting-and-stopping-gpfdist.html.md.erb
+++ /dev/null
@@ -1,42 +0,0 @@
----
-title: Starting and Stopping gpfdist
----
-
-You can start `gpfdist` in your current directory location or in any directory that you specify. The default port is `8080`.
-
-From your current directory, type:
-
-``` shell
-$ gpfdist &
-```
-
-From a different directory, specify the directory from which to serve files, and optionally, the HTTP port to run on.
-
-To start `gpfdist` in the background and log output messages and errors to a log file:
-
-``` shell
-$ gpfdist -d /var/load_files -p 8081 -l /home/gpadmin/log &
-```
-
-For multiple `gpfdist` instances on the same ETL host (see [External Tables Using Multiple gpfdist Instances with Multiple NICs](g-about-gpfdist-setup-and-performance.html#topic14__du165882)), use a different base directory and port for each instance. For example:
-
-``` shell
-$ gpfdist -d /var/load_files1 -p 8081 -l /home/gpadmin/log1 &
-$ gpfdist -d /var/load_files2 -p 8082 -l /home/gpadmin/log2 &
-```
-
-To stop `gpfdist` when it is running in the background:
-
-First find its process id:
-
-``` shell
-$ ps -ef | grep gpfdist
-```
-
-Then kill the process, where 3456 is the process ID in this example:
-
-``` shell
-$ kill 3456
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-transfer-and-store-the-data.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-transfer-and-store-the-data.html.md.erb b/datamgmt/load/g-transfer-and-store-the-data.html.md.erb
deleted file mode 100644
index 8a6d7ab..0000000
--- a/datamgmt/load/g-transfer-and-store-the-data.html.md.erb
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: Transfer and Store the Data
----
-
-Use one of the following approaches to transform the data with `gpfdist`.
-
--   `GPLOAD` supports only input transformations, but is easier to implement in many cases.
--   `INSERT INTO SELECT FROM` supports both input and output transformations, but exposes more details.
-
--   **[Transforming with GPLOAD](../../datamgmt/load/g-transforming-with-gpload.html)**
-
--   **[Transforming with INSERT INTO SELECT FROM](../../datamgmt/load/g-transforming-with-insert-into-select-from.html)**
-
--   **[Configuration File Format](../../datamgmt/load/g-configuration-file-format.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-transforming-with-gpload.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-transforming-with-gpload.html.md.erb b/datamgmt/load/g-transforming-with-gpload.html.md.erb
deleted file mode 100644
index 438fedb..0000000
--- a/datamgmt/load/g-transforming-with-gpload.html.md.erb
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Transforming with GPLOAD
----
-
-To transform data using the `GPLOAD` control file, you must specify both the file name for the `TRANSFORM_CONFIG` file and the name of the `TRANSFORM` operation in the `INPUT` section of the `GPLOAD` control file.
-
--   `TRANSFORM_CONFIG` specifies the name of the `gpfdist` configuration file.
--   The `TRANSFORM` setting indicates the name of the transformation that is described in the file named in `TRANSFORM_CONFIG`.
-
-``` pre
----
-VERSION: 1.0.0.1
-DATABASE: ops
-USER: gpadmin
-GPLOAD:
-INPUT:
-- TRANSFORM_CONFIG: config.yaml
-- TRANSFORM: prices_input
-- SOURCE:
-FILE: prices.xml
-```
-
-The transformation operation name must appear in two places: in the `TRANSFORM` setting of the `gpfdist` configuration file and in the `TRANSFORMATIONS` section of the file named in the `TRANSFORM_CONFIG` section.
-
-In the `GPLOAD` control file, the optional parameter `MAX_LINE_LENGTH` specifies the maximum length of a line in the XML transformation data that is passed to `hawq load`.
-
-The following diagram shows the relationships between the `GPLOAD` control file, the `gpfdist` configuration file, and the XML data file.
-
-<img src="../../images/03-gpload-files.jpg" class="image" width="415" height="258" />
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-transforming-with-insert-into-select-from.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-transforming-with-insert-into-select-from.html.md.erb b/datamgmt/load/g-transforming-with-insert-into-select-from.html.md.erb
deleted file mode 100644
index d91cc93..0000000
--- a/datamgmt/load/g-transforming-with-insert-into-select-from.html.md.erb
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title: Transforming with INSERT INTO SELECT FROM
----
-
-Specify the transformation in the `CREATE EXTERNAL TABLE` definition's `LOCATION` clause. For example, the transform `prices_input` is specified at the end of the `LOCATION` URI in the following command. (Run `gpfdist` first, using the command `gpfdist -c config.yaml`.)
-
-``` sql
-CREATE READABLE EXTERNAL TABLE prices_readable (LIKE prices)
-   LOCATION ('gpfdist://hostname:8081/prices.xml#transform=prices_input')
-   FORMAT 'TEXT' (DELIMITER '|')
-   LOG ERRORS INTO error_log SEGMENT REJECT LIMIT 10;
-```
-
-In the command above, change *hostname* to your hostname. `prices_input` comes from the configuration file.
-
-The following query loads data into the `prices` table.
-
-``` sql
-INSERT INTO prices SELECT * FROM prices_readable;
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-transforming-xml-data.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-transforming-xml-data.html.md.erb b/datamgmt/load/g-transforming-xml-data.html.md.erb
deleted file mode 100644
index f9520bb..0000000
--- a/datamgmt/load/g-transforming-xml-data.html.md.erb
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: Transforming XML Data
----
-
-The HAWQ data loader *gpfdist* provides transformation features to load XML data into a table and to write data from HAWQ to XML files. The following diagram shows *gpfdist* performing an XML transform.
-
-<a id="topic75__du185408"></a>
-<span class="figtitleprefix">Figure: </span>External Tables using XML Transformations
-
-<img src="../../images/ext-tables-xml.png" class="image" />
-
-To load or extract XML data:
-
--   [Determine the Transformation Schema](g-determine-the-transformation-schema.html#topic76)
--   [Write a Transform](g-write-a-transform.html#topic77)
--   [Write the gpfdist Configuration](g-write-the-gpfdist-configuration.html#topic78)
--   [Load the Data](g-load-the-data.html#topic79)
--   [Transfer and Store the Data](g-transfer-and-store-the-data.html#topic80)
-
-The first three steps comprise most of the development effort. The last two steps are straightforward and repeatable, suitable for production.
-
--   **[Determine the Transformation Schema](../../datamgmt/load/g-determine-the-transformation-schema.html)**
-
--   **[Write a Transform](../../datamgmt/load/g-write-a-transform.html)**
-
--   **[Write the gpfdist Configuration](../../datamgmt/load/g-write-the-gpfdist-configuration.html)**
-
--   **[Load the Data](../../datamgmt/load/g-load-the-data.html)**
-
--   **[Transfer and Store the Data](../../datamgmt/load/g-transfer-and-store-the-data.html)**
-
--   **[XML Transformation Examples](../../datamgmt/load/g-xml-transformation-examples.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-troubleshooting-gpfdist.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-troubleshooting-gpfdist.html.md.erb b/datamgmt/load/g-troubleshooting-gpfdist.html.md.erb
deleted file mode 100644
index 2e6a450..0000000
--- a/datamgmt/load/g-troubleshooting-gpfdist.html.md.erb
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: Troubleshooting gpfdist
----
-
-The segments access `gpfdist` at runtime. Ensure that the HAWQ segment hosts have network access to `gpfdist`. `gpfdist` is a web server: test connectivity by running the following command from each host in the HAWQ array (segments and master):
-
-``` shell
-$ wget http://gpfdist_hostname:port/filename      
-```
-
-The `CREATE EXTERNAL TABLE` definition must have the correct host name, port, and file names for `gpfdist`. Specify file names and paths relative to the directory from which `gpfdist` serves files (the directory path specified when `gpfdist` started). See [Creating External Tables - Examples](creating-external-tables-examples.html#topic44).
-
-If you start `gpfdist` on your system and IPv6 networking is disabled, `gpfdist` displays this warning message when testing for an IPv6 port.
-
-``` pre
-[WRN gpfdist.c:2050] Creating the socket failed
-```
-
-If the corresponding IPv4 port is available, `gpfdist` uses that port and the warning for IPv6 port can be ignored. To see information about the ports that `gpfdist` tests, use the `-V` option.
-
-For information about IPv6 and IPv4 networking, see your operating system documentation.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-unloading-data-from-hawq-database.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-unloading-data-from-hawq-database.html.md.erb b/datamgmt/load/g-unloading-data-from-hawq-database.html.md.erb
deleted file mode 100644
index e0690ad..0000000
--- a/datamgmt/load/g-unloading-data-from-hawq-database.html.md.erb
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Unloading Data from HAWQ
----
-
-A writable external table allows you to select rows from other database tables and output the rows to files, named pipes, or applications, or to use them as output targets for parallel MapReduce calculations. You can define file-based and web-based writable external tables.
-
-This topic describes how to unload data from HAWQ using parallel unload (writable external tables) and non-parallel unload (`COPY`).
-
--   **[Defining a File-Based Writable External Table](../../datamgmt/load/g-defining-a-file-based-writable-external-table.html)**
-
--   **[Defining a Command-Based Writable External Web Table](../../datamgmt/load/g-defining-a-command-based-writable-external-web-table.html)**
-
--   **[Unloading Data Using a Writable External Table](../../datamgmt/load/g-unloading-data-using-a-writable-external-table.html)**
-
--   **[Unloading Data Using COPY](../../datamgmt/load/g-unloading-data-using-copy.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-unloading-data-using-a-writable-external-table.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-unloading-data-using-a-writable-external-table.html.md.erb b/datamgmt/load/g-unloading-data-using-a-writable-external-table.html.md.erb
deleted file mode 100644
index 377f2d6..0000000
--- a/datamgmt/load/g-unloading-data-using-a-writable-external-table.html.md.erb
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Unloading Data Using a Writable External Table
----
-
-Writable external tables allow only `INSERT` operations. You must grant `INSERT` permission on a table to enable access to users who are not the table owner or a superuser. For example:
-
-``` sql
-GRANT INSERT ON writable_ext_table TO admin;
-```
-
-To unload data using a writable external table, select the data from the source table(s) and insert it into the writable external table. The resulting rows are output to the writable external table. For example:
-
-``` sql
-INSERT INTO writable_ext_table SELECT * FROM regular_table;
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-unloading-data-using-copy.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-unloading-data-using-copy.html.md.erb b/datamgmt/load/g-unloading-data-using-copy.html.md.erb
deleted file mode 100644
index 816a2b5..0000000
--- a/datamgmt/load/g-unloading-data-using-copy.html.md.erb
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Unloading Data Using COPY
----
-
-`COPY TO` copies data from a table to a file (or standard output) on the HAWQ master host using a single process on the HAWQ master instance. Use `COPY` to output a table's entire contents, or filter the output using a `SELECT` statement. For example:
-
-``` sql
-COPY (SELECT * FROM country WHERE country_name LIKE 'A%') 
-TO '/home/gpadmin/a_list_countries.out';
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-url-based-web-external-tables.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-url-based-web-external-tables.html.md.erb b/datamgmt/load/g-url-based-web-external-tables.html.md.erb
deleted file mode 100644
index a115972..0000000
--- a/datamgmt/load/g-url-based-web-external-tables.html.md.erb
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: URL-based Web External Tables
----
-
-A URL-based web table accesses data from a web server using the HTTP protocol. Web table data is dynamic; the data is not rescannable.
-
-Specify the `LOCATION` of files on a web server using `http://`. The web data file(s) must reside on a web server that HAWQ segment hosts can access. The number of URLs specified corresponds to the minimum number of virtual segments that work in parallel to access the web table.
-
-The following sample command defines a web table that gets data from several URLs.
-
-``` sql
-=# CREATE EXTERNAL WEB TABLE ext_expenses (
-    name text, date date, amount float4, category text, description text) 
-LOCATION ('http://intranet.company.com/expenses/sales/file.csv',
-          'http://intranet.company.com/expenses/exec/file.csv',
-          'http://intranet.company.com/expenses/finance/file.csv',
-          'http://intranet.company.com/expenses/ops/file.csv',
-          'http://intranet.company.com/expenses/marketing/file.csv',
-          'http://intranet.company.com/expenses/eng/file.csv' 
-      )
-FORMAT 'CSV' ( HEADER );
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-using-a-custom-format.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-using-a-custom-format.html.md.erb b/datamgmt/load/g-using-a-custom-format.html.md.erb
deleted file mode 100644
index e83744a..0000000
--- a/datamgmt/load/g-using-a-custom-format.html.md.erb
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: Using a Custom Format
----
-
-You specify a custom data format in the `FORMAT` clause of `CREATE EXTERNAL TABLE`.
-
-```
-FORMAT 'CUSTOM' (formatter=format_function, key1=val1,...keyn=valn)
-```
-
-Where the `'CUSTOM'` keyword indicates that the data has a custom format and `formatter` specifies the function to use to format the data, followed by comma-separated parameters to the formatter function.
-
-HAWQ provides functions for formatting fixed-width data, but you must author the formatter functions for variable-width data. The steps are as follows.
-
-1.  Author and compile input and output functions as a shared library.
-2.  Specify the shared library function with `CREATE FUNCTION` in HAWQ.
-3.  Use the `formatter` parameter of `CREATE EXTERNAL TABLE`'s `FORMAT` clause to call the function, as shown in the sketch below this list.
-
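-A minimal sketch of steps 2 and 3 is shown below, assuming a shared library `myformatter.so` that exports an input formatter function named `my_fmt_in`; the library, function, and table names are hypothetical:
-
-``` sql
--- Step 2: declare the formatter function from the hypothetical shared library
-CREATE FUNCTION my_fmt_in() RETURNS record
-AS '$libdir/myformatter.so', 'my_fmt_in'
-LANGUAGE C STABLE;
-
--- Step 3: reference the formatter in the FORMAT clause of the external table
-CREATE READABLE EXTERNAL TABLE custom_sales (id int, amount float8)
-LOCATION ('gpfdist://etlhost-1:8081/sales.dat')
-FORMAT 'CUSTOM' (formatter=my_fmt_in);
-```
-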
--   **[Importing and Exporting Fixed Width Data](../../datamgmt/load/g-importing-and-exporting-fixed-width-data.html)**
-
--   **[Examples - Read Fixed-Width Data](../../datamgmt/load/g-examples-read-fixed-width-data.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-using-the-hawq-file-server--gpfdist-.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-using-the-hawq-file-server--gpfdist-.html.md.erb b/datamgmt/load/g-using-the-hawq-file-server--gpfdist-.html.md.erb
deleted file mode 100644
index 0c68b2c..0000000
--- a/datamgmt/load/g-using-the-hawq-file-server--gpfdist-.html.md.erb
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: Using the HAWQ File Server (gpfdist)
----
-
-The `gpfdist` protocol provides the best performance and is the easiest to set up. `gpfdist` ensures optimum use of all segments in your HAWQ system for external table reads.
-
-This topic describes the setup and management tasks for using `gpfdist` with external tables.
-
--   **[About gpfdist Setup and Performance](../../datamgmt/load/g-about-gpfdist-setup-and-performance.html)**
-
--   **[Controlling Segment Parallelism](../../datamgmt/load/g-controlling-segment-parallelism.html)**
-
--   **[Installing gpfdist](../../datamgmt/load/g-installing-gpfdist.html)**
-
--   **[Starting and Stopping gpfdist](../../datamgmt/load/g-starting-and-stopping-gpfdist.html)**
-
--   **[Troubleshooting gpfdist](../../datamgmt/load/g-troubleshooting-gpfdist.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-working-with-file-based-ext-tables.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-working-with-file-based-ext-tables.html.md.erb b/datamgmt/load/g-working-with-file-based-ext-tables.html.md.erb
deleted file mode 100644
index e024a7d..0000000
--- a/datamgmt/load/g-working-with-file-based-ext-tables.html.md.erb
+++ /dev/null
@@ -1,21 +0,0 @@
----
-title: Working with File-Based External Tables
----
-
-External tables provide access to data stored in data sources outside of HAWQ as if the data were stored in regular database tables. Data can be read from or written to external tables.
-
-An external table is a HAWQ database table backed with data that resides outside of the database. An external table is either readable or writable. It can be used like a regular database table in SQL commands such as `SELECT` and `INSERT` and joined with other tables. External tables are most often used to load and unload database data.
-
-Web-based external tables provide access to data served by an HTTP server or an operating system process. See [Creating and Using Web External Tables](g-creating-and-using-web-external-tables.html#topic31) for more about web-based tables.
-
--   **[Accessing File-Based External Tables](../../datamgmt/load/g-external-tables.html)**
-
-    External tables enable accessing external files as if they are regular database tables. They are often used to move data into and out of a HAWQ database.
-
--   **[gpfdist Protocol](../../datamgmt/load/g-gpfdist-protocol.html)**
-
--   **[gpfdists Protocol](../../datamgmt/load/g-gpfdists-protocol.html)**
-
--   **[Handling Errors in External Table Data](../../datamgmt/load/g-handling-errors-ext-table-data.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-write-a-transform.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-write-a-transform.html.md.erb b/datamgmt/load/g-write-a-transform.html.md.erb
deleted file mode 100644
index 6b35ab2..0000000
--- a/datamgmt/load/g-write-a-transform.html.md.erb
+++ /dev/null
@@ -1,48 +0,0 @@
----
-title: Write a Transform
----
-
-The transform specifies what to extract from the data. You can use any authoring environment and language appropriate for your project. For XML transformations, choose a technology such as XSLT, Joost (STX), Java, Python, or Perl, based on the goals and scope of the project.
-
-In the price example, the next step is to transform the XML data into a simple two-column delimited format.
-
-``` pre
-708421|19.99
-708466|59.25
-711121|24.99
-```
-
-The following STX transform, called *input\_transform.stx*, completes the data transformation.
-
-``` xml
-<?xml version="1.0"?>
-<stx:transform version="1.0"
-   xmlns:stx="http://stx.sourceforge.net/2002/ns"
-   pass-through="none">
-  <!-- declare variables -->
-  <stx:variable name="itemnumber"/>
-  <stx:variable name="price"/>
-  <!-- match and output prices as columns delimited by | -->
-  <stx:template match="/prices/pricerecord">
-    <stx:process-children/>
-    <stx:value-of select="$itemnumber"/>    
-<stx:text>|</stx:text>
-    <stx:value-of select="$price"/>      <stx:text>
-</stx:text>
-  </stx:template>
-  <stx:template match="itemnumber">
-    <stx:assign name="itemnumber" select="."/>
-  </stx:template>
-  <stx:template match="price">
-    <stx:assign name="price" select="."/>
-  </stx:template>
-</stx:transform>
-```
-
-This STX transform declares two temporary variables, `itemnumber` and `price`, and the following rules.
-
-1.  When an element that satisfies the XPath expression `/prices/pricerecord` is found, examine the child elements and generate output that contains the value of the `itemnumber` variable, a `|` character, the value of the price variable, and a newline.
-2.  When an `<itemnumber>` element is found, store the content of that element in the variable `itemnumber`.
-3.  When a <price> element is found, store the content of that element in the variable `price`.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-write-the-gpfdist-configuration.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-write-the-gpfdist-configuration.html.md.erb b/datamgmt/load/g-write-the-gpfdist-configuration.html.md.erb
deleted file mode 100644
index 89733cd..0000000
--- a/datamgmt/load/g-write-the-gpfdist-configuration.html.md.erb
+++ /dev/null
@@ -1,61 +0,0 @@
----
-title: Write the gpfdist Configuration
----
-
-The `gpfdist` configuration is specified as a YAML 1.1 document. It specifies rules that `gpfdist` uses to select a Transform to apply when loading or extracting data.
-
-This example `gpfdist` configuration contains the following items:
-
--   the `config.yaml` file defining `TRANSFORMATIONS`
--   the `input_transform.sh` wrapper script, referenced in the `config.yaml` file
--   the `input_transform.stx` joost transformation, called from `input_transform.sh`
-
-Aside from the ordinary YAML rules, such as starting the document with three dashes (`---`), a `gpfdist` configuration must conform to the following restrictions:
-
-1.  a `VERSION` setting must be present with the value `1.0.0.1`.
-2.  a `TRANSFORMATIONS` setting must be present and contain one or more mappings.
-3.  Each mapping in the `TRANSFORMATION` must contain:
-    -   a `TYPE` with the value 'input' or 'output'
-    -   a `COMMAND` indicating how the transform is run.
-
-4.  Each mapping in the `TRANSFORMATION` can contain optional `CONTENT`, `SAFE`, and `STDERR` settings.
-
-The following `gpfdist` configuration, called `config.yaml`, applies to the prices example. The initial indentation on each line is significant and reflects the hierarchical nature of the specification. The name `prices_input` in the following example is referenced later when creating the table in SQL.
-
-``` pre
----
-VERSION: 1.0.0.1
-TRANSFORMATIONS:
-  prices_input:
-    TYPE:     input
-    COMMAND:  /bin/bash input_transform.sh %filename%
-```
-
-The `COMMAND` setting uses a wrapper script called `input_transform.sh` with a `%filename%` placeholder. When `gpfdist` runs the `prices_input` transform, it invokes `input_transform.sh` with `/bin/bash` and replaces the `%filename%` placeholder with the path to the input file to transform. The wrapper script called `input_transform.sh` contains the logic to invoke the STX transformation and return the output.
-
-If Joost is used, the Joost STX engine must be installed.
-
-``` bash
-#!/bin/bash
-# input_transform.sh - sample input transformation, 
-# demonstrating use of Java and Joost STX to convert XML into
-# text to load into HAWQ.
-# java arguments:
-#   -jar joost.jar           the Joost STX engine
-#   -nodecl                  don't generate a <?xml?> declaration
-#   $1                        filename to process
-#   input_transform.stx    the STX transformation
-#
-# the AWK step eliminates a blank line joost emits at the end
-java \
-    -jar joost.jar \
-    -nodecl \
-    $1 \
-    input_transform.stx \
- | awk 'NF>0'
-```
-
-The `input_transform.sh` file uses the Joost STX engine with the AWK interpreter. The following diagram shows the process flow as `gpfdist` runs the transformation.
-
-<img src="../../images/02-pipeline.png" class="image" width="462" height="190" />
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-xml-transformation-examples.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-xml-transformation-examples.html.md.erb b/datamgmt/load/g-xml-transformation-examples.html.md.erb
deleted file mode 100644
index 12ad1d6..0000000
--- a/datamgmt/load/g-xml-transformation-examples.html.md.erb
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: XML Transformation Examples
----
-
-The following examples demonstrate the complete process for different types of XML data and STX transformations. Files and detailed instructions associated with these examples can be downloaded from the Apache site `gpfdist_transform` tools demo page. Read the README file before you run the examples.
-
--   **[Command-based Web External Tables](../../datamgmt/load/g-example-1-dblp-database-publications-in-demo-directory.html)**
-
--   **[Example using IRS MeF XML Files (In demo Directory)](../../datamgmt/load/g-example-irs-mef-xml-files-in-demo-directory.html)**
-
--   **[Example using WITSML™ Files (In demo Directory)](../../datamgmt/load/g-example-witsml-files-in-demo-directory.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/ddl/ddl-database.html.md.erb
----------------------------------------------------------------------
diff --git a/ddl/ddl-database.html.md.erb b/ddl/ddl-database.html.md.erb
deleted file mode 100644
index 2ef9f9f..0000000
--- a/ddl/ddl-database.html.md.erb
+++ /dev/null
@@ -1,78 +0,0 @@
----
-title: Creating and Managing Databases
----
-
-A HAWQ system is a single instance of HAWQ. There can be several separate HAWQ systems installed, but usually just one is selected by environment variable settings. See your HAWQ administrator for details.
-
-There can be multiple databases in a HAWQ system. This is different from some database management systems \(such as Oracle\) where the database instance *is* the database. Although you can create many databases in a HAWQ system, client programs can connect to and access only one database at a time; you cannot cross-query between databases.
-
-## <a id="topic3"></a>About Template Databases 
-
-Each new database you create is based on a *template*. HAWQ provides a default database, *template1*. Use *template1* to connect to HAWQ for the first time. HAWQ uses *template1* to create databases unless you specify another template. Do not create any objects in *template1* unless you want those objects to be in every database you create.
-
-HAWQ uses two other database templates, *template0* and *postgres*, internally. Do not drop or modify *template0* or *postgres*. You can use *template0* to create a completely clean database containing only the standard objects predefined by HAWQ at initialization, especially if you modified *template1*.
-
-## <a id="topic4"></a>Creating a Database 
-
-The `CREATE DATABASE` command creates a new database. For example:
-
-``` sql
-=> CREATE DATABASE new_dbname;
-```
-
-To create a database, you must have privileges to create a database or be a HAWQ superuser. If you do not have the correct privileges, you cannot create a database. The HAWQ administrator must either give you the necessary privileges or create the database for you.
-
-You can also use the client program `createdb` to create a database. For example, running the following command in a command line terminal connects to HAWQ using the provided host name and port and creates a database named *mydatabase*:
-
-``` shell
-$ createdb -h masterhost -p 5432 mydatabase
-```
-
-The host name and port must match the host name and port of the installed HAWQ system.
-
-Some objects, such as roles, are shared by all the databases in a HAWQ system. Other objects, such as tables that you create, are known only in the database in which you create them.
-
-### <a id="topic5"></a>Cloning a Database 
-
-By default, a new database is created by cloning the standard system database template, *template1*. Any database can be used as a template when creating a new database, thereby providing the capability to 'clone' or copy an existing database and all objects and data within that database. For example:
-
-``` sql
-=> CREATE DATABASE new_dbname TEMPLATE old_dbname;
-```
-
-## <a id="topic6"></a>Viewing the List of Databases 
-
-If you are working in the `psql` client program, you can use the `\l` meta-command to show the list of databases and templates in your HAWQ system. If using another client program and you are a superuser, you can query the list of databases from the `pg_database` system catalog table. For example:
-
-``` sql
-=> SELECT datname FROM pg_database;
-```
-
-## <a id="topic7"></a>Altering a Database 
-
-The `ALTER DATABASE` command changes database attributes such as owner, name, or default configuration attributes. For example, the following command alters a database by setting its default schema search path \(the `search_path` configuration parameter\):
-
-``` sql
-=> ALTER DATABASE mydatabase SET search_path TO myschema, public, pg_catalog;
-```
-
-To alter a database, you must be the owner of the database or a superuser.
-
-## <a id="topic8"></a>Dropping a Database 
-
-The `DROP DATABASE` command drops \(or deletes\) a database. It removes the system catalog entries for the database and deletes the database directory on disk that contains the data. You must be the database owner or a superuser to drop a database, and you cannot drop a database while you or anyone else is connected to it. Connect to `template1` \(or another database\) before dropping a database. For example:
-
-``` shell
-=> \c template1
-```
-``` sql
-=> DROP DATABASE mydatabase;
-```
-
-You can also use the client program `dropdb` to drop a database. For example, the following command connects to HAWQ using the provided host name and port and drops the database *mydatabase*:
-
-``` shell
-$ dropdb -h masterhost -p 5432 mydatabase
-```
-
-**Warning:** Dropping a database cannot be undone.


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/resourcemgmt/ConfigureResourceManagement.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/resourcemgmt/ConfigureResourceManagement.html.md.erb b/markdown/resourcemgmt/ConfigureResourceManagement.html.md.erb
new file mode 100644
index 0000000..1b66068
--- /dev/null
+++ b/markdown/resourcemgmt/ConfigureResourceManagement.html.md.erb
@@ -0,0 +1,120 @@
+---
+title: Configuring Resource Management
+---
+
+This topic provides configuration information for system administrators and database superusers responsible for managing resources in a HAWQ system.
+
+To configure resource management in HAWQ, follow these high-level steps:
+
+1.  Decide which kind of resource management you need in your HAWQ deployment. HAWQ supports two modes of global resource management:
+    -   Standalone mode, or no global resource management. When configured to run in standalone mode, HAWQ consumes cluster node resources without considering the resource requirements of co-existing applications, and the HAWQ resource manager assumes it can use all the resources from registered segments, unless configured otherwise. See [Using Standalone Mode](#topic_url_pls_zt).
+    -   External global resource manager mode. Currently HAWQ supports YARN as a global resource manager. When you configure YARN as the global resource manager in a HAWQ cluster, HAWQ becomes an unmanaged YARN application. HAWQ negotiates resources with the YARN resource manager to consume YARN cluster resources.
+2.  If you are using standalone mode for HAWQ resource management, decide on whether to limit the amount of memory and CPU usage allocated per HAWQ segment. See [Configuring Segment Resource Capacity](#topic_htk_fxh_15).
+3.  If you are using YARN as your global resource manager, configure the resource queue in YARN where HAWQ will register itself as a YARN application. Then configure HAWQ with the location and configuration requirements for communicating with YARN's resource manager. See [Integrating YARN with HAWQ](YARNIntegration.html) for details.
+4.  In HAWQ, create and define resource queues. See [Working with Hierarchical Resource Queues](ResourceQueues.html).
+
+## <a id="topic_url_pls_zt"></a>Using Standalone Mode 
+
+Standalone mode means that the HAWQ resource manager assumes it can use all resources from registered segments unless configured otherwise.
+
+To configure HAWQ to run without a global resource manager, add the following property configuration to your `hawq-site.xml` file:
+
+``` xml
+<property>
+      <name>hawq_global_rm_type</name>
+      <value>none</value>
+</property>
+```
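+
+Because the set classifications for this parameter include restart \(see the reference below\), the new mode takes effect only after you restart HAWQ. A minimal sketch, assuming the `hawq` management utility is on your `PATH`:
+
+``` shell
+$ hawq restart cluster
+```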
+
+### <a id="id_wgb_44m_q5"></a>hawq\_global\_rm\_type 
+
+HAWQ global resource manager type. Valid values are `yarn` and `none`. Setting this parameter to `none` indicates that the HAWQ resource manager manages its own resources. Setting the value to `yarn` means that HAWQ will negotiate with YARN for resources.
+
+|Value Range|Default|Set Classifications|
+|-----------|-------|-------------------|
+|yarn or none|none|master<br/><br/>system<br/><br/>restart|
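+
+After the restart, you can confirm which mode is active from any database session. A minimal check from `psql`:
+
+``` sql
+postgres=# SHOW hawq_global_rm_type;
+```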
+
+## <a id="topic_htk_fxh_15"></a>Configuring Segment Resource Capacity 
+
+When you run the HAWQ resource manager in standalone mode \(`hawq_global_rm_type=none`\), you can set limits on the resources used by each HAWQ cluster segment.
+
+In `hawq-site.xml`, add the following parameters:
+
+``` xml
+<property>
+   <name>hawq_rm_memory_limit_perseg</name>
+   <value>8GB</value>
+</property>
+<property>
+   <name>hawq_rm_nvcore_limit_perseg</name>
+   <value>4</value>
+</property>
+```
+
+**Note:** Due to XML configuration validation, you must set these properties in both modes, even though they are ignored when HAWQ runs in YARN mode.
+
+You must configure all segments with identical resource capacities. Memory should be set as a multiple of 1GB, such as 1GB per core, 2GB per core, or 4GB per core. For example, if you want to use the ratio of 4GB per core, then you must configure all segments to use a 4GB per core resource capacity.
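+
+Following that recommendation, a 4GB-per-core configuration for segment hosts with 64GB of memory and 16 cores might look like the following sketch; the values are illustrative, so size them to your own hardware:
+
+``` xml
+<property>
+   <name>hawq_rm_memory_limit_perseg</name>
+   <value>64GB</value>
+</property>
+<property>
+   <name>hawq_rm_nvcore_limit_perseg</name>
+   <value>16</value>
+</property>
+```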
+
+After you set limits on the segments, you can then use resource queues to configure additional resource management rules in HAWQ.
+
+**Note:** To reduce the likelihood of resource fragmentation, you should make sure that the segment resource capacity configured for HAWQ \(`hawq_rm_memory_limit_perseg`\) is a multiple of the resource quotas for all virtual segments.
+
+### <a id="id_qqq_s4m_q5"></a>hawq\_rm\_memory\_limit\_perseg 
+
+Limit of memory usage by a HAWQ segment when `hawq_global_rm_type` is set to `none`. For example, `8GB`.
+
+|Value Range|Default|Set Classifications|
+|-----------|-------|-------------------|
+|no specific lower or upper limit|64GB|session<br/><br/>reload|
+
+### <a id="id_xpv_t4m_q5"></a>hawq\_rm\_nvcore\_limit\_perseg 
+
+Maximum number of virtual cores that can be used for query execution in a HAWQ segment when `hawq_global_rm_type` is set to `none`. For example, `2.0`.
+
+|Value Range|Default|Set Classifications|
+|-----------|-------|-------------------|
+|1.0 to maximum integer|1.0|master<br/><br/>session<br/><br/>reload|
+
+## <a id="topic_g2p_zdq_15"></a>Configuring Resource Quotas for Query Statements 
+
+In some cases, you may want to specify additional resource quotas on the query statement level.
+
+The following configuration properties allow a user to control resource quotas without altering corresponding resource queues.
+
+-   [hawq\_rm\_stmt\_vseg\_memory](../reference/guc/parameter_definitions.html)
+-   [hawq\_rm\_stmt\_nvseg](../reference/guc/parameter_definitions.html)
+
+However, the changed resource quota for the virtual segment cannot exceed the resource queue's maximum capacity in HAWQ.
+
+In the following example, the HAWQ resource manager attempts to allocate 10 virtual segments for the next query statement, each with a 256MB memory quota.
+
+``` sql
+postgres=# SET hawq_rm_stmt_vseg_memory='256mb';
+SET
+postgres=# SET hawq_rm_stmt_nvseg=10;
+SET
+postgres=# CREATE TABLE t(i integer);
+CREATE TABLE
+postgres=# INSERT INTO t VALUES(1);
+INSERT 0 1
+```
+
+Note that given the dynamic nature of resource allocation in HAWQ, you cannot expect that each segment has reserved resources for every query. The HAWQ resource manager only attempts to allocate those resources. In addition, the number of virtual segments allocated for the query statement cannot exceed the values set in the global configuration parameters `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit`.
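+
+To revert to the resource queue defaults for the remainder of the session, you can clear the statement-level overrides and inspect the cluster-wide caps mentioned above. A short sketch, assuming the standard PostgreSQL-style `RESET` applies to these session-level parameters:
+
+``` sql
+postgres=# RESET hawq_rm_stmt_vseg_memory;
+postgres=# RESET hawq_rm_stmt_nvseg;
+postgres=# SHOW hawq_rm_nvseg_perquery_limit;
+postgres=# SHOW hawq_rm_nvseg_perquery_perseg_limit;
+```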
+
+## <a id="topic_tl5_wq1_f5"></a>Configuring the Maximum Number of Virtual Segments 
+
+You can limit the number of virtual segments used during statement execution on a cluster-wide level.
+
+Limiting the number of virtual segments used during statement execution is useful for preventing resource bottlenecks during data load and the overconsumption of resources without performance benefits. The number of files that can be opened concurrently for writing on both the NameNode and the DataNodes is limited. Consider the following scenario:
+
+-   You need to load data into a table with P partitions
+-   There are N nodes in the cluster and V virtual segments per node started for the load query
+
+Then there will be P \* V files opened per DataNode and at least P \* V threads started in the DataNode. If the number of partitions and the number of virtual segments per node are very high, the DataNode becomes a bottleneck. On the NameNode side, there will be V \* N connections. If the number of nodes is very high, the NameNode can become a bottleneck. For example, with P=128 partitions, V=8 virtual segments per node, and N=16 nodes, each DataNode opens 128 \* 8 = 1024 files and the NameNode receives 8 \* 16 = 128 connections.
+
+To alleviate the load on NameNode, you can limit V, the number of virtual segments started per node. Use the following server configuration parameters:
+
+-   `hawq_rm_nvseg_perquery_limit` limits the maximum number of virtual segments that can be used for one statement execution on a cluster-wide level.  The hash buckets defined in `default_hash_table_bucket_number` cannot exceed this number. The default value is 512.
+-   `default_hash_table_bucket_number` defines the number of buckets used by default when you create a hash table. When you query a hash table, the query's virtual segment resources are fixed and allocated based on the bucket number defined for the table. A best practice is to tune this configuration parameter after you expand the cluster.
+
+You can also limit the number of virtual segments used by queries when configuring your resource queues. \(See [CREATE RESOURCE QUEUE](../reference/sql/CREATE-RESOURCE-QUEUE.html).\) The global configuration parameters are a hard limit, however, and any limits set on the resource queue or on the statement-level cannot be larger than these limits set on the cluster-wide level.
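+
+The `default_hash_table_bucket_number` parameter only supplies the default; for an individual hash-distributed table you can typically fix the bucket count when you create it. The following is a sketch only, assuming the `bucketnum` storage parameter is available in your release \(see the CREATE TABLE reference\); the table and column names are illustrative:
+
+``` sql
+CREATE TABLE sales_fact (id int, amount numeric)
+WITH (bucketnum=16)
+DISTRIBUTED BY (id);
+```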

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/resourcemgmt/HAWQResourceManagement.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/resourcemgmt/HAWQResourceManagement.html.md.erb b/markdown/resourcemgmt/HAWQResourceManagement.html.md.erb
new file mode 100644
index 0000000..dd5c9b3
--- /dev/null
+++ b/markdown/resourcemgmt/HAWQResourceManagement.html.md.erb
@@ -0,0 +1,69 @@
+---
+title: How HAWQ Manages Resources
+---
+
+HAWQ manages resources (CPU, memory, I/O and file handles) using a variety of mechanisms including global resource management, resource queues and the enforcement of limits on resource usage.
+
+## <a id="global-env"></a>Globally Managed Environments
+
+In Hadoop clusters, resources are frequently managed globally by YARN. YARN provides resources to MapReduce jobs and any other applications that are configured to work with YARN. In this type of environment, resources are allocated in units called containers. In a HAWQ environment, segments and node managers control the consumption of resources and enforce resource limits on each node.
+
+The following diagram depicts the layout of a HAWQ cluster in a YARN-managed Hadoop environment:
+
+![](../mdimages/hawq_high_level_architecture.png)
+
+When you run HAWQ natively in a Hadoop cluster, you can configure HAWQ to register as an application in YARN. After configuration, HAWQ's resource manager communicates with YARN to acquire resources \(when needed to execute queries\) and return resources \(when no longer needed\) back to YARN.
+
+Resources obtained from YARN are then managed in a distributed fashion by HAWQ's resource manager, which is hosted on the HAWQ master.
+
+## <a id="section_w4f_vx4_15"></a>HAWQ Resource Queues 
+
+Resource queues are the main tool for managing the degree of concurrency in a HAWQ system. Resource queues are database objects that you create with the CREATE RESOURCE QUEUE SQL statement. You can use them to manage the number of active queries that may execute concurrently, and the maximum amount of memory and CPU usage each type of query is allocated. Resource queues can also guard against queries that would consume too many resources and degrade overall system performance.
+
+Internally, HAWQ manages its resources dynamically based on a system of hierarchical resource queues. HAWQ uses resource queues to allocate resources efficiently to concurrently running queries. Resource queues are organized as an n-ary tree, as depicted in the diagram below.
+
+![](../mdimages/svg/hawq_resource_queues.svg)
+
+When HAWQ is initialized, there is always one queue named `pg_root` at the root of the tree and one queue named `pg_default`. If YARN is configured, HAWQ's resource manager automatically fetches the capacity of this root queue from the global resource manager. When you create a new resource queue, you must specify a parent queue. This forces all resource queues to organize into a tree.
+
+When a query comes in, after query parsing and semantic analysis, the optimizer coordinates with the HAWQ resource manager on the resource usage for the query and gets an optimized plan given the resources available for the query. The resource allocation for each query is sent to the segments together with the plan. Consequently, each query executor \(QE\) knows the resource quota for the current query and enforces the resource consumption during the whole execution. When query execution finishes or is cancelled, the resources are returned to the HAWQ resource manager.
+
+**About Branch Queues and Leaf Queues**
+
+In this hierarchical resource queue tree depicted in the diagram, there are branch queues \(rectangles outlined in dashed lines\) and leaf queues \(rectangles drawn with solid lines\). Only leaf queues can be associated with roles and accept queries.
+
+**Query Resource Allocation Policy**
+
+The HAWQ resource manager follows several principles when allocating resources to queries:
+
+-   Resources are allocated only to queues that have running or queued queries.
+-   When multiple queues are busy, the resource manager balances resources among queues based on resource queue capacities.
+-   In one resource queue, when multiple queries are waiting for resources, resources are distributed evenly to each query in a best effort manner.
+
+## Enforcing Limits on Resources
+
+You can configure HAWQ to enforce limits on resource usage by setting memory and CPU usage limits on both segments and resource queues. See [Configuring Segment Resource Capacity](ConfigureResourceManagement.html) and [Creating Resource Queues](ResourceQueues.html).
+
+**Cluster Memory to Core Ratio**
+
+The HAWQ resource manager chooses a cluster memory to core ratio when most segments have registered and when the resource manager has received a cluster report from YARN \(if the resource manager is running in YARN mode.\) The HAWQ resource manager selects the ratio based on the amount of memory available in the cluster and the number of cores available on registered segments. The resource manager selects the smallest ratio possible in order to minimize the waste of resources.
+
+HAWQ trims each segment's resource capacity automatically to match the selected ratio. For example, if the resource manager chooses 1GB per core as the ratio, then a segment with 5GB of memory and 8 cores will have 3 cores cut. These cores will not be used by HAWQ. If a segment has 12GB and 10 cores, then 2GB of memory will be cut by HAWQ.
+
+After the HAWQ resource manager has selected its ratio, the ratio will not change until you restart the HAWQ master node. Therefore, memory and core resources for any segments added dynamically to the cluster are automatically cut based on the fixed ratio.
+
+To find out the cluster memory to core ratio selected by the resource manager, check the HAWQ master database logs for messages similar to the following:
+
+```
+Resource manager chooses ratio 1024 MB per core as cluster level memory to core ratio, there are 3072 MB memory 0 CORE resource unable to be utilized.
+```
+
+You can also check the master logs to see how resources are being cut from individual segments due to the cluster memory to core ratio. For example:
+
+```
+Resource manager adjusts segment localhost original resource capacity from (8192 MB, 5 CORE) to (5120 MB, 5 CORE)
+
+Resource manager adjusts segment localhost original global resource manager resource capacity from (8192 MB, 5 CORE) to (5120 MB, 5 CORE)
+```
+
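+If you prefer to search for these messages from the command line, a simple pattern match over the master logs can work. The directory and file pattern below are examples only; substitute the `pg_log` directory under your own master data directory:
+
+``` shell
+$ grep "memory to core ratio" /data/hawq/master/pg_log/hawq-*.csv
+```
+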
+See [Viewing the Database Server Log Files](../admin/monitor.html#topic28) for more information on working with HAWQ log files.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/resourcemgmt/ResourceManagerStatus.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/resourcemgmt/ResourceManagerStatus.html.md.erb b/markdown/resourcemgmt/ResourceManagerStatus.html.md.erb
new file mode 100644
index 0000000..4029642
--- /dev/null
+++ b/markdown/resourcemgmt/ResourceManagerStatus.html.md.erb
@@ -0,0 +1,152 @@
+---
+title: Analyzing Resource Manager Status
+--- 
+
+You can use several queries to force the resource manager to dump more details about active resource context status, current resource queue status, and HAWQ segment status.
+
+## <a id="topic_zrh_pkc_f5"></a>Connection Track Status 
+
+
+Any query execution requiring resource allocation from HAWQ resource manager has one connection track instance tracking the whole resource usage lifecycle. You can find all resource requests and allocated resources in this dump.
+
+The following is an example query to obtain connection track status:
+
+``` sql
+postgres=# SELECT * FROM dump_resource_manager_status(1);
+```
+
+``` pre
+                              dump_resource_manager_status
+----------------------------------------------------------------------------------------
+ Dump resource manager connection track status to /tmp/resource_manager_conntrack_status
+(1 row)
+```
+
+The following output is an example of resource context \(connection track\) status.
+
+``` pre
+Number of free connection ids : 65535
+Number of connection tracks having requests to handle : 0
+Number of connection tracks having responses to send : 0
+SOCK(client=192.0.2.0:37396:time=2015-11-15-20:54:35.379006),
+CONN(id=44:user=role_2:queue=queue2:prog=3:time=2015-11-15-20:54:35.378631:lastact=2015-11-15-20:54:35.378631:
+headqueue=2015-11-15-20:54:35.378631),ALLOC(session=89:resource=(1024 MB, 0.250000 CORE)x(1:min=1:act=-1):
+slicesize=5:io bytes size=3905568:vseg limit per seg=8:vseg limit per query=1000:fixsegsize=1:reqtime=2015-11-15-20:54:35.379144:
+alloctime=2015-11-15-20:54:35.379144:stmt=128 MB x 0),LOC(size=3:host(sdw3:3905568):host(sdw2:3905568):
+host(sdw1:3905568)),RESOURCE(hostsize=0),MSG(id=259:size=96:contsize=96:recvtime=1969-12-31-16:00:00.0,
+client=192.0.2.0:37396),COMMSTAT(fd=5:readbuffer=0:writebuffer=0
+buffers:toclose=false:forceclose=false)
+```
+
+|Output Field|Description|
+|------------|-----------|
+|`Number of free connection ids`|Provides the connection track ID resource. The HAWQ resource manager supports a maximum of 65536 live connection track instances.|
+|`Number of connection tracks having requests to handle`|Counts the number of requests accepted by resource manager but not processed yet.|
+|`Number of connection tracks having responses to send`|Counts the number of responses generated by resource manager but not sent out yet.|
+|`SOCK`|Provides the request socket connection information.|
+|`CONN`|Provides the information about the role name, target queue, current status of the request:<br/><br/>`prog=1` means the connection is established<br/><br/>   `prog=2` means the connection is registered by role id<br/><br/>`prog=3` means the connection is waiting for resource in the target queue<br/><br/>`prog=4` means the resource has been allocated to this connection<br/><br/>`prog>5` means some failure or abnormal statuses|
+|`ALLOC`|Provides session id information, resource expectation, session level resource limits, statement level resource settings, estimated query workload by slice number, and so on.|
+|`LOC`|Provides query scan HDFS data locality information.|
+|`RESOURCE`|Provides information on the already allocated resource.|
+|`MSG`|Provides the latest received message information.|
+|`COMMSTAT`|Shows current socket communication buffer status.|
+
+## <a id="resourcqueuestatus"></a>Resource Queue Status 
+
+You can get more details of the status of resource queues.
+
+Besides the information provided in pg\_resqueue\_status, you can also get the YARN resource queue maximum capacity report, the total number of HAWQ resource queues, and the HAWQ resource queues' derived resource capacities.
+
+The following is a query to obtain resource queue status:
+
+``` sql
+postgres=# SELECT * FROM dump_resource_manager_status(2);
+```
+
+``` pre
+                            dump_resource_manager_status
+-------------------------------------------------------------------------------------
+ Dump resource manager resource queue status to /tmp/resource_manager_resqueue_status
+(1 row)
+```
+
+Possible output of the resource queue status is shown below.
+
+``` pre
+Maximum capacity of queue in global resource manager cluster 1.000000
+
+Number of resource queues : 4
+
+QUEUE(name=pg_root:parent=NULL:children=3:busy=0:paused=0),
+REQ(conn=0:request=0:running=0),
+SEGCAP(ratio=4096:ratioidx=-1:segmem=128MB:segcore=0.031250:segnum=1536:segnummax=1536),
+QUECAP(memmax=196608:coremax=48.000000:memper=100.000000:mempermax=100.000000:coreper=100.000000:corepermax=100.000000),
+QUEUSE(alloc=(0 MB,0.000000 CORE):request=(0 MB,0.000000 CORE):inuse=(0 MB,0.000000 CORE))
+
+QUEUE(name=pg_default:parent=pg_root:children=0:busy=0:paused=0),
+REQ(conn=0:request=0:running=0),
+SEGCAP(ratio=4096:ratioidx=-1:segmem=1024MB:segcore=0.250000:segnum=38:segnummax=76),
+QUECAP(memmax=78643:coremax=19.000000:memper=20.000000:mempermax=40.000000:coreper=20.000000:corepermax=40.000000),
+QUEUSE(alloc=(0 MB,0.000000 CORE):request=(0 MB,0.000000 CORE):inuse=(0 MB,0.000000 CORE))
+```
+
+|Output Field|Description|
+|------------|-----------|
+|`Maximum capacity of queue in global resource manager cluster`|YARN maximum capacity report for the resource queue.|
+|`Number of resource queues`|Total number of HAWQ resource queues.|
+|`QUEUE`|Provides basic structural information about the resource queue and whether it is busy dispatching resources to some queries.|
+|`REQ`|Provides concurrency counter and the status of waiting queues.|
+|`SEGCAP`|Provides the virtual segment resource quota and dispatchable number of virtual segments.|
+|`QUECAP`|Provides derived resource queue capacity and actual percentage of the cluster resource a queue can use.|
+|`QUEUSE`|Provides information about queue resource usage.|
+
+## <a id="segmentstatus"></a>HAWQ Segment Status 
+
+Use the following query to obtain the status of a HAWQ segment.
+
+``` sql
+postgres=# SELECT * FROM dump_resource_manager_status(3);
+```
+
+``` pre
+                           dump_resource_manager_status
+-----------------------------------------------------------------------------------
+ Dump resource manager resource pool status to /tmp/resource_manager_respool_status
+(1 row)
+```
+
+The following output shows the status of a HAWQ segment. This example describes a host named `sdw1` with a resource capacity of 64GB memory and 16 vcores. It currently has 64GB of memory available for use and holds 16 resource containers.
+
+``` pre
+HOST_ID(id=0:hostname:sdw1)
+HOST_INFO(FTSTotalMemoryMB=65536:FTSTotalCore=16:GRMTotalMemoryMB=0:GRMTotalCore=0)
+HOST_AVAILABLITY(HAWQAvailable=true:GLOBAvailable=false)
+HOST_RESOURCE(AllocatedMemory=65536:AllocatedCores=16.000000:AvailableMemory=65536:
+AvailableCores=16.000000:IOBytesWorkload=0:SliceWorkload=0:LastUpdateTime=1447661681125637:
+RUAlivePending=false)
+HOST_RESOURCE_CONTAINERSET(ratio=4096:AllocatedMemory=65536:AvailableMemory=65536:
+AllocatedCore=16.000000:AvailableCore:16.000000)
+        RESOURCE_CONTAINER(ID=0:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=1:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=2:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=3:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=4:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=5:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=6:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=7:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=8:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=9:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=10:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=11:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=12:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=13:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=14:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=15:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+```
+
+|Output Field|Description|
+|------------|-----------|
+|`HOST_ID`|Provides the recognized segment name and internal id.|
+|`HOST_INFO`|Provides the configured segment resource capacities. `GRMTotalMemoryMB` and `GRMTotalCore` show the limits reported by YARN; `FTSTotalMemoryMB` and `FTSTotalCore` show the limits configured in HAWQ.|
+|`HOST_AVAILABILITY`|Shows if the segment is available from HAWQ fault tolerance service \(FTS\) view or YARN view.|
+|`HOST_RESOURCE`|Shows currently allocated and available resources. Estimated workload counters are also shown here.|
+|`HOST_RESOURCE_CONTAINERSET`|Shows each held resource container.|

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/resourcemgmt/ResourceQueues.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/resourcemgmt/ResourceQueues.html.md.erb b/markdown/resourcemgmt/ResourceQueues.html.md.erb
new file mode 100644
index 0000000..cd019c6
--- /dev/null
+++ b/markdown/resourcemgmt/ResourceQueues.html.md.erb
@@ -0,0 +1,204 @@
+---
+title: Working with Hierarchical Resource Queues
+---
+
+This section describes how administrators can define and work with resource queues in order to allocate resource usage within HAWQ. By designing hierarchical resource queues, system administrators can balance system resources to queries as needed.
+
+## <a id="resource_queues"></a>HAWQ Resource Queues 
+
+Resource queues are the main tool for managing the degree of concurrency in a HAWQ system. Resource queues are database objects that you create with the CREATE RESOURCE QUEUE SQL statement. You can use them to manage the number of active queries that may execute concurrently, and the maximum amount of memory and CPU usage each type of query is allocated. Resource queues can also guard against queries that would consume too many resources and degrade overall system performance.
+
+Internally, HAWQ manages its resources dynamically based on a system of hierarchical resource queues. HAWQ uses resource queues to allocate resources efficiently to concurrently running queries. Resource queues are organized as an n-ary tree, as depicted in the diagram below.
+
+![](../mdimages/svg/hawq_resource_queues.svg)
+
+When HAWQ is initialized, there is always one queue named `pg_root` at the root of the tree and one queue named `pg_default`. If YARN is configured, HAWQ's resource manager automatically fetches the capacity of this root queue from the global resource manager. When you create a new resource queue, you must specify a parent queue. This forces all resource queues to organize into a tree.
+
+When a query comes in, after query parsing and semantic analysis, the optimizer coordinates with the HAWQ resource manager on the resource usage for the query and gets an optimized plan given the resources available for the query. The resource allocation for each query is sent to the segments together with the plan. Consequently, each query executor \(QE\) knows the resource quota for the current query and enforces the resource consumption during the whole execution. When query execution finishes or is cancelled, the resources are returned to the HAWQ resource manager.
+
+**About Branch Queues and Leaf Queues**
+
+In this hierarchical resource queue tree depicted in the diagram, there are branch queues \(rectangles outlined in dashed lines\) and leaf queues \(rectangles drawn with solid lines\). Only leaf queues can be associated with roles and accept queries.
+
+**Query Resource Allocation Policy**
+
+The HAWQ resource manager follows several principles when allocating resources to queries:
+
+-   Resources are allocated only to queues that have running or queued queries.
+-   When multiple queues are busy, the resource manager balances resources among queues based on resource queue capacities.
+-   In one resource queue, when multiple queries are waiting for resources, resources are distributed evenly to each query in a best effort manner.
+
+**Enforcing Limits on Resources**
+
+You can configure HAWQ to enforce limits on resource usage by setting memory and CPU usage limits on both segments and resource queues. See [Configuring Segment Resource Capacity](ConfigureResourceManagement.html) and [Creating Resource Queues](ResourceQueues.html). For some best practices on designing and using resource queues in HAWQ, see [Best Practices for Managing Resources](../bestpractices/managing_resources_bestpractices.html).
+
+For a high-level overview of how resource management works in HAWQ, see [Managing Resources](HAWQResourceManagement.html).
+
+## <a id="topic_dyy_pfp_15"></a>Setting the Maximum Number of Resource Queues 
+
+You can configure the maximum number of resource queues allowed in your HAWQ cluster.
+
+By default, the maximum number of resource queues that you can create in HAWQ is 128.
+
+You can configure this property in `hawq-site.xml`. The new maximum takes effect when HAWQ restarts. For example, the configuration below sets this value to 50.
+
+``` xml
+<property>
+   <name>hawq_rm_nresqueue_limit</name>
+   <value>50</value>
+</property>
+```
+
+The minimum value that can be configured is 3, and the maximum is 1024.
+
+To check the currently configured limit, you can execute the following command:
+
+``` sql
+postgres=# SHOW hawq_rm_nresqueue_limit;
+```
+
+``` pre
+ hawq_rm_nresqueue_limit
+----------------------------------------------
+128
+(1 row)
+```
+
+## <a id="topic_p4l_dls_zt"></a>Creating Resource Queues 
+
+Use CREATE RESOURCE QUEUE to create a new resource queue. Only a superuser can run this DDL statement.
+
+Creating a resource queue involves giving it a name and a parent, setting the CPU and memory limits for the queue, and optionally setting a limit on the number of active statements on the resource queue. See [CREATE RESOURCE QUEUE](../reference/sql/CREATE-RESOURCE-QUEUE.html).
+
+**Note:** You can only associate roles and queries with leaf-level resource queues. Leaf-level resource queues are resource queues that do not have any children.
+
+### Examples
+
+Create a resource queue as a child of `pg_root` with an active query limit of 20 and memory and core limits of 50%:
+
+``` sql
+CREATE RESOURCE QUEUE myqueue WITH (PARENT='pg_root', ACTIVE_STATEMENTS=20,
+MEMORY_LIMIT_CLUSTER=50%, CORE_LIMIT_CLUSTER=50%);
+```
+
+Create a resource queue as a child of pg\_root with memory and CPU limits and a resource overcommit factor:
+
+``` sql
+CREATE RESOURCE QUEUE test_queue_1 WITH (PARENT='pg_root',
+MEMORY_LIMIT_CLUSTER=50%, CORE_LIMIT_CLUSTER=50%, RESOURCE_OVERCOMMIT_FACTOR=2);
+```
+
+## <a id="topic_e1b_2ls_zt"></a>Altering Resource Queues 
+
+Use ALTER RESOURCE QUEUE to modify an existing resource queue. Only a superuser can run this DDL statement.
+
+The ALTER RESOURCE QUEUE statement allows you to modify resource limits and the number of active statements allowed in the queue. You cannot change the parent queue of an existing resource queue, and you are subject to the same constraints that apply to the creation of resource queues.
+
+You can modify an existing resource queue even when it is active or when one of its descendants is active. All queued resource requests are adjusted based on the modifications to the resource queue.
+
+However, when you alter a resource queue, queued resource requests may encounter some conflicts. For example, a resource deadlock can occur or some requests cannot be satisfied based on the newly modified resource queue capacity.
+
+To prevent conflicts, HAWQ cancels by default all resource requests that are in conflict with the new resource queue definition. This behavior is controlled by the `hawq_rm_force_alterqueue_cancel_queued_request` server configuration parameter, which is by default set to true \(`on`\). If you set the server configuration parameter `hawq_rm_force_alterqueue_cancel_queued_request` to false, the actions specified in ALTER RESOURCE QUEUE are canceled if the resource manager finds at least one resource request that is in conflict with the new resource definitions supplied in the altering command.
+
+For more information, see [ALTER RESOURCE QUEUE](../reference/sql/ALTER-RESOURCE-QUEUE.html).
+
+**Note:** To change the roles \(users\) assigned to a resource queue, use the ALTER ROLE command.
+
+### Examples
+
+Change the memory and core limit of a resource queue:
+
+``` sql
+ALTER RESOURCE QUEUE test_queue_1 WITH (MEMORY_LIMIT_CLUSTER=40%,
+CORE_LIMIT_CLUSTER=40%);
+```
+
+Change the active statements maximum for the resource queue:
+
+``` sql
+ALTER RESOURCE QUEUE test_queue_1 WITH (ACTIVE_STATEMENTS=50);
+```
+
+## <a id="topic_hbp_fls_zt"></a>Dropping Resource Queues 
+
+Use DROP RESOURCE QUEUE to remove an existing resource queue.
+
+DROP RESOURCE QUEUE drops an existing resource queue. Only a superuser can run this DDL statement when the queue is not busy. You cannot drop a resource queue that has at least one child resource queue or a role assigned to it.
+
+The default resource queues `pg_root` and `pg_default` cannot be dropped.
+
+### Examples
+
+Remove a role from a resource queue \(and move the role to the default resource queue, `pg_default`\):
+
+``` sql
+ALTER ROLE bob RESOURCE QUEUE NONE;
+```
+
+Remove the resource queue named `adhoc`:
+
+``` sql
+DROP RESOURCE QUEUE adhoc;
+```
+
+## <a id="topic_lqy_gls_zt"></a>Checking Existing Resource Queues 
+
+The HAWQ catalog table `pg_resqueue` saves all existing resource queues.
+
+The following example shows the data selected from `pg_resqueue`.
+
+``` sql
+postgres=# SELECT rsqname,parentoid,activestats,memorylimit,corelimit,resovercommit,
+allocpolicy,vsegresourcequota,nvsegupperlimit,nvseglowerlimit,nvsegupperlimitperseg,nvseglowerlimitperseg
+FROM pg_resqueue WHERE rsqname='test_queue_1';
+```
+
+``` pre
+   rsqname    | parentoid | activestats | memorylimit | corelimit | resovercommit | allocpolicy | vsegresourcequota | nvsegupperlimit | nvseglowerlimit |nvsegupperlimitperseg  | nvseglowerlimitperseg
+--------------+-----------+-------------+-------------+-----------+---------------+-------------+-------------------+-----------------+-----------------+-----------------------+-----------------------
+ test_queue_1 |      9800 |         100 | 50%         | 50%       |             2 | even        | mem:128mb         | 0               | 0               | 0                     |1
+```
+
+The query displays all the attributes and their values of the selected resource queue. See [CREATE RESOURCE QUEUE](../reference/sql/CREATE-RESOURCE-QUEUE.html) for a description of these attributes.
+
+You can also check the runtime status of existing resource queues by querying the `pg_resqueue_status` view:
+
+``` sql
+postgres=# SELECT * FROM pg_resqueue_status;
+```
+
+
+``` pre
+  rsqname   | segmem | segcore  | segsize | segsizemax | inusemem | inusecore | rsqholders | rsqwaiters | paused
+------------+--------+----------+---------+------------+----------+-----------+------------+------------+--------
+ pg_root    | 128    | 0.125000 | 64      | 64         | 0        | 0.000000  | 0          | 0          | F
+ pg_default | 128    | 0.125000 | 32      | 64         | 0        | 0.000000  | 0          | 0          | F
+(2 rows)
+```
+
+The query returns the following pieces of data about the resource queue's runtime status:
+
+|Resource Queue Runtime|Description|
+|----------------------|-----------|
+|rsqname|HAWQ resource queue name|
+|segmem|Virtual segment memory quota in MB|
+|segcore|Virtual segment vcore quota|
+|segsize|Number of virtual segments the resource queue can dispatch for query execution|
+|segsizemax|Maximum number of virtual segments the resource queue can dispatch for query execution when overcommitting the other queues' resource quota|
+|inusemem|Accumulated memory in use in MB by current running statements|
+|inusecore|Accumulated vcore in use by current running statements|
+|rsqholders|The total number of concurrent running statements|
+|rsqwaiters|Total number of queuing statements|
+|paused|Indicates whether the resource queue is temporarily paused due to no resource status changes. 'F' means false, 'T' means true, and 'R' indicates that the resource queue may have encountered a resource fragmentation problem|
+
+## <a id="topic_scr_3ls_zt"></a>Assigning Roles to Resource Queues 
+
+By default, a role is assigned to the `pg_default` resource queue. Assigning a role to a branch queue is not allowed.
+
+The following are some examples of creating and assigning a role to a resource queue:
+
+``` sql
+CREATE ROLE rmtest1 WITH LOGIN RESOURCE QUEUE pg_default;
+
+ALTER ROLE rmtest1 RESOURCE QUEUE test_queue_1;
+```
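+
+To double-check which queue a role is assigned to, you can join the role catalog against `pg_resqueue`. This is only a sketch and assumes that the `pg_roles` view exposes a `rolresqueue` column, as it does in Greenplum-derived systems; consult the catalog reference for your release if the column differs:
+
+``` sql
+SELECT rolname, rsqname
+FROM pg_roles r
+JOIN pg_resqueue q ON r.rolresqueue = q.oid
+WHERE rolname = 'rmtest1';
+```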
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/resourcemgmt/YARNIntegration.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/resourcemgmt/YARNIntegration.html.md.erb b/markdown/resourcemgmt/YARNIntegration.html.md.erb
new file mode 100644
index 0000000..6898f6c
--- /dev/null
+++ b/markdown/resourcemgmt/YARNIntegration.html.md.erb
@@ -0,0 +1,252 @@
+---
+title: Integrating YARN with HAWQ
+---
+
+HAWQ supports integration with YARN for global resource management. In a YARN managed environment, HAWQ can request resources \(containers\) dynamically from YARN, and return resources when HAWQ's workload is not heavy. This feature makes HAWQ a native citizen of the whole Hadoop eco-system.
+
+To integrate YARN with HAWQ, use the following high-level steps.
+
+1.  Install YARN, if you have not already done so.
+
+    **Note:** If you are using HDP 2.3, you must set `yarn.resourcemanager.system-metrics-publisher.enabled` to `false`. See the Release Notes for additional YARN workaround configurations.
+
+2.  Configure YARN using CapacityScheduler and reserve one application queue exclusively for HAWQ. See [Configuring YARN for HAWQ](#hawqinputformatexample) and [Setting HAWQ Segment Resource Capacity in YARN](#topic_pzf_kqn_c5).
+3.  If desired, enable high availability in YARN. See your Ambari or Hadoop documentation for details.
+4.  Enable YARN mode within HAWQ. See [Enabling YARN Mode in HAWQ](#topic_rtd_cjh_15).
+5.  After you integrate YARN with HAWQ, adjust HAWQ's resource usage as needed by doing any of the following:
+    -   Change the capacity of the corresponding YARN resource queue for HAWQ. For example, see the properties described for CapacityScheduler configuration. You can then refresh the YARN queues without having to restart or reload HAWQ. See [Configuring YARN for HAWQ](#hawqinputformatexample) and [Setting HAWQ Segment Resource Capacity in YARN](#topic_pzf_kqn_c5).
+    -   Change resource consumption within HAWQ on a finer grained level by altering HAWQ's resource queues. See [Working with Hierarchical Resource Queues](ResourceQueues.html).
+    -   \(Optional\) Tune HAWQ and YARN resource negotiations. For example, you can set a minimum number of YARN containers per segment or modify the idle timeout for YARN resources in HAWQ. See [Tune HAWQ Resource Negotiations with YARN](#topic_wp3_4bx_15).
+
+## <a id="hawqinputformatexample"></a>Configuring YARN for HAWQ 
+
+This topic describes how to configure YARN to manage HAWQ's resources.
+
+When HAWQ has queries that require resources to execute, the HAWQ resource manager negotiates with YARN's resource scheduler to allocate resources. Then, when HAWQ is not busy, HAWQ's resource manager returns resources to YARN's resource scheduler.
+
+To integrate YARN with HAWQ, you must define one YARN application resource queue exclusively for HAWQ. YARN resource queues are configured for a specific YARN resource scheduler. The YARN resource scheduler uses resource queue configuration to allocate resources to applications. There are several available YARN resource schedulers; however, HAWQ currently only supports using CapacityScheduler to manage YARN resources.
+
+### <a id="capacity_scheduler"></a>Using CapacityScheduler for YARN Resource Scheduling 
+
+The following example demonstrates how to configure CapacityScheduler as the YARN resource scheduler. In `yarn-site.xml`, use the following configuration to enable CapacityScheduler.
+
+``` xml
+<property>
+   <name>yarn.resourcemanager.scheduler.class</name>
+   <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
+</property>
+```
+
+Then, define the queues in CapacityScheduler's configuration. In `capacity-scheduler.xml`, you could define the queues as follows:
+
+``` xml
+<property>
+   <name>yarn.scheduler.capacity.root.queues</name>
+   <value>mrque1,mrque2,hawqque</value>
+</property>
+
+```
+
+In the above example configuration, CapacityScheduler has two MapReduce queues \(`mrque1` and `mrque2`\) and one HAWQ queue \(`hawqque`\) configured under the root queue. Only `hawqque` is defined for HAWQ usage, and it coexists with the other two MapReduce queues. These three queues share the resources of the entire cluster.
+
+In the following configuration within `capacity-scheduler.xml`, we configure additional properties for the queues to control the capacity of each queue. The HAWQ resource queue can use from 20% up to a maximum of 80% of the resources of the whole cluster.
+
+``` xml
+<property>
+   <name>yarn.scheduler.capacity.hawqque.maximum-applications</name>
+   <value>1</value>
+</property>
+
+<property>
+  <name>yarn.scheduler.capacity.hawqque.capacity</name>
+  <value>20</value>
+</property>
+
+<property>
+  <name>yarn.scheduler.capacity.hawqque.maximum-capacity</name>
+  <value>80</value>
+</property>
+
+<property>
+  <name>yarn.scheduler.capacity.hawqque.user-limit-factor</name>
+  <value>2</value>
+</property>
+
+<property>
+  <name>yarn.scheduler.capacity.mrque1.capacity</name>
+  <value>30</value>
+</property>
+
+<property>
+  <name>yarn.scheduler.capacity.mrque1.maximum-capacity</name>
+  <value>50</value>
+</property>
+
+<property>
+  <name>yarn.scheduler.capacity.mrque2.capacity</name>
+  <value>50</value>
+</property>
+
+<property>
+  <name>yarn.scheduler.capacity.mrque2.maximum-capacity</name>
+  <value>50</value>
+</property>
+```
+
+|Item|Description|
+|----|-----------|
+|yarn.scheduler.capacity.*\<queue\_name\>*.maximum-applications|Maximum number of HAWQ applications in the system that can be concurrently active \(both running and pending.\) The current recommendation is to let one HAWQ instance exclusively use one resource queue.|
+|yarn.scheduler.capacity.*\<queue\_name\>*.capacity|Queue capacity in percentage \(%\) as a float \(e.g. 12.5\). The sum of capacities for all queues, at each level, must equal 100. Applications in the queue may consume more resources than the queue's capacity if there are free resources, which provides elasticity.|
+|yarn.scheduler.capacity.*\<queue\_name\>*.maximum-capacity|Maximum queue capacity in percentage \(%\) as a float. This limits the elasticity for applications in the queue. Defaults to -1 which disables it.|
+|yarn.scheduler.capacity.*\<queue\_name\>*.user-limit-factor|Multiple of the queue capacity, which can be configured to allow a single user to acquire more resources. By default this is set to 1, which ensures that a single user can never take more than the queue's configured capacity irrespective of how idle the cluster is. Value is specified as a float.<br/><br/>Setting this to a value higher than 1 allows the overcommitment of resources at the application level. For example, in terms of HAWQ configuration, if we want twice the maximum capacity for the HAWQ application, we can set this as 2.|
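+
+After you change these queue capacities in `capacity-scheduler.xml`, YARN can usually pick up the new values without a restart. A minimal sketch using the standard YARN administration CLI, run on a node where the `yarn` command is configured:
+
+``` shell
+$ yarn rmadmin -refreshQueues
+```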
+
+## <a id="topic_pzf_kqn_c5"></a>Setting HAWQ Segment Resource Capacity in YARN 
+
+Similar to how you can set segment resource capacity in HAWQ's standalone mode, you can do the same for HAWQ segments managed by YARN.
+
+In HAWQ standalone mode, you can configure the resource capacity of individual segments as described in [Configuring Segment Resource Capacity](ConfigureResourceManagement.html). If you are using YARN to manage HAWQ resources, then you configure the resource capacity of segments by configuring YARN. We recommend that you configure all segments with identical resource capacity. In `yarn-site.xml`, set the following properties:
+
+``` xml
+<property>
+  <name>yarn.nodemanager.resource.memory-mb</name>
+  <value>4096</value>
+</property>
+<property>
+  <name>yarn.nodemanager.resource.cpu-vcores</name>
+  <value>1</value>
+</property>
+```
+
+We recommend a memory-to-core ratio in which memory is a multiple of 1GB, such as 1GB per core, 2GB per core, or 4GB per core.
+
+After you set limits on the segments, you can use resource queues to configure additional resource management rules in HAWQ.
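+
+For example, you might dedicate a resource queue to a team of analysts and cap it at half of the cluster's resources. The queue name, limits, and virtual segment quota below are illustrative only; see the `CREATE RESOURCE QUEUE` reference for the exact syntax supported by your HAWQ release.
+
+``` sql
+-- Hypothetical queue; adjust the name, limits, and quota for your workload.
+CREATE RESOURCE QUEUE analyst_queue WITH (
+    PARENT='pg_root',
+    ACTIVE_STATEMENTS=20,
+    MEMORY_LIMIT_CLUSTER=50%,
+    CORE_LIMIT_CLUSTER=50%,
+    VSEG_RESOURCE_QUOTA='mem:1gb'
+);
+```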
+
+### <a id="avoid_fragmentation"></a>Avoiding Resource Fragmentation with YARN Managed Resources 
+
+To reduce the likelihood of resource fragmentation in deployments where resources are managed by YARN, ensure that you have configured the following:
+
+-   Segment resource capacity configured in `yarn.nodemanager.resource.memory-mb` must be a multiple of the virtual segment resource quotas that you configure in your resource queues
+-   The memory portion of the CPU-to-memory ratio must be a multiple of the amount configured for `yarn.scheduler.minimum-allocation-mb`
+
+For example, if you have the following properties set in YARN:
+
+-   `yarn.scheduler.minimum-allocation-mb=1gb`
+
+    **Note:** This is the default value set by Ambari in some cases.
+
+-   `yarn.nodemanager.resource.memory-mb=48gb`
+-   `yarn.nodemanager.resource.cpu-vcores=16`
+
+Then the CPU to memory ratio calculated by HAWQ equals 3GB \(48 divided by 16\). Since `yarn.scheduler.minimum-allocation-mb` is set to 1GB, each YARN container will be 1GB. Since 3GB is a multiple of 1GB, you should not encounter fragmentation.
+
+However, if you had set `yarn.scheduler.minimum-allocation-mb` to 4GB, then each 4GB container would leave 1GB of fragmented space \(4GB minus 3GB\). To prevent fragmentation in this scenario, you could reconfigure `yarn.nodemanager.resource.memory-mb=64gb` \(or you could set `yarn.scheduler.minimum-allocation-mb=1gb`\).
+
+**Note:** If you are specifying 1GB or less for `yarn.scheduler.minimum-allocation-mb` in `yarn-site.xml`, then make sure that the value divides evenly into 1GB; for example, 1024 or 512.
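+
+As a sketch of the fragmentation-free example above, the relevant `yarn-site.xml` entries might look like the following. The values are in MB and are illustrative only; adapt them to your own node sizes.
+
+``` xml
+<!-- 64GB of node memory and 16 vcores give a 4GB-per-core ratio -->
+<property>
+  <name>yarn.nodemanager.resource.memory-mb</name>
+  <value>65536</value>
+</property>
+<property>
+  <name>yarn.nodemanager.resource.cpu-vcores</name>
+  <value>16</value>
+</property>
+<!-- 4096MB divides evenly into the 4GB-per-core ratio, avoiding fragmentation -->
+<property>
+  <name>yarn.scheduler.minimum-allocation-mb</name>
+  <value>4096</value>
+</property>
+```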
+
+See [Handling Segment Resource Fragmentation](../troubleshooting/Troubleshooting.html) for general information on resource fragmentation.
+
+## <a id="topic_rtd_cjh_15"></a>Enabling YARN Mode in HAWQ 
+
+After you have properly configured YARN, you can enable YARN as HAWQ's global resource manager.
+
+To configure YARN as the global resource manager in a HAWQ cluster, add the following property configuration to your `hawq-site.xml` file:
+
+``` xml
+<property>
+      <name>hawq_global_rm_type</name>
+      <value>yarn</value>
+</property>
+```
+
+When enabled, the HAWQ resource manager only uses resources allocated from YARN.
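+
+For example, assuming the `hawq` command-line utilities are available on the master host, one way to apply this setting and restart the cluster is sketched below; verify the exact utility options against the HAWQ management tool reference.
+
+``` shell
+# Sketch only: set hawq_global_rm_type in hawq-site.xml via the hawq config utility,
+# then restart so the resource manager switches to YARN mode.
+hawq config -c hawq_global_rm_type -v yarn
+hawq restart cluster -a
+```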
+
+### Configuring HAWQ in YARN Environments
+
+If you set the global resource manager to YARN, you must also configure the following properties in `hawq-site.xml`:
+
+``` xml
+<property>
+      <name>hawq_rm_yarn_address</name>
+      <value>localhost:8032</value>
+</property>
+<property>
+      <name>hawq_rm_yarn_scheduler_address</name>
+      <value>localhost:8030</value>
+</property>
+<property>
+      <name>hawq_rm_yarn_queue_name</name>
+      <value>hawqque</value>
+</property>
+<property>
+      <name>hawq_rm_yarn_app_name</name>
+      <value>hawq</value>
+</property>
+```
+**Note:** If you have enabled high availability for your YARN resource managers, then you must configure `yarn.resourcemanager.ha` and `yarn.resourcemanager.scheduler.ha` in `yarn-client.xml` located in `$GPHOME/etc` instead. The values specified for `hawq_rm_yarn_address` and `hawq_rm_yarn_scheduler_address` are ignored. See [Configuring HAWQ in High Availability-Enabled YARN Environments](#highlyavailableyarn).
+
+#### <a id="id_uvp_3pm_q5"></a>hawq\_rm\_yarn\_address 
+
+Server address \(host and port\) of the YARN resource manager server \(the value of `yarn.resourcemanager.address`\). You must define this property if `hawq_global_rm_type` is set to `yarn`. For example, `localhost:8032`.
+
+|Value Range|Default|Set Classifications|
+|-----------|-------|-------------------|
+|valid hostname and port|none set|master|
+
+#### <a id="id_ocq_jpm_q5"></a>hawq\_rm\_yarn\_scheduler\_address 
+
+Server address \(host and port\) of the YARN resource manager scheduler \(the value of `yarn.resourcemanager.scheduler.address`\). You must define this property if `hawq_global_rm_type` is set to `yarn`. For example, `localhost:8030`.
+
+|Value Range|Default|Set Classifications|
+|-----------|-------|-------------------|
+|valid hostname and port|none set|master|
+
+#### <a id="id_y23_kpm_q5"></a>hawq\_rm\_yarn\_queue\_name 
+
+The name of the YARN resource queue to register with HAWQ's resource manager.
+
+|Value Range|Default|Set Classifications|
+|-----------|-------|-------------------|
+|string|default|master|
+
+#### <a id="id_h1c_lpm_q5"></a>hawq\_rm\_yarn\_app\_name 
+
+The name of the YARN application registered with HAWQ's resource manager. For example, `hawq`.
+
+|Value Range|Default|Set Classifications|
+|-----------|-------|-------------------|
+|string|hawq|master|
+
+### <a id="highlyavailableyarn"></a>Configuring HAWQ in High Availability-Enabled YARN Environments 
+
+If you have enabled high-availability for your YARN resource managers, then specify the following parameters in `yarn-client.xml` located in `$GPHOME/etc` instead. 
+
+**Note:** When you use high availability in YARN, HAWQ ignores the values specified for `hawq_rm_yarn_address` and `hawq_rm_yarn_scheduler_address` in `hawq-site.xml` and uses the values specified in `yarn-client.xml` instead.
+
+``` xml
+    <property>
+      <name>yarn.resourcemanager.ha</name>
+      <value>{0}:8032,{1}:8032</value>
+    </property>
+    
+    <property>
+      <name>yarn.resourcemanager.scheduler.ha</name>
+      <value>{0}:8030,{1}:8030</value>
+    </property>
+```
+
+where {0} and {1} are substituted with the fully qualified hostnames of the YARN resource manager host machines.
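+
+For example, with two hypothetical resource manager hosts named `rm1.example.com` and `rm2.example.com`, the entries would look like this:
+
+``` xml
+    <property>
+      <name>yarn.resourcemanager.ha</name>
+      <value>rm1.example.com:8032,rm2.example.com:8032</value>
+    </property>
+
+    <property>
+      <name>yarn.resourcemanager.scheduler.ha</name>
+      <value>rm1.example.com:8030,rm2.example.com:8030</value>
+    </property>
+```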
+
+## <a id="topic_wp3_4bx_15"></a>Tune HAWQ Resource Negotiations with YARN 
+
+To ensure efficient resource management and the best performance, you can configure some aspects of how HAWQ's resource manager negotiates resources from YARN.
+
+### <a id="min_yarn_containers"></a>Minimum Number of YARN Containers Per Segment 
+
+When HAWQ is integrated with YARN and has no workload, HAWQ does not acquire any resources right away. HAWQ's resource manager only requests resources from YARN when HAWQ receives its first query request. In order to guarantee optimal resource allocation for subsequent queries and to avoid frequent YARN resource negotiation, you can adjust `hawq_rm_min_resource_perseg` so HAWQ receives at least some number of YARN containers per segment regardless of the size of the initial query. The default value is 2, which means HAWQ's resource manager acquires at least 2 YARN containers for each segment even if the first query's resource request is small.
+
+This configuration property cannot exceed the capacity of HAWQ's YARN queue. For example, if HAWQ's queue capacity in YARN is no more than 50% of the whole cluster, and each YARN node has a maximum of 64GB memory and 16 vcores, then `hawq_rm_min_resource_perseg` in HAWQ cannot be set to more than 8 \(50% of 16 vcores\), since HAWQ's resource manager acquires YARN containers by vcore. In this case, the HAWQ resource manager acquires YARN containers with a quota of 4GB memory and 1 vcore each.
+
+### <a id="set_yarn_timeout"></a>Setting a Timeout for YARN Resources 
+
+If HAWQ's workload level drops, then HAWQ's resource manager may hold some idle YARN resources. You can adjust `hawq_rm_resource_idle_timeout` to let the HAWQ resource manager return idle resources more quickly or more slowly.
+
+For example, when HAWQ's resource manager has to reacquire resources, it can cause latency for query resource requests. To let HAWQ resource manager retain resources longer in anticipation of an upcoming workload, increase the value of `hawq_rm_resource_idle_timeout`. The default value of `hawq_rm_resource_idle_timeout` is 300 seconds.
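+
+For example, to hold more containers per segment and keep idle resources for ten minutes before returning them to YARN, you might set the following in `hawq-site.xml`. The values shown are illustrative, not recommendations.
+
+``` xml
+<property>
+  <name>hawq_rm_min_resource_perseg</name>
+  <value>4</value>
+</property>
+<property>
+  <name>hawq_rm_resource_idle_timeout</name>
+  <value>600</value>
+</property>
+```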

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/resourcemgmt/best-practices.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/resourcemgmt/best-practices.html.md.erb b/markdown/resourcemgmt/best-practices.html.md.erb
new file mode 100644
index 0000000..74bd815
--- /dev/null
+++ b/markdown/resourcemgmt/best-practices.html.md.erb
@@ -0,0 +1,15 @@
+---
+title: Best Practices for Configuring Resource Management
+---
+
+When configuring resource management, you can apply certain best practices to ensure that resources are managed efficiently and that system performance is optimal.
+
+The following is a list of high-level best practices for optimal resource management:
+
+-   Make sure segments do not have identical IP addresses. See [Segments Do Not Appear in gp\_segment\_configuration](../troubleshooting/Troubleshooting.html) for an explanation of this problem.
+-   Configure all segments to have the same resource capacity. See [Configuring Segment Resource Capacity](ConfigureResourceManagement.html).
+-   To prevent resource fragmentation, ensure that your deployment's segment resource capacity \(standalone mode\) or YARN node resource capacity \(YARN mode\) is a multiple of all virtual segment resource quotas. See [Configuring Segment Resource Capacity](ConfigureResourceManagement.html) \(HAWQ standalone mode\) and [Setting HAWQ Segment Resource Capacity in YARN](YARNIntegration.html).
+-   Ensure that enough registered segments are available and usable for query resource requests. If the number of unavailable or unregistered segments is higher than a set limit, then query resource requests are rejected. Also ensure that the variance of dispatched virtual segments across physical segments is not greater than the configured limit. See [Rejection of Query Resource Requests](../troubleshooting/Troubleshooting.html).
+-   Use multiple master and segment temporary directories on separate, large disks (2TB or greater) to load balance writes to temporary files (for example, `/disk1/tmp /disk2/tmp`). For a given query, HAWQ will use a separate temp directory (if available) for each virtual segment to store spill files. Multiple HAWQ sessions will also use separate temp directories where available to avoid disk contention. If you configure too few temp directories, or you place multiple temp directories on the same disk, you increase the risk of disk contention or running out of disk space when multiple virtual segments target the same disk. 
+-   Configure minimum resource levels in YARN, and tune the timeout of when idle resources are returned to YARN. See [Tune HAWQ Resource Negotiations with YARN](YARNIntegration.html).
+-   Make sure that the property `yarn.scheduler.minimum-allocation-mb` in `yarn-site.xml` is an equal subdivision of 1GB. For example, 1024, 512. See [Setting HAWQ Segment Resource Capacity in YARN](YARNIntegration.html#topic_pzf_kqn_c5).

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/resourcemgmt/index.md.erb
----------------------------------------------------------------------
diff --git a/markdown/resourcemgmt/index.md.erb b/markdown/resourcemgmt/index.md.erb
new file mode 100644
index 0000000..7efb756
--- /dev/null
+++ b/markdown/resourcemgmt/index.md.erb
@@ -0,0 +1,12 @@
+---
+title: Managing Resources
+---
+
+This section describes how to use HAWQ's resource management features:
+
+*  <a class="subnav" href="./HAWQResourceManagement.html">How HAWQ Manages Resources</a>
+*  <a class="subnav" href="./best-practices.html">Best Practices for Configuring Resource Management</a>
+*  <a class="subnav" href="./ConfigureResourceManagement.html">Configuring Resource Management</a>
+*  <a class="subnav" href="./YARNIntegration.html">Integrating YARN with HAWQ</a>
+*  <a class="subnav" href="./ResourceQueues.html">Working with Hierarchical Resource Queues</a>
+*  <a class="subnav" href="./ResourceManagerStatus.html">Analyzing Resource Manager Status</a>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/troubleshooting/Troubleshooting.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/troubleshooting/Troubleshooting.html.md.erb b/markdown/troubleshooting/Troubleshooting.html.md.erb
new file mode 100644
index 0000000..2b7414b
--- /dev/null
+++ b/markdown/troubleshooting/Troubleshooting.html.md.erb
@@ -0,0 +1,101 @@
+---
+title: Troubleshooting
+---
+
+This chapter describes how to resolve common problems and errors that occur in a HAWQ system.
+
+
+
+## <a id="topic_dwd_rnx_15"></a>Query Performance Issues
+
+**Problem:** Query performance is slow.
+
+**Cause:** There can be multiple reasons why a query might be performing slowly. For example, the locality of data distribution, the number of virtual segments, or the number of hosts used to execute the query can all affect its performance. The following procedure describes how to investigate query performance issues.
+
+### <a id="task_ayl_pbw_c5"></a>How to Investigate Query Performance Issues
+
+A query is not executing as quickly as you would expect. Here is how to investigate possible causes of slowdown:
+
+1.  Check the health of the cluster.
+    1.  Are any DataNodes, segments or nodes down?
+    2.  Are there many failed disks?
+
+2.  Check table statistics. Have the tables involved in the query been analyzed?
+3.  Check the plan of the query and run [`EXPLAIN ANALYZE`](../reference/sql/EXPLAIN.html) to determine the bottleneck.
+    Sometimes, there is not enough memory for some operators, such as Hash Join. If an operator cannot perform all of its work in the memory allocated to it, it caches data on disk in *spill files*. A query that uses spill files runs much more slowly than one that runs entirely in memory.
+
+4.  Check data locality statistics using [`EXPLAIN ANALYZE`](../reference/sql/EXPLAIN.html). Alternatively, you can check the HAWQ log, which also records the data locality result for every query. See [Data Locality Statistics](../query/query-performance.html#topic_amk_drc_d5) for information on these statistics.
+5.  Check resource queue status. You can query the `pg_resqueue_status` view \(see the example query following this list\) to check whether the target queue has already dispatched some resources to the queries, or whether the target queue is lacking resources. See [Checking Existing Resource Queues](../resourcemgmt/ResourceQueues.html#topic_lqy_gls_zt).
+6.  Analyze a dump of the resource manager's status to see more resource queue status. See [Analyzing Resource Manager Status](../resourcemgmt/ResourceQueues.html#topic_zrh_pkc_f5).
+
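+For example, a quick way to inspect resource queue status from `psql` \(step 5 above\) is:
+
+``` sql
+-- Shows the current status of each resource queue.
+SELECT * FROM pg_resqueue_status;
+```
+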
+## <a id="topic_vm5_znx_15"></a>Rejection of Query Resource Requests
+
+**Problem:** HAWQ resource manager is rejecting query resource allocation requests.
+
+**Cause:** The HAWQ resource manager will reject query resource allocation requests under the following conditions:
+
+-   **Too many physical segments are unavailable.**
+
+    HAWQ resource manager expects that the physical segments listed in file `$GPHOME/etc/slaves` are already registered and can be queried from table `gp_segment_configuration`.
+
+    If the resource manager determines that the number of unregistered or unavailable HAWQ physical segments is greater than [hawq\_rm\_rejectrequest\_nseg\_limit](../reference/guc/parameter_definitions.html#hawq_rm_rejectrequest_nseg_limit), then the resource manager rejects query resource requests directly. The purpose of rejecting the query is to guarantee that queries are run in a full-size cluster. This makes diagnosing query performance problems easier. The default value of `hawq_rm_rejectrequest_nseg_limit` is 0.25, which means that if more than 0.25 \* the number of segments listed in `$GPHOME/etc/slaves` are found to be unavailable or unregistered, then the resource manager rejects the query's request for resources. For example, if there are 15 segments listed in the slaves file, the resource manager calculates that no more than 4 segments (0.25 \* 15) can be unavailable.
+
+    In most cases, you do not need to modify this default value.
+
+-   **There are unused physical segments with virtual segments allocated for the query.**
+
+    The limit defined in [hawq\_rm\_tolerate\_nseg\_limit](../reference/guc/parameter_definitions.html#hawq_rm_tolerate_nseg_limit) has been exceeded.
+
+-   **Virtual segments have been dispatched too unevenly across physical segments.**
+
+    To ensure the best query performance, the HAWQ resource manager tries to allocate virtual segments for query execution as evenly as possible across physical segments. However, there can be variance in allocations. HAWQ will reject query resource allocation requests that have a variance greater than the value set in [hawq\_rm\_nvseg\_variance\_amon\_seg\_limit](../reference/guc/parameter_definitions.html#hawq_rm_nvseg_variance_amon_seg_limit).
+
+    For example, one query execution causes nine (9) virtual segments to be dispatched to two (2) physical segments. Assume that one segment has been allocated seven (7) virtual segments and another one has allocated two (2) virtual segments. Then the variance between the segments is five (5). If `hawq_rm_nvseg_variance_amon_seg_limit` is set to the default of one (1), then the allocation of resources for this query is rejected and will be reallocated later. However, if a physical segment has five virtual segments and the other physical segment has four (4), then this resource allocation is accepted.
+
+**Solution:** Check the status of the nodes in the cluster. Restart existing nodes, if necessary, or add new nodes. Alternatively, modify [hawq\_rm\_nvseg\_variance\_amon\_seg\_limit](../reference/guc/parameter_definitions.html#hawq_rm_nvseg_variance_amon_seg_limit) \(although note that this can affect query performance\).
+
+## <a id="topic_qq4_rkl_wv"></a>Queries Cancelled Due to High VMEM Usage
+
+**Problem:** Certain queries are cancelled due to high virtual memory usage. Example error message:
+
+``` pre
+ERROR: Canceling query because of high VMEM usage. Used: 1748MB, available 480MB, red zone: 9216MB (runaway_cleaner.c:135) (seg74 bcn-w3:5532 pid=33619) (dispatcher.c:1681)
+```
+
+**Cause:** This error occurs when the virtual memory usage on a segment exceeds the virtual memory threshold, which can be configured as a percentage through the [runaway\_detector\_activation\_percent](../reference/guc/parameter_definitions.html#runaway_detector_activation_percent) server configuration parameter.
+
+If the amount of virtual memory utilized by a physical segment exceeds the calculated threshold, then HAWQ begins terminating queries based on memory usage, starting with the query that is consuming the largest amount of memory. Queries are terminated until the percentage of utilized virtual memory is below the specified percentage.
+
+**Solution:** Try temporarily increasing the value of `hawq_re_memory_overcommit_max` to allow specific queries to run without error.
+
+Check `pg_log` files for more memory usage details on session and QE processes. HAWQ logs terminated query information such as memory allocation history and context information as well as query plan operator memory usage information. This information is sent to the master and segment instance log files.
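+
+For example, assuming a hypothetical master log directory of `/data/hawq/master/pg_log`, you could search for runaway-query cancellations as sketched below; adjust the path to match your deployment.
+
+``` shell
+# Sketch only: search the master's log files for runaway query cancellations.
+grep "Canceling query because of high VMEM usage" /data/hawq/master/pg_log/*.csv
+```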
+
+## <a id="topic_hlj_zxx_15"></a>Segments Do Not Appear in gp\_segment\_configuration
+
+**Problem:** Segments have successfully started, but cannot be found in table `gp_segment_configuration`.
+
+**Cause:** Your segments may have been assigned identical IP addresses.
+
+Some software and projects have virtualized network interfaces that use auto-configured IP addresses. This may cause some HAWQ segments to obtain identical IP addresses. The resource manager's fault tolerance service component will only recognize one of the segments with an identical IP address.
+
+**Solution:** Change your network's configuration to disallow identical IP addresses before starting up the HAWQ cluster.
+
+## <a id="investigatedownsegment"></a>Investigating Segments Marked As Down 
+
+**Problem:** The [HAWQ fault tolerance service (FTS)](../admin/FaultTolerance.html) has marked a segment as down in the [gp_segment_configuration](../reference/catalog/gp_segment_configuration.html) catalog table.
+
+**Cause:**  FTS marks a segment as down when a segment encounters a critical error. For example, a temporary directory on the segment fails due to a hardware error. Other causes might include network or communication errors, resource manager errors, or simply a heartbeat timeout. The segment reports critical failures to the HAWQ master through a heartbeat report.
+
+**Solution:** The actions required to recover a segment vary depending upon the reason. In some cases, the segment is only marked as down temporarily, until the next heartbeat interval rechecks the segment's status. To investigate why a segment was marked down, check the `gp_configuration_history` catalog table for the corresponding reason. See [Viewing the Current Status of a Segment](../admin/FaultTolerance.html#view_segment_status) for a description of the various reasons that the fault tolerance service may mark a segment as down.
+
+## <a id="topic_mdz_q2y_15"></a>Handling Segment Resource Fragmentation
+
+Different HAWQ resource queues can have different virtual segment resource quotas, which can result in resource fragmentation. For example, a HAWQ cluster has 4GB of memory available for a currently queued query, but the available resources are split into four 512MB blocks across 4 different segments. It is then impossible to allocate two 1GB memory virtual segments.
+
+In standalone mode, the segment resources are all exclusively occupied by HAWQ. Resource fragmentation can occur when segment capacity is not a multiple of a virtual segment resource quota. For example, a segment has 15GB memory capacity, but the virtual segment resource quota is set to 2GB. The maximum possible memory consumption in a segment is 14GB. Therefore, you should configure segment resource capacity as a multiple of all virtual segment resource quotas.
+
+In YARN mode, resources are allocated from the YARN resource manager. The HAWQ resource manager acquires YARN containers by 1 vcore at a time. For example, if YARN reports that a segment has 64GB memory and 16 vcores configured for YARN applications, HAWQ requests YARN containers of 4GB memory and 1 vcore each. In this manner, the HAWQ resource manager acquires YARN containers on demand. If the capacity of the YARN container is not a multiple of the virtual segment resource quota, resource fragmentation may occur. For example, if the YARN container resource capacity is 3GB memory and 1 vcore, one segment may have 1 or 3 YARN containers for HAWQ query execution. In this situation, if the virtual segment resource quota is 2GB memory, then HAWQ will always have 1GB of memory that cannot be utilized. Therefore, it is recommended to configure YARN node resource capacity carefully so that the YARN container resource quota is a multiple of all virtual segment resource quotas. In addition, make sure your CPU-to-memory ratio is a multiple of the amount configured for `yarn.scheduler.minimum-allocation-mb`. See [Setting HAWQ Segment Resource Capacity in YARN](../resourcemgmt/YARNIntegration.html#topic_pzf_kqn_c5) for more information.
+
+If resource fragmentation occurs, queued requests are not processed until either some running queries return resources or the global resource manager provides more resources. If you encounter resource fragmentation, you should double-check the configured capacities of the resource queues for any errors. For example, an error might be that the global resource manager container's memory-to-core ratio is not a multiple of the virtual segment resource quota.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/02-pipeline.png
----------------------------------------------------------------------
diff --git a/mdimages/02-pipeline.png b/mdimages/02-pipeline.png
deleted file mode 100644
index 26fec1b..0000000
Binary files a/mdimages/02-pipeline.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/03-gpload-files.jpg
----------------------------------------------------------------------
diff --git a/mdimages/03-gpload-files.jpg b/mdimages/03-gpload-files.jpg
deleted file mode 100644
index d50435f..0000000
Binary files a/mdimages/03-gpload-files.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/1-assign-masters.tiff
----------------------------------------------------------------------
diff --git a/mdimages/1-assign-masters.tiff b/mdimages/1-assign-masters.tiff
deleted file mode 100644
index b5c4cb4..0000000
Binary files a/mdimages/1-assign-masters.tiff and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/1-choose-services.tiff
----------------------------------------------------------------------
diff --git a/mdimages/1-choose-services.tiff b/mdimages/1-choose-services.tiff
deleted file mode 100644
index d21b706..0000000
Binary files a/mdimages/1-choose-services.tiff and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/3-assign-slaves-and-clients.tiff
----------------------------------------------------------------------
diff --git a/mdimages/3-assign-slaves-and-clients.tiff b/mdimages/3-assign-slaves-and-clients.tiff
deleted file mode 100644
index 93ea3bd..0000000
Binary files a/mdimages/3-assign-slaves-and-clients.tiff and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/4-customize-services-hawq.tiff
----------------------------------------------------------------------
diff --git a/mdimages/4-customize-services-hawq.tiff b/mdimages/4-customize-services-hawq.tiff
deleted file mode 100644
index c6bfee8..0000000
Binary files a/mdimages/4-customize-services-hawq.tiff and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/5-customize-services-pxf.tiff
----------------------------------------------------------------------
diff --git a/mdimages/5-customize-services-pxf.tiff b/mdimages/5-customize-services-pxf.tiff
deleted file mode 100644
index 3812aa1..0000000
Binary files a/mdimages/5-customize-services-pxf.tiff and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/6-review.tiff
----------------------------------------------------------------------
diff --git a/mdimages/6-review.tiff b/mdimages/6-review.tiff
deleted file mode 100644
index be7debb..0000000
Binary files a/mdimages/6-review.tiff and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/7-install-start-test.tiff
----------------------------------------------------------------------
diff --git a/mdimages/7-install-start-test.tiff b/mdimages/7-install-start-test.tiff
deleted file mode 100644
index b556e9a..0000000
Binary files a/mdimages/7-install-start-test.tiff and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/ext-tables-xml.png
----------------------------------------------------------------------
diff --git a/mdimages/ext-tables-xml.png b/mdimages/ext-tables-xml.png
deleted file mode 100644
index f208828..0000000
Binary files a/mdimages/ext-tables-xml.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/ext_tables.jpg
----------------------------------------------------------------------
diff --git a/mdimages/ext_tables.jpg b/mdimages/ext_tables.jpg
deleted file mode 100644
index d5a0940..0000000
Binary files a/mdimages/ext_tables.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/ext_tables_multinic.jpg
----------------------------------------------------------------------
diff --git a/mdimages/ext_tables_multinic.jpg b/mdimages/ext_tables_multinic.jpg
deleted file mode 100644
index fcf09c4..0000000
Binary files a/mdimages/ext_tables_multinic.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/gangs.jpg
----------------------------------------------------------------------
diff --git a/mdimages/gangs.jpg b/mdimages/gangs.jpg
deleted file mode 100644
index 0d14585..0000000
Binary files a/mdimages/gangs.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/gp_orca_fallback.png
----------------------------------------------------------------------
diff --git a/mdimages/gp_orca_fallback.png b/mdimages/gp_orca_fallback.png
deleted file mode 100644
index 000a6af..0000000
Binary files a/mdimages/gp_orca_fallback.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/gpfdist_instances.png
----------------------------------------------------------------------
diff --git a/mdimages/gpfdist_instances.png b/mdimages/gpfdist_instances.png
deleted file mode 100644
index 6fae2d4..0000000
Binary files a/mdimages/gpfdist_instances.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/gpfdist_instances_backup.png
----------------------------------------------------------------------
diff --git a/mdimages/gpfdist_instances_backup.png b/mdimages/gpfdist_instances_backup.png
deleted file mode 100644
index 7cd3e1a..0000000
Binary files a/mdimages/gpfdist_instances_backup.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/gporca.png
----------------------------------------------------------------------
diff --git a/mdimages/gporca.png b/mdimages/gporca.png
deleted file mode 100644
index 2909443..0000000
Binary files a/mdimages/gporca.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/hawq_architecture_components.png
----------------------------------------------------------------------
diff --git a/mdimages/hawq_architecture_components.png b/mdimages/hawq_architecture_components.png
deleted file mode 100644
index cea50b0..0000000
Binary files a/mdimages/hawq_architecture_components.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/hawq_hcatalog.png
----------------------------------------------------------------------
diff --git a/mdimages/hawq_hcatalog.png b/mdimages/hawq_hcatalog.png
deleted file mode 100644
index 35b74c3..0000000
Binary files a/mdimages/hawq_hcatalog.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/hawq_high_level_architecture.png
----------------------------------------------------------------------
diff --git a/mdimages/hawq_high_level_architecture.png b/mdimages/hawq_high_level_architecture.png
deleted file mode 100644
index d88bf7a..0000000
Binary files a/mdimages/hawq_high_level_architecture.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/partitions.jpg
----------------------------------------------------------------------
diff --git a/mdimages/partitions.jpg b/mdimages/partitions.jpg
deleted file mode 100644
index d366e21..0000000
Binary files a/mdimages/partitions.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/piv-opt.png
----------------------------------------------------------------------
diff --git a/mdimages/piv-opt.png b/mdimages/piv-opt.png
deleted file mode 100644
index f8f192b..0000000
Binary files a/mdimages/piv-opt.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/resource_queues.jpg
----------------------------------------------------------------------
diff --git a/mdimages/resource_queues.jpg b/mdimages/resource_queues.jpg
deleted file mode 100644
index 7f5a54c..0000000
Binary files a/mdimages/resource_queues.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/slice_plan.jpg
----------------------------------------------------------------------
diff --git a/mdimages/slice_plan.jpg b/mdimages/slice_plan.jpg
deleted file mode 100644
index ad8da83..0000000
Binary files a/mdimages/slice_plan.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/source/gporca.graffle
----------------------------------------------------------------------
diff --git a/mdimages/source/gporca.graffle b/mdimages/source/gporca.graffle
deleted file mode 100644
index fb835d5..0000000
Binary files a/mdimages/source/gporca.graffle and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/source/hawq_hcatalog.graffle
----------------------------------------------------------------------
diff --git a/mdimages/source/hawq_hcatalog.graffle b/mdimages/source/hawq_hcatalog.graffle
deleted file mode 100644
index f46bfb2..0000000
Binary files a/mdimages/source/hawq_hcatalog.graffle and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/standby_master.jpg
----------------------------------------------------------------------
diff --git a/mdimages/standby_master.jpg b/mdimages/standby_master.jpg
deleted file mode 100644
index ef195ab..0000000
Binary files a/mdimages/standby_master.jpg and /dev/null differ


[20/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/client_utilities/psql.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/client_utilities/psql.html.md.erb b/markdown/reference/cli/client_utilities/psql.html.md.erb
new file mode 100644
index 0000000..ee245e6
--- /dev/null
+++ b/markdown/reference/cli/client_utilities/psql.html.md.erb
@@ -0,0 +1,760 @@
+---
+title: psql
+---
+
+Interactive command-line interface for HAWQ.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+psql [<option> ...] [<dbname> [<username>]]
+```
+where:
+
+``` pre
+<general options> =
+    [-c '<command>' | --command '<command>'] 
+    [-d <dbname> | --dbname <dbname>] 
+    [-f <filename> | --file <filename>] 
+    [-l | --list]
+    [-v <assignment> | --set <assignment> | --variable <name>=<value>]
+    [-X | --no-psqlrc]
+    [-1 | --single-transaction]
+    [-? | --help]  
+    [--version]  
+<input and output options> =
+    [-a | --echo-all]
+    [-e | --echo-queries]
+    [-E | --echo-hidden]
+    [-L <filename> | --log-file <filename>]
+    [-n | --no-readline]
+    [-o <filename> | --output <filename>]
+    [-q | --quiet]
+    [-s | --single-step]
+    [-S | --single-line]
+<output format options> =
+    [-A | --no-align]
+    [-F <separator> | --field-separator <separator>]
+    [-H | --html]
+    [-P <assignment> | --pset <assignment>]
+    [-R <separator> | --record-separator <separator>]
+    [-t | --tuples-only]
+    [-T <table_options> | --table-attr <table_options>]
+    [-V | --version]
+    [-x | --expanded]
+<connection_options> =
+    [-h <host> | --host <host>] 
+    [-p <port> | --port <port>] 
+    [-U <username> | --username <username>] 
+    [-W | --password]
+    [-w | --no-password]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`psql` is a terminal-based front-end to HAWQ. It enables you to type in queries interactively, issue them to HAWQ, and see the query results. Alternatively, input can be from a file. In addition, it provides a number of meta-commands and various shell-like features to facilitate writing scripts and automating a wide variety of tasks.
+
+**Note:** HAWQ queries time out after a period of 600 seconds. For this reason, long-running queries may appear to hang in `psql` until results are processed or until the timeout period expires.
+
+## <a id="topic1__section4"></a>Options
+
+**General Options**
+
+<dt>-c, -\\\-command '\<command\>'  </dt>
+<dd>Specifies that `psql` is to execute the specified command string, and then exit. This is useful in shell scripts. \<command\> must be either a command string that is completely parseable by the server, or a single backslash command. Thus you cannot mix SQL and `psql` meta-commands with this option. To achieve that, you could pipe the string into `psql`, like this:
+
+``` shell
+echo '\x \\ SELECT * FROM foo;' | psql
+```
+
+(`\\` is the separator meta-command.)
+
+If the command string contains multiple SQL commands, they are processed in a single transaction, unless there are explicit `BEGIN/COMMIT` commands included in the string to divide it into multiple transactions. This is different from the behavior when the same string is fed to `psql`'s standard input.</dd>
+
+<dt>-d, -\\\-dbname \<dbname\>  </dt>
+<dd>Specifies the name of the database to connect to. This is equivalent to specifying dbname as the first non-option argument on the command line.
+
+If this parameter contains an equals sign, it is treated as a `conninfo` string; for example, you can pass `'dbname=postgres user=username password=mypass'` as `dbname`.</dd>
+
+<dt>-f, -\\\-file \<filename\>  </dt>
+<dd>Use a file as the source of commands instead of reading commands interactively. After the file is processed, `psql` terminates. If \<filename\> is `-` (hyphen), then standard input is read. Using this option is subtly different from writing `psql < <filename>`. In general, both will do what you expect, but using `-f` enables some nice features such as error messages with line numbers.</dd>
+
+<dt>-l, -\\\-list  </dt>
+<dd>List all available databases, then exit. Other non-connection options are ignored.</dd>
+
+<dt>-v \<assignment\>, -\\\-set \<assignment\>, -\\\-variable \<NAME=VALUE\>  </dt>
+<dd>Perform a variable assignment, like the `\set` internal command. \<NAME\> will be set to \<VALUE\>. Note that you must separate name and value, if any, by an equal sign on the command line. To unset a variable, leave off the equal sign. To just set a variable without a value, use the equal sign but leave off the value. These assignments are done during a very early stage of start-up, so variables reserved for internal purposes could be overwritten later.</dd>
+
+<dt>-X, -\\\-no-psqlrc  </dt>
+<dd>Do not read the start-up file (neither the system-wide `psqlrc` file nor the user's `~/.psqlrc` file).</dd>
+
+<dt>-1, -\\\-single-transaction  </dt>
+<dd>When `psql` executes a script with the `-f` option, adding this option wraps `BEGIN/COMMIT` around the script to execute it as a single transaction. This ensures that either all the commands complete successfully, or no changes are applied.
+
+If the script itself uses `BEGIN`, `COMMIT`, or `ROLLBACK`, this option will not have the desired effects. Also, if the script contains any command that cannot be executed inside a transaction block, specifying this option will cause that command (and hence the whole transaction) to fail.</dd>
+
+<dt>-?, -\\\-help  </dt>
+<dd>Show help about `psql` command line arguments, then exit.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Display version information, then exit.</dd>
+
+**Input and Output Options**
+
+<dt>-a, -\\\-echo-all  </dt>
+<dd>Print all input lines to standard output as they are read. This is more useful for script processing than for interactive mode.</dd>
+
+<dt>-e, -\\\-echo-queries  </dt>
+<dd>Copy all SQL commands sent to the server to standard output as well.</dd>
+
+<dt>-E, -\\\-echo-hidden  </dt>
+<dd>Echo the actual queries generated by `\d` and other backslash commands. You can use this to study `psql`'s internal operations.</dd>
+
+<dt>-L \<filename\>, -\\\-log-file \<filename\>  </dt>
+<dd>Write all query output into the specified log file, in addition to the normal output destination.</dd>
+
+<dt>-n, -\\\-no-readline  </dt>
+<dd>Disables enhanced command line editing (readline).</dd>
+
+<dt>-o \<filename\>, -\\\-output \<filename\>  </dt>
+<dd>Put all query output into the specified file.</dd>
+
+<dt>-q, -\\\-quiet  </dt>
+<dd>Specifies that `psql` should do its work quietly. By default, it prints welcome messages and various informational output. If this option is used, none of this happens. This is useful with the `-c` option.</dd>
+
+<dt>-s, -\\\-single-step  </dt>
+<dd>Run in single-step mode. That means the user is prompted before each command is sent to the server, with the option to cancel execution as well. Use this to debug scripts.</dd>
+
+<dt>-S, -\\\-single-line  </dt>
+<dd>Runs in single-line mode where a new line terminates an SQL command, as a semicolon does.</dd>
+
+
+**Output Format Options**
+
+<dt>-A, -\\\-no-align  </dt>
+<dd>Switches to unaligned output mode. (The default output mode is aligned.)</dd>
+
+<dt>-F, -\\\-field-separator \<separator\>  </dt>
+<dd>Use the specified separator as the field separator for unaligned output.</dd>
+
+<dt>-H, -\\\-html  </dt>
+<dd>Turn on HTML tabular output.</dd>
+
+<dt>-P, -\\\-pset \<assignment\>  </dt>
+<dd>Allows you to specify printing options in the style of `\pset` on the command line. Note that here you have to separate name and value with an equal sign instead of a space. Thus to set the output format to LaTeX, you could write `-P format=latex`.</dd>
+
+<dt>-R, -\\\-record-separator \<separator\>  </dt>
+<dd>Use \<separator\> as the record separator for unaligned output.</dd>
+
+<dt>-t, -\\\-tuples-only  </dt>
+<dd>Turn off printing of column names and result row count footers, etc. This command is equivalent to `\pset tuples_only` and is provided for convenience.</dd>
+
+<dt>-T, -\\\-table-attr \<table\_options\>  </dt>
+<dd>Allows you to specify options to be placed within the HTML table tag. See `\pset` for details.</dd>
+
+<dt>-x, -\\\-expanded  </dt>
+<dd>Turn on the expanded table formatting mode.</dd>
+
+**Connection Options**
+
+<dt>-h, -\\\-host \<host\>  </dt>
+<dd>The host name of the machine on which the HAWQ master database server is running. If not specified, reads from the environment variable `PGHOST` or defaults to localhost.</dd>
+
+<dt>-p, -\\\-port \<port\>  </dt>
+<dd>The TCP port on which the HAWQ master database server is listening for connections. If not specified, reads from the environment variable `PGPORT` or defaults to 5432.</dd>
+
+<dt>-U, -\\\-username \<username\>  </dt>
+<dd>The database role name to connect as. If not specified, reads from the environment variable `PGUSER` or defaults to the current system role name.</dd>
+
+<dt>-W, -\\\-password  </dt>
+<dd>Force a password prompt. `psql` should automatically prompt for a password whenever the server requests password authentication. However, currently password request detection is not totally reliable, hence this option to force a prompt. If no password prompt is issued and the server requires password authentication, the connection attempt will fail.</dd>
+
+<dt>-w, -\\\-no-password  </dt>
+<dd>Never issue a password prompt. If the server requires password authentication and a password is not available by other means, such as through a `~/.pgpass` file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
+
+**Note:** This option remains set for the entire session, and so it affects uses of the meta-command `\connect` as well as the initial connection attempt.</dd>
+
+## <a id="topic1__section6"></a>Exit Status
+
+`psql` returns 0 to the shell if it finished normally, 1 if a fatal error of its own (out of memory, file not found) occurs, 2 if the connection to the server went bad and the session was not interactive, and 3 if an error occurred in a script and the variable `ON_ERROR_STOP` was set.
+
+## <a id="topic1__section7"></a>Usage
+
+**Connecting to a Database**
+
+`psql` is a client application for HAWQ. To connect to a database you must know the name of your target database, the host name and port number of the HAWQ master server and what database user name you want to connect as. Use the `-d`, `-h`, `-p`, and `-U` command line options, respectively, to specify these parameters to `psql`. If an argument is found that does not belong to any option, it will be interpreted as the database name (or the user name, if the database name is already given). Not all these options are required; there are useful defaults. If you omit the host name, `psql` will connect via a UNIX-domain socket to a master server on the local host, or via TCP/IP to `localhost` on machines that do not have UNIX-domain sockets. The default master port number is 5432. If you use a different port for the master, you must specify the port. The default database user name is your UNIX user name, as is the default database name. Note that you cannot just connect to any database under any user name. Your database administrator should have informed you about your access rights.
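+
+For example, a hypothetical connection that specifies all of these options explicitly might look like this; substitute your own master host, port, role, and database:
+
+``` shell
+psql -h mdw -p 5432 -U gpadmin -d mydatabase
+```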
+
+When the defaults are not right, you can save yourself some typing by setting any or all of the environment variables `PGAPPNAME`, `PGDATABASE`, `PGHOST`, `PGPORT`, and `PGUSER` to appropriate values.
+
+It is also convenient to have a `~/.pgpass` file to avoid regularly having to type in passwords. This file should reside in your home directory and contain lines of the following format:
+
+``` pre
+hostname:port:database:username:password
+```
+
+The permissions on `.pgpass` must disallow any access to world or group (for example: `chmod 0600 ~/.pgpass`). If the permissions are less strict than this, the file will be ignored. (The file permissions are not currently checked on Microsoft Windows clients, however.)
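+
+For example, a hypothetical entry for the connection above could be added and protected as follows; replace every field with your own values:
+
+``` shell
+# One line per connection; fields are host:port:database:username:password.
+echo "mdw:5432:mydatabase:gpadmin:mypassword" >> ~/.pgpass
+chmod 0600 ~/.pgpass
+```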
+
+If the connection could not be made for any reason (insufficient privileges, server is not running, etc.), `psql` will return an error and terminate.
+
+**Entering SQL Commands**
+
+In normal operation, `psql` provides a prompt with the name of the database to which `psql` is currently connected, followed by the string `=>` for a regular user or `=#` for a superuser. For example:
+
+``` pre
+testdb=>
+testdb=#
+```
+
+At the prompt, the user may type in SQL commands. Ordinarily, input lines are sent to the server when a command-terminating semicolon is reached. An end of line does not terminate a command. Thus commands can be spread over several lines for clarity. If the command was sent and executed without error, the results of the command are displayed on the screen.
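+
+For example, a command can span multiple lines; `psql` shows a continuation prompt until the terminating semicolon is entered:
+
+``` pre
+testdb=# SELECT count(*)
+testdb-# FROM pg_catalog.pg_tables;
+```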
+
+## <a id="topic1__section10"></a>Meta-Commands
+
+Anything you enter in `psql` that begins with an unquoted backslash is a `psql` meta-command that is processed by `psql` itself. These commands help make `psql` more useful for administration or scripting. Meta-commands are more commonly called slash or backslash commands.
+
+The format of a `psql` command is the backslash, followed immediately by a command verb, then any arguments. The arguments are separated from the command verb and each other by any number of whitespace characters.
+
+To include whitespace into an argument you may quote it with a single quote. To include a single quote into such an argument, use two single quotes. Anything contained in single quotes is furthermore subject to C-like substitutions for `\n` (new line), `\t` (tab), `\digits` (octal), and `\xdigits` (hexadecimal).
+
+If an unquoted argument begins with a colon (`:`), it is taken as a `psql` variable and the value of the variable is used as the argument instead.
+
+Arguments that are enclosed in backquotes (`` ` ``) are taken as a command line that is passed to the shell. The output of the command (with any trailing newline removed) is taken as the argument value. The above escape sequences also apply in backquotes.
+
+Some commands take an SQL identifier (such as a table name) as argument. These arguments follow the syntax rules of SQL: Unquoted letters are forced to lowercase, while double quotes (`"`) protect letters from case conversion and allow incorporation of whitespace into the identifier. Within double quotes, paired double quotes reduce to a single double quote in the resulting name. For example, `FOO"BAR"BAZ` is interpreted as `fooBARbaz`, and `"A weird"" name"` becomes `A weird" name`.
+
+Parsing for arguments stops when another unquoted backslash occurs. This is taken as the beginning of a new meta-command. The special sequence `\\` (two backslashes) marks the end of arguments and continues parsing SQL commands, if any. That way SQL and `psql` commands can be freely mixed on a line. But in any case, the arguments of a meta-command cannot continue beyond the end of the line.
+
+The following meta-commands are defined:
+
+<dt>\\a  </dt>
+<dd>If the current table output format is unaligned, it is switched to aligned. If it is not unaligned, it is set to unaligned. This command is kept for backwards compatibility. See `\pset` for a more general solution.</dd>
+
+<dt>\\cd \[\<directory\>\]  </dt>
+<dd>Changes the current working directory. Without argument, changes to the current user's home directory. To print your current working directory, use `\!pwd`.</dd>
+
+<dt>\\C \[\<title\>\]  </dt>
+<dd>Sets the title of any tables being printed as the result of a query or unset any such title. This command is equivalent to `\pset title`.</dd>
+
+<dt>\\c, \\connect \[\<dbname\> \[\<username\>\] \[\<host\>\] \[\<port\>\]\]  </dt>
+<dd>Establishes a new connection. If the new connection is successfully made, the previous connection is closed. If any of dbname, username, host or port are omitted, the value of that parameter from the previous connection is used. If the connection attempt failed, the previous connection will only be kept if `psql` is in interactive mode. When executing a non-interactive script, processing will immediately stop with an error. This distinction was chosen as a user convenience against typos, and a safety mechanism that scripts are not accidentally acting on the wrong database.</dd>
+
+<dt>\\conninfo  </dt>
+<dd>Displays information about the current connection including the database name, the user name, the type of connection (UNIX domain socket, `TCP/IP`, etc.), the host, and the port.</dd>
+
+<dt>\\copy {\<table\> \[(\<column\_list\>)\] | (\<query\>)} {from | to} {\<filename\> | stdin | stdout | pstdin | pstdout} \[with\] \[binary\] \[oids\] \[delimiter \[as\] '\<character\>'\] \[null \[as\] '\<string\>'\] \[csv \[header\] \[quote \[as\] 'character'\] \[escape \[as\] '\<character\>'\] \[force quote column\_list\] \[force not null column\_list\]\]  </dt>
+<dd>Performs a frontend (client) copy. This is an operation that runs an SQL `COPY` command, but instead of the server reading or writing the specified file, `psql` reads or writes the file and routes the data between the server and the local file system. This means that file accessibility and privileges are those of the local user, not the server, and no SQL superuser privileges are required.
+
+The syntax of the command is similar to that of the SQL `COPY` command. Note that, because of this, special parsing rules apply to the `\copy` command. In particular, the variable substitution rules and backslash escapes do not apply.
+
+`\copy ... from stdin | to stdout` reads/writes based on the command input and output respectively. All rows are read from the same source that issued the command, continuing until `\.` is read or the stream reaches `EOF`. Output is sent to the same place as command output. To read/write from `psql`'s standard input or output, use `pstdin` or `pstdout`. This option is useful for populating tables in-line within a SQL script file.
+
+This operation is not as efficient as the SQL `COPY` command because all data must pass through the client/server connection.</dd>
+
+<dt>\\copyright  </dt>
+<dd>Shows the copyright and distribution terms of PostgreSQL on which HAWQ is based.</dd>
+
+<dt>\\d, \\d+, \\dS \[\<relation\_pattern\>\]  </dt>
+<dd>For each relation (table, external table, view, index, or sequence) matching the relation pattern, show all columns, their types, the tablespace (if not the default) and any special attributes such as `NOT NULL` or defaults, if any. Associated indexes, constraints, rules, and triggers are also shown, as is the view definition if the relation is a view.
+
+-   The command form `\d+` is identical, except that more information is displayed: any comments associated with the columns of the table are shown, as is the presence of OIDs in the table.
+-   The command form `\dS` is identical, except that system information is displayed as well as user information. For example, `\dt` displays user tables, but not system tables; `\dtS` displays both user and system tables. Both of these commands can take the `+` parameter to display additional information, as in `\dt+` and `\dtS+`.
+
+    If `\d` is used without a pattern argument, it is equivalent to `\dtvs` which will show a list of all tables, views, and sequences.</dd>
+
+<dt>\\da \[\<aggregate\_pattern\>\]  </dt>
+<dd>Lists all available aggregate functions, together with the data types they operate on. If a pattern is specified, only aggregates whose names match the pattern are shown.</dd>
+
+<dt>\\db, \\db+ \[\<tablespace\_pattern\>\]  </dt>
+<dd>Lists all available tablespaces and their corresponding filespace locations. If pattern is specified, only tablespaces whose names match the pattern are shown. If + is appended to the command name, each object is listed with its associated permissions.</dd>
+
+<dt>\\dc \[\<conversion\_pattern\>\]  </dt>
+<dd>Lists all available conversions between character-set encodings. If pattern is specified, only conversions whose names match the pattern are listed.</dd>
+
+<dt>\\dC  </dt>
+<dd>Lists all available type casts.</dd>
+
+<dt>\\dd \[\<object\_pattern\>\]  </dt>
+<dd>Lists all available objects. If pattern is specified, only matching objects are shown.</dd>
+
+<dt>\\dD \[\<domain\_pattern\>\]  </dt>
+<dd>Lists all available domains. If pattern is specified, only matching domains are shown.</dd>
+
+<dt>\\df, \\df+ \[\<function\_pattern\> \]  </dt>
+<dd>Lists available functions, together with their argument and return types. If pattern is specified, only functions whose names match the pattern are shown. If the form `\df+` is used, additional information about each function, including language and description, is shown. To reduce clutter, `\df` does not show data type I/O functions. This is implemented by ignoring functions that accept or return type `cstring`.</dd>
+
+<dt>\\dg \[\<role\_pattern\>\]  </dt>
+<dd>Lists all database roles. If pattern is specified, only those roles whose names match the pattern are listed.</dd>
+
+<dt>\\distPvxS \[index | sequence | table | parent table | view | external\_table | system\_object\]   </dt>
+<dd>This is not the actual command name: the letters `i`, `s`, `t`, `P`, `v`, `x`, `S` stand for index, sequence, table, parent table, view, external table, and system table, respectively. You can specify any or all of these letters, in any order, to obtain a listing of all the matching objects. The letter `S` restricts the listing to system objects; without `S`, only non-system objects are shown. If + is appended to the command name, each object is listed with its associated description, if any. If a pattern is specified, only objects whose names match the pattern are listed.</dd>
+
+<dt>\\dl  </dt>
+<dd>This is an alias for `\lo_list`, which shows a list of large objects.</dd>
+
+<dt>\\dn, \\dn+ \[\<schema\_pattern\>\]  </dt>
+<dd>Lists all available schemas (namespaces). If pattern is specified, only schemas whose names match the pattern are listed. Non-local temporary schemas are suppressed. If `+` is appended to the command name, each object is listed with its associated permissions and description, if any.</dd>
+
+<dt>\\do \[\<operator\_pattern\>\]  </dt>
+<dd>Lists available operators with their operand and return types. If pattern is specified, only operators whose names match the pattern are listed.</dd>
+
+<dt>\\dp \[\<relation\_pattern\_to\_show\_privileges\>\]  </dt>
+<dd>Produces a list of all available tables, views and sequences with their associated access privileges. If pattern is specified, only tables, views and sequences whose names match the pattern are listed. The `GRANT` and `REVOKE` commands are used to set access privileges.</dd>
+
+<dt>\\dT, \\dT+ \[\<datatype\_pattern\>\]  </dt>
+<dd>Lists all data types or only those that match pattern. The command form `\dT+` shows extra information.</dd>
+
+<dt>\\du \[\<role\_pattern\>\]  </dt>
+<dd>Lists all database roles, or only those that match pattern.</dd>
+
+<dt>\\e | \\edit \[\<filename\>\]  </dt>
+<dd>If a file name is specified, the file is edited; after the editor exits, its content is copied back to the query buffer. If no argument is given, the current query buffer is copied to a temporary file which is then edited in the same fashion. The new query buffer is then re-parsed according to the normal rules of `psql`, where the whole buffer is treated as a single line. (Thus you cannot make scripts this way. Use `\i` for that.) This means also that if the query ends with (or rather contains) a semicolon, it is immediately executed. In other cases it will merely wait in the query buffer.
+
+`psql` searches the environment variables `PSQL_EDITOR`, `EDITOR`, and `VISUAL` (in that order) for an editor to use. If all of them are unset, `vi` is used on UNIX systems, `notepad.exe` on Windows systems.</dd>
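+
+For example, one way to have `\e` open a particular editor is to export `PSQL_EDITOR` before starting `psql`; the editor path below is only an illustration:
+
+``` shell
+$ export PSQL_EDITOR=/usr/bin/vim   # any editor path works; vim is just an example
+$ psql mydatabase
+```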
+
+<dt>\\echo \<text\> \[ ... \]  </dt>
+<dd>Prints the arguments to the standard output, separated by one space and followed by a newline. This can be useful to intersperse information in the output of scripts.
+
+If you use the `\o` command to redirect your query output, you may wish to use `\qecho` instead of this command.</dd>
+
+<dt>\\encoding \[\<encoding\>\]  </dt>
+<dd>Sets the client character set encoding. Without an argument, this command shows the current encoding.</dd>
+
+<dt>\\f \[\<field\_separator\_string\>\]  </dt>
+<dd>Sets the field separator for unaligned query output. The default is the vertical bar (`|`). See also `\pset` for a generic way of setting output options.</dd>
+
+<dt>\\g \[{\<filename\> | \<command\> }\]  </dt>
+<dd>Sends the current query input buffer to the server and optionally stores the query's output in a file or pipes the output into a separate UNIX shell executing command. A bare `\g` is virtually equivalent to a semicolon. A `\g` with argument is a one-shot alternative to the `\o` command.</dd>
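+
+For example, assuming a table named `my_table` exists, the first command below writes the result to a file and the second pipes it through a shell command (the file path is illustrative):
+
+``` pre
+testdb=> SELECT * FROM my_table \g /home/gpadmin/results.txt
+testdb=> SELECT * FROM my_table \g |wc -l
+```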
+
+<dt>\\h, \\help \[\<sql\_command\>\]  </dt>
+<dd>Gives syntax help on the specified SQL command. If a command is not specified, then `psql` will list all the commands for which syntax help is available. Use an asterisk (\*) to show syntax help on all SQL commands. To simplify typing, commands that consist of several words do not have to be quoted.</dd>
+
+<dt>\\H  </dt>
+<dd>Turns on HTML query output format. If the HTML format is already on, it is switched back to the default aligned text format. This command is for compatibility and convenience, but see `\pset` about setting other output options.</dd>
+
+<dt>\\i \<input\_filename\>  </dt>
+<dd>Reads input from a file and executes it as though it had been typed on the keyboard. If you want to see the lines on the screen as they are read you must set the variable `ECHO` to all.</dd>
+
+<dt>\\l, \\list, \\l+, \\list+  </dt>
+<dd>List the names, owners, and character set encodings of all the databases in the server. If `+` is appended to the command name, database descriptions are also displayed.</dd>
+
+<dt>\\lo\_export \<loid\> \<filename\>  </dt>
+<dd>Reads the large object with OID \<loid\> from the database and writes it to \<filename\>. Note that this is subtly different from the server function `lo_export`, which acts with the permissions of the user that the database server runs as and on the server's file system. Use `\lo_list` to find out the large object's OID.</dd>
+
+<dt>\\lo\_import \<large\_object\_filename\> \[\<comment\>\]  </dt>
+<dd>Stores the file into a large object. Optionally, it associates the given comment with the object. Example:
+
+``` pre
+mydb=> \lo_import '/home/gpadmin/pictures/photo.xcf' 'a 
+picture of me'
+lo_import 152801
+```
+
+The response indicates that the large object received object ID 152801 which one ought to remember if one wants to access the object ever again. For that reason, you should always associate a human-readable comment with every object. Those can then be seen with the `\lo_list` command. Note that this command is subtly different from the server-side `lo_import` because it acts as the local user on the local file system, rather than the server's user and file system.</dd>
+
+<dt>\\lo\_list  </dt>
+<dd>Shows a list of all large objects currently stored in the database, along with any comments provided for them.</dd>
+
+<dt>\\lo\_unlink \<largeobject\_oid\>  </dt>
+<dd>Deletes the large object of the specified OID from the database. Use `\lo_list` to find out the large object's OID.</dd>
+
+<dt>\\o \[ {\<query\_result\_filename\> | \<command\>} \]  </dt>
+<dd>Saves future query results to a file or pipes them into a UNIX shell command. If no arguments are specified, the query output will be reset to the standard output. Query results include all tables, command responses, and notices obtained from the database server, as well as output of various backslash commands that query the database (such as `\d`), but not error messages. To intersperse text output in between query results, use `\qecho`.</dd>
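+
+A short session sketch (the file path is illustrative) that redirects query output to a file, labels it with `\qecho`, and then restores the standard output:
+
+``` pre
+testdb=> \o /home/gpadmin/report.txt
+testdb=> \qecho -- daily row count --
+testdb=> SELECT count(*) FROM my_table;
+testdb=> \o
+```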
+
+<dt>\\p  </dt>
+<dd>Print the current query buffer to the standard output.</dd>
+
+<dt>\\password \[\<username\>\]  </dt>
+<dd>Changes the password of the specified user (by default, the current user). This command prompts for the new password, encrypts it, and sends it to the server as an `ALTER ROLE` command. This makes sure that the new password does not appear in cleartext in the command history, the server log, or elsewhere.</dd>
+
+<dt>\\prompt \[ \<text\> \] \<name\>  </dt>
+<dd>Prompts the user to set a variable \<name\>. Optionally, you can specify a prompt. Enclose prompts longer than one word in single quotes.
+
+By default, `\prompt` uses the terminal for input and output. However, if the `-f` command line switch was used, `\prompt` uses standard input and standard output.</dd>
+
+<dt>\\pset \<print\_option\> \[\<value\>\]  </dt>
+<dd>This command sets options affecting the output of query result tables. \<print\_option\> describes which option is to be set. Adjustable printing options are:
+
+-   **`format`** – Sets the output format to one of **u**`naligned`, **a**`ligned`, **h**`tml`, **l**`atex`, **t**`roff-ms`, or **w**`rapped`. First letter abbreviations are allowed. Unaligned writes all columns of a row on a line, separated by the currently active field separator. This is intended to create output that can be read in by other programs. Aligned mode is the standard, human-readable, nicely formatted text output; this is the default. The HTML and LaTeX modes put out tables that are intended to be included in documents using the respective mark-up language. They are not complete documents! (This might not be so dramatic in HTML, but in LaTeX you must have a complete document wrapper.)
+
+    The wrapped option sets the output format like the `aligned` parameter, but wraps wide data values across lines to make the output fit in the target column width. The target width is set with the `columns` option. To specify the column width and select the wrapped format, use two `\pset` commands; for example, to set the width to 72 columns and specify wrapped format, use the commands `\pset columns 72` and then `\pset format wrapped` (see the example following this list).
+
+    **Note:** Since `psql` does not attempt to wrap column header titles, the wrapped format behaves the same as aligned if the total width needed for column headers exceeds the target.
+
+-   **`border`** – The second argument must be a number. In general, the higher the number the more borders and lines the tables will have, but this depends on the particular format. In HTML mode, this will translate directly into the `border=...` attribute; in the other formats only values `0` (no border), `1` (internal dividing lines), and `2` (table frame) make sense.
+-   **`columns`** – Sets the target width for the `wrapped` format, and also the width limit for determining whether output is wide enough to require the pager. The default is *zero*. Zero causes the target width to be controlled by the environment variable `COLUMNS`, or the detected screen width if `COLUMNS` is not set. In addition, if `COLUMNS` is zero, then the wrapped format affects screen output only. If `COLUMNS` is nonzero, then file and pipe output is wrapped to that width as well.
+
+    After setting the target width, use the command `\pset format wrapped` to enable the wrapped format.
+
+-   **`expanded`**, **`x`** – Toggles between regular and expanded format. When expanded format is enabled, query results are displayed in two columns, with the column name on the left and the data on the right. This mode is useful if the data would not fit on the screen in the normal horizontal mode. Expanded mode is supported by all four output formats.
+-   **`linestyle`** \[**`unicode`** | **`ascii`** | **`old-ascii`**\] – Sets the border line drawing style to one of unicode, ascii, or old-ascii. Unique abbreviations, including one letter, are allowed for the three styles. The default setting is `ascii`. This option only affects the `aligned` and `wrapped` output formats.
+
+    **`ascii`** – uses plain ASCII characters. Newlines in data are shown using a + symbol in the right-hand margin. When the wrapped format wraps data from one line to the next without a newline character, a dot (.) is shown in the right-hand margin of the first line, and again in the left-hand margin of the following line.
+
+    **`old-ascii`** – uses plain ASCII characters, with the formatting style used in PostgreSQL 8.4 and earlier. Newlines in data are shown using a : symbol in place of the left-hand column separator. When the data is wrapped from one line to the next without a newline character, a ; symbol is used in place of the left-hand column separator.
+
+    **`unicode`** – uses Unicode box-drawing characters. Newlines in data are shown using a carriage return symbol in the right-hand margin. When the data is wrapped from one line to the next without a newline character, an ellipsis symbol is shown in the right-hand margin of the first line, and again in the left-hand margin of the following line.
+
+    When the `border` setting is greater than zero, this option also determines the characters with which the border lines are drawn. Plain ASCII characters work everywhere, but Unicode characters look nicer on displays that recognize them.
+
+-   **`null 'string'`** – The second argument is a string to print whenever a column is null. The default is not to print anything, which can easily be mistaken for an empty string. For example, the command `\pset null '(empty)'` displays *(empty)* in null columns.
+-   **`fieldsep`** – Specifies the field separator to be used in unaligned output mode. That way one can create, for example, tab- or comma-separated output, which other programs might prefer. To set a tab as field separator, type `\pset fieldsep '\t'`. The default field separator is `'|'` (a vertical bar).
+-   **`footer`** – Toggles the display of the default footer (*x* rows).
+-   **`numericlocale`** – Toggles the display of a locale-aware character to separate groups of digits to the left of the decimal marker. It also enables a locale-aware decimal marker.
+-   **`recordsep`** – Specifies the record (line) separator to use in unaligned output mode. The default is a newline character.
+-   **`title`** \[\<text\>\] – Sets the table title for any subsequently printed tables. This can be used to give your output descriptive tags. If no argument is given, the title is unset.
+-   **`tableattr`**, **`T`** \[\<text\>\] – Allows you to specify any attributes to be placed inside the HTML table tag. This could for example be `cellpadding` or `bgcolor`. Note that you probably don't want to specify border here, as that is already taken care of by `\pset border`.
+-   **`tuples_only`**, **`t`** \[ novalue | on | off \] – The `\pset tuples_only` command by itself toggles between tuples only and full display. The values `on` and `off` set the tuples display, regardless of the current setting. Full display may show extra information such as column headers, titles, and various footers. In tuples only mode, only actual table data is shown. The `\t` command is equivalent to `\pset tuples_only` and is provided for convenience.
+-   **`pager`** – Controls the use of a pager for query and `psql` help output. When `on`, the pager is used only when appropriate: if the environment variable `PAGER` is set, the output is piped to the specified program; otherwise a platform-dependent default (such as `more`) is used. When `off`, the pager is not used. Pager can also be set to `always`, which causes the pager to be used unconditionally.
+</dd>
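+
+As an example of combining several of these options (the values are arbitrary), the following session switches to wrapped output at a width of 72 columns, marks null columns, and then returns to the default aligned format:
+
+``` pre
+testdb=> \pset columns 72
+testdb=> \pset format wrapped
+testdb=> \pset null '(null)'
+testdb=> \pset format aligned
+```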
+
+<dt>\\q  </dt>
+<dd>Quits the `psql` program.</dd>
+
+<dt>\\qecho \<text\> \[ ... \]   </dt>
+<dd>This command is identical to `\echo` except that the output will be written to the query output channel, as set by `\o`.</dd>
+
+<dt>\\r  </dt>
+<dd>Resets (clears) the query buffer.</dd>
+
+<dt>\\s \[\<history\_filename\>\]  </dt>
+<dd>Print or save the command line history to \<history\_filename\>. If \<history\_filename\> is omitted, the history is written to the standard output.</dd>
+
+<dt>\\set \[\<name\> \[\<value\> \[ ... \]\]\]  </dt>
+<dd>Sets the internal variable \<name\> to \<value\> or, if more than one value is given, to the concatenation of all of them. If no second argument is given, the variable is just set with no value. To unset a variable, use the `\unset` command.
+
+Valid variable names can contain letters, digits, and underscores. See "Variables" in [Advanced Features](#topic1__section12). Variable names are case-sensitive.
+
+Although you are welcome to set any variable to anything you want, `psql` treats several variables as special. They are documented in the topic about variables.
+
+This command is totally separate from the SQL command `SET`.</dd>
+
+<dt>\\t \[novalue | on | off\]  </dt>
+<dd>The `\t` command by itself toggles a display of output column name headings and row count footer. The values `on` and `off` set the tuples display, regardless of the current setting. This command is equivalent to `\pset tuples_only` and is provided for convenience.</dd>
+
+<dt>\\T \<table\_options\>  </dt>
+<dd>Allows you to specify attributes to be placed within the table tag in HTML tabular output mode.</dd>
+
+<dt>\\timing \[novalue | on | off\]  </dt>
+<dd>The `\timing` command by itself toggles a display of how long each SQL statement takes, in milliseconds. The values `on` and `off` set the time display, regardless of the current setting.</dd>
+
+<dt>\\w {\<filename\> | \<command\>}  </dt>
+<dd>Outputs the current query buffer to a file or pipes it to a UNIX command.</dd>
+
+<dt>\\x  </dt>
+<dd>Toggles expanded table formatting mode.</dd>
+
+<dt>\\z \[\<relation\_to\_show\_privileges\>\]  </dt>
+<dd>Produces a list of all available tables, views and sequences with their associated access privileges. If a pattern is specified, only tables, views and sequences whose names match the pattern are listed. This is an alias for `\dp`.</dd>
+
+<dt>\\! \[\<command\>\]  </dt>
+<dd>Escapes to a separate UNIX shell or executes the UNIX command. The arguments are not further interpreted, the shell will see them as is.</dd>
+
+<dt>\\?  </dt>
+<dd>Shows help information about the `psql` backslash commands.</dd>
+
+## <a id="topic1__section11"></a>Patterns
+
+The various `\d` commands accept a pattern parameter to specify the object name(s) to be displayed. In the simplest case, a pattern is just the exact name of the object. The characters within a pattern are normally folded to lower case, just as in SQL names; for example, `\dt FOO` will display the table named `foo`. As in SQL names, placing double quotes around a pattern stops folding to lower case. Should you need to include an actual double quote character in a pattern, write it as a pair of double quotes within a double-quote sequence; again this is in accord with the rules for SQL quoted identifiers. For example, `\dt "FOO""BAR"` will display the table named `FOO"BAR` (not `foo"bar`). Unlike the normal rules for SQL names, you can put double quotes around just part of a pattern, for instance `\dt FOO"FOO"BAR` will display the table named `fooFOObar`.
+
+Within a pattern, `*` matches any sequence of characters (including no characters) and `?` matches any single character. (This notation is comparable to UNIX shell file name patterns.) For example, `\dt int*` displays all tables whose names begin with `int`. But within double quotes, `*` and `?` lose these special meanings and are just matched literally.
+
+A pattern that contains a dot (`.`) is interpreted as a schema name pattern followed by an object name pattern. For example, `\dt foo*.bar*` displays all tables whose table name starts with `bar` that are in schemas whose schema name starts with `foo`. When no dot appears, then the pattern matches only objects that are visible in the current schema search path. Again, a dot within double quotes loses its special meaning and is matched literally.
+
+Advanced users can use regular-expression notations. All regular expression special characters work as specified in the [PostgreSQL documentation on regular expressions](http://www.postgresql.org/docs/8.2/static/functions-matching.html#FUNCTIONS-POSIX-REGEXP), except for `.` which is taken as a separator as mentioned above, `*` which is translated to the regular-expression notation `.*`, and `?` which is translated to `.`. You can emulate these pattern characters at need by writing `?` for `.`, `(R+|)` for `R*`, or `(R|)` for `R?`. Remember that the pattern must match the whole name, unlike the usual interpretation of regular expressions; write `*` at the beginning and/or end if you don't wish the pattern to be anchored. Note that within double quotes, all regular expression special characters lose their special meanings and are matched literally. Also, the regular expression special characters are matched literally in operator name patterns (such as the argument of `\do`).
+
+Whenever the pattern parameter is omitted completely, the `\d` commands display all objects that are visible in the current schema search path – this is equivalent to using the pattern `*`. To see all objects in the database, use the pattern `*.*`.
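+
+For example, assuming the schemas and tables referenced below exist, the following commands list tables whose names begin with `sales` in the current search path, all tables in the schema `public`, the exact mixed-case name `MyTable`, and every table in the database, respectively:
+
+``` pre
+testdb=> \dt sales*
+testdb=> \dt public.*
+testdb=> \dt "MyTable"
+testdb=> \dt *.*
+```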
+
+## <a id="topic1__section12"></a>Advanced Features
+
+**Variables**
+
+`psql` provides variable substitution features similar to common UNIX command shells. Variables are simply name/value pairs, where the value can be any string of any length. To set variables, use the `psql` meta-command `\set`:
+
+``` pre
+testdb=> \set foo bar
+```
+
+sets the variable `foo` to the value `bar`. To retrieve the content of the variable, precede the name with a colon and use it as the argument of any slash command:
+
+``` pre
+testdb=> \echo :foo
+bar
+```
+
+**Note:** The arguments of `\set` are subject to the same substitution rules as with other commands. Thus you can construct interesting references such as `\set :foo 'something'` and get 'soft links' or 'variable variables' of Perl or PHP fame, respectively. Unfortunately, there is no way to do anything useful with these constructs. On the other hand, `\set bar :foo` is a perfectly valid way to copy a variable.
+
+If you call `\set` without a second argument, the variable is set, with an empty string as \<value\>. To unset (or delete) a variable, use the command `\unset`.
+
+`psql`'s internal variable names can consist of letters, numbers, and underscores in any order and any number of them. A number of these variables are treated specially by `psql`. They indicate certain option settings that can be changed at run time by altering the value of the variable or represent some state of the application. Although you can use these variables for any other purpose, this is not recommended, as the program might behave unexpectedly. By convention, all specially treated variables consist of all upper-case letters (and possibly numbers and underscores). To ensure maximum compatibility in the future, avoid using such variable names for your own purposes. The specially treated variables are as follows (a sample `~/.psqlrc` that sets several of them appears after this list):
+
+<dt>AUTOCOMMIT  </dt>
+<dd>When on (the default), each SQL command is automatically committed upon successful completion. To postpone commit in this mode, you must enter a `BEGIN` or `START TRANSACTION` SQL command. When off or unset, SQL commands are not committed until you explicitly issue `COMMIT` or `END`. The autocommit-on mode works by issuing an implicit `BEGIN` for you, just before any command that is not already in a transaction block and is not itself a `BEGIN` or other transaction-control command, nor a command that cannot be executed inside a transaction block (such as `VACUUM`).
+
+In autocommit-off mode, you must explicitly abandon any failed transaction by entering `ABORT` or `ROLLBACK`. Also keep in mind that if you exit the session without committing, your work will be lost.
+
+The autocommit-on mode is PostgreSQL's traditional behavior, but autocommit-off is closer to the SQL spec. If you prefer autocommit-off, you may wish to set it in your `~/.psqlrc` file.</dd>
+
+<dt>DBNAME  </dt>
+<dd>The name of the database you are currently connected to. This is set every time you connect to a database (including program start-up), but can be unset.</dd>
+
+<dt>ECHO  </dt>
+<dd>If set to all, all lines entered from the keyboard or from a script are written to the standard output before they are parsed or executed. To select this behavior on program start-up, use the switch `-a`. If set to queries, `psql` merely prints all queries as they are sent to the server. The switch for this is `-e`.</dd>
+
+<dt>ECHO\_HIDDEN  </dt>
+<dd>When this variable is set and a backslash command queries the database, the query is first shown. This way you can study the HAWQ internals and provide similar functionality in your own programs. (To select this behavior on program start-up, use the switch `-E`.) If you set the variable to the value `noexec`, the queries are just shown but are not actually sent to the server and executed.</dd>
+
+<dt>ENCODING  </dt>
+<dd>The current client character set encoding.</dd>
+
+<dt>FETCH\_COUNT  </dt>
+<dd>If this variable is set to an integer value &gt; 0, the results of `SELECT` queries are fetched and displayed in groups of that many rows, rather than the default behavior of collecting the entire result set before display. Therefore only a limited amount of memory is used, regardless of the size of the result set. Settings of 100 to 1000 are commonly used when enabling this feature. Keep in mind that when using this feature, a query may fail after having already displayed some rows.
+
+Although you can use any output format with this feature, the default aligned format tends to look bad because each group of `FETCH_COUNT` rows will be formatted separately, leading to varying column widths across the row groups. The other output formats work better.</dd>
+
+<dt>HISTCONTROL  </dt>
+<dd>If this variable is set to `ignorespace`, lines which begin with a space are not entered into the history list. If set to a value of `ignoredups`, lines matching the previous history line are not entered. A value of `ignoreboth` combines the two options. If unset, or if set to any other value than those above, all lines read in interactive mode are saved on the history list.</dd>
+
+<dt>HISTFILE  </dt>
+<dd>The file name that will be used to store the history list. The default value is `~/.psql_history`. For example, putting
+
+``` pre
+\set HISTFILE ~/.psql_history- :DBNAME
+```
+
+in `~/.psqlrc` will cause `psql` to maintain a separate history for each database.</dd>
+
+<dt>HISTSIZE  </dt>
+<dd>The number of commands to store in the command history. The default value is 500.</dd>
+
+<dt>HOST  </dt>
+<dd>The database server host you are currently connected to. This is set every time you connect to a database (including program start-up), but can be unset.</dd>
+
+<dt>IGNOREEOF  </dt>
+<dd>If unset, sending an `EOF` character (usually `CTRL+D`) to an interactive session of `psql` will terminate the application. If set to a numeric value, that many `EOF` characters are ignored before the application terminates. If the variable is set but has no numeric value, the default is `10`.</dd>
+
+<dt>LASTOID  </dt>
+<dd>The value of the last affected OID, as returned from an `INSERT` or `lo_insert` command. This variable is only guaranteed to be valid until after the result of the next SQL command has been displayed.</dd>
+
+<dt>ON\_ERROR\_ROLLBACK  </dt>
+<dd>When on, if a statement in a transaction block generates an error, the error is ignored and the transaction continues. When interactive, such errors are only ignored in interactive sessions, and not when reading script files. When off (the default), a statement in a transaction block that generates an error aborts the entire transaction. The on\_error\_rollback-on mode works by issuing an implicit `SAVEPOINT` for you, just before each command that is in a transaction block, and rolls back to the savepoint on error.</dd>
+
+<dt>ON\_ERROR\_STOP  </dt>
+<dd>By default, if non-interactive scripts encounter an error, such as a malformed SQL command or internal meta-command, processing continues. This has been the traditional behavior of `psql` but it is sometimes not desirable. If this variable is set, script processing will immediately terminate. If the script was called from another script it will terminate in the same fashion. If the outermost script was not called from an interactive `psql` session but rather using the `-f` option, `psql` will return error code 3, to distinguish this case from fatal error conditions (error code 1).</dd>
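+
+For example, to stop a script at the first error, a variable can be set from the command line with `psql`'s standard `--set` (or `-v`) option; the file name below is illustrative:
+
+``` shell
+$ psql -f /home/gpadmin/load_data.sql --set ON_ERROR_STOP=on
+```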
+
+<dt>PORT  </dt>
+<dd>The database server port to which you are currently connected. This is set every time you connect to a database (including program start-up), but can be unset.</dd>
+
+<dt>PROMPT1  
+PROMPT2  
+PROMPT3  </dt>
+<dd>These specify what the prompts `psql` issues should look like. See "Prompting," below.</dd>
+
+<dt>QUIET  </dt>
+<dd>This variable is equivalent to the command line option `-q`. It is not very useful in interactive mode.</dd>
+
+<dt>SINGLELINE  </dt>
+<dd>This variable is equivalent to the command line option `-S`.</dd>
+
+<dt>SINGLESTEP  </dt>
+<dd>This variable is equivalent to the command line option `-s`.</dd>
+
+<dt>USER  </dt>
+<dd>The database user you are currently connected as. This is set every time you connect to a database (including program start-up), but can be unset.</dd>
+
+<dt>VERBOSITY  </dt>
+<dd>This variable can be set to the values `default`, `verbose`, or `terse` to control the verbosity of error reports.</dd>
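+
+Several of these variables are commonly set once in `~/.psqlrc`. A minimal sketch using only variables documented above (the values are illustrative, not recommendations):
+
+``` pre
+\set AUTOCOMMIT off
+\set ON_ERROR_ROLLBACK interactive
+\set HISTSIZE 2000
+\set VERBOSITY verbose
+```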
+
+**SQL Interpolation**
+
+An additional useful feature of `psql` variables is that you can substitute (interpolate) them into regular SQL statements. The syntax for this is again to prepend the variable name with a colon (`:`).
+
+``` pre
+testdb=> \set foo 'my_table'
+testdb=> SELECT * FROM :foo;
+```
+
+would then query the table `my_table`. The value of the variable is copied literally, so it can even contain unbalanced quotes or backslash commands. You must make sure that it makes sense where you put it. Variable interpolation will not be performed into quoted SQL entities.
+
+A popular application of this facility is to refer to the last inserted OID in subsequent statements to build a foreign key scenario. Another possible use of this mechanism is to copy the contents of a file into a table column. First load the file into a variable and then proceed as above.
+
+``` pre
+testdb=> \set content '''' `cat my_file.txt` ''''
+testdb=> INSERT INTO my_table VALUES (:content);
+```
+
+One problem with this approach is that `my_file.txt` might contain single quotes. These need to be escaped so that they don't cause a syntax error when the second line is processed. This could be done with the program `sed`:
+
+``` pre
+testdb=> \set content '''' `sed -e "s/'/''/g" < my_file.txt` 
+''''
+```
+
+If you are using non-standard-conforming strings, then you'll also need to use double backslashes. This is a bit tricky:
+
+``` pre
+testdb=> \set content '''' `sed -e "s/'/''/g" -e 
+'s/\\/\\\\/g' < my_file.txt` ''''
+```
+
+Note the use of different shell quoting conventions so that neither the single quote marks nor the backslashes are special to the shell. Backslashes are still special to `sed`, however, so we need to double them.
+
+Since colons may legally appear in SQL commands, the following rule applies: the character sequence `":name"` is not changed unless `"name"` is the name of a variable that is currently set. In any case you can escape a colon with a backslash to protect it from substitution. (The colon syntax for variables is standard SQL for embedded query languages, such as ECPG. The colon syntaxes for array slices and type casts are HAWQ extensions, hence the conflict.)
+
+**Prompting**
+
+The prompts `psql` issues can be customized to your preference. The three variables `PROMPT1`, `PROMPT2`, and `PROMPT3` contain strings and special escape sequences that describe the appearance of the prompt. Prompt 1 is the normal prompt that is issued when `psql` requests a new command. Prompt 2 is issued when more input is expected during command input because the command was not terminated with a semicolon or a quote was not closed. Prompt 3 is issued when you run an SQL `COPY` command and you are expected to type in the row values on the terminal.
+
+The value of the selected prompt variable is printed literally, except where a percent sign (`%`) is encountered. Depending on the next character, certain other text is substituted instead. Defined substitutions are:
+
+<dt>%M  </dt>
+<dd>The full host name (with domain name) of the database server, or `[local]` if the connection is over a UNIX domain socket, or `[local:/dir/name]`, if the UNIX domain socket is not at the compiled in default location.</dd>
+
+<dt>%m  </dt>
+<dd>The host name of the database server, truncated at the first dot, or `[local]` if the connection is over a UNIX domain socket.</dd>
+
+<dt>%&gt;  </dt>
+<dd>The port number at which the database server is listening.</dd>
+
+<dt>%n  </dt>
+<dd>The database session user name. (The expansion of this value might change during a database session as the result of the command `SET SESSION AUTHORIZATION`.)</dd>
+
+<dt>%/  </dt>
+<dd>The name of the current database.</dd>
+
+<dt>%~  </dt>
+<dd>Like `%/`, but the output is `~` (tilde) if the database is your default database.</dd>
+
+<dt>%\#  </dt>
+<dd>If the session user is a database superuser, then a `#`, otherwise a `>`. (The expansion of this value might change during a database session as the result of the command `SET SESSION AUTHORIZATION`.)</dd>
+
+<dt>%R  </dt>
+<dd>In prompt 1, normally `=`, but is `^` if in single-line mode, and `!` if the session is disconnected from the database (which can happen if `\connect` fails). In prompt 2, the sequence is replaced by `-`, `*`, a single quote \(`'`\), a double quote \(`"`\), or a dollar sign \(`$`\), depending on whether `psql` expects more input because: the command is not yet terminated, you are inside a `/* ... */` comment, or you are inside a quoted or dollar-escaped string. In prompt 3, no substitution is produced.</dd>
+
+<dt>%x  </dt>
+<dd>Transaction status: an empty string when not in a transaction block, or `*` when in a transaction block, or `!` when in a failed transaction block, or `?` when the transaction state is indeterminate (for example, because there is no connection).</dd>
+
+<dt>%digits  </dt>
+<dd>The character with the indicated octal code is substituted.</dd>
+
+<dt>%:name:  </dt>
+<dd>The value of the `psql` variable name. See "Variables" in [Advanced Features](#topic1__section12) for details.</dd>
+
+<dt>%\`command\`  </dt>
+<dd>The output of command, similar to ordinary back-tick substitution.</dd>
+
+<dt>%\[ ... %\]  </dt>
+<dd>Prompts may contain terminal control characters which, for example, change the color, background, or style of the prompt text, or change the title of the terminal window. In order for line editing to work properly, these non-printing control characters must be designated as invisible by surrounding them with `%[` and `%]`. Multiple pairs of these may occur within the prompt. For example,
+
+``` pre
+testdb=> \set PROMPT1 '%[%033[1;33;40m%]%n@%/%R%[%033[0m%]%#'
+```
+
+results in a boldfaced (`1;`) yellow-on-black (`33;40`) prompt on VT100-compatible, color-capable terminals. To insert a percent sign into your prompt, write `%%`. The default prompts are `'%/%R%# '` for prompts 1 and 2, and `'>> '` for prompt 3.</dd>
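+
+A simpler customization, using only escapes documented above, shows the session user, server host, and database in the first prompt:
+
+``` pre
+testdb=> \set PROMPT1 '%n@%m %/%R%# '
+```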
+
+**Command-Line Editing**
+
+`psql` supports the NetBSD libedit library for convenient line editing and retrieval. The command history is automatically saved when `psql` exits and is reloaded when `psql` starts up. Tab-completion is also supported, although the completion logic makes no claim to be an SQL parser. If for some reason you do not like the tab completion, you can turn it off by putting this in a file named `.inputrc` in your home directory:
+
+``` pre
+$if psql
+set disable-completion on
+$endif
+```
+
+## <a id="topic1__section17"></a>Environment
+
+<dt>PAGER  </dt>
+<dd>If the query results do not fit on the screen, they are piped through this command. Typical values are `more` or `less`. The default is platform-dependent. The use of the pager can be disabled by using the `\pset` command.</dd>
+
+<dt>PGDATABASE  
+PGHOST  
+PGPORT  
+PGUSER  </dt>
+<dd>Default connection parameters.</dd>
+
+<dt>PSQL\_EDITOR  
+EDITOR  
+VISUAL  </dt>
+<dd>Editor used by the `\e` command. The variables are examined in the order listed; the first that is set is used.</dd>
+
+<dt>SHELL  </dt>
+<dd>Command executed by the `\!` command.</dd>
+
+<dt>TMPDIR  </dt>
+<dd>Directory for storing temporary files. The default is `/tmp`.</dd>
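+
+For example, the default connection parameters can be supplied through the environment rather than on the command line (the values shown are placeholders):
+
+``` shell
+$ export PGHOST=mdw
+$ export PGPORT=5432
+$ export PGUSER=gpadmin
+$ export PGDATABASE=mydatabase
+$ psql
+```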
+
+## <a id="topic1__section18"></a>Files
+
+Before starting up, `psql` attempts to read and execute commands from the user's `~/.psqlrc` file.
+
+The command-line history is stored in the file `~/.psql_history`.
+
+## <a id="topic1__section19"></a>Notes
+
+`psql` only works smoothly with servers of the same version. That does not mean other combinations will fail outright, but subtle and not-so-subtle problems might come up. Backslash commands are particularly likely to fail if the server is of a different version.
+
+## <a id="topic1__section20"></a>Notes for Windows users
+
+`psql` is built as a console application. Since the Windows console windows use a different encoding than the rest of the system, you must take special care when using 8-bit characters within `psql`. If `psql` detects a problematic console code page, it will warn you at startup. To change the console code page, two things are necessary:
+
+Set the code page by entering:
+
+``` pre
+cmd.exe /c chcp 1252
+```
+
+`1252` is a character encoding of the Latin alphabet, used by Microsoft Windows for English and some other Western languages. If you are using Cygwin, you can put this command in `/etc/profile`.
+
+Set the console font to Lucida Console, because the raster font does not work with the ANSI code page.
+
+## <a id="topic1__section21"></a>Examples
+
+Start `psql` in interactive mode:
+
+``` shell
+$ psql -p 54321 -U sally mydatabase
+```
+
+In `psql` interactive mode, spread a command over several lines of input. Notice the changing prompt:
+
+``` sql
+testdb=> CREATE TABLE my_table (
+testdb(>  first integer not null default 0,
+testdb(>  second text)
+testdb-> ;
+CREATE TABLE
+```
+
+Look at the table definition:
+
+``` pre
+testdb=> \d my_table
+             Table "my_table"
+ Attribute |  Type   |      Modifier
+-----------+---------+--------------------
+ first     | integer | not null default 0
+ second    | text    |
+```
+
+Run `psql` in non-interactive mode by passing in a file containing SQL commands:
+
+``` shell
+$ psql -f /home/gpadmin/test/myscript.sql
+```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/client_utilities/vacuumdb.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/client_utilities/vacuumdb.html.md.erb b/markdown/reference/cli/client_utilities/vacuumdb.html.md.erb
new file mode 100644
index 0000000..cbc37f3
--- /dev/null
+++ b/markdown/reference/cli/client_utilities/vacuumdb.html.md.erb
@@ -0,0 +1,122 @@
+---
+title: vacuumdb
+---
+
+Garbage-collects and analyzes a database.
+
+`vacuumdb` is typically run on system catalog tables. It has no effect when run on HAWQ user tables.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+vacuumdb [<connection_options>] [<vacuum_options>] [<database_name>]
+    
+vacuumdb [-? | --help]
+
+vacuumdb --version
+```
+where:
+
+```
+<connection_options> =
+    [-h <host> | --host <host>] 
+    [-p <port> | --port <port>] 
+    [-U <username> | --username <username>] 
+    [-w | --no-password]
+    [-W | --password] 
+    
+<vacuum_options> =
+    [(-a | --all) | (-d <dbname> | --dbname <dbname>)]
+    [-e | --echo]
+    [-f | --full] 
+    [-F | --freeze] 
+    [-t <tablename> [( column [,...] )] | --table <tablename> [( column [,...] )] ]
+    [(-v | --verbose) | (-q | --quiet)]
+    [-z | --analyze] 
+
+```
+
+## <a id="topic1__section3"></a>Description
+
+`vacuumdb` is a utility for cleaning a PostgreSQL database. `vacuumdb` will also generate internal statistics used by the PostgreSQL query optimizer.
+
+`vacuumdb` is a wrapper around the SQL command `VACUUM`. There is no effective difference between vacuuming databases via this utility and via other methods for accessing the server.
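+
+For example, running the following command (the database name is illustrative):
+
+``` shell
+$ vacuumdb --analyze mydb
+```
+
+is roughly equivalent to connecting to `mydb` with `psql` and issuing:
+
+``` sql
+VACUUM ANALYZE;
+```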
+
+## <a id="topic1__section4"></a>Options
+
+<dt>**\<database\_name\>**</dt>
+<dd>Identifies the name of the database to vacuum. If neither this argument nor the `-d` option is provided, the environment variable `PGDATABASE` is used. If that is not set, the user name specified for the connection is used.</dd>
+
+**\<vacuum_options\>**
+
+<dt>-a, -\\\-all  </dt>
+<dd>Vacuums all databases.</dd>
+
+<dt>\-d, \-\\\-dbname \<dbname\>  </dt>
+<dd>The name of the database to vacuum. If this option is not specified, \<database\_name\> is not provided, and `--all` is not used, the database name is read from the environment variable `PGDATABASE`. If that is not set, the user name specified for the connection is used.</dd>
+
+<dt>-e, -\\\-echo  </dt>
+<dd>Show the commands being sent to the server.</dd>
+
+<dt>-f, -\\\-full  </dt>
+<dd>Selects a full vacuum, which may reclaim more space, but takes much longer and exclusively locks the table.
+
+**Warning:** A `VACUUM FULL` is not recommended in HAWQ.</dd>
+
+<dt>-F, -\\\-freeze  </dt>
+<dd>Freeze row transaction information.</dd>
+
+<dt>-q, -\\\-quiet  </dt>
+<dd>Do not display a response.</dd>
+
+<dt>-t, -\\\-table \<tablename\>\[(\<column\>)\]  </dt>
+<dd>Clean or analyze this table only. Column names may be specified only in conjunction with the `--analyze` option. If you specify columns, you probably have to escape the parentheses from the shell.</dd>
+
+<dt>-v, -\\\-verbose  </dt>
+<dd>Print detailed information during processing.</dd>
+
+<dt>-z, -\\\-analyze  </dt>
+<dd>Collect statistics for use by the query planner.</dd>
+
+**\<connection_options\>**
+
+<dt>-h, -\\\-host \<host\>  </dt>
+<dd>Specifies the host name of the machine on which the HAWQ master database server is running. If not specified, reads from the environment variable `PGHOST` or defaults to localhost.</dd>
+
+<dt>-p, -\\\-port \<port\>  </dt>
+<dd>Specifies the TCP port on which the HAWQ master database server is listening for connections. If not specified, reads from the environment variable `PGPORT` or defaults to 5432.</dd>
+
+<dt>-U, -\\\-username \<username\>  </dt>
+<dd>The database role name to connect as. If not specified, reads from the environment variable `PGUSER` or defaults to the current system user name.</dd>
+
+<dt>-w, -\\\-no-password  </dt>
+<dd>Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a `.pgpass` file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.</dd>
+
+<dt>-W, -\\\-password  </dt>
+<dd>Force a password prompt.</dd>
+
+## <a id="topic1__section6"></a>Notes
+
+`vacuumdb` might need to connect several times to the master server, asking for a password each time. It is convenient to have a `~/.pgpass` file for such cases.
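+
+A `~/.pgpass` entry uses the standard PostgreSQL format `hostname:port:database:username:password`; for example (all values below are placeholders):
+
+``` pre
+mdw:5432:*:gpadmin:changeme
+```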
+
+## <a id="topic1__section7"></a>Examples
+
+To clean the database `testdb`:
+
+``` shell
+$ vacuumdb testdb
+```
+
+To clean and analyze a database named `bigdb`:
+
+``` shell
+$ vacuumdb --analyze bigdb
+```
+
+To clean a single table `foo` in a database named `mydb`, and analyze a single column `bar` of the table:
+
+``` shell
+$ vacuumdb --analyze --verbose --table 'foo(bar)' mydb
+```
+
+Note the quotes around the table and column names to escape the parentheses from the shell.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/management_tools.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/management_tools.html.md.erb b/markdown/reference/cli/management_tools.html.md.erb
new file mode 100644
index 0000000..bbc4e3e
--- /dev/null
+++ b/markdown/reference/cli/management_tools.html.md.erb
@@ -0,0 +1,63 @@
+---
+title: HAWQ Management Tools Reference
+---
+
+Reference information for command-line utilities available in HAWQ.
+
+-   **[analyzedb](../../reference/cli/admin_utilities/analyzedb.html)**
+
+-   **[createdb](../../reference/cli/client_utilities/createdb.html)**
+
+-   **[createuser](../../reference/cli/client_utilities/createuser.html)**
+
+-   **[dropdb](../../reference/cli/client_utilities/dropdb.html)**
+
+-   **[dropuser](../../reference/cli/client_utilities/dropuser.html)**
+
+-   **[gpfdist](../../reference/cli/admin_utilities/gpfdist.html)**
+
+-   **[gplogfilter](../../reference/cli/admin_utilities/gplogfilter.html)**
+
+-   **[hawq activate](../../reference/cli/admin_utilities/hawqactivate.html)**
+
+-   **[hawq check](../../reference/cli/admin_utilities/hawqcheck.html)**
+
+-   **[hawq checkperf](../../reference/cli/admin_utilities/hawqcheckperf.html)**
+
+-   **[hawq config](../../reference/cli/admin_utilities/hawqconfig.html)**
+
+-   **[hawq extract](../../reference/cli/admin_utilities/hawqextract.html)**
+
+-   **[hawq filespace](../../reference/cli/admin_utilities/hawqfilespace.html)**
+
+-   **[hawq init](../../reference/cli/admin_utilities/hawqinit.html)**
+
+-   **[hawq load](../../reference/cli/admin_utilities/hawqload.html)**
+
+-   **[hawq register](../../reference/cli/admin_utilities/hawqregister.html)**
+
+-   **[hawq restart](../../reference/cli/admin_utilities/hawqrestart.html)**
+
+-   **[hawq scp](../../reference/cli/admin_utilities/hawqscp.html)**
+
+-   **[hawq ssh](../../reference/cli/admin_utilities/hawqssh.html)**
+
+-   **[hawq ssh-exkeys](../../reference/cli/admin_utilities/hawqssh-exkeys.html)**
+
+-   **[hawq start](../../reference/cli/admin_utilities/hawqstart.html)**
+
+-   **[hawq state](../../reference/cli/admin_utilities/hawqstate.html)**
+
+-   **[hawq stop](../../reference/cli/admin_utilities/hawqstop.html)**
+
+-   **[pg\_dump](../../reference/cli/client_utilities/pg_dump.html)**
+
+-   **[pg\_dumpall](../../reference/cli/client_utilities/pg_dumpall.html)**
+
+-   **[pg\_restore](../../reference/cli/client_utilities/pg_restore.html)**
+
+-   **[psql](../../reference/cli/client_utilities/psql.html)**
+
+-   **[vacuumdb](../../reference/cli/client_utilities/vacuumdb.html)**
+
+


[33/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/svg/hawq_resource_management.svg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/svg/hawq_resource_management.svg b/markdown/mdimages/svg/hawq_resource_management.svg
new file mode 100644
index 0000000..064a3ef
--- /dev/null
+++ b/markdown/mdimages/svg/hawq_resource_management.svg
@@ -0,0 +1,621 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<svg
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
+   xmlns:cc="http://creativecommons.org/ns#"
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   version="1.1"
+   viewBox="0 0 662.48035 375.4053"
+   stroke-miterlimit="10"
+   id="svg2"
+   inkscape:version="0.91 r13725"
+   sodipodi:docname="hawq_resource_management.svg"
+   width="662.48035"
+   height="375.4053"
+   style="fill:none;stroke:none;stroke-linecap:square;stroke-miterlimit:10">
+  <metadata
+     id="metadata233">
+    <rdf:RDF>
+      <cc:Work
+         rdf:about="">
+        <dc:format>image/svg+xml</dc:format>
+        <dc:type
+           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+        <dc:title></dc:title>
+      </cc:Work>
+    </rdf:RDF>
+  </metadata>
+  <defs
+     id="defs231" />
+  <sodipodi:namedview
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1"
+     objecttolerance="10"
+     gridtolerance="10"
+     guidetolerance="10"
+     inkscape:pageopacity="0"
+     inkscape:pageshadow="2"
+     inkscape:window-width="1448"
+     inkscape:window-height="846"
+     id="namedview229"
+     showgrid="false"
+     showborder="true"
+     fit-margin-top="0"
+     fit-margin-left="0"
+     fit-margin-right="0"
+     fit-margin-bottom="0"
+     inkscape:zoom="1.0763737"
+     inkscape:cx="435.28584"
+     inkscape:cy="75.697983"
+     inkscape:window-x="0"
+     inkscape:window-y="0"
+     inkscape:window-maximized="0"
+     inkscape:current-layer="g7" />
+  <clipPath
+     id="p.0">
+    <path
+       d="M 0,0 720,0 720,540 0,540 0,0 Z"
+       id="path5"
+       inkscape:connector-curvature="0"
+       style="clip-rule:nonzero" />
+  </clipPath>
+  <g
+     clip-path="url(#p.0)"
+     id="g7"
+     transform="translate(-31.087543,-29.454071)">
+    <path
+       d="m 0,0 720,0 0,540 -720,0 z"
+       id="path9"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="m 39.249344,35.48819 158.740156,0 0,61.259842 -158.740156,0 z"
+       id="path11"
+       inkscape:connector-curvature="0"
+       style="fill:#fce5cd;fill-rule:nonzero" />
+    <path
+       d="m 39.249344,35.48819 158.740156,0 0,61.259842 -158.740156,0 z"
+       id="path13"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#f6b26b;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 106.38019,58.51061 0,3.5 q 0,0.0625 -0.0312,0.109375 -0.0312,0.03125 -0.10937,0.0625 -0.0625,0.01563 -0.1875,0.03125 -0.125,0.03125 -0.29688,0.03125 -0.1875,0 -0.3125,-0.03125 -0.10937,-0.01563 -0.1875,-0.03125 -0.0781,-0.03125 -0.10937,-0.0625 -0.0312,-0.04687 -0.0312,-0.109375 l 0,-3.5 -2.67188,-5.34375 q -0.0937,-0.171875 -0.10937,-0.265625 -0.0156,-0.09375 0.0312,-0.140625 0.0625,-0.0625 0.20312,-0.0625 0.14063,-0.01563 0.39063,-0.01563 0.21875,0 0.34375,0.01563 0.14062,0 0.21875,0.03125 0.0937,0.03125 0.125,0.07813 0.0469,0.04687 0.0781,0.125 l 1.3125,2.71875 q 0.1875,0.390625 0.35938,0.8125 0.1875,0.421875 0.375,0.859375 l 0.0156,0 q 0.17188,-0.421875 0.34375,-0.828125 0.1875,-0.421875 0.375,-0.828125 l 1.3125,-2.734375 q 0.0312,-0.07813 0.0625,-0.125 0.0469,-0.04687 0.10938,-0.07813 0.0781,-0.03125 0.20312,-0.03125 0.125,-0.01563 0.3125,-0.01563 0.26563,0 0.40625,0.01563 0.15625,0.01563 0.20313,0.07813 0.0625,0.04687 0.0469,0.140625 -0.0156,0.09375 -0.0937,0.25 l 
 -2.6875,5.34375 z m 11.22018,3.234375 q 0.0625,0.171875 0.0625,0.265625 0,0.09375 -0.0625,0.15625 -0.0469,0.04687 -0.1875,0.0625 -0.14062,0.01563 -0.35937,0.01563 -0.23438,0 -0.375,-0.01563 -0.125,-0.01563 -0.20313,-0.03125 -0.0625,-0.03125 -0.0937,-0.07813 -0.0312,-0.04687 -0.0625,-0.109375 l -0.8125,-2.296875 -3.9375,0 -0.78125,2.265625 q -0.0156,0.07813 -0.0625,0.125 -0.0312,0.04687 -0.10937,0.07813 -0.0625,0.03125 -0.1875,0.04687 -0.125,0.01563 -0.34375,0.01563 -0.20313,0 -0.34375,-0.03125 -0.14063,-0.01563 -0.20313,-0.0625 -0.0469,-0.04687 -0.0469,-0.140625 0.0156,-0.109375 0.0781,-0.265625 l 3.17188,-8.8125 q 0.0312,-0.07813 0.0781,-0.125 0.0469,-0.04687 0.14063,-0.07813 0.0937,-0.03125 0.23437,-0.03125 0.14063,-0.01563 0.35938,-0.01563 0.23437,0 0.39062,0.01563 0.15625,0 0.25,0.03125 0.0937,0.03125 0.14063,0.09375 0.0625,0.04687 0.0781,0.125 l 3.1875,8.796875 z m -4.07812,-7.765625 -0.0156,0 -1.625,4.71875 3.29688,0 -1.65625,-4.71875 z m 11.77718,8.03125 q 0,0.0625 -0.0312,0.
 109375 -0.0156,0.03125 -0.0937,0.0625 -0.0625,0.03125 -0.20313,0.04687 -0.125,0.01563 -0.34375,0.01563 -0.1875,0 -0.3125,-0.01563 -0.125,-0.01563 -0.20312,-0.04687 -0.0625,-0.03125 -0.10938,-0.09375 -0.0312,-0.0625 -0.0625,-0.140625 l -0.875,-2.234375 q -0.15625,-0.390625 -0.32812,-0.703125 -0.15625,-0.328125 -0.39063,-0.546875 -0.21875,-0.234375 -0.53125,-0.359375 -0.29687,-0.125 -0.73437,-0.125 l -0.84375,0 0,4.03125 q 0,0.0625 -0.0312,0.109375 -0.0312,0.03125 -0.10938,0.0625 -0.0625,0.01563 -0.1875,0.03125 -0.10937,0.03125 -0.29687,0.03125 -0.1875,0 -0.3125,-0.03125 -0.10938,-0.01563 -0.1875,-0.03125 -0.0781,-0.03125 -0.10938,-0.0625 -0.0156,-0.04687 -0.0156,-0.109375 l 0,-8.78125 q 0,-0.28125 0.14062,-0.390625 0.15625,-0.125 0.32813,-0.125 l 2.01562,0 q 0.35938,0 0.59375,0.03125 0.23438,0.01563 0.42188,0.03125 0.54687,0.09375 0.96875,0.3125 0.42187,0.203125 0.70312,0.515625 0.29688,0.3125 0.4375,0.71875 0.14063,0.40625 0.14063,0.890625 0,0.484375 -0.125,0.859375 -0.125,0.375 -0.
 375,0.671875 -0.23438,0.28125 -0.57813,0.5 -0.32812,0.203125 -0.75,0.359375 0.23438,0.09375 0.42188,0.25 0.1875,0.15625 0.34375,0.375 0.17187,0.21875 0.3125,0.515625 0.15625,0.28125 0.29687,0.640625 l 0.85938,2.09375 q 0.0937,0.25 0.125,0.359375 0.0312,0.109375 0.0312,0.171875 z m -1.89063,-6.65625 q 0,-0.5625 -0.25,-0.9375 -0.25,-0.390625 -0.84375,-0.5625 -0.17187,-0.04687 -0.40625,-0.0625 -0.23437,-0.03125 -0.60937,-0.03125 l -1.0625,0 0,3.1875 1.23437,0 q 0.5,0 0.85938,-0.109375 0.35937,-0.125 0.59375,-0.34375 0.25,-0.21875 0.35937,-0.5 0.125,-0.296875 0.125,-0.640625 z m 10.69226,6.328125 q 0,0.140625 -0.0469,0.25 -0.0469,0.09375 -0.125,0.171875 -0.0781,0.0625 -0.17188,0.09375 -0.0937,0.01563 -0.1875,0.01563 l -0.40625,0 q -0.1875,0 -0.32812,-0.03125 -0.14063,-0.04687 -0.28125,-0.140625 -0.125,-0.109375 -0.25,-0.296875 -0.125,-0.1875 -0.28125,-0.46875 l -2.98438,-5.390625 q -0.23437,-0.421875 -0.48437,-0.875 -0.23438,-0.453125 -0.4375,-0.890625 l -0.0156,0 q 0.0156,0.53125 0.015
 6,1.078125 0.0156,0.546875 0.0156,1.09375 l 0,5.71875 q 0,0.04687 -0.0312,0.09375 -0.0312,0.04687 -0.10937,0.07813 -0.0625,0.01563 -0.17188,0.03125 -0.10937,0.03125 -0.28125,0.03125 -0.1875,0 -0.29687,-0.03125 -0.10938,-0.01563 -0.1875,-0.03125 -0.0625,-0.03125 -0.0937,-0.07813 -0.0156,-0.04687 -0.0156,-0.09375 l 0,-8.75 q 0,-0.296875 0.15625,-0.421875 0.15625,-0.125 0.34375,-0.125 l 0.60938,0 q 0.20312,0 0.34375,0.04687 0.15625,0.03125 0.26562,0.125 0.10938,0.07813 0.21875,0.234375 0.10938,0.140625 0.23438,0.375 l 2.29687,4.15625 q 0.21875,0.375 0.40625,0.75 0.20313,0.359375 0.375,0.71875 0.1875,0.34375 0.35938,0.6875 0.1875,0.328125 0.375,0.671875 l 0,0 q -0.0156,-0.578125 -0.0156,-1.203125 0,-0.625 0,-1.203125 l 0,-5.140625 q 0,-0.04687 0.0156,-0.09375 0.0312,-0.04687 0.0937,-0.07813 0.0781,-0.03125 0.1875,-0.04687 0.125,-0.01563 0.3125,-0.01563 0.15625,0 0.26562,0.01563 0.125,0.01563 0.1875,0.04687 0.0625,0.03125 0.0937,0.07813 0.0312,0.04687 0.0312,0.09375 l 0,8.75 z"
+       id="path15"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 70.24903,80.01061 q 0,0.0625 -0.03125,0.109375 -0.01563,0.03125 -0.09375,0.0625 -0.0625,0.03125 -0.203125,0.04687 -0.125,0.01563 -0.34375,0.01563 -0.1875,0 -0.3125,-0.01563 -0.125,-0.01563 -0.203125,-0.04687 -0.0625,-0.03125 -0.109375,-0.09375 -0.03125,-0.0625 -0.0625,-0.140625 l -0.875,-2.234375 q -0.15625,-0.390625 -0.328125,-0.703125 -0.15625,-0.328125 -0.390625,-0.546875 -0.21875,-0.234375 -0.53125,-0.359375 -0.296875,-0.125 -0.734375,-0.125 l -0.84375,0 0,4.03125 q 0,0.0625 -0.03125,0.109375 -0.03125,0.03125 -0.109375,0.0625 -0.0625,0.01563 -0.1875,0.03125 -0.109375,0.03125 -0.296875,0.03125 -0.1875,0 -0.3125,-0.03125 -0.109375,-0.01563 -0.1875,-0.03125 -0.07813,-0.03125 -0.109375,-0.0625 -0.01563,-0.04687 -0.01563,-0.109375 l 0,-8.78125 q 0,-0.28125 0.140625,-0.390625 0.15625,-0.125 0.328125,-0.125 l 2.015625,0 q 0.359375,0 0.59375,0.03125 0.234375,0.01563 0.421875,0.03125 0.546875,0.09375 0.96875,0.3125 0.421875,0.203125 0.703125,0.515625 0.296875,0.3125 0.4375,0.
 71875 0.140625,0.40625 0.140625,0.890625 0,0.484375 -0.125,0.859375 -0.125,0.375 -0.375,0.671875 -0.234375,0.28125 -0.578125,0.5 -0.328125,0.203125 -0.75,0.359375 0.234375,0.09375 0.421875,0.25 0.1875,0.15625 0.34375,0.375 0.171875,0.21875 0.3125,0.515625 0.15625,0.28125 0.296875,0.640625 l 0.859375,2.09375 q 0.09375,0.25 0.125,0.359375 0.03125,0.109375 0.03125,0.171875 z m -1.890625,-6.65625 q 0,-0.5625 -0.25,-0.9375 -0.25,-0.390625 -0.84375,-0.5625 -0.171875,-0.04687 -0.40625,-0.0625 -0.234375,-0.03125 -0.609375,-0.03125 l -1.0625,0 0,3.1875 1.234375,0 q 0.5,0 0.859375,-0.109375 0.359375,-0.125 0.59375,-0.34375 0.25,-0.21875 0.359375,-0.5 0.125,-0.296875 0.125,-0.640625 z m 9.020386,3.078125 q 0,0.28125 -0.15625,0.40625 -0.140625,0.125 -0.3125,0.125 l -4.328125,0 q 0,0.546875 0.109375,0.984375 0.109375,0.4375 0.359375,0.765625 0.265625,0.3125 0.671875,0.484375 0.421875,0.15625 1.015625,0.15625 0.46875,0 0.828125,-0.07813 0.359375,-0.07813 0.625,-0.171875 0.28125,-0.09375 0.453125,
 -0.171875 0.171875,-0.07813 0.25,-0.07813 0.0625,0 0.09375,0.03125 0.04687,0.01563 0.0625,0.07813 0.03125,0.04687 0.03125,0.140625 0.01563,0.09375 0.01563,0.21875 0,0.09375 -0.01563,0.171875 0,0.0625 -0.01563,0.125 0,0.04687 -0.03125,0.09375 -0.03125,0.04687 -0.07813,0.09375 -0.03125,0.03125 -0.234375,0.125 -0.1875,0.09375 -0.5,0.1875 -0.3125,0.07813 -0.734375,0.140625 -0.40625,0.07813 -0.875,0.07813 -0.8125,0 -1.4375,-0.21875 -0.609375,-0.234375 -1.03125,-0.671875 -0.40625,-0.453125 -0.625,-1.125 -0.203125,-0.6875 -0.203125,-1.578125 0,-0.84375 0.21875,-1.515625 0.21875,-0.6875 0.625,-1.15625 0.421875,-0.46875 1,-0.71875 0.59375,-0.265625 1.3125,-0.265625 0.78125,0 1.328125,0.25 0.546875,0.25 0.890625,0.671875 0.359375,0.421875 0.515625,1 0.171875,0.5625 0.171875,1.203125 l 0,0.21875 z m -1.21875,-0.359375 q 0.01563,-0.953125 -0.421875,-1.484375 -0.4375,-0.546875 -1.3125,-0.546875 -0.453125,0 -0.796875,0.171875 -0.328125,0.15625 -0.5625,0.4375 -0.21875,0.28125 -0.34375,0.65625 -0.1
 25,0.359375 -0.140625,0.765625 l 3.578125,0 z m 7.026718,2.140625 q 0,0.515625 -0.1875,0.90625 -0.1875,0.390625 -0.53125,0.671875 -0.34375,0.265625 -0.828125,0.40625 -0.46875,0.140625 -1.046875,0.140625 -0.34375,0 -0.671875,-0.0625 -0.3125,-0.04687 -0.578125,-0.125 -0.25,-0.09375 -0.421875,-0.1875 -0.171875,-0.09375 -0.25,-0.15625 -0.07813,-0.07813 -0.125,-0.203125 -0.03125,-0.140625 -0.03125,-0.359375 0,-0.140625 0.01563,-0.234375 0.01563,-0.109375 0.03125,-0.15625 0.03125,-0.0625 0.0625,-0.07813 0.04687,-0.03125 0.09375,-0.03125 0.07813,0 0.234375,0.09375 0.15625,0.09375 0.390625,0.21875 0.234375,0.109375 0.546875,0.21875 0.3125,0.09375 0.734375,0.09375 0.296875,0 0.546875,-0.0625 0.25,-0.0625 0.4375,-0.1875 0.1875,-0.140625 0.28125,-0.328125 0.09375,-0.203125 0.09375,-0.46875 0,-0.28125 -0.140625,-0.46875 -0.140625,-0.203125 -0.375,-0.34375 -0.234375,-0.140625 -0.53125,-0.25 -0.296875,-0.125 -0.609375,-0.25 -0.296875,-0.125 -0.59375,-0.28125 -0.296875,-0.15625 -0.53125,-0.375 -0.
 234375,-0.234375 -0.390625,-0.546875 -0.140625,-0.3125 -0.140625,-0.765625 0,-0.375 0.15625,-0.734375 0.15625,-0.359375 0.453125,-0.625 0.296875,-0.265625 0.75,-0.421875 0.453125,-0.171875 1.046875,-0.171875 0.265625,0 0.53125,0.04687 0.265625,0.04687 0.46875,0.109375 0.21875,0.0625 0.359375,0.140625 0.15625,0.07813 0.234375,0.140625 0.07813,0.0625 0.09375,0.109375 0.03125,0.03125 0.04687,0.09375 0.01563,0.04687 0.01563,0.140625 0.01563,0.07813 0.01563,0.1875 0,0.125 -0.01563,0.21875 0,0.09375 -0.03125,0.15625 -0.03125,0.04687 -0.0625,0.07813 -0.03125,0.03125 -0.07813,0.03125 -0.0625,0 -0.1875,-0.07813 -0.125,-0.09375 -0.328125,-0.171875 -0.203125,-0.09375 -0.46875,-0.171875 -0.265625,-0.09375 -0.609375,-0.09375 -0.3125,0 -0.546875,0.07813 -0.234375,0.0625 -0.390625,0.203125 -0.140625,0.125 -0.21875,0.296875 -0.07813,0.171875 -0.07813,0.375 0,0.296875 0.140625,0.484375 0.15625,0.1875 0.390625,0.34375 0.234375,0.140625 0.53125,0.265625 0.3125,0.109375 0.609375,0.234375 0.3125,0.125 0
 .609375,0.28125 0.3125,0.15625 0.546875,0.375 0.234375,0.21875 0.375,0.53125 0.15625,0.296875 0.15625,0.71875 z m 7.716629,-1.5625 q 0,0.796875 -0.21875,1.484375 -0.203125,0.671875 -0.625,1.171875 -0.421875,0.484375 -1.0625,0.765625 -0.625,0.265625 -1.46875,0.265625 -0.8125,0 -1.421875,-0.234375 -0.59375,-0.25 -1,-0.703125 -0.390625,-0.46875 -0.59375,-1.125 -0.203125,-0.65625 -0.203125,-1.5 0,-0.796875 0.203125,-1.46875 0.21875,-0.6875 0.640625,-1.171875 0.421875,-0.5 1.046875,-0.765625 0.625,-0.28125 1.46875,-0.28125 0.8125,0 1.421875,0.25 0.609375,0.234375 1,0.703125 0.40625,0.453125 0.609375,1.125 0.203125,0.65625 0.203125,1.484375 z m -1.265625,0.07813 q 0,-0.53125 -0.109375,-1 -0.09375,-0.484375 -0.328125,-0.84375 -0.21875,-0.359375 -0.609375,-0.5625 -0.390625,-0.21875 -0.96875,-0.21875 -0.53125,0 -0.921875,0.1875 -0.375,0.1875 -0.625,0.546875 -0.25,0.34375 -0.375,0.828125 -0.125,0.46875 -0.125,1.03125 0,0.546875 0.09375,1.03125 0.109375,0.46875 0.328125,0.828125 0.234375,0.343
 75 0.625,0.5625 0.390625,0.203125 0.96875,0.203125 0.53125,0 0.921875,-0.1875 0.390625,-0.203125 0.640625,-0.546875 0.25,-0.34375 0.359375,-0.8125 0.125,-0.484375 0.125,-1.046875 z m 8.510132,3.28125 q 0,0.0625 -0.03125,0.109375 -0.01563,0.03125 -0.09375,0.0625 -0.0625,0.03125 -0.171875,0.04687 -0.09375,0.01563 -0.25,0.01563 -0.171875,0 -0.28125,-0.01563 -0.09375,-0.01563 -0.15625,-0.04687 -0.0625,-0.03125 -0.09375,-0.0625 -0.01563,-0.04687 -0.01563,-0.109375 l 0,-0.875 q -0.5625,0.625 -1.125,0.921875 -0.546875,0.28125 -1.109375,0.28125 -0.65625,0 -1.109375,-0.21875 -0.453125,-0.21875 -0.734375,-0.59375 -0.265625,-0.390625 -0.390625,-0.890625 -0.125,-0.5 -0.125,-1.21875 l 0,-4 q 0,-0.04687 0.03125,-0.09375 0.03125,-0.04687 0.09375,-0.07813 0.07813,-0.03125 0.1875,-0.03125 0.125,-0.01563 0.296875,-0.01563 0.1875,0 0.296875,0.01563 0.125,0 0.1875,0.03125 0.0625,0.03125 0.09375,0.07813 0.03125,0.04687 0.03125,0.09375 l 0,3.84375 q 0,0.578125 0.07813,0.9375 0.09375,0.34375 0.265625,0.59
 375 0.171875,0.234375 0.4375,0.375 0.265625,0.125 0.609375,0.125 0.453125,0 0.90625,-0.3125 0.453125,-0.328125 0.953125,-0.953125 l 0,-4.609375 q 0,-0.04687 0.03125,-0.09375 0.03125,-0.04687 0.09375,-0.07813 0.07813,-0.03125 0.1875,-0.03125 0.125,-0.01563 0.296875,-0.01563 0.171875,0 0.28125,0.01563 0.125,0 0.1875,0.03125 0.07813,0.03125 0.109375,0.07813 0.03125,0.04687 0.03125,0.09375 l 0,6.59375 z m 5.903385,-6.15625 q 0,0.15625 -0.0156,0.265625 0,0.109375 -0.0156,0.171875 -0.0156,0.0625 -0.0625,0.109375 -0.0312,0.03125 -0.0781,0.03125 -0.0625,0 -0.15625,-0.03125 -0.0781,-0.04687 -0.1875,-0.07813 -0.10937,-0.03125 -0.23437,-0.0625 -0.125,-0.03125 -0.28125,-0.03125 -0.1875,0 -0.35938,0.07813 -0.17187,0.07813 -0.375,0.25 -0.1875,0.15625 -0.40625,0.4375 -0.21875,0.28125 -0.46875,0.6875 l 0,4.328125 q 0,0.0625 -0.0312,0.109375 -0.0312,0.03125 -0.0937,0.0625 -0.0625,0.03125 -0.1875,0.04687 -0.10937,0.01563 -0.29687,0.01563 -0.17188,0 -0.29688,-0.01563 -0.10937,-0.01563 -0.1875,-0.04687
  -0.0625,-0.03125 -0.0937,-0.0625 -0.0156,-0.04687 -0.0156,-0.109375 l 0,-6.59375 q 0,-0.04687 0.0156,-0.09375 0.0312,-0.04687 0.0937,-0.07813 0.0625,-0.03125 0.15625,-0.03125 0.10938,-0.01563 0.28125,-0.01563 0.15625,0 0.26563,0.01563 0.10937,0 0.15625,0.03125 0.0625,0.03125 0.0937,0.07813 0.0312,0.04687 0.0312,0.09375 l 0,0.96875 q 0.26562,-0.40625 0.5,-0.640625 0.23437,-0.25 0.45312,-0.390625 0.21875,-0.15625 0.42188,-0.203125 0.20312,-0.0625 0.42187,-0.0625 0.0937,0 0.20313,0.01563 0.125,0.01563 0.25,0.04687 0.14062,0.01563 0.25,0.0625 0.10937,0.03125 0.15625,0.07813 0.0469,0.03125 0.0469,0.0625 0.0156,0.03125 0.0312,0.09375 0.0156,0.04687 0.0156,0.140625 0,0.09375 0,0.265625 z m 6.00027,5.15625 q 0,0.125 -0.0156,0.21875 0,0.09375 -0.0156,0.15625 -0.0156,0.0625 -0.0469,0.109375 -0.0312,0.04687 -0.125,0.140625 -0.0781,0.09375 -0.29688,0.234375 -0.21875,0.125 -0.5,0.234375 -0.28125,0.09375 -0.60937,0.15625 -0.3125,0.07813 -0.65625,0.07813 -0.70313,0 -1.26563,-0.234375 -0.54687,-0.
 234375 -0.92187,-0.6875 -0.35938,-0.453125 -0.5625,-1.109375 -0.1875,-0.65625 -0.1875,-1.515625 0,-0.96875 0.23437,-1.65625 0.25,-0.703125 0.65625,-1.15625 0.42188,-0.453125 0.96875,-0.65625 0.5625,-0.21875 1.21875,-0.21875 0.3125,0 0.60938,0.0625 0.29687,0.04687 0.54687,0.140625 0.25,0.09375 0.4375,0.21875 0.20313,0.125 0.28125,0.21875 0.0937,0.09375 0.125,0.140625 0.0312,0.04687 0.0469,0.125 0.0312,0.0625 0.0312,0.15625 0.0156,0.07813 0.0156,0.21875 0,0.28125 -0.0625,0.40625 -0.0625,0.109375 -0.15625,0.109375 -0.10937,0 -0.26562,-0.125 -0.14063,-0.125 -0.35938,-0.265625 -0.21875,-0.15625 -0.53125,-0.265625 -0.3125,-0.125 -0.73437,-0.125 -0.875,0 -1.34375,0.671875 -0.45313,0.671875 -0.45313,1.9375 0,0.640625 0.10938,1.125 0.125,0.46875 0.35937,0.796875 0.23438,0.328125 0.57813,0.484375 0.34375,0.15625 0.78125,0.15625 0.42187,0 0.73437,-0.125 0.3125,-0.140625 0.54688,-0.296875 0.23437,-0.15625 0.39062,-0.28125 0.15625,-0.140625 0.23438,-0.140625 0.0625,0 0.0937,0.03125 0.0312,0.0312
 5 0.0625,0.109375 0.0312,0.0625 0.0312,0.171875 0.0156,0.109375 0.0156,0.25 z m 7.08804,-2.578125 q 0,0.28125 -0.15625,0.40625 -0.14062,0.125 -0.3125,0.125 l -4.32812,0 q 0,0.546875 0.10937,0.984375 0.10938,0.4375 0.35938,0.765625 0.26562,0.3125 0.67187,0.484375 0.42188,0.15625 1.01563,0.15625 0.46875,0 0.82812,-0.07813 0.35938,-0.07813 0.625,-0.171875 0.28125,-0.09375 0.45313,-0.171875 0.17187,-0.07813 0.25,-0.07813 0.0625,0 0.0937,0.03125 0.0469,0.01563 0.0625,0.07813 0.0312,0.04687 0.0312,0.140625 0.0156,0.09375 0.0156,0.21875 0,0.09375 -0.0156,0.171875 0,0.0625 -0.0156,0.125 0,0.04687 -0.0312,0.09375 -0.0312,0.04687 -0.0781,0.09375 -0.0312,0.03125 -0.23438,0.125 -0.1875,0.09375 -0.5,0.1875 -0.3125,0.07813 -0.73437,0.140625 -0.40625,0.07813 -0.875,0.07813 -0.8125,0 -1.4375,-0.21875 -0.60938,-0.234375 -1.03125,-0.671875 -0.40625,-0.453125 -0.625,-1.125 -0.20313,-0.6875 -0.20313,-1.578125 0,-0.84375 0.21875,-1.515625 0.21875,-0.6875 0.625,-1.15625 0.42188,-0.46875 1,-0.71875 0.5937
 5,-0.265625 1.3125,-0.265625 0.78125,0 1.32813,0.25 0.54687,0.25 0.89062,0.671875 0.35938,0.421875 0.51563,1 0.17187,0.5625 0.17187,1.203125 l 0,0.21875 z m -1.21875,-0.359375 q 0.0156,-0.953125 -0.42187,-1.484375 -0.4375,-0.546875 -1.3125,-0.546875 -0.45313,0 -0.79688,0.171875 -0.32812,0.15625 -0.5625,0.4375 -0.21875,0.28125 -0.34375,0.65625 -0.125,0.359375 -0.14062,0.765625 l 3.57812,0 z m 16.63699,3.9375 q 0,0.0625 -0.0312,0.109375 -0.0312,0.03125 -0.10937,0.0625 -0.0625,0.01563 -0.17188,0.03125 -0.10937,0.03125 -0.29687,0.03125 -0.17188,0 -0.29688,-0.03125 -0.10937,-0.01563 -0.1875,-0.03125 -0.0625,-0.03125 -0.0937,-0.0625 -0.0312,-0.04687 -0.0312,-0.109375 l 0,-8.25 -0.0156,0 -3.375,8.28125 q -0.0312,0.04687 -0.0781,0.09375 -0.0312,0.03125 -0.10938,0.0625 -0.0781,0.01563 -0.1875,0.03125 -0.0937,0.01563 -0.25,0.01563 -0.14062,0 -0.25,-0.01563 -0.10937,-0.01563 -0.1875,-0.03125 -0.0781,-0.03125 -0.125,-0.0625 -0.0312,-0.04687 -0.0469,-0.09375 l -3.23438,-8.28125 0,0 0,8.25 q 0,0.
 0625 -0.0312,0.109375 -0.0312,0.03125 -0.10937,0.0625 -0.0625,0.01563 -0.1875,0.03125 -0.10938,0.03125 -0.29688,0.03125 -0.17187,0 -0.29687,-0.03125 -0.10938,-0.01563 -0.1875,-0.03125 -0.0625,-0.03125 -0.0937,-0.0625 -0.0156,-0.04687 -0.0156,-0.109375 l 0,-8.71875 q 0,-0.3125 0.15625,-0.4375 0.15625,-0.140625 0.35938,-0.140625 l 0.76562,0 q 0.23438,0 0.40625,0.04687 0.17188,0.04687 0.29688,0.140625 0.14062,0.09375 0.21875,0.25 0.0937,0.140625 0.17187,0.34375 l 2.73438,6.859375 0.0469,0 2.84375,-6.84375 q 0.0937,-0.21875 0.1875,-0.375 0.0937,-0.15625 0.20312,-0.234375 0.10938,-0.09375 0.25,-0.140625 0.14063,-0.04687 0.32813,-0.04687 l 0.79687,0 q 0.10938,0 0.20313,0.04687 0.10937,0.03125 0.17187,0.109375 0.0781,0.0625 0.10938,0.171875 0.0469,0.09375 0.0469,0.25 l 0,8.71875 z m 7.06206,0.01563 q 0,0.07813 -0.0625,0.125 -0.0625,0.04687 -0.17188,0.0625 -0.0937,0.03125 -0.29687,0.03125 -0.1875,0 -0.29688,-0.03125 -0.10937,-0.01563 -0.17187,-0.0625 -0.0469,-0.04687 -0.0469,-0.125 l 0,-0.6
 5625 q -0.4375,0.453125 -0.96875,0.71875 -0.53125,0.25 -1.125,0.25 -0.51562,0 -0.9375,-0.140625 -0.42187,-0.125 -0.71875,-0.375 -0.29687,-0.265625 -0.46875,-0.640625 -0.15625,-0.375 -0.15625,-0.859375 0,-0.546875 0.21875,-0.953125 0.23438,-0.421875 0.65625,-0.6875 0.4375,-0.265625 1.04688,-0.40625 0.60937,-0.140625 1.39062,-0.140625 l 0.90625,0 0,-0.515625 q 0,-0.375 -0.0937,-0.65625 -0.0781,-0.296875 -0.26562,-0.484375 -0.17188,-0.203125 -0.45313,-0.296875 -0.28125,-0.109375 -0.70312,-0.109375 -0.4375,0 -0.79688,0.109375 -0.35937,0.109375 -0.625,0.234375 -0.26562,0.125 -0.45312,0.234375 -0.17188,0.109375 -0.26563,0.109375 -0.0625,0 -0.10937,-0.03125 -0.0312,-0.03125 -0.0625,-0.09375 -0.0312,-0.0625 -0.0469,-0.140625 -0.0156,-0.09375 -0.0156,-0.203125 0,-0.1875 0.0156,-0.296875 0.0312,-0.109375 0.125,-0.203125 0.10938,-0.09375 0.34375,-0.21875 0.25,-0.125 0.5625,-0.234375 0.3125,-0.109375 0.6875,-0.171875 0.375,-0.07813 0.75,-0.07813 0.71875,0 1.20313,0.171875 0.5,0.15625 0.8125,0.4
 6875 0.3125,0.3125 0.45312,0.78125 0.14063,0.453125 0.14063,1.0625 l 0,4.453125 z m -1.20313,-3.015625 -1.03125,0 q -0.5,0 -0.875,0.09375 -0.35937,0.07813 -0.60937,0.25 -0.23438,0.15625 -0.35938,0.390625 -0.10937,0.21875 -0.10937,0.53125 0,0.515625 0.32812,0.8125 0.32813,0.296875 0.92188,0.296875 0.46875,0 0.875,-0.234375 0.40625,-0.25 0.85937,-0.734375 l 0,-1.40625 z m 8.92665,3 q 0,0.0625 -0.0312,0.109375 -0.0312,0.03125 -0.0937,0.0625 -0.0625,0.03125 -0.1875,0.04687 -0.10937,0.01563 -0.28125,0.01563 -0.1875,0 -0.3125,-0.01563 -0.10937,-0.01563 -0.1875,-0.04687 -0.0625,-0.03125 -0.0937,-0.0625 -0.0156,-0.04687 -0.0156,-0.109375 l 0,-3.859375 q 0,-0.5625 -0.0937,-0.90625 -0.0937,-0.34375 -0.26563,-0.59375 -0.15625,-0.25 -0.42187,-0.375 -0.26563,-0.140625 -0.625,-0.140625 -0.45313,0 -0.90625,0.328125 -0.45313,0.328125 -0.95313,0.9375 l 0,4.609375 q 0,0.0625 -0.0312,0.109375 -0.0312,0.03125 -0.0937,0.0625 -0.0625,0.03125 -0.1875,0.04687 -0.10937,0.01563 -0.29687,0.01563 -0.17188,0 -0
 .29688,-0.01563 -0.10937,-0.01563 -0.1875,-0.04687 -0.0625,-0.03125 -0.0937,-0.0625 -0.0156,-0.04687 -0.0156,-0.109375 l 0,-6.59375 q 0,-0.04687 0.0156,-0.09375 0.0312,-0.04687 0.0937,-0.07813 0.0625,-0.03125 0.15625,-0.03125 0.10938,-0.01563 0.28125,-0.01563 0.15625,0 0.26563,0.01563 0.10937,0 0.15625,0.03125 0.0625,0.03125 0.0937,0.07813 0.0312,0.04687 0.0312,0.09375 l 0,0.875 q 0.54687,-0.625 1.09375,-0.90625 0.5625,-0.296875 1.125,-0.296875 0.65625,0 1.10937,0.234375 0.45313,0.21875 0.73438,0.59375 0.28125,0.375 0.39062,0.875 0.125,0.5 0.125,1.203125 l 0,4.015625 z m 6.99713,0.01563 q 0,0.07813 -0.0625,0.125 -0.0625,0.04687 -0.17187,0.0625 -0.0937,0.03125 -0.29688,0.03125 -0.1875,0 -0.29687,-0.03125 -0.10938,-0.01563 -0.17188,-0.0625 -0.0469,-0.04687 -0.0469,-0.125 l 0,-0.65625 q -0.4375,0.453125 -0.96875,0.71875 -0.53125,0.25 -1.125,0.25 -0.51563,0 -0.9375,-0.140625 -0.42188,-0.125 -0.71875,-0.375 -0.29688,-0.265625 -0.46875,-0.640625 -0.15625,-0.375 -0.15625,-0.859375 0,-0.546
 875 0.21875,-0.953125 0.23437,-0.421875 0.65625,-0.6875 0.4375,-0.265625 1.04687,-0.40625 0.60938,-0.140625 1.39063,-0.140625 l 0.90625,0 0,-0.515625 q 0,-0.375 -0.0937,-0.65625 -0.0781,-0.296875 -0.26563,-0.484375 -0.17187,-0.203125 -0.45312,-0.296875 -0.28125,-0.109375 -0.70313,-0.109375 -0.4375,0 -0.79687,0.109375 -0.35938,0.109375 -0.625,0.234375 -0.26563,0.125 -0.45313,0.234375 -0.17187,0.109375 -0.26562,0.109375 -0.0625,0 -0.10938,-0.03125 -0.0312,-0.03125 -0.0625,-0.09375 -0.0312,-0.0625 -0.0469,-0.140625 -0.0156,-0.09375 -0.0156,-0.203125 0,-0.1875 0.0156,-0.296875 0.0312,-0.109375 0.125,-0.203125 0.10937,-0.09375 0.34375,-0.21875 0.25,-0.125 0.5625,-0.234375 0.3125,-0.109375 0.6875,-0.171875 0.375,-0.07813 0.75,-0.07813 0.71875,0 1.20312,0.171875 0.5,0.15625 0.8125,0.46875 0.3125,0.3125 0.45313,0.78125 0.14062,0.453125 0.14062,1.0625 l 0,4.453125 z m -1.20312,-3.015625 -1.03125,0 q -0.5,0 -0.875,0.09375 -0.35938,0.07813 -0.60938,0.25 -0.23437,0.15625 -0.35937,0.390625 -0.10
 938,0.21875 -0.10938,0.53125 0,0.515625 0.32813,0.8125 0.32812,0.296875 0.92187,0.296875 0.46875,0 0.875,-0.234375 0.40625,-0.25 0.85938,-0.734375 l 0,-1.40625 z m 8.75478,-3.28125 q 0,0.25 -0.0781,0.375 -0.0625,0.109375 -0.17187,0.109375 l -0.9375,0 q 0.25,0.25 0.34375,0.578125 0.10937,0.3125 0.10937,0.65625 0,0.578125 -0.1875,1.015625 -0.17187,0.4375 -0.51562,0.75 -0.34375,0.296875 -0.8125,0.46875 -0.46875,0.15625 -1.03125,0.15625 -0.40625,0 -0.76563,-0.109375 -0.35937,-0.109375 -0.5625,-0.265625 -0.14062,0.125 -0.21875,0.296875 -0.0781,0.171875 -0.0781,0.390625 0,0.25 0.23437,0.421875 0.23438,0.171875 0.625,0.1875 l 1.73438,0.0625 q 0.48437,0.01563 0.89062,0.140625 0.42188,0.125 0.71875,0.34375 0.29688,0.21875 0.46875,0.546875 0.17188,0.328125 0.17188,0.765625 0,0.453125 -0.20313,0.859375 -0.1875,0.40625 -0.57812,0.71875 -0.39063,0.3125 -1,0.484375 -0.60938,0.1875 -1.4375,0.1875 -0.79688,0 -1.35938,-0.140625 -0.5625,-0.125 -0.92187,-0.359375 -0.35938,-0.234375 -0.51563,-0.5625 -0
 .15625,-0.328125 -0.15625,-0.703125 0,-0.25 0.0469,-0.484375 0.0625,-0.21875 0.1875,-0.421875 0.125,-0.203125 0.29687,-0.390625 0.1875,-0.1875 0.42188,-0.375 -0.35938,-0.171875 -0.53125,-0.453125 -0.17188,-0.28125 -0.17188,-0.609375 0,-0.4375 0.17188,-0.78125 0.1875,-0.359375 0.46875,-0.640625 -0.23438,-0.265625 -0.375,-0.609375 -0.125,-0.34375 -0.125,-0.828125 0,-0.5625 0.1875,-1 0.20312,-0.453125 0.53125,-0.765625 0.34375,-0.3125 0.8125,-0.46875 0.46875,-0.171875 1.03125,-0.171875 0.29687,0 0.54687,0.03125 0.26563,0.03125 0.5,0.09375 l 1.98438,0 q 0.125,0 0.1875,0.125 0.0625,0.125 0.0625,0.375 z m -1.89063,1.734375 q 0,-0.671875 -0.375,-1.046875 -0.35937,-0.390625 -1.04687,-0.390625 -0.34375,0 -0.60938,0.125 -0.25,0.109375 -0.42187,0.328125 -0.17188,0.203125 -0.26563,0.46875 -0.0781,0.265625 -0.0781,0.5625 0,0.640625 0.35937,1.015625 0.375,0.375 1.04688,0.375 0.35937,0 0.60937,-0.109375 0.26563,-0.109375 0.4375,-0.3125 0.1875,-0.203125 0.26563,-0.46875 0.0781,-0.265625 0.0781,-0.5
 46875 z m 0.60938,5.21875 q 0,-0.421875 -0.34375,-0.65625 -0.34375,-0.234375 -0.9375,-0.25 l -1.71875,-0.04687 q -0.23438,0.171875 -0.39063,0.34375 -0.14062,0.15625 -0.23437,0.3125 -0.0781,0.15625 -0.10938,0.296875 -0.0312,0.140625 -0.0312,0.296875 0,0.484375 0.48438,0.71875 0.48437,0.25 1.34375,0.25 0.54687,0 0.90625,-0.109375 0.375,-0.109375 0.60937,-0.28125 0.23438,-0.171875 0.32813,-0.40625 0.0937,-0.21875 0.0937,-0.46875 z m 8.30499,-4.25 q 0,0.28125 -0.15625,0.40625 -0.14063,0.125 -0.3125,0.125 l -4.32813,0 q 0,0.546875 0.10938,0.984375 0.10937,0.4375 0.35937,0.765625 0.26563,0.3125 0.67188,0.484375 0.42187,0.15625 1.01562,0.15625 0.46875,0 0.82813,-0.07813 0.35937,-0.07813 0.625,-0.171875 0.28125,-0.09375 0.45312,-0.171875 0.17188,-0.07813 0.25,-0.07813 0.0625,0 0.0937,0.03125 0.0469,0.01563 0.0625,0.07813 0.0312,0.04687 0.0312,0.140625 0.0156,0.09375 0.0156,0.21875 0,0.09375 -0.0156,0.171875 0,0.0625 -0.0156,0.125 0,0.04687 -0.0312,0.09375 -0.0312,0.04687 -0.0781,0.09375 -0.
 0312,0.03125 -0.23437,0.125 -0.1875,0.09375 -0.5,0.1875 -0.3125,0.07813 -0.73438,0.140625 -0.40625,0.07813 -0.875,0.07813 -0.8125,0 -1.4375,-0.21875 -0.60937,-0.234375 -1.03125,-0.671875 -0.40625,-0.453125 -0.625,-1.125 -0.20312,-0.6875 -0.20312,-1.578125 0,-0.84375 0.21875,-1.515625 0.21875,-0.6875 0.625,-1.15625 0.42187,-0.46875 1,-0.71875 0.59375,-0.265625 1.3125,-0.265625 0.78125,0 1.32812,0.25 0.54688,0.25 0.89063,0.671875 0.35937,0.421875 0.51562,1 0.17188,0.5625 0.17188,1.203125 l 0,0.21875 z m -1.21875,-0.359375 q 0.0156,-0.953125 -0.42188,-1.484375 -0.4375,-0.546875 -1.3125,-0.546875 -0.45312,0 -0.79687,0.171875 -0.32813,0.15625 -0.5625,0.4375 -0.21875,0.28125 -0.34375,0.65625 -0.125,0.359375 -0.14063,0.765625 l 3.57813,0 z m 6.72986,-2.21875 q 0,0.15625 -0.0156,0.265625 0,0.109375 -0.0156,0.171875 -0.0156,0.0625 -0.0625,0.109375 -0.0312,0.03125 -0.0781,0.03125 -0.0625,0 -0.15625,-0.03125 -0.0781,-0.04687 -0.1875,-0.07813 -0.10937,-0.03125 -0.23437,-0.0625 -0.125,-0.03125 -
 0.28125,-0.03125 -0.1875,0 -0.35938,0.07813 -0.17187,0.07813 -0.375,0.25 -0.1875,0.15625 -0.40625,0.4375 -0.21875,0.28125 -0.46875,0.6875 l 0,4.328125 q 0,0.0625 -0.0312,0.109375 -0.0312,0.03125 -0.0937,0.0625 -0.0625,0.03125 -0.1875,0.04687 -0.10937,0.01563 -0.29687,0.01563 -0.17188,0 -0.29688,-0.01563 -0.10937,-0.01563 -0.1875,-0.04687 -0.0625,-0.03125 -0.0937,-0.0625 -0.0156,-0.04687 -0.0156,-0.109375 l 0,-6.59375 q 0,-0.04687 0.0156,-0.09375 0.0312,-0.04687 0.0937,-0.07813 0.0625,-0.03125 0.15625,-0.03125 0.10938,-0.01563 0.28125,-0.01563 0.15625,0 0.26563,0.01563 0.10937,0 0.15625,0.03125 0.0625,0.03125 0.0937,0.07813 0.0312,0.04687 0.0312,0.09375 l 0,0.96875 q 0.26562,-0.40625 0.5,-0.640625 0.23437,-0.25 0.45312,-0.390625 0.21875,-0.15625 0.42188,-0.203125 0.20312,-0.0625 0.42187,-0.0625 0.0937,0 0.20313,0.01563 0.125,0.01563 0.25,0.04687 0.14062,0.01563 0.25,0.0625 0.10937,0.03125 0.15625,0.07813 0.0469,0.03125 0.0469,0.0625 0.0156,0.03125 0.0312,0.09375 0.0156,0.04687 0.0156
 ,0.140625 0,0.09375 0,0.265625 z"
+       id="path17"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 39.249344,107.73753 158.740156,0 0,38.01575 -158.740156,0 z"
+       id="path19"
+       inkscape:connector-curvature="0"
+       style="fill:#95f3ef;fill-rule:nonzero" />
+    <path
+       d="m 39.249344,107.73753 158.740156,0 0,38.01575 -158.740156,0 z"
+       id="path21"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#34ebe4;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 81.37492,130.48166 q 0,0.125 -0.01563,0.21875 0,0.0781 -0.03125,0.14062 -0.01563,0.0625 -0.04687,0.125 -0.01563,0.0469 -0.09375,0.125 -0.07813,0.0625 -0.3125,0.21875 -0.234375,0.15625 -0.578125,0.29688 -0.34375,0.14062 -0.796875,0.23437 -0.453125,0.10938 -0.984375,0.10938 -0.921875,0 -1.671875,-0.3125 -0.734375,-0.3125 -1.265625,-0.90625 -0.515625,-0.60938 -0.8125,-1.48438 -0.28125,-0.89062 -0.28125,-2.03124 0,-1.1875 0.296875,-2.10938 0.3125,-0.92187 0.859375,-1.5625 0.5625,-0.64062 1.328125,-0.96875 0.765625,-0.34375 1.6875,-0.34375 0.40625,0 0.796875,0.0781 0.390625,0.0781 0.71875,0.20312 0.328125,0.10938 0.578125,0.26563 0.265625,0.14062 0.359375,0.25 0.109375,0.0937 0.140625,0.15625 0.03125,0.0469 0.04687,0.125 0.01563,0.0625 0.01563,0.15625 0.01563,0.0937 0.01563,0.21875 0,0.15625 -0.01563,0.26562 -0.01563,0.0937 -0.04687,0.17188 -0.01563,0.0625 -0.0625,0.0937 -0.03125,0.0312 -0.09375,0.0312 -0.109375,0 -0.296875,-0.14063 -0.171875,-0.14062 -0.46875,-0.3125 -0.2812
 5,-0.17187 -0.703125,-0.3125 -0.40625,-0.15625 -0.984375,-0.15625 -0.640625,0 -1.15625,0.26563 -0.515625,0.25 -0.890625,0.73437 -0.359375,0.48438 -0.5625,1.20313 -0.1875,0.70312 -0.1875,1.60937 0,0.90625 0.1875,1.59375 0.203125,0.6875 0.5625,1.15625 0.359375,0.46875 0.875,0.70312 0.53125,0.23438 1.203125,0.23438 0.5625,0 0.984375,-0.14063 0.421875,-0.14062 0.71875,-0.3125 0.296875,-0.17187 0.484375,-0.29687 0.1875,-0.14063 0.296875,-0.14063 0.0625,0 0.09375,0.0156 0.03125,0.0156 0.04687,0.0781 0.03125,0.0625 0.04687,0.17188 0.01563,0.10937 0.01563,0.28125 z m 6.314758,1.17187 q 0,0.0781 -0.0625,0.125 -0.0625,0.0469 -0.171875,0.0625 -0.09375,0.0312 -0.296875,0.0312 -0.1875,0 -0.296875,-0.0312 -0.109375,-0.0156 -0.171875,-0.0625 -0.04687,-0.0469 -0.04687,-0.125 l 0,-0.65625 q -0.4375,0.45313 -0.96875,0.71875 -0.53125,0.25 -1.125,0.25 -0.515625,0 -0.9375,-0.14062 -0.421875,-0.125 -0.71875,-0.375 -0.296875,-0.26563 -0.46875,-0.64063 -0.15625,-0.375 -0.15625,-0.85937 0,-0.54688 0.21875,-
 0.95313 0.234375,-0.42187 0.65625,-0.6875 0.4375,-0.26562 1.046875,-0.40624 0.609375,-0.14063 1.390625,-0.14063 l 0.90625,0 0,-0.51562 q 0,-0.375 -0.09375,-0.65625 -0.07813,-0.29688 -0.265625,-0.48438 -0.171875,-0.20312 -0.453125,-0.29687 -0.28125,-0.10938 -0.703125,-0.10938 -0.4375,0 -0.796875,0.10938 -0.359375,0.10937 -0.625,0.23437 -0.265625,0.125 -0.453125,0.23438 -0.171875,0.10937 -0.265625,0.10937 -0.0625,0 -0.109375,-0.0312 -0.03125,-0.0312 -0.0625,-0.0937 -0.03125,-0.0625 -0.04687,-0.14062 -0.01563,-0.0937 -0.01563,-0.20313 0,-0.1875 0.01563,-0.29687 0.03125,-0.10938 0.125,-0.20313 0.109375,-0.0937 0.34375,-0.21875 0.25,-0.125 0.5625,-0.23437 0.3125,-0.10938 0.6875,-0.17188 0.375,-0.0781 0.75,-0.0781 0.71875,0 1.203125,0.17187 0.5,0.15625 0.8125,0.46875 0.3125,0.3125 0.453125,0.78125 0.140625,0.45313 0.140625,1.0625 l 0,4.45312 z m -1.203125,-3.01562 -1.03125,0 q -0.5,0 -0.875,0.0937 -0.359375,0.0781 -0.609375,0.25 -0.234375,0.15625 -0.359375,0.39062 -0.109375,0.21875 -0.109
 375,0.53125 0,0.51563 0.328125,0.8125 0.328125,0.29688 0.921875,0.29688 0.46875,0 0.875,-0.23438 0.40625,-0.25 0.859375,-0.73437 l 0,-1.40625 z m 6.676651,2.51562 q 0,0.21875 -0.03125,0.34375 -0.03125,0.125 -0.09375,0.1875 -0.04687,0.0469 -0.171875,0.10938 -0.109375,0.0469 -0.265625,0.0781 -0.140625,0.0312 -0.3125,0.0469 -0.171875,0.0312 -0.34375,0.0312 -0.515625,0 -0.875,-0.125 -0.359375,-0.14063 -0.59375,-0.42188 -0.234375,-0.28125 -0.34375,-0.6875 -0.109375,-0.42187 -0.109375,-1 l 0,-3.85937 -0.921875,0 q -0.109375,0 -0.1875,-0.10937 -0.0625,-0.125 -0.0625,-0.375 0,-0.14063 0.01563,-0.23438 0.03125,-0.10937 0.0625,-0.17187 0.03125,-0.0625 0.07813,-0.0781 0.04687,-0.0312 0.09375,-0.0312 l 0.921875,0 0,-1.5625 q 0,-0.0469 0.01563,-0.0937 0.03125,-0.0469 0.09375,-0.0781 0.07813,-0.0312 0.1875,-0.0469 0.125,-0.0156 0.296875,-0.0156 0.1875,0 0.296875,0.0156 0.125,0.0156 0.1875,0.0469 0.07813,0.0312 0.09375,0.0781 0.03125,0.0469 0.03125,0.0937 l 0,1.5625 1.703125,0 q 0.04687,0 0.09375,
 0.0312 0.04687,0.0156 0.07813,0.0781 0.03125,0.0625 0.04687,0.17187 0.01563,0.0937 0.01563,0.23438 0,0.25 -0.0625,0.375 -0.0625,0.10937 -0.171875,0.10937 l -1.703125,0 0,3.6875 q 0,0.67187 0.203125,1.03125 0.203125,0.34375 0.71875,0.34375 0.171875,0 0.296875,-0.0312 0.140625,-0.0312 0.234375,-0.0625 0.109375,-0.0469 0.1875,-0.0781 0.07813,-0.0312 0.125,-0.0312 0.04687,0 0.07813,0.0156 0.03125,0.0156 0.04687,0.0781 0.01563,0.0469 0.03125,0.14063 0.01563,0.0781 0.01563,0.20312 z m 6.456146,0.5 q 0,0.0781 -0.0625,0.125 -0.0625,0.0469 -0.171875,0.0625 -0.09375,0.0312 -0.296875,0.0312 -0.1875,0 -0.296875,-0.0312 -0.109375,-0.0156 -0.171875,-0.0625 -0.04687,-0.0469 -0.04687,-0.125 l 0,-0.65625 q -0.4375,0.45313 -0.96875,0.71875 -0.53125,0.25 -1.125,0.25 -0.515625,0 -0.9375,-0.14062 -0.421875,-0.125 -0.71875,-0.375 -0.296875,-0.26563 -0.46875,-0.64063 -0.15625,-0.375 -0.15625,-0.85937 0,-0.54688 0.21875,-0.95313 0.234375,-0.42187 0.65625,-0.6875 0.4375,-0.26562 1.046875,-0.40624 0.609375,-
 0.14063 1.390625,-0.14063 l 0.90625,0 0,-0.51562 q 0,-0.375 -0.09375,-0.65625 -0.07813,-0.29688 -0.265625,-0.48438 -0.171875,-0.20312 -0.453125,-0.29687 -0.28125,-0.10938 -0.703125,-0.10938 -0.4375,0 -0.796875,0.10938 -0.359375,0.10937 -0.625,0.23437 -0.265625,0.125 -0.453125,0.23438 -0.171875,0.10937 -0.265625,0.10937 -0.0625,0 -0.109375,-0.0312 -0.03125,-0.0312 -0.0625,-0.0937 -0.03125,-0.0625 -0.04687,-0.14062 -0.01563,-0.0937 -0.01563,-0.20313 0,-0.1875 0.01563,-0.29687 0.03125,-0.10938 0.125,-0.20313 0.109375,-0.0937 0.34375,-0.21875 0.25,-0.125 0.5625,-0.23437 0.3125,-0.10938 0.6875,-0.17188 0.375,-0.0781 0.75,-0.0781 0.71875,0 1.203125,0.17187 0.5,0.15625 0.8125,0.46875 0.3125,0.3125 0.453125,0.78125 0.140625,0.45313 0.140625,1.0625 l 0,4.45312 z m -1.203125,-3.01562 -1.03125,0 q -0.5,0 -0.875,0.0937 -0.359375,0.0781 -0.609375,0.25 -0.234375,0.15625 -0.359375,0.39062 -0.109375,0.21875 -0.109375,0.53125 0,0.51563 0.328125,0.8125 0.328125,0.29688 0.921875,0.29688 0.46875,0 0.87
 5,-0.23438 0.40625,-0.25 0.859375,-0.73437 l 0,-1.40625 z m 4.457905,3 q 0,0.0625 -0.0312,0.10937 -0.0312,0.0312 -0.0937,0.0625 -0.0625,0.0312 -0.1875,0.0469 -0.10938,0.0156 -0.29688,0.0156 -0.17187,0 -0.29687,-0.0156 -0.10938,-0.0156 -0.1875,-0.0469 -0.0625,-0.0312 -0.0937,-0.0625 -0.0156,-0.0469 -0.0156,-0.10937 l 0,-9.78125 q 0,-0.0625 0.0156,-0.0937 0.0312,-0.0469 0.0937,-0.0781 0.0781,-0.0312 0.1875,-0.0469 0.125,-0.0156 0.29687,-0.0156 0.1875,0 0.29688,0.0156 0.125,0.0156 0.1875,0.0469 0.0625,0.0312 0.0937,0.0781 0.0312,0.0312 0.0312,0.0937 l 0,9.78125 z m 8.28537,-3.35938 q 0,0.79688 -0.21875,1.48438 -0.20313,0.67187 -0.625,1.17187 -0.42188,0.48438 -1.0625,0.76563 -0.625,0.26562 -1.46875,0.26562 -0.8125,0 -1.42188,-0.23437 -0.59375,-0.25 -1,-0.70313 -0.39062,-0.46875 -0.59375,-1.125 -0.20312,-0.65625 -0.20312,-1.5 0,-0.79687 0.20312,-1.46874 0.21875,-0.6875 0.64063,-1.17188 0.42187,-0.5 1.04687,-0.76562 0.625,-0.28125 1.46875,-0.28125 0.8125,0 1.42188,0.25 0.60937,0.23437 1,0
 .70312 0.40625,0.45313 0.60937,1.125 0.20313,0.65625 0.20313,1.48437 z m -1.26563,0.0781 q 0,-0.53125 -0.10937,-1 -0.0937,-0.48437 -0.32813,-0.84375 -0.21875,-0.35937 -0.60937,-0.5625 -0.39063,-0.21875 -0.96875,-0.21875 -0.53125,0 -0.92188,0.1875 -0.375,0.1875 -0.625,0.54688 -0.25,0.34375 -0.375,0.82812 -0.125,0.46875 -0.125,1.03125 0,0.54687 0.0937,1.03125 0.10938,0.46875 0.32813,0.82812 0.23437,0.34375 0.625,0.5625 0.39062,0.20313 0.96875,0.20313 0.53125,0 0.92187,-0.1875 0.39063,-0.20313 0.64063,-0.54688 0.25,-0.34375 0.35937,-0.8125 0.125,-0.48437 0.125,-1.04687 z m 8.36951,-3 q 0,0.25 -0.0781,0.375 -0.0625,0.10938 -0.17187,0.10938 l -0.9375,0 q 0.25,0.25 0.34375,0.57812 0.10937,0.3125 0.10937,0.65625 0,0.57813 -0.1875,1.01562 -0.17187,0.4375 -0.51562,0.75 -0.34375,0.29688 -0.8125,0.46875 -0.46875,0.15625 -1.03125,0.15625 -0.40625,0 -0.76563,-0.10937 -0.35937,-0.10938 -0.5625,-0.26563 -0.14062,0.125 -0.21875,0.29688 -0.0781,0.17187 -0.0781,0.39062 0,0.25 0.23437,0.42188 0.23438,
 0.17187 0.625,0.1875 l 1.73438,0.0625 q 0.48437,0.0156 0.89062,0.14062 0.42188,0.125 0.71875,0.34375 0.29688,0.21875 0.46875,0.54688 0.17188,0.32812 0.17188,0.76562 0,0.45313 -0.20313,0.85938 -0.1875,0.40625 -0.57812,0.71875 -0.39063,0.3125 -1,0.48437 -0.60938,0.1875 -1.4375,0.1875 -0.79688,0 -1.35938,-0.14062 -0.5625,-0.125 -0.92187,-0.35938 -0.35938,-0.23437 -0.51563,-0.5625 -0.15625,-0.32812 -0.15625,-0.70312 0,-0.25 0.0469,-0.48438 0.0625,-0.21875 0.1875,-0.42187 0.125,-0.20313 0.29687,-0.39063 0.1875,-0.1875 0.42188,-0.375 -0.35938,-0.17187 -0.53125,-0.45312 -0.17188,-0.28125 -0.17188,-0.60938 0,-0.4375 0.17188,-0.78125 0.1875,-0.35937 0.46875,-0.64062 -0.23438,-0.26563 -0.375,-0.60937 -0.125,-0.34375 -0.125,-0.82813 0,-0.5625 0.1875,-1 0.20312,-0.45312 0.53125,-0.76562 0.34375,-0.3125 0.8125,-0.46875 0.46875,-0.17188 1.03125,-0.17188 0.29687,0 0.54687,0.0312 0.26563,0.0312 0.5,0.0937 l 1.98438,0 q 0.125,0 0.1875,0.125 0.0625,0.125 0.0625,0.375 z m -1.89063,1.73438 q 0,-0.67188
  -0.375,-1.04688 -0.35937,-0.39062 -1.04687,-0.39062 -0.34375,0 -0.60938,0.125 -0.25,0.10937 -0.42187,0.32812 -0.17188,0.20313 -0.26563,0.46875 -0.0781,0.26563 -0.0781,0.5625 0,0.64063 0.35937,1.01562 0.375,0.375 1.04688,0.375 0.35937,0 0.60937,-0.10937 0.26563,-0.10938 0.4375,-0.3125 0.1875,-0.20312 0.26563,-0.46875 0.0781,-0.26562 0.0781,-0.54687 z m 0.60938,5.21874 q 0,-0.42187 -0.34375,-0.65625 -0.34375,-0.23437 -0.9375,-0.25 l -1.71875,-0.0469 q -0.23438,0.17187 -0.39063,0.34375 -0.14062,0.15625 -0.23437,0.3125 -0.0781,0.15625 -0.10938,0.29687 -0.0312,0.14063 -0.0312,0.29688 0,0.48437 0.48438,0.71875 0.48437,0.25 1.34375,0.25 0.54687,0 0.90625,-0.10938 0.375,-0.10937 0.60937,-0.28125 0.23438,-0.17187 0.32813,-0.40625 0.0937,-0.21875 0.0937,-0.46875 z m 10.13402,-2.46875 q 0,0.51563 -0.1875,0.90625 -0.1875,0.39063 -0.53125,0.67188 -0.34375,0.26562 -0.82813,0.40625 -0.46875,0.14062 -1.04687,0.14062 -0.34375,0 -0.67188,-0.0625 -0.3125,-0.0469 -0.57812,-0.125 -0.25,-0.0937 -0.42188
 ,-0.1875 -0.17187,-0.0937 -0.25,-0.15625 -0.0781,-0.0781 -0.125,-0.20312 -0.0312,-0.14063 -0.0312,-0.35938 0,-0.14062 0.0156,-0.23437 0.0156,-0.10938 0.0312,-0.15625 0.0312,-0.0625 0.0625,-0.0781 0.0469,-0.0312 0.0937,-0.0312 0.0781,0 0.23437,0.0937 0.15625,0.0937 0.39063,0.21875 0.23437,0.10938 0.54687,0.21875 0.3125,0.0937 0.73438,0.0937 0.29687,0 0.54687,-0.0625 0.25,-0.0625 0.4375,-0.1875 0.1875,-0.14062 0.28125,-0.32812 0.0937,-0.20313 0.0937,-0.46875 0,-0.28125 -0.14062,-0.46875 -0.14063,-0.20313 -0.375,-0.34375 -0.23438,-0.14063 -0.53125,-0.25 -0.29688,-0.125 -0.60938,-0.25 -0.29687,-0.125 -0.59375,-0.28125 -0.29687,-0.15625 -0.53125,-0.375 -0.23437,-0.23437 -0.39062,-0.54687 -0.14063,-0.3125 -0.14063,-0.76563 0,-0.375 0.15625,-0.73437 0.15625,-0.35938 0.45313,-0.625 0.29687,-0.26563 0.75,-0.42188 0.45312,-0.17187 1.04687,-0.17187 0.26563,0 0.53125,0.0469 0.26563,0.0469 0.46875,0.10938 0.21875,0.0625 0.35938,0.14062 0.15625,0.0781 0.23437,0.14063 0.0781,0.0625 0.0937,0.10937 
 0.0312,0.0312 0.0469,0.0937 0.0156,0.0469 0.0156,0.14063 0.0156,0.0781 0.0156,0.1875 0,0.125 -0.0156,0.21875 0,0.0937 -0.0312,0.15625 -0.0312,0.0469 -0.0625,0.0781 -0.0312,0.0312 -0.0781,0.0312 -0.0625,0 -0.1875,-0.0781 -0.125,-0.0937 -0.32813,-0.17188 -0.20312,-0.0937 -0.46875,-0.17187 -0.26562,-0.0937 -0.60937,-0.0937 -0.3125,0 -0.54688,0.0781 -0.23437,0.0625 -0.39062,0.20313 -0.14063,0.125 -0.21875,0.29687 -0.0781,0.17188 -0.0781,0.375 0,0.29688 0.14063,0.48438 0.15625,0.1875 0.39062,0.34375 0.23438,0.14062 0.53125,0.26562 0.3125,0.10938 0.60938,0.23438 0.3125,0.12499 0.60937,0.28124 0.3125,0.15625 0.54688,0.375 0.23437,0.21875 0.375,0.53125 0.15625,0.29688 0.15625,0.71875 z m 7.21663,-1.78125 q 0,0.28125 -0.15625,0.40625 -0.14063,0.125 -0.3125,0.125 l -4.32813,0 q 0,0.54688 0.10938,0.98438 0.10937,0.4375 0.35937,0.76562 0.26563,0.3125 0.67188,0.48438 0.42187,0.15625 1.01562,0.15625 0.46875,0 0.82813,-0.0781 0.35937,-0.0781 0.625,-0.17187 0.28125,-0.0937 0.45312,-0.17188 0.17188,
 -0.0781 0.25,-0.0781 0.0625,0 0.0937,0.0312 0.0469,0.0156 0.0625,0.0781 0.0312,0.0469 0.0312,0.14063 0.0156,0.0937 0.0156,0.21875 0,0.0937 -0.0156,0.17187 0,0.0625 -0.0156,0.125 0,0.0469 -0.0312,0.0937 -0.0312,0.0469 -0.0781,0.0937 -0.0312,0.0312 -0.23437,0.125 -0.1875,0.0937 -0.5,0.1875 -0.3125,0.0781 -0.73438,0.14063 -0.40625,0.0781 -0.875,0.0781 -0.8125,0 -1.4375,-0.21875 -0.60937,-0.23437 -1.03125,-0.67187 -0.40625,-0.45313 -0.625,-1.125 -0.20312,-0.6875 -0.20312,-1.57813 0,-0.84374 0.21875,-1.51562 0.21875,-0.6875 0.625,-1.15625 0.42187,-0.46875 1,-0.71875 0.59375,-0.26562 1.3125,-0.26562 0.78125,0 1.32812,0.25 0.54688,0.25 0.89063,0.67187 0.35937,0.42188 0.51562,1 0.17188,0.5625 0.17188,1.20313 l 0,0.21874 z m -1.21875,-0.35937 q 0.0156,-0.95312 -0.42188,-1.48437 -0.4375,-0.54688 -1.3125,-0.54688 -0.45312,0 -0.79687,0.17188 -0.32813,0.15625 -0.5625,0.4375 -0.21875,0.28125 -0.34375,0.65625 -0.125,0.35937 -0.14063,0.76562 l 3.57813,0 z m 6.72984,-2.21875 q 0,0.15625 -0.0156,0.26
 563 0,0.10937 -0.0156,0.17187 -0.0156,0.0625 -0.0625,0.10938 -0.0312,0.0312 -0.0781,0.0312 -0.0625,0 -0.15625,-0.0312 -0.0781,-0.0469 -0.1875,-0.0781 -0.10937,-0.0312 -0.23437,-0.0625 -0.125,-0.0312 -0.28125,-0.0312 -0.1875,0 -0.35938,0.0781 -0.17187,0.0781 -0.375,0.25 -0.1875,0.15625 -0.40625,0.4375 -0.21875,0.28125 -0.46875,0.6875 l 0,4.32812 q 0,0.0625 -0.0312,0.10937 -0.0312,0.0312 -0.0937,0.0625 -0.0625,0.0312 -0.1875,0.0469 -0.10937,0.0156 -0.29687,0.0156 -0.17188,0 -0.29688,-0.0156 -0.10937,-0.0156 -0.1875,-0.0469 -0.0625,-0.0312 -0.0937,-0.0625 -0.0156,-0.0469 -0.0156,-0.10937 l 0,-6.59375 q 0,-0.0469 0.0156,-0.0937 0.0312,-0.0469 0.0937,-0.0781 0.0625,-0.0312 0.15625,-0.0312 0.10938,-0.0156 0.28125,-0.0156 0.15625,0 0.26563,0.0156 0.10937,0 0.15625,0.0312 0.0625,0.0312 0.0937,0.0781 0.0312,0.0469 0.0312,0.0937 l 0,0.96875 q 0.26562,-0.40625 0.5,-0.64062 0.23437,-0.25 0.45312,-0.39063 0.21875,-0.15625 0.42188,-0.20312 0.20312,-0.0625 0.42187,-0.0625 0.0937,0 0.20313,0.0156 0
 .125,0.0156 0.25,0.0469 0.14062,0.0156 0.25,0.0625 0.10937,0.0312 0.15625,0.0781 0.0469,0.0312 0.0469,0.0625 0.0156,0.0312 0.0312,0.0937 0.0156,0.0469 0.0156,0.14063 0,0.0937 0,0.26562 z m 6.67215,-0.45312 q 0,0.0312 -0.0156,0.0781 0,0.0312 -0.0156,0.0625 0,0.0312 -0.0156,0.0781 0,0.0469 -0.0156,0.0937 l -2.25,6.26562 q -0.0312,0.0781 -0.0781,0.14062 -0.0469,0.0469 -0.14062,0.0781 -0.0937,0.0156 -0.25,0.0312 -0.14063,0.0156 -0.35938,0.0156 -0.21875,0 -0.375,-0.0156 -0.14062,-0.0156 -0.23437,-0.0469 -0.0781,-0.0312 -0.14063,-0.0781 -0.0469,-0.0469 -0.0781,-0.125 l -2.23438,-6.26562 q -0.0312,-0.0781 -0.0625,-0.14063 -0.0156,-0.0781 -0.0156,-0.10937 0,-0.0312 0,-0.0625 0,-0.0469 0.0312,-0.0937 0.0312,-0.0469 0.0937,-0.0625 0.0781,-0.0312 0.1875,-0.0312 0.10937,-0.0156 0.28125,-0.0156 0.21875,0 0.34375,0.0156 0.125,0 0.1875,0.0312 0.0781,0.0312 0.10937,0.0781 0.0312,0.0469 0.0625,0.10938 l 1.85938,5.43749 0.0312,0.0781 0.0156,-0.0781 1.84375,-5.43749 q 0.0156,-0.0625 0.0469,-0.10938 0.
 0469,-0.0469 0.10937,-0.0781 0.0625,-0.0312 0.1875,-0.0312 0.125,-0.0156 0.32813,-0.0156 0.15625,0 0.26562,0.0156 0.10938,0 0.17188,0.0312 0.0625,0.0312 0.0937,0.0781 0.0312,0.0312 0.0312,0.0781 z m 2.41652,6.60937 q 0,0.0625 -0.0312,0.10937 -0.0312,0.0312 -0.0937,0.0625 -0.0625,0.0312 -0.1875,0.0469 -0.10938,0.0156 -0.29688,0.0156 -0.17187,0 -0.29687,-0.0156 -0.10938,-0.0156 -0.1875,-0.0469 -0.0625,-0.0312 -0.0937,-0.0625 -0.0156,-0.0469 -0.0156,-0.10937 l 0,-6.59375 q 0,-0.0469 0.0156,-0.0937 0.0312,-0.0469 0.0937,-0.0625 0.0781,-0.0312 0.1875,-0.0469 0.125,-0.0156 0.29687,-0.0156 0.1875,0 0.29688,0.0156 0.125,0.0156 0.1875,0.0469 0.0625,0.0156 0.0937,0.0625 0.0312,0.0469 0.0312,0.0937 l 0,6.59375 z m 0.14062,-8.8125 q 0,0.42188 -0.17187,0.57813 -0.15625,0.15625 -0.57813,0.15625 -0.42187,0 -0.59375,-0.14063 -0.15625,-0.15625 -0.15625,-0.57812 0,-0.42188 0.15625,-0.57813 0.17188,-0.15625 0.60938,-0.15625 0.42187,0 0.57812,0.15625 0.15625,0.14063 0.15625,0.5625 z m 6.75412,7.8125 q 
 0,0.125 -0.0156,0.21875 0,0.0937 -0.0156,0.15625 -0.0156,0.0625 -0.0469,0.10937 -0.0312,0.0469 -0.125,0.14063 -0.0781,0.0937 -0.29688,0.23437 -0.21875,0.125 -0.5,0.23438 -0.28125,0.0937 -0.60937,0.15625 -0.3125,0.0781 -0.65625,0.0781 -0.70313,0 -1.26563,-0.23437 -0.54687,-0.23438 -0.92187,-0.6875 -0.35938,-0.45313 -0.5625,-1.10938 -0.1875,-0.65625 -0.1875,-1.51562 0,-0.96875 0.23437,-1.65625 0.25,-0.70312 0.65625,-1.15625 0.42188,-0.45312 0.96875,-0.65625 0.5625,-0.21875 1.21875,-0.21875 0.3125,0 0.60938,0.0625 0.29687,0.0469 0.54687,0.14063 0.25,0.0937 0.4375,0.21875 0.20313,0.125 0.28125,0.21875 0.0937,0.0937 0.125,0.14062 0.0312,0.0469 0.0469,0.125 0.0312,0.0625 0.0312,0.15625 0.0156,0.0781 0.0156,0.21875 0,0.28125 -0.0625,0.40625 -0.0625,0.10938 -0.15625,0.10938 -0.10937,0 -0.26562,-0.125 -0.14063,-0.125 -0.35938,-0.26563 -0.21875,-0.15625 -0.53125,-0.26562 -0.3125,-0.125 -0.73437,-0.125 -0.875,0 -1.34375,0.67187 -0.45313,0.67188 -0.45313,1.9375 0,0.64062 0.10938,1.125 0.125,0.4
 6875 0.35937,0.79687 0.23438,0.32813 0.57813,0.48438 0.34375,0.15625 0.78125,0.15625 0.42187,0 0.73437,-0.125 0.3125,-0.14063 0.54688,-0.29688 0.23437,-0.15625 0.39062,-0.28125 0.15625,-0.14062 0.23438,-0.14062 0.0625,0 0.0937,0.0312 0.0312,0.0312 0.0625,0.10937 0.0312,0.0625 0.0312,0.17188 0.0156,0.10937 0.0156,0.25 z m 7.08805,-2.57813 q 0,0.28125 -0.15625,0.40625 -0.14063,0.125 -0.3125,0.125 l -4.32813,0 q 0,0.54688 0.10938,0.98438 0.10937,0.4375 0.35937,0.76562 0.26563,0.3125 0.67188,0.48438 0.42187,0.15625 1.01562,0.15625 0.46875,0 0.82813,-0.0781 0.35937,-0.0781 0.625,-0.17187 0.28125,-0.0937 0.45312,-0.17188 0.17188,-0.0781 0.25,-0.0781 0.0625,0 0.0937,0.0312 0.0469,0.0156 0.0625,0.0781 0.0312,0.0469 0.0312,0.14063 0.0156,0.0937 0.0156,0.21875 0,0.0937 -0.0156,0.17187 0,0.0625 -0.0156,0.125 0,0.0469 -0.0312,0.0937 -0.0312,0.0469 -0.0781,0.0937 -0.0312,0.0312 -0.23437,0.125 -0.1875,0.0937 -0.5,0.1875 -0.3125,0.0781 -0.73438,0.14063 -0.40625,0.0781 -0.875,0.0781 -0.8125,0 -1.43
 75,-0.21875 -0.60937,-0.23437 -1.03125,-0.67187 -0.40625,-0.45313 -0.625,-1.125 -0.20312,-0.6875 -0.20312,-1.57813 0,-0.84374 0.21875,-1.51562 0.21875,-0.6875 0.625,-1.15625 0.42187,-0.46875 1,-0.71875 0.59375,-0.26562 1.3125,-0.26562 0.78125,0 1.32812,0.25 0.54688,0.25 0.89063,0.67187 0.35937,0.42188 0.51562,1 0.17188,0.5625 0.17188,1.20313 l 0,0.21874 z m -1.21875,-0.35937 q 0.0156,-0.95312 -0.42188,-1.48437 -0.4375,-0.54688 -1.3125,-0.54688 -0.45312,0 -0.79687,0.17188 -0.32813,0.15625 -0.5625,0.4375 -0.21875,0.28125 -0.34375,0.65625 -0.125,0.35937 -0.14063,0.76562 l 3.57813,0 z"
+       id="path23"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 39.249344,161.14174 148.125976,0 0,101.38582 -148.125976,0 z"
+       id="path25"
+       inkscape:connector-curvature="0"
+       style="fill:#95f3ef;fill-rule:nonzero" />
+    <path
+       d="m 39.249344,161.14174 148.125976,0 0,101.38582 -148.125976,0 z"
+       id="path27"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#34ebe4;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 44.55643,164.66554 148.12599,0 0,104.03148 -148.12599,0 z"
+       id="path29"
+       inkscape:connector-curvature="0"
+       style="fill:#95f3ef;fill-rule:nonzero" />
+    <path
+       d="m 44.55643,164.66554 148.12599,0 0,104.03148 -148.12599,0 z"
+       id="path31"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#34ebe4;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 49.863518,167.77983 148.125982,0 0,106.3307 -148.125982,0 z"
+       id="path33"
+       inkscape:connector-curvature="0"
+       style="fill:#95f3ef;fill-rule:nonzero" />
+    <path
+       d="m 49.863518,167.77983 148.125982,0 0,106.3307 -148.125982,0 z"
+       id="path35"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#34ebe4;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 39.249344,292.16254 148.125976,0 0,94.67715 -148.125976,0 z"
+       id="path37"
+       inkscape:connector-curvature="0"
+       style="fill:#c7ed6f;fill-opacity:0.70619998;fill-rule:nonzero" />
+    <path
+       d="m 39.249344,292.16254 148.125976,0 0,94.67715 -148.125976,0 z"
+       id="path39"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#adce60;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 44.55643,298.35687 148.12599,0 0,94.67716 -148.12599,0 z"
+       id="path41"
+       inkscape:connector-curvature="0"
+       style="fill:#c7ed6f;fill-opacity:0.70619998;fill-rule:nonzero" />
+    <path
+       d="m 44.55643,298.35687 148.12599,0 0,94.67716 -148.12599,0 z"
+       id="path43"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#adce60;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 49.863518,303.83142 148.125982,0 0,94.67715 -148.125982,0 z"
+       id="path45"
+       inkscape:connector-curvature="0"
+       style="fill:#c7ed6f;fill-opacity:0.70619998;fill-rule:nonzero" />
+    <path
+       d="m 49.863518,303.83142 148.125982,0 0,94.67715 -148.125982,0 z"
+       id="path47"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#adce60;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 96.85993,355.73438 q 0,0.14062 -0.04687,0.25 -0.04687,0.0937 -0.125,0.17187 -0.07813,0.0625 -0.171875,0.0937 -0.09375,0.0156 -0.1875,0.0156 l -0.40625,0 q -0.1875,0 -0.328125,-0.0312 -0.140625,-0.0469 -0.28125,-0.14063 -0.125,-0.10937 -0.25,-0.29687 -0.125,-0.1875 -0.28125,-0.46875 l -2.98438,-5.3906 q -0.234375,-0.42187 -0.484375,-0.875 -0.234375,-0.45312 -0.4375,-0.89062 l -0.01563,0 q 0.01563,0.53125 0.01563,1.07812 0.01563,0.54688 0.01563,1.09375 l 0,5.71875 q 0,0.0469 -0.03125,0.0937 -0.03125,0.0469 -0.109375,0.0781 -0.0625,0.0156 -0.171875,0.0312 -0.109375,0.0312 -0.28125,0.0312 -0.1875,0 -0.296875,-0.0312 -0.109375,-0.0156 -0.1875,-0.0312 -0.0625,-0.0312 -0.09375,-0.0781 -0.01563,-0.0469 -0.01563,-0.0937 l 0,-8.75 q 0,-0.29687 0.15625,-0.42187 0.15625,-0.125 0.34375,-0.125 l 0.609375,0 q 0.203125,0 0.34375,0.0469 0.15625,0.0312 0.265625,0.125 0.109375,0.0781 0.21875,0.23438 0.109375,0.14062 0.234375,0.375 l 2.296875,4.15625 q 0.21875,0.375 0.40625,0.75 0.203125,0.
 35937 0.375,0.71875 0.1875,0.34375 0.359375,0.6875 0.1875,0.32812 0.375,0.67187 l 0,0 q -0.01563,-0.57812 -0.01563,-1.20312 0,-0.625 0,-1.20313 l 0,-5.14062 q 0,-0.0469 0.01563,-0.0937 0.03125,-0.0469 0.09375,-0.0781 0.07813,-0.0312 0.1875,-0.0469 0.125,-0.0156 0.3125,-0.0156 0.15625,0 0.265625,0.0156 0.125,0.0156 0.1875,0.0469 0.0625,0.0312 0.09375,0.0781 0.03125,0.0469 0.03125,0.0937 l 0,8.75 z m 7.1326,0.34375 q 0,0.0781 -0.0625,0.125 -0.0625,0.0469 -0.17188,0.0625 -0.0937,0.0312 -0.29687,0.0312 -0.1875,0 -0.29688,-0.0312 -0.10937,-0.0156 -0.17187,-0.0625 -0.0469,-0.0469 -0.0469,-0.125 l 0,-0.65625 q -0.4375,0.45312 -0.96875,0.71875 -0.53125,0.25 -1.125,0.25 -0.51562,0 -0.937496,-0.14063 -0.421875,-0.125 -0.71875,-0.375 -0.296875,-0.26562 -0.46875,-0.64062 -0.15625,-0.375 -0.15625,-0.85938 0,-0.54687 0.21875,-0.95312 0.234375,-0.42188 0.65625,-0.6875 0.4375,-0.26563 1.046876,-0.40625 0.60937,-0.14063 1.39062,-0.14063 l 0.90625,0 0,-0.51562 q 0,-0.375 -0.0937,-0.65625 -0.0781,-0.2
 9688 -0.26562,-0.48438 -0.17188,-0.20312 -0.45313,-0.29687 -0.28125,-0.10938 -0.70312,-0.10938 -0.4375,0 -0.79688,0.10938 -0.35937,0.10937 -0.624996,0.23437 -0.265625,0.125 -0.453125,0.23438 -0.171875,0.10937 -0.265625,0.10937 -0.0625,0 -0.109375,-0.0312 -0.03125,-0.0312 -0.0625,-0.0937 -0.03125,-0.0625 -0.04687,-0.14062 -0.01563,-0.0937 -0.01563,-0.20313 0,-0.1875 0.01563,-0.29687 0.03125,-0.10938 0.125,-0.20313 0.109375,-0.0937 0.34375,-0.21875 0.25,-0.125 0.5625,-0.23437 0.312501,-0.10938 0.687501,-0.17188 0.375,-0.0781 0.75,-0.0781 0.71875,0 1.20313,0.17187 0.5,0.15625 0.8125,0.46875 0.3125,0.3125 0.45312,0.78125 0.14063,0.45313 0.14063,1.0625 l 0,4.45313 z m -1.20313,-3.01563 -1.03125,0 q -0.5,0 -0.875,0.0937 -0.35937,0.0781 -0.60937,0.25 -0.23438,0.15625 -0.359376,0.39063 -0.109375,0.21875 -0.109375,0.53125 0,0.51562 0.328121,0.8125 0.32813,0.29687 0.92188,0.29687 0.46875,0 0.875,-0.23437 0.40625,-0.25 0.85937,-0.73438 l 0,-1.40625 z m 13.03603,3 q 0,0.0625 -0.0312,0.10938 -0.
 0312,0.0312 -0.10937,0.0625 -0.0625,0.0312 -0.1875,0.0469 -0.10938,0.0156 -0.28125,0.0156 -0.1875,0 -0.3125,-0.0156 -0.10938,-0.0156 -0.1875,-0.0469 -0.0625,-0.0312 -0.0937,-0.0625 -0.0156,-0.0469 -0.0156,-0.10938 l 0,-4 q 0,-0.42187 -0.0781,-0.76562 -0.0781,-0.34375 -0.23438,-0.59375 -0.15625,-0.25 -0.40625,-0.375 -0.25,-0.14063 -0.59375,-0.14063 -0.40625,0 -0.82812,0.32813 -0.42188,0.32812 -0.9375,0.9375 l 0,4.60937 q 0,0.0625 -0.0312,0.10938 -0.0156,0.0312 -0.0937,0.0625 -0.0625,0.0312 -0.1875,0.0469 -0.10938,0.0156 -0.29688,0.0156 -0.15625,0 -0.28125,-0.0156 -0.125,-0.0156 -0.20312,-0.0469 -0.0625,-0.0312 -0.0937,-0.0625 -0.0156,-0.0469 -0.0156,-0.10938 l 0,-4 q 0,-0.42187 -0.0781,-0.76562 -0.0781,-0.34375 -0.25,-0.59375 -0.15625,-0.25 -0.40625,-0.375 -0.23438,-0.14063 -0.57813,-0.14063 -0.42187,0 -0.84375,0.32813 -0.42187,0.32812 -0.92187,0.9375 l 0,4.60937 q 0,0.0625 -0.0312,0.10938 -0.0312,0.0312 -0.0937,0.0625 -0.0625,0.0312 -0.1875,0.0469 -0.10938,0.0156 -0.29688,0.0156 -0.
 17187,0 -0.29687,-0.0156 -0.10938,-0.0156 -0.1875,-0.0469 -0.0625,-0.0312 -0.0937,-0.0625 -0.0156,-0.0469 -0.0156,-0.10938 l 0,-6.59375 q 0,-0.0469 0.0156,-0.0937 0.0312,-0.0469 0.0937,-0.0781 0.0625,-0.0312 0.15625,-0.0312 0.10937,-0.0156 0.28125,-0.0156 0.15625,0 0.26562,0.0156 0.10938,0 0.15625,0.0312 0.0625,0.0312 0.0937,0.0781 0.0312,0.0469 0.0312,0.0937 l 0,0.875 q 0.54688,-0.625 1.0625,-0.90625 0.53125,-0.29687 1.0625,-0.29687 0.42188,0 0.73438,0.0937 0.32812,0.0937 0.57812,0.28125 0.25,0.17187 0.42188,0.40625 0.1875,0.23437 0.29687,0.53125 0.32813,-0.35938 0.625,-0.60938 0.29688,-0.25 0.57813,-0.40625 0.28125,-0.15625 0.53125,-0.21875 0.26562,-0.0781 0.53125,-0.0781 0.625,0 1.0625,0.23437 0.4375,0.21875 0.70312,0.59375 0.26563,0.375 0.375,0.875 0.125,0.5 0.125,1.0625 l 0,4.15625 z m 7.55157,-3.57812 q 0,0.28125 -0.15625,0.40625 -0.14062,0.125 -0.3125,0.125 l -4.32812,0 q 0,0.54687 0.10937,0.98437 0.10938,0.4375 0.35938,0.76563 0.26562,0.3125 0.67187,0.48437 0.42188,0.15625 1
 .01563,0.15625 0.46875,0 0.82812,-0.0781 0.35938,-0.0781 0.625,-0.17188 0.28125,-0.0937 0.45313,-0.17187 0.17187,-0.0781 0.25,-0.0781 0.0625,0 0.0937,0.0312 0.0469,0.0156 0.0625,0.0781 0.0312,0.0469 0.0312,0.14062 0.0156,0.0937 0.0156,0.21875 0,0.0937 -0.0156,0.17188 0,0.0625 -0.0156,0.125 0,0.0469 -0.0312,0.0937 -0.0312,0.0469 -0.0781,0.0937 -0.0312,0.0312 -0.23438,0.125 -0.1875,0.0937 -0.5,0.1875 -0.3125,0.0781 -0.73437,0.14062 -0.40625,0.0781 -0.875,0.0781 -0.8125,0 -1.4375,-0.21875 -0.60938,-0.23438 -1.03125,-0.67188 -0.40625,-0.45312 -0.625,-1.125 -0.20313,-0.6875 -0.20313,-1.57812 0,-0.84375 0.21875,-1.51563 0.21875,-0.6875 0.625,-1.15625 0.42188,-0.46875 1,-0.71875 0.59375,-0.26562 1.3125,-0.26562 0.78125,0 1.32813,0.25 0.54687,0.25 0.89062,0.67187 0.35938,0.42188 0.51563,1 0.17187,0.5625 0.17187,1.20313 l 0,0.21875 z m -1.21875,-0.35938 q 0.0156,-0.95312 -0.42187,-1.48437 -0.4375,-0.54688 -1.3125,-0.54688 -0.45313,0 -0.79688,0.17188 -0.32812,0.15625 -0.5625,0.4375 -0.21875,0
 .28125 -0.34375,0.65625 -0.125,0.35937 -0.14062,0.76562 l 3.57812,0 z m 13.49637,3.60938 q 0,0.14062 -0.0469,0.25 -0.0469,0.0937 -0.125,0.17187 -0.0781,0.0625 -0.17187,0.0937 -0.0937,0.0156 -0.1875,0.0156 l -0.40625,0 q -0.1875,0 -0.32813,-0.0312 -0.14062,-0.0469 -0.28125,-0.14063 -0.125,-0.10937 -0.25,-0.29687 -0.125,-0.1875 -0.28125,-0.46875 l -2.98437,-5.39063 q -0.23438,-0.42187 -0.48438,-0.875 -0.23437,-0.45312 -0.4375,-0.89062 l -0.0156,0 q 0.0156,0.53125 0.0156,1.07812 0.0156,0.54688 0.0156,1.09375 l 0,5.71875 q 0,0.0469 -0.0312,0.0937 -0.0312,0.0469 -0.10938,0.0781 -0.0625,0.0156 -0.17187,0.0312 -0.10938,0.0312 -0.28125,0.0312 -0.1875,0 -0.29688,-0.0312 -0.10937,-0.0156 -0.1875,-0.0312 -0.0625,-0.0312 -0.0937,-0.0781 -0.0156,-0.0469 -0.0156,-0.0937 l 0,-8.75 q 0,-0.29687 0.15625,-0.42187 0.15625,-0.125 0.34375,-0.125 l 0.60937,0 q 0.20313,0 0.34375,0.0469 0.15625,0.0312 0.26563,0.125 0.10937,0.0781 0.21875,0.23438 0.10937,0.14062 0.23437,0.375 l 2.29688,4.15625 q 0.21875,0.3
 75 0.40625,0.75 0.20312,0.35937 0.375,0.71875 0.1875,0.34375 0.35937,0.6875 0.1875,0.32812 0.375,0.67187 l 0,0 q -0.0156,-0.57812 -0.0156,-1.20312 0,-0.625 0,-1.20313 l 0,-5.14062 q 0,-0.0469 0.0156,-0.0937 0.0312,-0.0469 0.0937,-0.0781 0.0781,-0.0312 0.1875,-0.0469 0.125,-0.0156 0.3125,-0.0156 0.15625,0 0.26563,0.0156 0.125,0.0156 0.1875,0.0469 0.0625,0.0312 0.0937,0.0781 0.0312,0.0469 0.0312,0.0937 l 0,8.75 z m 8.28884,-3.03125 q 0,0.79687 -0.21875,1.48437 -0.20312,0.67188 -0.625,1.17188 -0.42187,0.48437 -1.0625,0.76562 -0.625,0.26563 -1.46875,0.26563 -0.8125,0 -1.42187,-0.23438 -0.59375,-0.25 -1,-0.70312 -0.39063,-0.46875 -0.59375,-1.125 -0.20313,-0.65625 -0.20313,-1.5 0,-0.79688 0.20313,-1.46875 0.21875,-0.6875 0.64062,-1.17188 0.42188,-0.5 1.04688,-0.76562 0.625,-0.28125 1.46875,-0.28125 0.8125,0 1.42187,0.25 0.60938,0.23437 1,0.70312 0.40625,0.45313 0.60938,1.125 0.20312,0.65625 0.20312,1.48438 z m -1.26562,0.0781 q 0,-0.53125 -0.10938,-1 -0.0937,-0.48437 -0.32812,-0.84375 -0.
 21875,-0.35937 -0.60938,-0.5625 -0.39062,-0.21875 -0.96875,-0.21875 -0.53125,0 -0.92187,0.1875 -0.375,0.1875 -0.625,0.54688 -0.25,0.34375 -0.375,0.82812 -0.125,0.46875 -0.125,1.03125 0,0.54688 0.0937,1.03125 0.10937,0.46875 0.32812,0.82813 0.23438,0.34375 0.625,0.5625 0.39063,0.20312 0.96875,0.20312 0.53125,0 0.92188,-0.1875 0.39062,-0.20312 0.64062,-0.54687 0.25,-0.34375 0.35938,-0.8125 0.125,-0.48438 0.125,-1.04688 z m 8.51013,3.28125 q 0,0.0625 -0.0312,0.10938 -0.0156,0.0469 -0.0781,0.0781 -0.0625,0.0156 -0.17188,0.0312 -0.0937,0.0156 -0.25,0.0156 -0.14062,0 -0.25,-0.0156 -0.10937,-0.0156 -0.17187,-0.0312 -0.0625,-0.0312 -0.0937,-0.0781 -0.0312,-0.0469 -0.0312,-0.10938 l 0,-0.875 q -0.51563,0.57813 -1.07813,0.89063 -0.5625,0.3125 -1.21875,0.3125 -0.73437,0 -1.25,-0.28125 -0.5,-0.28125 -0.82812,-0.76563 -0.3125,-0.48437 -0.46875,-1.125 -0.14063,-0.65625 -0.14063,-1.375 0,-0.84375 0.17188,-1.53125 0.1875,-0.6875 0.54687,-1.15625 0.35938,-0.48437 0.89063,-0.75 0.53125,-0.26562 1.234
 37,-0.26562 0.57813,0 1.04688,0.26562 0.48437,0.25 0.95312,0.73438 l 0,-3.82813 q 0,-0.0469 0.0312,-0.0937 0.0312,-0.0469 0.0937,-0.0781 0.0781,-0.0312 0.1875,-0.0469 0.125,-0.0156 0.29688,-0.0156 0.17187,0 0.28125,0.0156 0.125,0.0156 0.1875,0.0469 0.0781,0.0312 0.10937,0.0781 0.0312,0.0469 0.0312,0.0937 l 0,9.75 z m -1.21875,-4.625 q -0.48437,-0.60937 -0.95312,-0.92187 -0.45313,-0.32813 -0.95313,-0.32813 -0.45312,0 -0.78125,0.21875 -0.3125,0.21875 -0.51562,0.57813 -0.20313,0.35937 -0.29688,0.8125 -0.0937,0.45312 -0.0937,0.92187 0,0.5 0.0781,0.98438 0.0781,0.46875 0.26562,0.84375 0.1875,0.35937 0.5,0.59375 0.32813,0.21875 0.79688,0.21875 0.25,0 0.46875,-0.0625 0.21875,-0.0781 0.45312,-0.21875 0.23438,-0.15625 0.48438,-0.40625 0.26562,-0.25 0.54687,-0.60938 l 0,-2.625 z m 8.90338,1.04688 q 0,0.28125 -0.15625,0.40625 -0.14062,0.125 -0.3125,0.125 l -4.32812,0 q 0,0.54687 0.10937,0.98437 0.10938,0.4375 0.35938,0.76563 0.26562,0.3125 0.67187,0.48437 0.42188,0.15625 1.01563,0.15625 0.4687
 5,0 0.82812,-0.0781 0.35938,-0.0781 0.625,-0.17188 0.28125,-0.0937 0.45313,-0.17187 0.17187,-0.0781 0.25,-0.0781 0.0625,0 0.0937,0.0312 0.0469,0.0156 0.0625,0.0781 0.0312,0.0469 0.0312,0.14062 0.0156,0.0937 0.0156,0.21875 0,0.0937 -0.0156,0.17188 0,0.0625 -0.0156,0.125 0,0.0469 -0.0312,0.0937 -0.0312,0.0469 -0.0781,0.0937 -0.0312,0.0312 -0.23438,0.125 -0.1875,0.0937 -0.5,0.1875 -0.3125,0.0781 -0.73437,0.14062 -0.40625,0.0781 -0.875,0.0781 -0.8125,0 -1.4375,-0.21875 -0.60938,-0.23438 -1.03125,-0.67188 -0.40625,-0.45312 -0.625,-1.125 -0.20313,-0.6875 -0.20313,-1.57812 0,-0.84375 0.21875,-1.51563 0.21875,-0.6875 0.625,-1.15625 0.42188,-0.46875 1,-0.71875 0.59375,-0.26562 1.3125,-0.26562 0.78125,0 1.32813,0.25 0.54687,0.25 0.89062,0.67187 0.35938,0.42188 0.51563,1 0.17187,0.5625 0.17187,1.20313 l 0,0.21875 z m -1.21875,-0.35938 q 0.0156,-0.95312 -0.42187,-1.48437 -0.4375,-0.54688 -1.3125,-0.54688 -0.45313,0 -0.79688,0.17188 -0.32812,0.15625 -0.5625,0.4375 -0.21875,0.28125 -0.34375,0.656
 25 -0.125,0.35937 -0.14062,0.76562 l 3.57812,0 z"
+       id="path49"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 56.750656,216.88708 133.259844,0 0,45.63779 -133.259844,0 z"
+       id="path51"
+       inkscape:connector-curvature="0"
+       style="fill:#95f3ef;fill-rule:nonzero" />
+    <path
+       d="m 56.750656,216.88708 133.259844,0 0,45.63779 -133.259844,0 z"
+       id="path53"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round;stroke-dasharray:1, 3" />
+    <path
+       d="m 114.59029,230.30534 q 0,0.0469 -0.0312,0.0781 -0.0156,0.0312 -0.0781,0.0469 -0.0469,0.0156 -0.14062,0.0312 -0.0781,0.0156 -0.21875,0.0156 -0.14063,0 -0.23438,-0.0156 -0.0781,-0.0156 -0.14062,-0.0312 -0.0469,-0.0156 -0.0625,-0.0469 -0.0156,-0.0312 -0.0156,-0.0781 l 0,-3.07813 -3.17187,0 0,3.07813 q 0,0.0469 -0.0156,0.0781 -0.0156,0.0312 -0.0781,0.0469 -0.0469,0.0156 -0.14063,0.0312 -0.0937,0.0156 -0.21875,0.0156 -0.14062,0 -0.23437,-0.0156 -0.0781,-0.0156 -0.14063,-0.0312 -0.0469,-0.0156 -0.0781,-0.0469 -0.0156,-0.0312 -0.0156,-0.0781 l 0,-6.67188 q 0,-0.0469 0.0156,-0.0781 0.0312,-0.0312 0.0781,-0.0469 0.0625,-0.0156 0.14063,-0.0312 0.0937,-0.0156 0.23437,-0.0156 0.125,0 0.21875,0.0156 0.0937,0.0156 0.14063,0.0312 0.0625,0.0156 0.0781,0.0469 0.0156,0.0312 0.0156,0.0781 l 0,2.78125 3.17187,0 0,-2.78125 q 0,-0.0469 0.0156,-0.0781 0.0156,-0.0312 0.0625,-0.0469 0.0625,-0.0156 0.14062,-0.0312 0.0937,-0.0156 0.23438,-0.0156 0.14062,0 0.21875,0.0156 0.0937,0.0156 0.14062,0.0312
  0.0625,0.0156 0.0781,0.0469 0.0312,0.0312 0.0312,0.0781 l 0,6.67188 z m 6.82684,-0.1875 q 0.0469,0.125 0.0469,0.20312 0,0.0625 -0.0469,0.10938 -0.0312,0.0312 -0.14062,0.0312 -0.0937,0.0156 -0.26563,0.0156 -0.15625,0 -0.26562,-0.0156 -0.0937,0 -0.14063,-0.0156 -0.0469,-0.0156 -0.0781,-0.0469 -0.0312,-0.0469 -0.0469,-0.0937 l -0.59375,-1.6875 -2.89062,0 -0.5625,1.67187 q -0.0156,0.0469 -0.0469,0.0937 -0.0312,0.0312 -0.0781,0.0625 -0.0469,0.0156 -0.14063,0.0156 -0.0937,0.0156 -0.25,0.0156 -0.15625,0 -0.26562,-0.0156 -0.0937,-0.0156 -0.14063,-0.0469 -0.0312,-0.0312 -0.0312,-0.10937 0,-0.0781 0.0469,-0.1875 l 2.32812,-6.46875 q 0.0312,-0.0469 0.0625,-0.0781 0.0312,-0.0469 0.0937,-0.0625 0.0781,-0.0312 0.17188,-0.0312 0.10937,-0.0156 0.26562,-0.0156 0.17188,0 0.28125,0.0156 0.125,0 0.1875,0.0312 0.0781,0.0156 0.10938,0.0625 0.0469,0.0312 0.0625,0.0937 l 2.32812,6.45313 z m -2.98437,-5.70313 -0.0156,0 -1.1875,3.46875 2.40625,0 -1.20312,-3.46875 z m 10.60335,5.82813 q -0.0156,0.0781 -0.062
[SVG path data omitted: remaining vector glyph outline coordinates from the diffed image file]

<TRUNCATED>


[27/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/query/query-performance.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/query/query-performance.html.md.erb b/markdown/query/query-performance.html.md.erb
new file mode 100644
index 0000000..981d77b
--- /dev/null
+++ b/markdown/query/query-performance.html.md.erb
@@ -0,0 +1,155 @@
+---
+title: Query Performance
+---
+
+<span class="shortdesc">HAWQ dynamically allocates resources to queries. Query performance depends on several factors such as data locality, number of virtual segments used for the query and general cluster health.</span>
+
+-   Dynamic Partition Elimination
+
+    In HAWQ, values available only when a query runs are used to dynamically prune partitions, which improves query processing speed. Enable or disable dynamic partition elimination by setting the server configuration parameter `gp_dynamic_partition_pruning` to `ON` or `OFF`; it is `ON` by default.
+
+-   Memory Optimizations
+
+    HAWQ allocates memory optimally for different operators in a query and frees and re-allocates memory during the stages of processing a query.
+
+-   Runaway Query Termination
+
+    HAWQ can automatically terminate the most memory-intensive queries based on a memory usage threshold. The threshold is set as a configurable percentage ([runaway\_detector\_activation\_percent](../reference/guc/parameter_definitions.html#runaway_detector_activation_percent)) of the resource quota for the segment, which is calculated by HAWQ's resource manager.
+
+    If the amount of virtual memory utilized by a physical segment exceeds the calculated threshold, then HAWQ begins terminating queries based on memory usage, starting with the query that is consuming the largest amount of memory. Queries are terminated until the percentage of utilized virtual memory is below the specified percentage.
+
+    To calculate the memory usage threshold for runaway queries, HAWQ uses the following formula:
+
+    *vmem threshold* = (*virtual memory quota calculated by resource manager* + [hawq\_re\_memory\_overcommit\_max](../reference/guc/parameter_definitions.html#hawq_re_memory_overcommit_max)) \* [runaway\_detector\_activation\_percent](../reference/guc/parameter_definitions.html#runaway_detector_activation_percent).
+
+    For example, if the HAWQ resource manager calculates a virtual memory quota of 9 GB, `hawq_re_memory_overcommit_max` is set to 1 GB, and the value of `runaway_detector_activation_percent` is 95 (95%), then HAWQ starts terminating queries when the utilized virtual memory exceeds 9.5 GB.
+
+    To disable automatic detection and termination of runaway queries, set the value of `runaway_detector_activation_percent` to 100. The sketch after this list shows how to check these settings.
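+
+The server configuration parameters referenced above can be inspected from a `psql` session. This is a minimal sketch; the values in the comments simply restate the example given earlier and will differ per deployment.
+
+``` sql
+-- Dynamic partition elimination (ON by default).
+SHOW gp_dynamic_partition_pruning;
+
+-- Inputs to the runaway-query threshold calculation.
+SHOW hawq_re_memory_overcommit_max;        -- e.g. 1 GB in the example above
+SHOW runaway_detector_activation_percent;  -- e.g. 95 (percent)
+-- vmem threshold = (9 GB quota + 1 GB) * 95% = 9.5 GB
+```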
+
+## <a id="id_xkg_znj_f5"></a>How to Investigate Query Performance Issues
+
+If a query is not executing as quickly as you expect, investigate the possible causes of the slowdown as follows:
+
+1.  Check the health of the cluster.
+    1.  Are any DataNodes, segments or nodes down?
+    2.  Are there many failed disks?
+
+2.  Check table statistics. Have the tables involved in the query been analyzed?
+3.  Check the query plan and run `EXPLAIN ANALYZE` to determine the bottleneck.
+    Sometimes there is not enough memory for some operators, such as Hash Join, and spill files are used. If an operator cannot perform all of its work in the memory allocated to it, it caches data on disk in *spill files*. A query that uses spill files runs much more slowly than one that runs entirely in memory.
+
+4.  Check data locality statistics using `EXPLAIN ANALYZE`. Alternatively, check the HAWQ log; the data locality results for every query are also recorded there. See [Data Locality Statistics](query-performance.html#topic_amk_drc_d5) for information on the statistics.
+5.  Check resource queue status. You can query the `pg_resqueue_status` view to check whether the target queue has already dispatched some resources to queries, or whether the target queue is lacking resources (see the example query after this list). See [Checking Existing Resource Queues](../resourcemgmt/ResourceQueues.html#topic_lqy_gls_zt).
+6.  Analyze a dump of the resource manager's status to see more resource queue status. See [Analyzing Resource Manager Status](../resourcemgmt/ResourceQueues.html#topic_zrh_pkc_f5).
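+
+For step 5, a quick way to inspect resource queue status is to query the `pg_resqueue_status` system view directly. This is a minimal sketch; the columns returned depend on your HAWQ version.
+
+``` sql
+-- Check whether the target queue has resources available or is overloaded.
+SELECT * FROM pg_resqueue_status;
+```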
+
+## <a id="topic_amk_drc_d5"></a>Data Locality Statistics
+
+For visibility into query performance, use `EXPLAIN ANALYZE` to obtain data locality statistics. For example:
+
+``` sql
+postgres=# CREATE TABLE test (i int);
+postgres=# INSERT INTO test VALUES(2);
+postgres=# EXPLAIN ANALYZE SELECT * FROM test;
+```
+```
+QUERY PLAN
+.......
+Data locality statistics:
+data locality ratio: 1.000; virtual segment number: 1; different host number: 1;
+virtual segment number per host(avg/min/max): (1/1/1);
+segment size(avg/min/max): (32.000 B/32 B/32 B);
+segment size with penalty(avg/min/max): (32.000 B/32 B/32 B);
+continuity(avg/min/max): (1.000/1.000/1.000); DFS metadatacache: 7.816 ms;
+resource allocation: 0.615 ms; datalocality calculation: 0.136 ms.
+```
+
+The following table describes the metrics related to data locality. Use these metrics to examine issues behind a query's performance.
+
+<a id="topic_amk_drc_d5__table_q4p_25c_d5"></a>
+
+<table>
+<caption><span class="tablecap">Table 1. Data Locality Statistics</span></caption>
+<colgroup>
+<col width="50%" />
+<col width="50%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Statistic</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>data locality ratio</td>
+<td><p>Indicates the total local read ratio of a query. The lower the ratio, the more remote reads occur. Because remote reads on HDFS require network I/O, the execution time of a query may increase.</p>
+<p>For hash-distributed tables, all the blocks of a file are processed by one segment, so if data on HDFS is redistributed (for example, by the HDFS Balancer), the data locality ratio decreases. In this case, you can redistribute the hash-distributed table manually by using CREATE TABLE AS SELECT (see the example after this table).</p></td>
+</tr>
+<tr class="even">
+<td>number of virtual segments</td>
+<td>Typically, the more virtual segments are used, the faster the query will be executed. If the virtual segment number is too small, you can check whether <code class="ph codeph">default_hash_table_bucket_number</code>, <code class="ph codeph">hawq_rm_nvseg_perquery_limit</code>, or the bucket number of a hash distributed table is small. See <a href="#topic_wv3_gzc_d5">Number of Virtual Segments</a>.</td>
+</tr>
+<tr class="odd">
+<td>different host number</td>
+<td>Indicates how many hosts are used to run this query. According to HAWQ's resource allocation strategy, all hosts should be used when the virtual segment number is larger than the total number of hosts. As a result, if this metric is smaller than the total number of hosts for a big query, it often indicates that some hosts are down. In this case, query the <code class="ph codeph">gp_segment_configuration</code> catalog table to check the node states first.</td>
+</tr>
+<tr class="even">
+<td>segment size and segment size with penalty</td>
+<td>"segment size" indicates the (avg/min/max) data size processed by a virtual segment. "segment size with penalty" is the segment size when remote reads are counted as <code class="ph codeph">net_disk_ratio</code> * block size. A virtual segment that performs remote reads should process less data than a virtual segment that performs only local reads. <code class="ph codeph">net_disk_ratio</code> can be tuned to reflect how much slower remote reads are than local reads in a given network environment, while considering the workload balance between nodes. The default value of <code class="ph codeph">net_disk_ratio</code> is 1.01.</td>
+</tr>
+<tr class="odd">
+<td>continuity</td>
+<td>Reading an HDFS file discontinuously introduces additional seeks, which slow the table scan of a query. A low continuity value indicates that the blocks of a file are not continuously distributed on a DataNode.</td>
+</tr>
+<tr class="even">
+<td>DFS metadatacache</td>
+<td>Indicates the metadatacache time cost for a query. In HAWQ, HDFS block information is cached in a metadatacache process. If a cache miss occurs, the metadatacache time cost may increase.</td>
+</tr>
+<tr class="odd">
+<td>resource allocation</td>
+<td>Indicates the time cost of acquiring resources from the resource manager.</td>
+</tr>
+<tr class="even">
+<td>datalocality calculation</td>
+<td>Indicates the time to run the algorithm that assigns HDFS blocks to virtual segments and calculates the data locality ratio.</td>
+</tr>
+</tbody>
+</table>
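+
+If the data locality ratio of a hash-distributed table has degraded (for example, after running the HDFS Balancer), you can rebuild the table with `CREATE TABLE AS SELECT` to restore co-location, as mentioned in the table above. The following is a minimal sketch; the table and column names are hypothetical and the `bucketnum` value is illustrative.
+
+``` sql
+-- Rebuild a hash-distributed table so its blocks are written locally again.
+CREATE TABLE sales_rebuilt WITH (bucketnum = 16) AS
+    SELECT * FROM sales
+    DISTRIBUTED BY (sale_id);
+
+-- Swap the rebuilt table into place after verifying it.
+ALTER TABLE sales RENAME TO sales_old;
+ALTER TABLE sales_rebuilt RENAME TO sales;
+```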
+
+## <a id="topic_wv3_gzc_d5"></a>Number of Virtual Segments
+
+To obtain the best results when querying data in HAWQ, review the best practices described in this topic.
+
+### <a id="virtual_seg_performance"></a>Factors Impacting Query Performance
+
+The number of virtual segments used for a query directly impacts the query's performance. The following factors can impact the degree of parallelism of a query:
+
+-   **Cost of the query**. Small queries use fewer segments and larger queries use more segments. Techniques used in defining resource queues can influence both the number of virtual segments and the general resources allocated to queries.
+-   **Available resources at query time**. If more resources are available in the resource queue, those resources will be used.
+-   **Hash table and bucket number**. If the query involves only hash-distributed tables, the query's parallelism is fixed (equal to the hash table bucket number) under the following conditions (see the sketch after this list):
+
+    - The bucket number (bucketnum) configured for all the hash tables is the same.
+    - The table size for random tables is no more than 1.5 times the size allotted for the hash tables.
+
+    Otherwise, the number of virtual segments depends on the query's cost: hash-distributed table queries behave like queries on randomly-distributed tables.
+
+-   **Query Type**: It can be difficult to calculate resource costs for queries with some user-defined functions or for queries to external tables. For these queries, the number of virtual segments is controlled by the `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit` parameters, as well as by the ON clause and the location list of external tables. If the query has a hash result table (for example, `INSERT INTO hash_table`), the number of virtual segments must be equal to the bucket number of the resulting hash table. If the query is performed in utility mode, such as for `COPY` and `ANALYZE` operations, the virtual segment number is calculated by different policies.
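+
+As a concrete illustration of the fixed-parallelism case, the following sketch creates a hash-distributed table with an explicit bucket number; queries that read only this table run with a number of virtual segments equal to that bucket number. The table and column names are hypothetical and the bucket number is illustrative.
+
+``` sql
+-- Hash-distributed table with an explicit bucket number of 8.
+CREATE TABLE orders_hash (order_id int, amount numeric)
+    WITH (bucketnum = 8)
+    DISTRIBUTED BY (order_id);
+```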
+
+### General Guidelines
+
+The following guidelines describe the number of virtual segments used in various scenarios, provided sufficient resources are available.
+
+-   **Random tables exist in the select list:** \#vseg (number of virtual segments) depends on the size of the table.
+-   **Hash tables exist in the select list:** \#vseg depends on the bucket number of the table.
+-   **Random and hash tables both exist in the select list:** \#vseg depends on the bucket number of the table, if the table size of random tables is no more than 1.5 times the size of hash tables. Otherwise, \#vseg depends on the size of the random table.
+-   **User-defined functions exist:** \#vseg depends on the `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit` parameters (see the sketch after this list).
+-   **PXF external tables exist:** \#vseg depends on the `default_hash_table_bucket_number` parameter.
+-   **gpfdist external tables exist:** \#vseg is at least the number of locations in the location list.
+-   **The command for CREATE EXTERNAL TABLE is used:** \#vseg must reflect the value in the command and use the `ON` clause in the command.
+-   **Hash tables are copied to or from files:** \#vseg depends on the bucket number of the hash table.
+-   **Random tables are copied to files:** \#vseg depends on the size of the random table.
+-   **Random tables are copied from files:** \#vseg is a fixed value. \#vseg is 6, when there are sufficient resources.
+-   **ANALYZE table:** Analyzing a non-partitioned table uses more virtual segments than analyzing a partitioned table.
+-   **Relationship between hash distribution results:** \#vseg must be the same as the bucket number for the hash table.
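+
+To see the limits referenced in these guidelines on your own cluster, you can display the relevant server configuration parameters from `psql`. This is a minimal sketch; the values returned are whatever your deployment has configured.
+
+``` sql
+-- Parameters that bound the number of virtual segments per query.
+SHOW default_hash_table_bucket_number;
+SHOW hawq_rm_nvseg_perquery_limit;
+SHOW hawq_rm_nvseg_perquery_perseg_limit;
+```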
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/query/query-profiling.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/query/query-profiling.html.md.erb b/markdown/query/query-profiling.html.md.erb
new file mode 100644
index 0000000..ea20e0a
--- /dev/null
+++ b/markdown/query/query-profiling.html.md.erb
@@ -0,0 +1,240 @@
+---
+title: Query Profiling
+---
+
+<span class="shortdesc">Examine the query plans of poorly performing queries to identify possible performance tuning opportunities.</span>
+
+HAWQ devises a *query plan* for each query. Choosing the right query plan to match the query and data structure is necessary for good performance. A query plan defines how HAWQ will run the query in the parallel execution environment.
+
+The query optimizer uses data statistics maintained by the database to choose a query plan with the lowest possible cost. Cost is measured in disk I/O, shown as units of disk page fetches. The goal is to minimize the total execution cost for the plan.
+
+View the plan for a given query with the `EXPLAIN` command. `EXPLAIN` shows the query optimizer's estimated cost for the query plan. For example:
+
+``` sql
+EXPLAIN SELECT * FROM names WHERE id=22;
+```
+
+`EXPLAIN ANALYZE` runs the statement in addition to displaying its plan. This is useful for determining how close the optimizer's estimates are to reality. For example:
+
+``` sql
+EXPLAIN ANALYZE SELECT * FROM names WHERE id=22;
+```
+
+**Note:** The legacy and GPORCA query optimizers coexist in HAWQ. GPORCA is the default HAWQ optimizer. HAWQ uses GPORCA to generate an execution plan for a query when possible. The `EXPLAIN` output generated by GPORCA is different than the output generated by the legacy query optimizer.
+
+When the `EXPLAIN ANALYZE` command uses GPORCA, the `EXPLAIN` plan shows only the number of partitions that are being eliminated. The scanned partitions are not shown. To show the names of the scanned partitions in the segment logs, set the server configuration parameter `gp_log_dynamic_partition_pruning` to `on`. The following `SET` command enables the parameter.
+
+``` sql
+SET gp_log_dynamic_partition_pruning = on;
+```
+
+For information about GPORCA, see [Querying Data](query.html#topic1).
+
+## <a id="topic40"></a>Reading EXPLAIN Output
+
+A query plan is a tree of nodes. Each node in the plan represents a single operation, such as a table scan, join, aggregation, or sort.
+
+Read plans from the bottom to the top: each node feeds rows into the node directly above it. The bottom nodes of a plan are usually table scan operations. If the query requires joins, aggregations, sorts, or other operations on the rows, there are additional nodes above the scan nodes to perform these operations. The topmost plan nodes are usually HAWQ motion nodes: redistribute, broadcast, or gather motions. These operations move rows between segment instances during query processing.
+
+The output of `EXPLAIN` has one line for each node in the plan tree and shows the basic node type and the following execution cost estimates for that plan node:
+
+-   **cost**: Measured in units of disk page fetches. 1.0 equals one sequential disk page read. The first estimate is the start-up cost of getting the first row, and the second is the total cost of getting all rows. The total cost assumes all rows will be retrieved, which is not always true; for example, if the query uses `LIMIT`, not all rows are retrieved.
+-   **rows**: The total number of rows output by this plan node. This number is usually less than the number of rows processed or scanned by the plan node, reflecting the estimated selectivity of any `WHERE` clause conditions. Ideally, the estimate for the topmost node approximates the number of rows that the query actually returns.
+-   **width**: The total bytes of all the rows that this plan node outputs.
+
+Note the following:
+
+-   The cost of a node includes the cost of its child nodes. The topmost plan node has the estimated total execution cost for the plan. This is the number the optimizer intends to minimize.
+-   The cost reflects only the aspects of plan execution that the query optimizer takes into consideration. For example, the cost does not reflect time spent transmitting result rows to the client.
+
+### <a id="topic41"></a>EXPLAIN Example
+
+The following example describes how to read an `EXPLAIN` query plan for a query:
+
+``` sql
+EXPLAIN SELECT * FROM names WHERE name = 'Joelle';
+```
+
+```
+                                 QUERY PLAN
+-----------------------------------------------------------------------------
+ Gather Motion 2:1  (slice1; segments: 2)  (cost=0.00..1.01 rows=1 width=11)
+   ->  Append-only Scan on names  (cost=0.00..1.01 rows=1 width=11)
+         Filter: name::text = 'Joelle'::text
+(3 rows)
+```
+
+Read the plan from the bottom to the top. To start, the query optimizer sequentially scans the *names* table. Notice the `WHERE` clause is applied as a *filter* condition. This means the scan operation checks the condition for each row it scans and outputs only the rows that satisfy the condition.
+
+The results of the scan operation are passed to a *gather motion* operation. In HAWQ, a gather motion is when segments send rows to the master. In this example, we have two segment instances that send to one master instance. This operation is working on `slice1` of the parallel query execution plan. A query plan is divided into *slices* so the segments can work on portions of the query plan in parallel.
+
+The estimated startup cost for this plan is `00.00` (no cost) and a total cost of `1.01` disk page fetches. The optimizer estimates this query will return one row.
+
+## <a id="topic42"></a>Reading EXPLAIN ANALYZE Output
+
+`EXPLAIN ANALYZE` plans and runs the statement. The `EXPLAIN ANALYZE` plan shows the actual execution cost along with the optimizer's estimates. This allows you to see if the optimizer's estimates are close to reality. `EXPLAIN ANALYZE` also shows the following:
+
+-   The total runtime (in milliseconds) in which the query executed.
+-   The memory used by each slice of the query plan, as well as the memory reserved for the whole query statement.
+-   Statistics for the query dispatcher, including the number of executors used for the current query (total number/number of executors cached by previous queries/number of executors newly connected); dispatcher time (total dispatch time/connection establish time/dispatch data to executor time); and time details (max/min/avg) for dispatching data, consuming executor data, and freeing executors.
+-   Statistics about data locality. See [Data Locality Statistics](query-performance.html#topic_amk_drc_d5) for details about these statistics.
+-   The number of *workers* (segments) involved in a plan node operation. Only segments that return rows are counted.
+-   The Max/Last statistics are for the segment that output the maximum number of rows and the segment with the longest *&lt;time&gt; to end*.
+-   The segment id of the segment that produced the most rows for an operation.
+-   For relevant operations, the amount of memory (`work_mem`) used by the operation. If the `work_mem` was insufficient to perform the operation in memory, the plan shows the amount of data spilled to disk for the lowest-performing segment. For example:
+
+    ``` pre
+    Work_mem used: 64K bytes avg, 64K bytes max (seg0).
+    Work_mem wanted: 90K bytes avg, 90K bytes max (seg0) to lessen
+    workfile I/O affecting 2 workers.
+    ```
+**Note:** The *work\_mem* property is not configurable. Use resource queues to manage memory use. For more information on resource queues, see [Configuring Resource Management](../resourcemgmt/ConfigureResourceManagement.html) and [Working with Hierarchical Resource Queues](../resourcemgmt/ResourceQueues.html).
+
+-   The time (in milliseconds) in which the segment that produced the most rows retrieved the first row, and the time taken for that segment to retrieve all rows. The result may omit *&lt;time&gt; to first row* if it is the same as the *&lt;time&gt; to end*.
+
+### <a id="topic43"></a>EXPLAIN ANALYZE Example
+
+This example describes how to read an `EXPLAIN ANALYZE` query plan using the same query. In addition to the plan nodes, the output shows actual timing and rows returned for each plan node, as well as memory and time statistics for the whole query.
+
+``` sql
+EXPLAIN ANALYZE SELECT * FROM names WHERE name = 'Joelle';
+```
+
+```
+                                 QUERY PLAN
+------------------------------------------------------------------------
+ Gather Motion 1:1  (slice1; segments: 1)  (cost=0.00..1.01 rows=1 width=7)
+   Rows out:  Avg 1.0 rows x 1 workers at destination.  Max/Last(seg0:ip-10-0-1-16/seg0:ip-10-0-1-16) 1/1 rows with 8.713/8.713 ms to first row, 8.714/8.714 ms to end, start offset by 0.708/0.708 ms.
+   ->  Append-only Scan on names  (cost=0.00..1.01 rows=1 width=7)
+         Filter: name = 'Joelle'::text
+         Rows out:  Avg 1.0 rows x 1 workers.  Max/Last(seg0:ip-10-0-1-16/seg0:ip-10-0-1-16) 1/1 rows with 7.053/7.053 ms to first row, 7.089/7.089 ms to end, start offset by 2.162/2.162 ms.
+ Slice statistics:
+   (slice0)    Executor memory: 159K bytes.
+   (slice1)    Executor memory: 247K bytes (seg0:ip-10-0-1-16).
+ Statement statistics:
+   Memory used: 262144K bytes
+ Dispatcher statistics:
+   executors used(total/cached/new connection): (1/1/0); dispatcher time(total/connection/dispatch data): (0.217 ms/0.000 ms/0.037 ms).
+   dispatch data time(max/min/avg): (0.037 ms/0.037 ms/0.037 ms); consume executor data time(max/min/avg): (0.015 ms/0.015 ms/0.015 ms); free executor time(max/min/avg): (0.000 ms/0.000 ms/0.000 ms).
+ Data locality statistics:
+   data locality ratio: 1.000; virtual segment number: 1; different host number: 1; virtual segment number per host(avg/min/max): (1/1/1); segment size(avg/min/max): (48.000 B/48 B/48 B); segment size with penalty(avg/min/max): (48.000 B/48 B/48 B); continuity(avg/min/max): (1.000/1.000/1.000); DFS metadatacache: 9.343 ms; resource allocation: 0.638 ms; datalocality calculation: 0.144 ms.
+ Total runtime: 19.690 ms
+(16 rows)
+```
+
+Read the plan from the bottom to the top. The total elapsed time to run this query was *19.690* milliseconds.
+
+The *Append-only scan* operation had only one segment (*seg0*) that returned rows, and it returned just *1 row*. The Max/Last statistics are identical in this example because only one segment returned rows. It took *7.053* milliseconds to find the first row and *7.089* milliseconds to scan all rows. This result is close to the optimizer's estimate: the query optimizer estimated it would return one row for this query. The *gather motion* (segments sending data to the master) received 1 row. The total elapsed time for this operation was *19.690* milliseconds.
+
+## <a id="topic44"></a>Examining Query Plans to Solve Problems
+
+If a query performs poorly, examine its query plan and ask the following questions:
+
+-   **Do operations in the plan take an exceptionally long time?** Look for an operation that consumes the majority of query processing time. For example, if a scan on a hash table takes longer than expected, the data locality may be low; reloading the data can increase the data locality and speed up the query. Or, adjust `enable_<operator>` parameters to see if you can force the legacy query optimizer (planner) to choose a different plan by disabling a particular query plan operator for that query (see the sketch after this list).
+-   **Are the optimizer's estimates close to reality?** Run `EXPLAIN ANALYZE` and see if the number of rows the optimizer estimates is close to the number of rows the query operation actually returns. If there is a large discrepancy, collect more statistics on the relevant columns.
+-   **Are selective predicates applied early in the plan?** Apply the most selective filters early in the plan so fewer rows move up the plan tree. If the query plan does not correctly estimate query predicate selectivity, collect more statistics on the relevant columns. You can also try reordering the `WHERE` clause of your SQL statement.
+-   **Does the optimizer choose the best join order?** When you have a query that joins multiple tables, make sure that the optimizer chooses the most selective join order. Joins that eliminate the largest number of rows should be done earlier in the plan so fewer rows move up the plan tree.
+
+    If the plan is not choosing the optimal join order, set `join_collapse_limit=1` and use explicit `JOIN` syntax in your SQL statement to force the legacy query optimizer (planner) to the specified join order. You can also collect more statistics on the relevant join columns.
+
+-   **Does the optimizer selectively scan partitioned tables?** If you use table partitioning, is the optimizer selectively scanning only the child tables required to satisfy the query predicates? Scans of the parent tables should return 0 rows since the parent tables do not contain any data. See [Verifying Your Partition Strategy](../ddl/ddl-partition.html#topic74) for an example of a query plan that shows a selective partition scan.
+-   **Does the optimizer choose hash aggregate and hash join operations where applicable?** Hash operations are typically much faster than other types of joins or aggregations. Row comparison and sorting are done in memory rather than by reading from and writing to disk. To enable the query optimizer to choose hash operations, there must be sufficient memory available to hold the estimated number of rows. Try increasing work memory to improve performance for a query. If possible, run an `EXPLAIN ANALYZE` for the query to show which plan operations spilled to disk, how much work memory they used, and how much memory was required to avoid spilling to disk. For example:
+
+    `Work_mem used: 23430K bytes avg, 23430K bytes max (seg0). Work_mem wanted: 33649K bytes avg, 33649K bytes max (seg0) to lessen workfile I/O affecting 2 workers.`
+
+    The "bytes wanted" message from `EXPLAIN               ANALYZE` is based on the amount of data written to work files and is not exact. The minimum `work_mem` needed can differ from the suggested value.
+
+## <a id="explainplan_plpgsql"></a>Generating EXPLAIN Plan from a PL/pgSQL Function
+
+User-defined PL/pgSQL functions often include dynamically created queries.  You may find it useful to generate the `EXPLAIN` plan for such queries for query performance optimization and tuning. 
+
+Perform the following steps to create and run a user-defined PL/pgSQL function.  This function displays the `EXPLAIN` plan for a simple query on a test database.
+
+1. Log in to the HAWQ master node as user `gpadmin` and set up the HAWQ environment:
+
+    ``` shell
+    $ ssh gpadmin@hawq_master
+    $ . /usr/local/hawq/greenplum_path.sh
+    ```
+
+2. Create a test database named `testdb`:
+
+    ``` shell
+    $ createdb testdb
+    ```
+   
+3. Start the PostgreSQL interactive utility, connecting to `testdb`:
+
+    ``` shell
+    $ psql -d testdb
+    ```
+
+4. Create the table `test_tbl` with a single column named `id` of type `integer`:
+
+    ``` sql
+    testdb=# CREATE TABLE test_tbl (id int);
+    ```
+   
+5. Add some data to the `test_tbl` table:
+
+    ``` sql
+    testdb=# INSERT INTO test_tbl SELECT generate_series(1,100);
+    ```
+   
+    This `INSERT` command adds 100 rows to `test_tbl`, incrementing the `id` for each row.
+   
+6. Create a PL/pgSQL function named `explain_plan_func()` by copying and pasting the following text at the `psql` prompt:
+
+    ``` sql
+    CREATE OR REPLACE FUNCTION explain_plan_func() RETURNS varchar as $$
+    declare
+        a varchar;
+        b varchar;
+    begin
+        a = '';
+        for b in execute 'explain select count(*) from test_tbl group by id' loop
+            a = a || E'\n' || b;
+        end loop;
+        return a;
+    end;
+    $$
+    LANGUAGE plpgsql
+    VOLATILE;
+    ```
+
+7. Verify the `explain_plan_func()` user-defined function was created successfully:
+
+    ``` shell
+    testdb=# \df+
+    ```
+
+    The `\df+` command lists all user-defined functions.
+   
+8. Perform a query using the user-defined function you just created:
+
+    ``` sql
+    testdb=# SELECT explain_plan_func();
+    ```
+
+    The `EXPLAIN` plan results for the query are displayed:
+    
+    ``` pre
+                                             explain_plan_func                               
+---------------------------------------------------------------------------------------------------------                                                                                             
+ Gather Motion 1:1  (slice2; segments: 1)  (cost=0.00..431.04 rows=100 width=8)                          
+   ->  Result  (cost=0.00..431.03 rows=100 width=8)                         
+         ->  HashAggregate  (cost=0.00..431.03 rows=100 width=8)                
+               Group By: id                                                 
+               ->  Redistribute Motion 1:1  (slice1; segments: 1)  (cost=0.00..431.02 rows=100 width=12) 
+                     Hash Key: id                                              
+                     ->  Result  (cost=0.00..431.01 rows=100 width=12)      
+                           ->  HashAggregate  (cost=0.00..431.01 rows=100 width=12)                      
+                                 Group By: id                               
+                                 ->  Table Scan on test_tbl  (cost=0.00..431.00 rows=100 width=4) 
+ Settings:  default_hash_table_bucket_number=6                              
+ Optimizer status: PQO version 1.627
+(1 row)
+    ```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/query/query.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/query/query.html.md.erb b/markdown/query/query.html.md.erb
new file mode 100644
index 0000000..9c218c7
--- /dev/null
+++ b/markdown/query/query.html.md.erb
@@ -0,0 +1,37 @@
+---
+title: Querying Data
+---
+
+This topic provides information about using SQL in HAWQ databases.
+
+You enter SQL statements called queries to view and analyze data in a database using the `psql` interactive SQL client and other client tools.
+
+**Note:** HAWQ queries time out after a period of 600 seconds. For this reason, long-running queries may appear to hang until results are processed or until the timeout period expires.
+
+-   **[About HAWQ Query Processing](../query/HAWQQueryProcessing.html)**
+
+    This topic provides an overview of how HAWQ processes queries. Understanding this process can be useful when writing and tuning queries.
+
+-   **[About GPORCA](../query/gporca/query-gporca-optimizer.html)**
+
+    In HAWQ, you can use GPORCA or the legacy query optimizer.
+
+-   **[Defining Queries](../query/defining-queries.html)**
+
+    HAWQ is based on the PostgreSQL implementation of the SQL standard. SQL commands are typically entered using the standard PostgreSQL interactive terminal `psql`, but other programs that have similar functionality can be used as well.
+
+-   **[Using Functions and Operators](../query/functions-operators.html)**
+
+    HAWQ evaluates functions and operators used in SQL expressions.
+
+-   **[Query Performance](../query/query-performance.html)**
+
+    HAWQ dynamically allocates resources to queries. Query performance depends on several factors, such as data locality, the number of virtual segments used for the query, and general cluster health.
+
+-   **[Query Profiling](../query/query-profiling.html)**
+
+    Examine the query plans of poorly performing queries to identify possible performance tuning opportunities.
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/CharacterSetSupportReference.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/CharacterSetSupportReference.html.md.erb b/markdown/reference/CharacterSetSupportReference.html.md.erb
new file mode 100644
index 0000000..8a12471
--- /dev/null
+++ b/markdown/reference/CharacterSetSupportReference.html.md.erb
@@ -0,0 +1,439 @@
+---
+title: Character Set Support Reference
+---
+
+This topic provides a reference of the character sets supported in HAWQ.
+
+The character set support in HAWQ allows you to store text in a variety of character sets, including single-byte character sets such as the ISO 8859 series and multiple-byte character sets such as EUC (Extended Unix Code), UTF-8, and Mule internal code. All supported character sets can be used transparently by clients, but a few are not supported for use within the server (that is, as a server-side encoding). The default character set is selected when you initialize your HAWQ cluster using `hawq init`. It can be overridden when you create a database, so you can have multiple databases, each with a different character set.
+
+<table style="width:100%;">
+<colgroup>
+<col width="16%" />
+<col width="16%" />
+<col width="16%" />
+<col width="16%" />
+<col width="16%" />
+<col width="16%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Name</th>
+<th>Description</th>
+<th>Language</th>
+<th>Server</th>
+<th>Bytes/Char</th>
+<th>Aliases</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>BIG5</td>
+<td>Big Five</td>
+<td>Traditional Chinese</td>
+<td>No</td>
+<td>1-2</td>
+<td>WIN950, Windows950</td>
+</tr>
+<tr class="even">
+<td>EUC_CN</td>
+<td>Extended UNIX Code-CN</td>
+<td>Simplified Chinese</td>
+<td>Yes</td>
+<td>1-3</td>
+<td></td>
+</tr>
+<tr class="odd">
+<td>EUC_JP</td>
+<td>Extended UNIX Code-JP</td>
+<td>Japanese</td>
+<td>Yes</td>
+<td>1-3</td>
+<td></td>
+</tr>
+<tr class="even">
+<td>EUC_KR</td>
+<td>Extended UNIX Code-KR</td>
+<td>Korean</td>
+<td>Yes</td>
+<td>1-3</td>
+<td></td>
+</tr>
+<tr class="odd">
+<td>EUC_TW</td>
+<td>Extended UNIX Code-TW</td>
+<td>Traditional Chinese, Taiwanese</td>
+<td>Yes</td>
+<td>1-3</td>
+<td></td>
+</tr>
+<tr class="even">
+<td>GB18030</td>
+<td>National Standard</td>
+<td>Chinese</td>
+<td>No</td>
+<td>1-2</td>
+<td></td>
+</tr>
+<tr class="odd">
+<td>GBK</td>
+<td>Extended National Standard</td>
+<td>Simplified Chinese</td>
+<td>No</td>
+<td>1-2</td>
+<td>WIN936,Windows936</td>
+</tr>
+<tr class="even">
+<td>ISO_8859_5</td>
+<td>ISO 8859-5, ECMA 113</td>
+<td>Latin/Cyrillic</td>
+<td>Yes</td>
+<td>1</td>
+<td></td>
+</tr>
+<tr class="odd">
+<td>ISO_8859_6</td>
+<td>ISO 8859-6, ECMA 114</td>
+<td>Latin/Arabic</td>
+<td>Yes</td>
+<td>1</td>
+<td></td>
+</tr>
+<tr class="even">
+<td>ISO_8859_7</td>
+<td>ISO 8859-7, ECMA 118</td>
+<td>Latin/Greek</td>
+<td>Yes</td>
+<td>1</td>
+<td></td>
+</tr>
+<tr class="odd">
+<td>ISO_8859_8</td>
+<td>ISO 8859-8, ECMA 121</td>
+<td>Latin/Hebrew</td>
+<td>Yes</td>
+<td>1</td>
+<td></td>
+</tr>
+<tr class="even">
+<td>JOHAB</td>
+<td>JOHA</td>
+<td>Korean (Hangul)</td>
+<td>Yes</td>
+<td>1-3</td>
+<td></td>
+</tr>
+<tr class="odd">
+<td>KOI8</td>
+<td>KOI8-R(U)</td>
+<td>Cyrillic</td>
+<td>Yes</td>
+<td>1</td>
+<td>KOI8R</td>
+</tr>
+<tr class="even">
+<td>LATIN1</td>
+<td>ISO 8859-1, ECMA 94</td>
+<td>Western European</td>
+<td>Yes</td>
+<td>1</td>
+<td>ISO88591</td>
+</tr>
+<tr class="odd">
+<td>LATIN2</td>
+<td>ISO 8859-2, ECMA 94</td>
+<td>Central European</td>
+<td>Yes</td>
+<td>1</td>
+<td>ISO88592</td>
+</tr>
+<tr class="even">
+<td>LATIN3</td>
+<td>ISO 8859-3, ECMA 94</td>
+<td>South European</td>
+<td>Yes</td>
+<td>1</td>
+<td>ISO88593</td>
+</tr>
+<tr class="odd">
+<td>LATIN4</td>
+<td>ISO 8859-4, ECMA 94</td>
+<td>North European</td>
+<td>Yes</td>
+<td>1</td>
+<td>ISO88594</td>
+</tr>
+<tr class="even">
+<td>LATIN5</td>
+<td>ISO 8859-9, ECMA 128</td>
+<td>Turkish</td>
+<td>Yes</td>
+<td>1</td>
+<td>ISO88599</td>
+</tr>
+<tr class="odd">
+<td>LATIN6</td>
+<td>ISO 8859-10, ECMA 144</td>
+<td>Nordic</td>
+<td>Yes</td>
+<td>1</td>
+<td>ISO885910</td>
+</tr>
+<tr class="even">
+<td>LATIN7</td>
+<td>ISO 8859-13</td>
+<td>Baltic</td>
+<td>Yes</td>
+<td>1</td>
+<td>ISO885913</td>
+</tr>
+<tr class="odd">
+<td>LATIN8</td>
+<td>ISO 8859-14</td>
+<td>Celtic</td>
+<td>Yes</td>
+<td>1</td>
+<td>ISO885914</td>
+</tr>
+<tr class="even">
+<td>LATIN9</td>
+<td>ISO 8859-15</td>
+<td>LATIN1 with Euro and accents</td>
+<td>Yes</td>
+<td>1</td>
+<td>ISO885915</td>
+</tr>
+<tr class="odd">
+<td>LATIN10</td>
+<td>ISO 8859-16, ASRO SR 14111</td>
+<td>Romanian</td>
+<td>Yes</td>
+<td>1</td>
+<td>ISO885916</td>
+</tr>
+<tr class="even">
+<td>MULE_INTERNAL</td>
+<td>Mule internal code</td>
+<td>Multilingual Emacs</td>
+<td>Yes</td>
+<td>1-4</td>
+<td></td>
+</tr>
+<tr class="odd">
+<td>SJIS</td>
+<td>Shift JIS</td>
+<td>Japanese</td>
+<td>No</td>
+<td>1-2</td>
+<td>Mskanji, ShiftJIS, WIN932, Windows932</td>
+</tr>
+<tr class="even">
+<td>SQL_ASCII</td>
+<td>unspecified (see note below)</td>
+<td>any</td>
+<td>No</td>
+<td>1</td>
+<td></td>
+</tr>
+<tr class="odd">
+<td>UHC</td>
+<td>Unified Hangul Code</td>
+<td>Korean</td>
+<td>No</td>
+<td>1-2</td>
+<td>WIN949, Windows949</td>
+</tr>
+<tr class="even">
+<td>UTF8</td>
+<td>Unicode, 8-bit</td>
+<td>all</td>
+<td>Yes</td>
+<td>1-4</td>
+<td>Unicode</td>
+</tr>
+<tr class="odd">
+<td>WIN866</td>
+<td>Windows CP866</td>
+<td>Cyrillic</td>
+<td>Yes</td>
+<td>1</td>
+<td>ALT</td>
+</tr>
+<tr class="even">
+<td>WIN874</td>
+<td>Windows CP874</td>
+<td>Thai</td>
+<td>Yes</td>
+<td>1</td>
+<td></td>
+</tr>
+<tr class="odd">
+<td>WIN1250</td>
+<td>Windows CP1250</td>
+<td>Central European</td>
+<td>Yes</td>
+<td>1</td>
+<td></td>
+</tr>
+<tr class="even">
+<td>WIN1251</td>
+<td>Windows CP1251</td>
+<td>Cyrillic</td>
+<td>Yes</td>
+<td>1</td>
+<td>WIN</td>
+</tr>
+<tr class="odd">
+<td>WIN1252</td>
+<td>Windows CP1252</td>
+<td>Western European</td>
+<td>Yes</td>
+<td><p>1</p></td>
+<td></td>
+</tr>
+<tr class="even">
+<td>WIN1253</td>
+<td>Windows CP1253</td>
+<td>Greek</td>
+<td>Yes</td>
+<td>1</td>
+<td></td>
+</tr>
+<tr class="odd">
+<td>WIN1254</td>
+<td>Windows CP1254</td>
+<td>Turkish</td>
+<td>Yes</td>
+<td>1</td>
+<td></td>
+</tr>
+<tr class="even">
+<td>WIN1255</td>
+<td>Windows CP1255</td>
+<td>Hebrew</td>
+<td>Yes</td>
+<td>1</td>
+<td></td>
+</tr>
+<tr class="odd">
+<td>WIN1256</td>
+<td>Windows CP1256</td>
+<td>Arabic</td>
+<td>Yes</td>
+<td>1</td>
+<td></td>
+</tr>
+<tr class="even">
+<td>WIN1257</td>
+<td>Windows CP1257</td>
+<td>Baltic</td>
+<td>Yes</td>
+<td>1</td>
+<td></td>
+</tr>
+<tr class="odd">
+<td>WIN1258</td>
+<td>Windows CP1258</td>
+<td>Vietnamese</td>
+<td>Yes</td>
+<td>1</td>
+<td>ABC, TCVN, TCVN5712, VSCII</td>
+</tr>
+</tbody>
+</table>
+
+**Note:**
+
+-   Not all the APIs support all the listed character sets. For example, the JDBC driver does not support MULE\_INTERNAL, LATIN6, LATIN8, and LATIN10.
+-   The SQL\_ASCII setting behaves considerably differently from the other settings. Byte values 0-127 are interpreted according to the ASCII standard, while byte values 128-255 are taken as uninterpreted characters. If you are working with any non-ASCII data, it is unwise to use the SQL\_ASCII setting as a client encoding. SQL\_ASCII is not supported as a server encoding.
+
+## <a id="settingthecharacterset"></a>Setting the Character Set
+
+`hawq init` defines the default character set for a HAWQ system by reading the setting of the `ENCODING` parameter in the `gp_init_config` file at initialization time. The default character set is UNICODE (UTF8).
+
+You can create a database with a character set other than the system-wide default. For example:
+
+``` sql
+CREATE DATABASE korean WITH ENCODING 'EUC_KR';
+```
+
+**Note:** Although you can specify any encoding you want for a database, it is unwise to choose an encoding that is not what is expected by the locale you have selected. The LC\_COLLATE and LC\_CTYPE settings imply a particular encoding, and locale-dependent operations (such as sorting) are likely to misinterpret data that is in an incompatible encoding.
+
+Since these locale settings are frozen by `hawq init`, the apparent flexibility to use different encodings in different databases is more theoretical than real.
+
+One way to use multiple encodings safely is to set the locale to C or POSIX at initialization time, thus disabling any real locale awareness.
+
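+To check which encoding each existing database uses, you can query the `pg_database` system catalog from the master. This is a minimal sketch using standard PostgreSQL-derived catalog objects:
+
+``` sql
+SELECT datname, pg_encoding_to_char(encoding) AS encoding
+FROM pg_database;
+```
+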
+## <a id="charactersetconversionbetweenserverandclient"></a>Character Set Conversion Between Server and Client
+
+HAWQ supports automatic character set conversion between server and client for certain character set combinations. The conversion information is stored in the `pg_conversion` system catalog table on the master. HAWQ comes with some predefined conversions, or you can create a new conversion using the SQL command `CREATE CONVERSION`.
+
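+For example, the predefined conversions can be listed by querying `pg_conversion`. This is a sketch; the catalog columns and the `pg_encoding_to_char()` function are standard PostgreSQL-derived objects:
+
+``` sql
+SELECT conname,
+       pg_encoding_to_char(conforencoding) AS source_encoding,
+       pg_encoding_to_char(contoencoding)  AS target_encoding,
+       condefault
+FROM pg_conversion
+ORDER BY conname;
+```
+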
+| Server Character Set | Available Client Sets                                                                                                          |
+|----------------------|--------------------------------------------------------------------------------------------------------------------------------|
+| BIG5                 | not supported as a server encoding                                                                                             |
+| EUC\_CN              | EUC\_CN, MULE\_INTERNAL, UTF8                                                                                                  |
+| EUC\_JP              | EUC\_JP, MULE\_INTERNAL, SJIS, UTF8                                                                                            |
+| EUC\_KR              | EUC\_KR, MULE\_INTERNAL, UTF8                                                                                                  |
+| EUC\_TW              | EUC\_TW, BIG5, MULE\_INTERNAL, UTF8                                                                                            |
+| GB18030              | not supported as a server encoding                                                                                             |
+| GBK                  | not supported as a server encoding                                                                                             |
+| ISO\_8859\_5         | ISO\_8859\_5, KOI8, MULE\_INTERNAL, UTF8, WIN866, WIN1251                                                                      |
+| ISO\_8859\_6         | ISO\_8859\_6, UTF8                                                                                                             |
+| ISO\_8859\_7         | ISO\_8859\_7, UTF8                                                                                                             |
+| ISO\_8859\_8         | ISO\_8859\_8, UTF8                                                                                                             |
+| JOHAB                | JOHAB, UTF8                                                                                                                    |
+| KOI8                 | KOI8, ISO\_8859\_5, MULE\_INTERNAL, UTF8, WIN866, WIN1251                                                                      |
+| LATIN1               | LATIN1, MULE\_INTERNAL, UTF8                                                                                                   |
+| LATIN2               | LATIN2, MULE\_INTERNAL, UTF8, WIN1250                                                                                          |
+| LATIN3               | LATIN3, MULE\_INTERNAL, UTF8                                                                                                   |
+| LATIN4               | LATIN4, MULE\_INTERNAL, UTF8                                                                                                   |
+| LATIN5               | LATIN5, UTF8                                                                                                                   |
+| LATIN6               | LATIN6, UTF8                                                                                                                   |
+| LATIN7               | LATIN7, UTF8                                                                                                                   |
+| LATIN8               | LATIN8, UTF8                                                                                                                   |
+| LATIN9               | LATIN9, UTF8                                                                                                                   |
+| LATIN10              | LATIN10, UTF8                                                                                                                  |
+| MULE\_INTERNAL       | MULE\_INTERNAL, BIG5, EUC\_CN, EUC\_JP, EUC\_KR, EUC\_TW, ISO\_8859\_5, KOI8, LATIN1 to LATIN4, SJIS, WIN866, WIN1250, WIN1251 |
+| SJIS                 | not supported as a server encoding                                                                                             |
+| SQL\_ASCII           | not supported as a server encoding                                                                                             |
+| UHC                  | not supported as a server encoding                                                                                             |
+| UTF8                 | all supported encodings                                                                                                        |
+| WIN866               | WIN866                                                                                                                         |
+| WIN874               | WIN874, UTF8                                                                                                                   |
+| WIN1250              | WIN1250, LATIN2, MULE\_INTERNAL, UTF8                                                                                          |
+| WIN1251              | WIN1251, ISO\_8859\_5, KOI8, MULE\_INTERNAL, UTF8, WIN866                                                                      |
+| WIN1252              | WIN1252, UTF8                                                                                                                  |
+| WIN1253              | WIN1253, UTF8                                                                                                                  |
+| WIN1254              | WIN1254, UTF8                                                                                                                  |
+| WIN1255              | WIN1255, UTF8                                                                                                                  |
+| WIN1256              | WIN1256, UTF8                                                                                                                  |
+| WIN1257              | WIN1257, UTF8                                                                                                                  |
+| WIN1258              | WIN1258, UTF8                                                                                                                  |
+
+To enable automatic character set conversion, you must tell HAWQ the character set (encoding) you would like to use in the client. There are several ways to accomplish this:
+
+-   Using the `\encoding` command in `psql`, which allows you to change the client encoding on the fly.
+-   Using `SET client_encoding TO`. Set the client encoding with this SQL command:
+
+    ``` sql
+    SET CLIENT_ENCODING TO 'value';
+    ```
+
+    To query the current client encoding:
+
+    ``` sql
+    SHOW client_encoding;
+    ```
+
+    To return the default encoding:
+
+    ``` sql
+    RESET client_encoding;
+    ```
+
+-   Using the PGCLIENTENCODING environment variable. When PGCLIENTENCODING is defined in the client's environment, that client encoding is automatically selected when a connection to the server is made. (This can subsequently be overridden using any of the other methods mentioned above.)
+-   Setting the configuration parameter client\_encoding. If client\_encoding is set in the master `hawq-site.xml` file, that client encoding is automatically selected when a connection to HAWQ is made. (This can subsequently be overridden using any of the other methods mentioned above.)
+
+If the conversion of a particular character is not possible (suppose you chose EUC\_JP for the server and LATIN1 for the client; some Japanese characters have no representation in LATIN1), then an error is reported.
+
+If the client character set is defined as SQL\_ASCII, encoding conversion is disabled, regardless of the server's character set. Using SQL\_ASCII is unwise unless you are working with all-ASCII data. SQL\_ASCII is not supported as a server encoding.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/HAWQDataTypes.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/HAWQDataTypes.html.md.erb b/markdown/reference/HAWQDataTypes.html.md.erb
new file mode 100644
index 0000000..fe5cff7
--- /dev/null
+++ b/markdown/reference/HAWQDataTypes.html.md.erb
@@ -0,0 +1,139 @@
+---
+title: Data Types
+---
+
+This topic provides a reference of the data types supported in HAWQ.
+
+HAWQ has a rich set of native data types available to users. Users may also define new data types using the `CREATE TYPE` command. This reference shows all of the built-in data types. In addition to the types listed here, there are also some internally used data types, such as **oid** (object identifier), but those are not documented in this guide.
+
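+For example, a simple composite type can be built from existing built-in types and then used like any other column type. This is a minimal sketch; the type name is hypothetical:
+
+``` sql
+-- Define a composite type from built-in types (hypothetical name).
+CREATE TYPE inventory_item AS (
+    name  text,
+    price numeric(9,2)
+);
+```
+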
+The following data types are specified by SQL:
+
+-   array (*)
+-   bit
+-   bit varying
+-   boolean
+-   character varying
+-   char
+-   character
+-   date
+-   decimal
+-   double precision
+-   integer
+-   interval
+-   numeric
+-   real
+-   smallint
+-   time (with or without time zone)
+-   timestamp (with or without time zone)
+-   varchar
+
+**Note** (\*): HAWQ supports the array data type for append-only tables; Parquet table storage does *not* support the array type.
+
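+As a sketch of the append-only case, the following creates a table with a one-dimensional array column. The table name is hypothetical, and the explicit `WITH (appendonly=true)` clause is an assumption about your storage defaults:
+
+``` sql
+-- Array columns are supported for append-only (non-Parquet) storage.
+CREATE TABLE event_log (
+    id   bigint,
+    tags text[]
+) WITH (appendonly=true);
+```
+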
+Each data type has an external representation determined by its input and output functions. Many of the built-in types have obvious external formats. However, several types are unique to HAWQ, such as geometric paths, or have several possibilities for formats, such as the date and time types. Some of the input and output functions are not invertible. That is, the result of an output function may lose accuracy when compared to the original input.
+
+<span class="tablecap">Table 1. HAWQ Built-in Data Types</span>
+
+| Name                                       | Alias               | Size                  | Range                                       | Description                                                                       |
+|--------------------------------------------|---------------------|-----------------------|---------------------------------------------|-----------------------------------------------------------------------------------|
+| array                                     |          [ ]       |    variable (ignored)    | multi-dimensional |   any built-in or user-defined base type, enum type, or composite type                                                               |
+| bigint                                     | int8                | 8 bytes               | -9223372036854775808 to 9223372036854775807 | large range integer                                                               |
+| bigserial                                  | serial8             | 8 bytes               | 1 to 9223372036854775807                    | large autoincrementing integer                                                    |
+| bit \[ (n) \]                              |                     | n bits                | bit string constant                         | fixed-length bit string                                                           |
+| bit varying \[ (n) \]                      | varbit              | actual number of bits | bit string constant                         | variable-length bit string                                                        |
+| boolean                                    | bool                | 1 byte                | true/false, t/f, yes/no, y/n, 1/0           | logical Boolean (true/false)                                                      |
+| box                                        |                     | 32 bytes              | ((x1,y1),(x2,y2))                           | rectangular box in the plane - not allowed in distribution key columns.           |
+| bytea                                      |                     | 1 byte + binary string | sequence of octets                          | variable-length binary string                                                     |
+| character \[ (n) \]                        | char \[ (n) \]      | 1 byte + n            | strings up to n characters in length        | fixed-length, blank padded                                                        |
+| character varying \[ (n) \]                | varchar \[ (n) \]   | 1 byte + binary string | strings up to n characters in length        | variable-length with limit                                                        |
+| cidr                                       |                     | 12 or 24 bytes        |                                             | IPv4 networks                                                                     |
+| circle                                     |                     | 24 bytes              | &lt;(x,y),r&gt; (center and radius)         | circle in the plane - not allowed in distribution key columns.                    |
+| date                                       |                     | 4 bytes               | 4713 BC - 294,277 AD                        | calendar date (year, month, day)                                                  |
+| decimal \[ (p, s) \]                       | numeric \[ (p,s) \] | variable              | no limit                                    | user-specified, inexact                                                           |
+| double precision                           | float8, float       | 8 bytes               | 15 decimal digits precision                 | variable-precision, inexact                                                       |
+| inet                                       |                     | 12 or 24 bytes        |                                             | IPv4 hosts and networks                                                           |
+| integer                                    | int, int4           | 4 bytes               | -2147483648 to +2147483647                  | usual choice for integer                                                          |
+| interval \[ (p) \]                         |                     | 12 bytes              | -178000000 years - 178000000 years          | time span                                                                         |
+| lseg                                       |                     | 32 bytes              | ((x1,y1),(x2,y2))                           | line segment in the plane - not allowed in distribution key columns.              |
+| macaddr                                    |                     | 6 bytes               |                                             | MAC addresses                                                                     |
+| money                                      |                     | 4 bytes               | -21474836.48 to +21474836.47                | currency amount                                                                   |
+| path                                       |                     | 16+16n bytes          | \[(x1,y1),...\]                             | geometric path in the plane - not allowed in distribution key columns.            |
+| point                                      |                     | 16 bytes              | (x, y)                                      | geometric point in the plane - not allowed in distribution key columns.           |
+| polygon                                    |                     | 40+16n bytes          | \[(x1,y1),...\]                             | closed geometric path in the plane - not allowed in the distribution key columns. |
+| real                                       | float4              | 4 bytes               | 6 decimal digits precision                  | variable-precision, inexact                                                       |
+| serial                                     | serial4             | 4 bytes               | 1 to 2147483647                             | autoincrementing integer                                                          |
+| smallint                                   | int2                | 2 bytes               | -32768 to +32767                            | small range integer                                                               |
+| text                                       |                     | 1 byte + string size  | strings of any length                       | variable unlimited length                                                         |
+| time \[ (p) \] \[ without time zone \]     |                     | 8 bytes               | 00:00:00\[.000000\] - 24:00:00\[.000000\]   | time of day only                                                                  |
+| time \[ (p) \] with time zone              | timetz              | 12 bytes              | 00:00:00+1359 - 24:00:00-1359               | time of day only, with time zone                                                  |
+| timestamp \[ (p) \] \[without time zone \] |                     | 8 bytes               | 4713 BC - 294,277 AD                        | both date and time                                                                |
+| timestamp \[ (p) \] with time zone         | timestamptz         | 8 bytes               | 4713 BC - 294,277 AD                        | both date and time, with time zone                                                |
+| xml                                        |                     | 1 byte + xml size     | xml of any length                           | variable unlimited length                                                         |
+
+ 
+For variable-length data types (such as char, varchar, text, and xml), if the data is greater than or equal to 127 bytes, the storage overhead is 4 bytes instead of 1.
+
+**Note**: Use these documented built-in types when creating user tables.  Any other data types that might be visible in the source code are for internal use only.
+
+## <a id="timezones"></a>Time Zones
+
+Time zones, and time-zone conventions, are influenced by political decisions, not just earth geometry. Time zones around the world became somewhat standardized during the 1900s, but continue to be prone to arbitrary changes, particularly with respect to daylight-savings rules. HAWQ uses the widely used zoneinfo time zone database for information about historical time zone rules. For times in the future, the assumption is that the latest known rules for a given time zone will continue to be observed indefinitely far into the future.
+
+HAWQ is compatible with the SQL standard definitions for typical usage. However, the SQL standard has an odd mix of date and time types and capabilities. Two obvious problems are:
+
+-   Although the date type cannot have an associated time zone, the time type can. Time zones in the real world have little meaning unless associated with a date as well as a time, since the offset can vary through the year with daylight-saving time boundaries.
+-   The default time zone is specified as a constant numeric offset from UTC. It is therefore impossible to adapt to daylight-saving time when doing date/time arithmetic across DST boundaries.
+
+To address these difficulties, use date/time types that contain both date and time when using time zones. Do not use the type time with time zone (although HAWQ supports this for legacy applications and for compliance with the SQL standard). HAWQ assumes your local time zone for any type containing only date or time.
+
+All timezone-aware dates and times are stored internally in UTC. They are converted to local time in the zone specified by the `timezone` configuration parameter before being displayed to the client.
+
+HAWQ allows you to specify time zones in three different forms:
+
+-   A full time zone name, for example America/New\_York. HAWQ uses the widely used zoneinfo time zone data for this purpose, so the same names are also recognized by many other software packages.
+-   A time zone abbreviation, for example PST. Such a specification merely defines a particular offset from UTC, in contrast to full time zone names, which can imply a set of daylight-savings transition-date rules as well. You cannot set the configuration parameters timezone or log\_timezone to a time zone abbreviation, but you can use abbreviations in date/time input values and with the AT TIME ZONE operator.
+-   In addition to the time zone names and abbreviations, HAWQ accepts POSIX-style time zone specifications of the form STDoffset or STDoffsetDST, where STD is a zone abbreviation, offset is a numeric offset in hours west from UTC, and DST is an optional daylight-savings zone abbreviation, assumed to stand for one hour ahead of the given offset. For example, if EST5EDT were not already a recognized zone name, it would be accepted and would be functionally equivalent to United States East Coast time. When a daylight-savings zone name is present, it is assumed to be used according to the same daylight-savings transition rules used in the zoneinfo time zone database's posixrules entry. In a standard HAWQ installation, posixrules is the same as US/Eastern, so that POSIX-style time zone specifications follow USA daylight-savings rules. If needed, you can adjust this behavior by replacing the posixrules file. (See the sketch after this list.)
+
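+The following sketch exercises each of the three forms in a session; the zone values shown are ordinary zoneinfo, abbreviation, and POSIX-style examples, not HAWQ-specific settings:
+
+``` sql
+SET TIME ZONE 'America/New_York';   -- full zoneinfo name (follows DST rules)
+SET TIME ZONE 'EST5EDT';            -- POSIX-style specification
+SELECT '2017-01-06 17:32:00+00'::timestamp with time zone
+       AT TIME ZONE 'PST';          -- abbreviation: a fixed offset from UTC
+SHOW timezone;
+```
+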
+In short, this is the difference between abbreviations and full names: abbreviations always represent a fixed offset from UTC, whereas most of the full names imply a local daylight-savings time rule, and so have two possible UTC offsets.
+
+One should be wary that the POSIX-style time zone feature can lead to silently accepting bogus input, since there is no check on the reasonableness of the zone abbreviations. For example, `SET TIMEZONE TO FOOBAR0` will work, leaving the system effectively using a rather peculiar abbreviation for UTC. Another issue to keep in mind is that in POSIX time zone names, positive offsets are used for locations west of Greenwich. Everywhere else, PostgreSQL follows the ISO-8601 convention that positive timezone offsets are east of Greenwich.
+
+In all cases, timezone names are recognized case-insensitively.
+
+Neither full names nor abbreviations are hard-wired into the server; see [Date and Time Configuration Files](#dateandtimeconfigurationfiles).
+
+The `timezone` configuration parameter can be set in the file `hawq-site.xml`. There are also several special ways to set it:
+
+-   If `timezone` is not specified in `hawq-site.xml` or as a server command-line option, the server attempts to use the value of the `TZ` environment variable as the default time zone. If `TZ` is not defined or is not any of the time zone names known to PostgreSQL, the server attempts to determine the operating system's default time zone by checking the behavior of the C library function `localtime()`. The default time zone is selected as the closest match from the known time zones.
+-   The SQL command `SET TIME ZONE` sets the time zone for the session. This is an alternative spelling of `SET TIMEZONE TO` with a more SQL-spec-compatible syntax.
+-   The `PGTZ` environment variable is used by libpq clients to send a `SET TIME ZONE` command to the server upon connection.
+
+## <a id="dateandtimeconfigurationfiles"></a>Date and Time Configuration Files
+
+Since timezone abbreviations are not well standardized, HAWQ provides a means to customize the set of abbreviations accepted by the server. The `timezone_abbreviations` run-time parameter determines the active set of abbreviations. While this parameter can be altered by any database user, the possible values for it are under the control of the database administrator; they are in fact names of configuration files stored in `.../share/timezonesets/` of the installation directory. By adding or altering files in that directory, the administrator can set local policy for timezone abbreviations.
+
+`timezone_abbreviations` can be set to any file name found in `.../share/timezonesets/`, if the file's name is entirely alphabetic. (The prohibition against non-alphabetic characters in `timezone_abbreviations` prevents reading files outside the intended directory, as well as reading editor backup files and other extraneous files.)
+
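+For example, a session can switch to the Australian abbreviation set and confirm the active value. This is a sketch of the run-time parameter described above:
+
+``` sql
+SET timezone_abbreviations TO 'Australia';  -- selects a file under .../share/timezonesets/
+SHOW timezone_abbreviations;
+```
+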
+A timezone abbreviation file can contain blank lines and comments beginning with `#`. Non-comment lines must have one of these formats:
+
+``` pre
+time_zone_name offset
+time_zone_name offset D
+@INCLUDE file_name
+@OVERRIDE
+```
+
+A `time_zone_name` is just the abbreviation being defined. The `offset` is the zone's offset in seconds from UTC, positive being east from Greenwich and negative being west. For example, -18000 would be five hours west of Greenwich, or North American east coast standard time. `D` indicates that the zone name represents local daylight-savings time rather than standard time. Since all known time zone offsets are on 15-minute boundaries, the number of seconds has to be a multiple of 900.
+
+The `@INCLUDE` syntax allows inclusion of another file in the `.../share/timezonesets/` directory. Inclusion can be nested, to a limited depth.
+
+The `@OVERRIDE` syntax indicates that subsequent entries in the file can override previous entries (i.e., entries obtained from included files). Without this, conflicting definitions of the same timezone abbreviation are considered an error.
+
+In an unmodified installation, the file `Default` contains all the non-conflicting time zone abbreviations for most of the world. Additional files `Australia` and `India` are provided for those regions: these files first include the `Default` file and then add or modify timezones as needed.
+
+For reference purposes, a standard installation also contains files `Africa.txt`, `America.txt`, and so on, containing information about every time zone abbreviation known to be in use according to the zoneinfo timezone database. The zone name definitions found in these files can be copied and pasted into a custom configuration file as needed.
+
+**Note:** These files cannot be directly referenced as `timezone_abbreviations` settings, because of the dot embedded in their names.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/HAWQEnvironmentVariables.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/HAWQEnvironmentVariables.html.md.erb b/markdown/reference/HAWQEnvironmentVariables.html.md.erb
new file mode 100644
index 0000000..ce21798
--- /dev/null
+++ b/markdown/reference/HAWQEnvironmentVariables.html.md.erb
@@ -0,0 +1,97 @@
+---
+title: Environment Variables
+---
+
+This topic contains a reference of the environment variables that you set for HAWQ.
+
+Set these in your user's startup shell profile (such as `~/.bashrc` or `~/.bash_profile`), or in `/etc/profile` if you want to set them for all users.
+
+## <a id="requiredenvironmentvariables"></a>Required Environment Variables
+
+**Note:** `GPHOME`, `PATH` and `LD_LIBRARY_PATH` can be set by sourcing the `greenplum_path.sh` file from your HAWQ installation directory.
+
+### <a id="gphome"></a>GPHOME
+
+This is the installed location of your HAWQ software. For example:
+
+``` pre
+GPHOME=/usr/local/hawq  
+export GPHOME
+```
+
+### <a id="path"></a>PATH
+
+Your `PATH` environment variable should point to the location of the HAWQ bin directory. For example:
+
+``` pre
+PATH=$GPHOME/bin:$PATH
+export PATH 
+```
+
+### <a id="ld_library_path"></a>LD\_LIBRARY\_PATH
+
+The `LD_LIBRARY_PATH` environment variable should point to the location of the `HAWQ/PostgreSQL` library files. For example:
+
+``` pre
+LD_LIBRARY_PATH=$GPHOME/lib
+export LD_LIBRARY_PATH
+```
+
+## <a id="optionalenvironmentvariables"></a>Optional Environment Variables
+
+The following are HAWQ environment variables. You may want to add the connection-related environment variables to your profile, for convenience. That way, you do not have to type so many options on the command line for client connections. Note that these environment variables should be set on the HAWQ master host only.
+
+
+### <a id="pgappname"></a>PGAPPNAME
+
+This is the name of the application that is usually set by an application when it connects to the server. This name is displayed in the activity view and in log entries. The `PGAPPNAME` environmental variable behaves the same as the `application_name` connection parameter. The default value for `application_name` is `psql`. The name cannot be longer than 63 characters.
+
+### <a id="pgdatabase"></a>PGDATABASE
+
+The name of the default database to use when connecting.
+
+### <a id="pghost"></a>PGHOST
+
+The HAWQ master host name.
+
+### <a id="pghostaddr"></a>PGHOSTADDR
+
+The numeric IP address of the master host. This can be set instead of, or in addition to, `PGHOST`, to avoid DNS lookup overhead.
+
+### <a id="pgpassword"></a>PGPASSWORD
+
+The password used if the server demands password authentication. Use of this environment variable is not recommended, for security reasons (some operating systems allow non-root users to see process environment variables via ps). Instead, consider using the `~/.pgpass` file.
+
+### <a id="pgpassfile"></a>PGPASSFILE
+
+The name of the password file to use for lookups. If not set, it defaults to `~/.pgpass`.
+
+See The Password File under [Configuring Client Authentication](../clientaccess/client_auth.html).
+
+### <a id="pgoptions"></a>PGOPTIONS
+
+Sets additional configuration parameters for the HAWQ master server.
+
+### <a id="pgport"></a>PGPORT
+
+The port number of the HAWQ server on the master host. The default port is 5432.
+
+### <a id="pguser"></a>PGUSER
+
+The HAWQ user name used to connect.
+
+### <a id="pgdatestyle"></a>PGDATESTYLE
+
+Sets the default style of date/time representation for a session. (Equivalent to `SET datestyle TO ...`.)
+
+### <a id="pgtz"></a>PGTZ
+
+Sets the default time zone for a session. (Equivalent to `SET timezone TO ...`.)
+
+### <a id="pgclientencoding"></a>PGCLIENTENCODING
+
+Sets the default client character set encoding for a session. (Equivalent to `SET client_encoding TO ...`.)
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/HAWQSampleSiteConfig.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/HAWQSampleSiteConfig.html.md.erb b/markdown/reference/HAWQSampleSiteConfig.html.md.erb
new file mode 100644
index 0000000..d4cae5a
--- /dev/null
+++ b/markdown/reference/HAWQSampleSiteConfig.html.md.erb
@@ -0,0 +1,120 @@
+---
+title: Sample hawq-site.xml Configuration File
+---
+
+```xml
+<configuration>
+        <property>
+                <name>default_hash_table_bucket_number</name>
+                <value>18</value>
+        </property>
+
+        <property>
+                <name>hawq_dfs_url</name>
+                <value>hawq.example.com:8020/hawq_default</value>
+        </property>
+
+        <property>
+                <name>hawq_global_rm_type</name>
+                <value>none</value>
+        </property>
+
+        <property>
+                <name>hawq_master_address_host</name>
+                <value>hawq.example.com</value>
+        </property>
+
+        <property>
+                <name>hawq_master_address_port</name>
+                <value>5432</value>
+        </property>
+
+        <property>
+                <name>hawq_master_directory</name>
+                <value>/data/hawq/master</value>
+        </property>
+
+        <property>
+                <name>hawq_master_temp_directory</name>
+                <value>/tmp/hawq/master</value>
+        </property>
+
+        <property>
+                <name>hawq_re_cgroup_hierarchy_name</name>
+                <value>hawq</value>
+        </property>
+
+        <property>
+                <name>hawq_re_cgroup_mount_point</name>
+                <value>/sys/fs/cgroup</value>
+        </property>
+
+        <property>
+                <name>hawq_re_cpu_enable</name>
+                <value>false</value>
+        </property>
+
+        <property>
+                <name>hawq_rm_memory_limit_perseg</name>
+                <value>64GB</value>
+        </property>
+
+        <property>
+                <name>hawq_rm_nvcore_limit_perseg</name>
+                <value>16</value>
+        </property>
+
+        <property>
+                <name>hawq_rm_nvseg_perquery_limit</name>
+                <value>512</value>
+        </property>
+
+        <property>
+                <name>hawq_rm_nvseg_perquery_perseg_limit</name>
+                <value>6</value>
+        </property>
+
+        <property>
+                <name>hawq_rm_yarn_address</name>
+                <value>rm.example.com:8050</value>
+        </property>
+
+        <property>
+                <name>hawq_rm_yarn_app_name</name>
+                <value>hawq</value>
+        </property>
+
+        <property>
+                <name>hawq_rm_yarn_queue_name</name>
+                <value>default</value>
+        </property>
+
+        <property>
+                <name>hawq_rm_yarn_scheduler_address</name>
+                <value>rm.example.com:8030</value>
+        </property>
+
+        <property>
+                <name>hawq_segment_address_port</name>
+                <value>40000</value>
+        </property>
+
+        <property>
+                <name>hawq_segment_directory</name>
+                <value>/data/hawq/segment</value>
+        </property>
+
+        <property>
+                <name>hawq_segment_temp_directory</name>
+                <value>/tmp/hawq/segment</value>
+        </property>
+
+        <property>
+                <name>hawq_standby_address_host</name>
+                <value>standbyhost.example.com</value>
+        </property>
+
+</configuration>
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/HAWQSiteConfig.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/HAWQSiteConfig.html.md.erb b/markdown/reference/HAWQSiteConfig.html.md.erb
new file mode 100644
index 0000000..3d20297
--- /dev/null
+++ b/markdown/reference/HAWQSiteConfig.html.md.erb
@@ -0,0 +1,23 @@
+---
+title: Server Configuration Parameter Reference
+---
+
+This section describes all server configuration parameters (GUCs) that are available in HAWQ.
+
+Configuration parameters are located in `$GPHOME/etc/hawq-site.xml`. This configuration file resides on all HAWQ instances and is managed either by Ambari or by using the `hawq config` utility. On HAWQ clusters installed and managed by Ambari, always use the Ambari administration interface, and not `hawq config`, to configure HAWQ properties; Ambari will overwrite any changes made using `hawq config`.
+
+You can use the same configuration file cluster-wide across both master and segments.
+
+**Note:** While `postgresql.conf` still exists in HAWQ, any parameters defined in `hawq-site.xml` override the corresponding settings in `postgresql.conf`. For this reason, we recommend that you use only `hawq-site.xml` to configure your HAWQ cluster.
+
+**Note:** If you install and manage HAWQ using Ambari, be aware that any property changes to `hawq-site.xml` made using the command line could be overwritten by Ambari. For Ambari-managed HAWQ clusters, always use the Ambari administration interface to set or change HAWQ configuration properties.
+
+-   **[About Server Configuration Parameters](../reference/guc/guc_config.html)**
+
+-   **[Configuration Parameter Categories](../reference/guc/guc_category-list.html)**
+
+-   **[Configuration Parameters](../reference/guc/parameter_definitions.html)**
+
+-   **[Sample hawq-site.xml Configuration File](../reference/HAWQSampleSiteConfig.html)**
+
+


[04/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/pxf/JsonPXF.html.md.erb
----------------------------------------------------------------------
diff --git a/pxf/JsonPXF.html.md.erb b/pxf/JsonPXF.html.md.erb
deleted file mode 100644
index 97195ad..0000000
--- a/pxf/JsonPXF.html.md.erb
+++ /dev/null
@@ -1,197 +0,0 @@
----
-title: Accessing JSON File Data
----
-
-The PXF JSON plug-in reads native JSON stored in HDFS.  The plug-in supports common data types, as well as basic (N-level) projection and arrays.
-
-To access JSON file data with HAWQ, the data must be stored in HDFS and an external table created from the HDFS data store.
-
-## Prerequisites<a id="jsonplugprereq"></a>
-
-Before working with JSON file data using HAWQ and PXF, ensure that:
-
--   The PXF HDFS plug-in is installed on all cluster nodes.
--   The PXF JSON plug-in is installed on all cluster nodes.
--   You have tested PXF on HDFS.
-
-
-## Working with JSON Files<a id="topic_workwjson"></a>
-
-JSON is a text-based data-interchange format.  JSON data is typically stored in a file with a `.json` suffix. A `.json` file will contain a collection of objects.  A JSON object is a collection of unordered name/value pairs.  A value can be a string, a number, true, false, null, or an object or array. Objects and arrays can be nested.
-
-Refer to [Introducing JSON](http://www.json.org/) for specific information on JSON syntax.
-
-Sample JSON data file content:
-
-``` json
-  {
-    "created_at":"MonSep3004:04:53+00002013",
-    "id_str":"384529256681725952",
-    "user": {
-      "id":31424214,
-       "location":"COLUMBUS"
-    },
-    "coordinates":null
-  }
-```
-
-### JSON to HAWQ Data Type Mapping<a id="topic_workwjson"></a>
-
-To represent JSON data in HAWQ, map data values that use a primitive data type to HAWQ columns of the same type. JSON supports complex data types including projections and arrays. Use N-level projection to map members of nested objects and arrays to primitive data types.
-
-The following table summarizes external mapping rules for JSON data.
-
-<caption><span class="tablecap">Table 1. JSON Mapping</span></caption>
-
-<a id="topic_table_jsondatamap"></a>
-
-| JSON Data Type                                                    | HAWQ Data Type                                                                                                                                                                                            |
-|-------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Primitive type (integer, float, string, boolean, null) | Use the corresponding HAWQ built-in data type; see [Data Types](../reference/HAWQDataTypes.html). |
-| Array                         | Use `[]` brackets to identify a specific array index to a member of primitive type.                                                                                            |
-| Object                | Use dot `.` notation to specify each level of projection (nesting) to a member of a primitive type.                                                                                         |
-
-
-### JSON File Read Modes<a id="topic_jsonreadmodes"></a>
-
-
-The PXF JSON plug-in reads data in one of two modes. The default mode expects one full JSON record per line.  The JSON plug-in also supports a read mode operating on multi-line JSON records.
-
-In the following discussion, a data set defined by a sample schema will be represented using each read mode of the PXF JSON plug-in.  The sample schema contains data fields with the following names and data types:
-
-   - "created_at" - text
-   - "id_str" - text
-   - "user" - object
-      - "id" - integer
-      - "location" - text
-   - "coordinates" - object (optional)
-      - "type" - text
-      - "values" - array
-         - [0] - integer
-         - [1] - integer
-
-
-Example 1 - Data Set for Single-JSON-Record-Per-Line Read Mode:
-
-``` pre
-{"created_at":"FriJun0722:45:03+00002013","id_str":"343136551322136576","user":{
-"id":395504494,"location":"NearCornwall"},"coordinates":{"type":"Point","values"
-: [ 6, 50 ]}},
-{"created_at":"FriJun0722:45:02+00002013","id_str":"343136547115253761","user":{
-"id":26643566,"location":"Austin,Texas"}, "coordinates": null},
-{"created_at":"FriJun0722:45:02+00002013","id_str":"343136547136233472","user":{
-"id":287819058,"location":""}, "coordinates": null}
-```  
-
-Example 2 - Data Set for Multi-Line JSON Record Read Mode:
-
-``` json
-{
-  "root":[
-    {
-      "record_obj":{
-        "created_at":"MonSep3004:04:53+00002013",
-        "id_str":"384529256681725952",
-        "user":{
-          "id":31424214,
-          "location":"COLUMBUS"
-        },
-        "coordinates":null
-      },
-      "record_obj":{
-        "created_at":"MonSep3004:04:54+00002013",
-        "id_str":"384529260872228864",
-        "user":{
-          "id":67600981,
-          "location":"KryberWorld"
-        },
-        "coordinates":{
-          "type":"Point",
-          "values":[
-             8,
-             52
-          ]
-        }
-      }
-    }
-  ]
-}
-```
-
-## Loading JSON Data to HDFS<a id="jsontohdfs"></a>
-
-The PXF JSON plug-in reads native JSON stored in HDFS. Before JSON data can be queried via HAWQ, it must first be loaded to an HDFS data store.
-
-Copy and paste the single line JSON record data set to a file named `singleline.json`.  Similarly, copy and paste the multi-line JSON record data set to `multiline.json`.
-
-**Note**:  Ensure there are **no** blank lines in your JSON files.
-
-Add the data set files to the HDFS data store:
-
-``` shell
-$ hdfs dfs -mkdir /user/data
-$ hdfs dfs -put singleline.json /user/data
-$ hdfs dfs -put multiline.json /user/data
-```
-
-Once loaded to HDFS, JSON data may be queried and analyzed via HAWQ.
-
-## Querying External JSON Data<a id="jsoncetsyntax1"></a>
-
-Use the following syntax to create an external table representing JSON data:
-
-``` sql
-CREATE EXTERNAL TABLE <table_name> 
-    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
-LOCATION ( 'pxf://<host>[:<port>]/<path-to-data>?PROFILE=Json[&IDENTIFIER=<value>]' )
-      FORMAT 'CUSTOM' ( FORMATTER='pxfwritable_import' );
-```
-JSON-plug-in-specific keywords and values used in the `CREATE EXTERNAL TABLE` call are described below.
-
-| Keyword  | Value |
-|-------|-------------------------------------|
-| \<host\>    | Specify the HDFS NameNode in the \<host\> field. |
-| PROFILE    | The `PROFILE` keyword must specify the value `Json`. |
-| IDENTIFIER  | Include the `IDENTIFIER` keyword and \<value\> in the `LOCATION` string only when accessing a JSON file with multi-line records. \<value\> should identify the member name used to determine the encapsulating JSON object to return.  (If the JSON file is the multi-line record Example 2 above, `&IDENTIFIER=created_at` would be specified.) |  
-| FORMAT    | The `FORMAT` clause must specify `CUSTOM`. |
-| FORMATTER    | The JSON `CUSTOM` format supports only the built-in `pxfwritable_import` `FORMATTER`. |
-
-
-### Example 1 <a id="jsonexample1"></a>
-
-The following [CREATE EXTERNAL TABLE](../reference/sql/CREATE-EXTERNAL-TABLE.html) SQL call creates a queryable external table based on the data in the single-line-per-record JSON example.
-
-``` sql 
-CREATE EXTERNAL TABLE sample_json_singleline_tbl(
-  created_at TEXT,
-  id_str TEXT,
-  text TEXT,
-  "user.id" INTEGER,
-  "user.location" TEXT,
-  "coordinates.values[0]" INTEGER,
-  "coordinates.values[1]" INTEGER
-)
-LOCATION('pxf://namenode:51200/user/data/singleline.json?PROFILE=Json')
-FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
-SELECT * FROM sample_json_singleline_tbl;
-```
-
-Notice the use of `.` projection to access the nested fields in the `user` and `coordinates` objects.  Also notice the use of `[]` to access the specific elements of the `coordinates.values` array.
-
-### Example 2 <a id="jsonexample2"></a>
-
-A `CREATE EXTERNAL TABLE` SQL call to create a queryable external table based on the multi-line-per-record JSON data set would be very similar to that of the single line data set above. You might specify a different database name, `sample_json_multiline_tbl` for example. 
-
-The `LOCATION` clause would differ.  The `IDENTIFIER` keyword and an associated value must be specified when reading from multi-line JSON records:
-
-``` sql
-LOCATION('pxf://namenode:51200/user/data/multiline.json?PROFILE=Json&IDENTIFIER=created_at')
-```
-
-`created_at` identifies the member name used to determine the encapsulating JSON object, `record_obj` in this case.
-
-To query this external table populated with JSON data:
-
-``` sql
-SELECT * FROM sample_json_multiline_tbl;
-```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/pxf/PXFExternalTableandAPIReference.html.md.erb
----------------------------------------------------------------------
diff --git a/pxf/PXFExternalTableandAPIReference.html.md.erb b/pxf/PXFExternalTableandAPIReference.html.md.erb
deleted file mode 100644
index 292616b..0000000
--- a/pxf/PXFExternalTableandAPIReference.html.md.erb
+++ /dev/null
@@ -1,1311 +0,0 @@
----
-title: PXF External Tables and API
----
-
-You can use the PXF API to create your own connectors to access any other type of parallel data store or processing engine.
-
-The PXF Java API lets you extend PXF functionality and add new services and formats without changing HAWQ. The API includes three classes that are extended to allow HAWQ to access an external data source: Fragmenter, Accessor, and Resolver.
-
-The Fragmenter produces a list of data fragments that can be read in parallel from the data source. The Accessor produces a list of records from a single fragment, and the Resolver both deserializes and serializes records.
-
-Together, the Fragmenter, Accessor, and Resolver classes implement a connector. PXF includes plug-ins for tables in HDFS, HBase, and Hive.
-
-## <a id="creatinganexternaltable"></a>Creating an External Table
-
-The syntax for a readable `EXTERNAL TABLE` that uses the PXF protocol is as follows:
-
-``` sql
-CREATE [READABLE|WRITABLE] EXTERNAL TABLE table_name
-        ( column_name data_type [, ...] | LIKE other_table )
-LOCATION('pxf://host[:port]/path-to-data<pxf parameters>[&custom-option=value...]')
-FORMAT 'custom' (formatter='pxfwritable_import|pxfwritable_export');
-```
-
-where *&lt;pxf parameters&gt;* is:
-
-``` pre
-   ?FRAGMENTER=fragmenter_class&ACCESSOR=accessor_class&RESOLVER=resolver_class]
- | ?PROFILE=profile-name
-```
-<caption><span class="tablecap">Table 1. Parameter values and description</span></caption>
-
-<a id="creatinganexternaltable__table_pfy_htz_4p"></a>
-
-| Parameter               | Value and description                                                                                                                                                                                                                                                          |
-|-------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| host                    | The current host of the PXF service.                                                                                                                                                                                                                                           |
-| port                    | Connection port for the PXF service. If the port is omitted, PXF assumes that High Availability (HA) is enabled and connects to the HA name service port, 51200 by default. The HA name service port can be changed by setting the `pxf_service_port` configuration parameter. |
-| *path\_to\_data*        | A directory, file name, wildcard pattern, table name, etc.                                                                                                                                                                                                                     |
-| FRAGMENTER              | The plug-in (Java class) to use for fragmenting data. Used for READABLE external tables only.                                                                                                                                                                                   |
-| ACCESSOR                | The plug-in (Java class) to use for accessing the data. Used for READABLE and WRITABLE tables.                                                                                                                                                                                  |
-| RESOLVER                | The plug-in (Java class) to use for serializing and deserializing the data. Used for READABLE and WRITABLE tables.                                                                              |
-| *custom-option*=*value* | Additional values to pass to the plug-in class. The parameters are passed at runtime to the plug-ins indicated above. The plug-ins can lookup custom options with `org.apache.hawq.pxf.api.utilities.InputData`.                                                                 |
-
-**Note:** When creating PXF external tables, you cannot use the `HEADER` option in your `FORMAT` specification.
-
-For more information about this example, see [About the Java Class Services and Formats](#aboutthejavaclassservicesandformats).
-
-## <a id="aboutthejavaclassservicesandformats"></a>About the Java Class Services and Formats
-
-The `LOCATION` string in a PXF `CREATE EXTERNAL TABLE` statement is a URI that specifies the host and port of an external data source and the path to the data in the external data source. The query portion of the URI, introduced by the question mark (?), must include the required parameters `FRAGMENTER` (readable tables only), `ACCESSOR`, and `RESOLVER`, which specify Java class names that extend the base PXF API plug-in classes. Alternatively, the required parameters can be replaced with a `PROFILE` parameter with the name of a profile defined in the `/etc/conf/pxf-profiles.xml` that defines the required classes.
-
-The parameters in the PXF URI are passed from HAWQ as headers to the PXF Java service. You can pass custom information to user-implemented PXF plug-ins by adding optional parameters to the LOCATION string.
-
-The Java PXF service retrieves the source data from the external data source and converts it to a HAWQ-readable table format.
-
-The Accessor, Resolver, and Fragmenter Java classes extend the `org.apache.hawq.pxf.api.utilities.Plugin` class:
-
-``` java
-package org.apache.hawq.pxf.api.utilities;
-/**
- * Base class for all plug-in types (Accessor, Resolver, Fragmenter, ...).
- * Manages the meta data.
- */
-public class Plugin {
-    protected InputData inputData;
-    /**
-     * Constructs a plug-in.
-     *
-     * @param input the input data
-     */
-    public Plugin(InputData input) {
-        this.inputData = input;
-    }
-    /**
-     * Checks if the plug-in is thread safe or not, based on inputData.
-     *
-     * @return true if plug-in is thread safe
-     */
-    public boolean isThreadSafe() {
-        return true;
-    }
-}
-```
-
-The parameters in the `LOCATION` string are available to the plug-ins through methods in the `org.apache.hawq.pxf.api.utilities.InputData` class. Custom parameters added to the location string can be looked up with the `getUserProperty()` method.
-
-``` java
-/**
- * Common configuration available to all PXF plug-ins. Represents input data
- * coming from client applications, such as HAWQ.
- */
-public class InputData {
-
-    /**
-     * Constructs an InputData from a copy.
-     * Used to create from an extending class.
-     *
-     * @param copy the input data to copy
-     */
-    public InputData(InputData copy);
-
-    /**
-     * Returns value of a user defined property.
-     *
-     * @param userProp the lookup user property
-     * @return property value as a String
-     */
-    public String getUserProperty(String userProp);
-
-    /**
-     * Sets the byte serialization of a fragment meta data
-     * @param location start, len, and location of the fragment
-     */
-    public void setFragmentMetadata(byte[] location);
-
-    /** Returns the byte serialization of a data fragment */
-    public byte[] getFragmentMetadata();
-
-    /**
-     * Gets any custom user data that may have been passed from the
-     * fragmenter. Will mostly be used by the accessor or resolver.
-     */
-    public byte[] getFragmentUserData();
-
-    /**
-     * Sets any custom user data that needs to be shared across plug-ins.
-     * Will mostly be set by the fragmenter.
-     */
-    public void setFragmentUserData(byte[] userData);
-
-    /** Returns the number of segments in GP. */
-    public int getTotalSegments();
-
-    /** Returns the current segment ID. */
-    public int getSegmentId();
-
-    /** Returns true if there is a filter string to parse. */
-    public boolean hasFilter();
-
-    /** Returns the filter string, <tt>null</tt> if #hasFilter is <tt>false</tt> */
-    public String getFilterString();
-
-    /** Returns tuple description. */
-    public ArrayList<ColumnDescriptor> getTupleDescription();
-
-    /** Returns the number of columns in tuple description. */
-    public int getColumns();
-
-    /** Returns column index from tuple description. */
-    public ColumnDescriptor getColumn(int index);
-
-    /**
-     * Returns the column descriptor of the recordkey column. If the recordkey
-     * column was not specified by the user in the create table statement will
-     * return null.
-     */
-    public ColumnDescriptor getRecordkeyColumn();
-
-    /** Returns the data source of the required resource (i.e a file path or a table name). */
-    public String getDataSource();
-
-    /** Sets the data source for the required resource */
-    public void setDataSource(String dataSource);
-
-    /** Returns the ClassName for the java class that was defined as Accessor */
-    public String getAccessor();
-
-    /** Returns the ClassName for the java class that was defined as Resolver */
-    public String getResolver();
-
-    /**
-     * Returns the ClassName for the java class that was defined as Fragmenter
-     * or null if no fragmenter was defined
-     */
-    public String getFragmenter();
-
-    /**
-     * Returns the contents of pxf_remote_service_login set in Hawq.
-     * Should the user set it to an empty string this function will return null.
-     *
-     * @return remote login details if set, null otherwise
-     */
-    public String getLogin();
-
-    /**
-     * Returns the contents of pxf_remote_service_secret set in Hawq.
-     * Should the user set it to an empty string this function will return null.
-     *
-     * @return remote password if set, null otherwise
-     */
-    public String getSecret();
-
-    /**
-     * Returns true if the request is thread safe. Default true. Should be set
-     * by a user to false if the request contains non thread-safe plug-ins or
-     * components, such as BZip2 codec.
-     */
-    public boolean isThreadSafe();
-
-    /**
-     * Returns a data fragment index. plan to deprecate it in favor of using
-     * getFragmentMetadata().
-     */
-    public int getDataFragment();
-}
-```
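-
-As an illustration of the `getUserProperty()` lookup described above, the following minimal sketch shows a plug-in reading a hypothetical custom option named `MY-OPTION` (for example, `&MY-OPTION=some-value` added to the `LOCATION` string). The class and option names are illustrative only, not part of the PXF API.
-
-``` java
-import org.apache.hawq.pxf.api.utilities.InputData;
-import org.apache.hawq.pxf.api.utilities.Plugin;
-
-/*
- * Sketch: a plug-in that reads a hypothetical custom option ("MY-OPTION")
- * passed in the LOCATION string of the external table definition.
- */
-public class MyOptionAwarePlugin extends Plugin {
-    private final String myOption;
-
-    public MyOptionAwarePlugin(InputData input) {
-        super(input);
-        // Look up the custom option; handle a missing value as appropriate.
-        this.myOption = input.getUserProperty("MY-OPTION");
-    }
-
-    public String getMyOption() {
-        return myOption;
-    }
-}
-```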
-
--   **[Fragmenter](../pxf/PXFExternalTableandAPIReference.html#fragmenter)**
-
--   **[Accessor](../pxf/PXFExternalTableandAPIReference.html#accessor)**
-
--   **[Resolver](../pxf/PXFExternalTableandAPIReference.html#resolver)**
-
-### <a id="fragmenter"></a>Fragmenter
-
-**Note:** The Fragmenter Plugin reads data into HAWQ readable external tables. The Fragmenter Plugin cannot write data out of HAWQ into writable external tables.
-
-The Fragmenter is responsible for passing datasource metadata back to HAWQ. It also returns a list of data fragments to the Accessor or Resolver. Each data fragment describes some part of the requested data set. It contains the datasource name, such as the file or table name, including the hostname where it is located. For example, if the source is an HDFS file, the Fragmenter returns a list of data fragments containing an HDFS file block. Each fragment includes the location of the block. If the source data is an HBase table, the Fragmenter returns information about table regions, including their locations.
-
-The `ANALYZE` command now retrieves advanced statistics for PXF readable tables by estimating the number of tuples in a table, creating a sample table from the external table, and running advanced statistics queries on the sample table in the same way statistics are collected for native HAWQ tables.
-
-The configuration parameter `pxf_enable_stat_collection` controls collection of advanced statistics. If `pxf_enable_stat_collection` is set to false, no analysis is performed on PXF tables. An additional parameter, `pxf_stat_max_fragments`, controls the number of fragments sampled to build a sample table. By default `pxf_stat_max_fragments` is set to 100, which means that even if there are more than 100 fragments, only this number of fragments will be used in `ANALYZE` to sample the data. Increasing this number will result in better sampling, but can also impact performance.
-
-When a PXF table is analyzed and `pxf_enable_stat_collection` is set to off, or an error occurs because the table is not defined correctly, the PXF service is down, or `getFragmentsStats` is not implemented, a warning message is shown and no statistics are gathered for that table. If `ANALYZE` is running over all tables in the database, the next table will be processed; a failure processing one table does not stop the command.
-
-For a detailed explanation about HAWQ statistical data gathering, see `ANALYZE` in the SQL Commands Reference.
-
-**Note:**
-
--   Depending on external table size, the time required to complete an ANALYZE operation can be lengthy. The boolean parameter `pxf_enable_stat_collection` enables statistics collection for PXF. The default value is `on`. Turning this parameter off (disabling PXF statistics collection) can help decrease the time needed for the ANALYZE operation.
--   You can also use *pxf\_stat\_max\_fragments* to limit the number of fragments to be sampled by decreasing it from the default (100). However, if the number is too low, the sample might not be uniform and the statistics might be skewed.
--   You can also implement `getFragmentsStats()` to return an error. This will cause `ANALYZE` on a table with this Fragmenter to fail immediately, and default statistics values will be used for that table.
-
-The following table lists the Fragmenter plug-in implementations included with the PXF API.
-
-<a id="fragmenter__table_cgs_svp_3s"></a>
-
-<table>
-<caption><span class="tablecap">Table 2. Fragmenter base classes </span></caption>
-<colgroup>
-<col width="50%" />
-<col width="50%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th><p><code class="ph codeph">Fragmenter class</code></p></th>
-<th><p><code class="ph codeph">Description</code></p></th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td>org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter</td>
-<td>Fragmenter for HDFS files</td>
-</tr>
-<tr class="even">
-<td>org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter</td>
-<td>Fragmenter for HBase tables</td>
-</tr>
-<tr class="odd">
-<td>org.apache.hawq.pxf.plugins.hive.HiveDataFragmenter</td>
-<td>Fragmenter for Hive tables</td>
-</tr>
-<tr class="even">
-<td>org.apache.hawq.pxf.plugins.hive.HiveInputFormatFragmenter</td>
-<td>Fragmenter for Hive tables with RC or text files</td>
-</tr>
-</tbody>
-</table>
-
-A Fragmenter class extends `org.apache.hawq.pxf.api.Fragmenter`:
-
-#### <a id="com.pivotal.pxf.api.fragmenter"></a>org.apache.hawq.pxf.api.Fragmenter
-
-``` java
-package org.apache.hawq.pxf.api;
-/**
- * Abstract class that defines the splitting of a data resource into fragments
- * that can be processed in parallel.
- */
-public abstract class Fragmenter extends Plugin {
-    protected List<Fragment> fragments;
-
-    public Fragmenter(InputData metaData) {
-        super(metaData);
-        fragments = new LinkedList<Fragment>();
-    }
-
-    /**
-     * Gets the fragments of a given path (source name and location of each
-     * fragment). Used to get fragments of data that could be read in parallel
-     * from the different segments.
-     */
-    public abstract List<Fragment> getFragments() throws Exception;
-
-    /**
-     * Default implementation of statistics for fragments. The default is:
-     * <ul>
-     * <li>number of fragments - as gathered by {@link #getFragments()}</li>
-     * <li>first fragment size - 64MB</li>
-     * <li>total size - number of fragments times first fragment size</li>
-     * </ul>
-     * Each fragmenter implementation can override this method to better match
-     * its fragments stats.
-     *
-     * @return default statistics
-     * @throws Exception if statistics cannot be gathered
-     */
-    public FragmentsStats getFragmentsStats() throws Exception {
-        List<Fragment> fragments = getFragments();
-        long fragmentsNumber = fragments.size();
-        return new FragmentsStats(fragmentsNumber,
-                FragmentsStats.DEFAULT_FRAGMENT_SIZE, fragmentsNumber
-                        * FragmentsStats.DEFAULT_FRAGMENT_SIZE);
-    }
-}
-```
-
-`getFragments()` returns the retrieved fragments, which the PXF service passes back to HAWQ in JSON format. For example, if the input path is an HDFS directory, the source name for each fragment should include the file name and the path to the fragment.
-
-#### <a id="classdescription"></a>Class Description
-
-The `Fragmenter.getFragments()` method returns a `List<Fragment>`:
-
-``` java
-package org.apache.hawq.pxf.api;
-/*
- * Fragment holds a data fragment's information.
- * Fragmenter.getFragments() returns a list of fragments.
- */
-public class Fragment
-{
-    private String sourceName;    // File path+name, table name, etc.
-    private int index;            // Fragment index (incremented per sourceName)
-    private String[] replicas;    // Fragment replicas (1 or more)
-    private byte[]   metadata;    // Fragment metadata information (starting point + length, region location, etc.)
-    private byte[]   userData;    // ThirdParty data added to a fragment. Ignored if null
-    ...
-}
-```
-
-#### <a id="topic_fzd_tlv_c5"></a>org.apache.hawq.pxf.api.FragmentsStats
-
-The `Fragmenter.getFragmentsStats()` method returns a `FragmentsStats`:
-
-``` java
-package org.apache.hawq.pxf.api;
-/**
- * FragmentsStats holds statistics for a given path.
- */
-public class FragmentsStats {
-
-    // number of fragments
-    private long fragmentsNumber;
-    // first fragment size
-    private SizeAndUnit firstFragmentSize;
-    // total fragments size
-    private SizeAndUnit totalSize;
-
-   /**
-     * Enum to represent unit (Bytes/KB/MB/GB/TB)
-     */
-    public enum SizeUnit {
-        /**
-         * Byte
-         */
-        B,
-        /**
-         * KB
-         */
-        KB,
-        /**
-         * MB
-         */
-        MB,
-        /**
-         * GB
-         */
-        GB,
-        /**
-         * TB
-         */
-        TB;
-    };
-
-    /**
-     * Container for size and unit
-     */
-    public class SizeAndUnit {
-        long size;
-        SizeUnit unit;
-    ... 
-
-```
-
-`getFragmentsStats()` returns statistics for the data source, which the PXF service passes back to HAWQ in JSON format. For example, if the input path is an HDFS directory containing 3 files of one block each, the output will be the number of fragments (3), the size of the first file, and the size of all files in that directory.
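-
-To make the override point concrete, here is a minimal sketch (the class name and the fragment count are made up) of a Fragmenter that reports its own statistics instead of relying on the default implementation shown earlier:
-
-``` java
-import java.util.List;
-
-import org.apache.hawq.pxf.api.Fragment;
-import org.apache.hawq.pxf.api.Fragmenter;
-import org.apache.hawq.pxf.api.FragmentsStats;
-import org.apache.hawq.pxf.api.utilities.InputData;
-
-/*
- * Sketch: a Fragmenter that overrides getFragmentsStats() with a known
- * fragment count rather than enumerating fragments via getFragments().
- */
-public class MyStatsAwareFragmenter extends Fragmenter {
-    public MyStatsAwareFragmenter(InputData metaData) {
-        super(metaData);
-    }
-
-    @Override
-    public List<Fragment> getFragments() throws Exception {
-        // Fragment enumeration is omitted in this sketch.
-        return fragments;
-    }
-
-    @Override
-    public FragmentsStats getFragmentsStats() throws Exception {
-        // Assume the source is known to hold exactly 3 fragments of the default size.
-        long fragmentsNumber = 3;
-        return new FragmentsStats(fragmentsNumber,
-                FragmentsStats.DEFAULT_FRAGMENT_SIZE,
-                fragmentsNumber * FragmentsStats.DEFAULT_FRAGMENT_SIZE);
-    }
-}
-```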
-
-### <a id="accessor"></a>Accessor
-
-The Accessor retrieves specific fragments and passes records back to the Resolver. For example, the HDFS plug-ins create an `org.apache.hadoop.mapred.FileInputFormat` and an `org.apache.hadoop.mapred.RecordReader` for an HDFS file and send these to the Resolver. In the case of HBase or Hive files, the Accessor returns single rows from an HBase or Hive table. PXF 1.x or higher contains the following Accessor implementations:
-
-<a id="accessor__table_ewm_ttz_4p"></a>
-
-<table>
-<caption><span class="tablecap">Table 3. Accessor base classes </span></caption>
-<colgroup>
-<col width="50%" />
-<col width="50%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th><p><code class="ph codeph">Accessor class</code></p></th>
-<th><p><code class="ph codeph">Description</code></p></th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td>org.apache.hawq.pxf.plugins.hdfs.HdfsAtomicDataAccessor</td>
-<td>Base class for accessing datasources which cannot be split. These will be accessed by a single HAWQ segment</td>
-</tr>
-<tr class="even">
-<td>org.apache.hawq.pxf.plugins.hdfs.QuotedLineBreakAccessor</td>
-<td>Accessor for TEXT files that have records with embedded linebreaks</td>
-</tr>
-<tr class="odd">
-<td>org.apache.hawq.pxf.plugins.hdfs.HdfsSplittableDataAccessor</td>
-<td><p>Base class for accessing HDFS files using <code class="ph codeph">RecordReaders</code></p></td>
-</tr>
-<tr class="even">
-<td>org.apache.hawq.pxf.plugins.hdfs.LineBreakAccessor</td>
-<td>Accessor for TEXT files (replaced the deprecated <code class="ph codeph">TextFileAccessor</code>, <code class="ph codeph">LineReaderAccessor</code>)</td>
-</tr>
-<tr class="odd">
-<td>org.apache.hawq.pxf.plugins.hdfs.AvroFileAccessor</td>
-<td>Accessor for Avro files</td>
-</tr>
-<tr class="even">
-<td>org.apache.hawq.pxf.plugins.hdfs.SequenceFileAccessor</td>
-<td>Accessor for Sequence files</td>
-</tr>
-<tr class="odd">
-<td>org.apache.hawq.pxf.plugins.hbase.HBaseAccessor</td>
-<td>Accessor for HBase tables</td>
-</tr>
-<tr class="even">
-<td>org.apache.hawq.pxf.plugins.hive.HiveAccessor</td>
-<td>Accessor for Hive tables</td>
-</tr>
-<tr class="odd">
-<td>org.apache.hawq.pxf.plugins.hive.HiveLineBreakAccessor</td>
-<td>Accessor for Hive tables with text files</td>
-</tr>
-<tr class="even">
-<td>org.apache.hawq.pxf.plugins.hive.HiveRCFileAccessor</td>
-<td>Accessor for Hive tables with RC files</td>
-</tr>
-</tbody>
-</table>
-
-The class must extend the `org.apache.hawq.pxf.api.utilities.Plugin` class and implement one or both of the following interfaces:
-
--   `org.apache.hawq.pxf.api.ReadAccessor`
--   `org.apache.hawq.pxf.api.WriteAccessor`
-
-``` java
-package org.apache.hawq.pxf.api;
-/*
- * Internal interface that defines the access to data on the source
- * data store (e.g, a file on HDFS, a region of an HBase table, etc).
- * All classes that implement actual access to such data sources must
- * respect this interface
- */
-public interface ReadAccessor {
-    boolean openForRead() throws Exception;
-    OneRow readNextObject() throws Exception;
-    void closeForRead() throws Exception;
-}
-```
-
-``` java
-package org.apache.hawq.pxf.api;
-/*
- * An interface for writing data into a data store
- * (e.g, a sequence file on HDFS).
- * All classes that implement actual access to such data sources must
- * respect this interface
- */
-public interface WriteAccessor {
-    boolean openForWrite() throws Exception;
-    boolean writeNextObject(OneRow onerow) throws Exception;
-    void closeForWrite() throws Exception;
-}
-```
-
-The Accessor calls `openForRead()` to read existing data. After reading the data, it calls `closeForRead()`. `readNextObject()` returns one of the following:
-
--   a single record, encapsulated in a `OneRow` object
--   `null` if it reaches `EOF`
-
-The Accessor calls `openForWrite()` to write data out. It then writes each record as a `OneRow` object with `writeNextObject()`, and calls `closeForWrite()` when done. `OneRow` represents a key-value item.
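-
-The call order above can be summarized in a short sketch. This is not the actual PXF bridge code, just an illustration of how a `ReadAccessor` and a `ReadResolver` are driven together on the read path:
-
-``` java
-import java.util.List;
-
-import org.apache.hawq.pxf.api.OneField;
-import org.apache.hawq.pxf.api.OneRow;
-import org.apache.hawq.pxf.api.ReadAccessor;
-import org.apache.hawq.pxf.api.ReadResolver;
-
-/*
- * Sketch of the read loop: open, read rows until EOF (null), resolve each
- * row to fields, then close. The write path mirrors this with openForWrite(),
- * writeNextObject(), and closeForWrite().
- */
-public class ReadLoopSketch {
-    public static void readAll(ReadAccessor accessor, ReadResolver resolver) throws Exception {
-        if (!accessor.openForRead()) {
-            return;
-        }
-        try {
-            OneRow row;
-            while ((row = accessor.readNextObject()) != null) {
-                // Deserialize the row; the PXF service serializes the fields back to HAWQ.
-                List<OneField> fields = resolver.getFields(row);
-            }
-        } finally {
-            accessor.closeForRead();
-        }
-    }
-}
-```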
-
-#### <a id="com.pivotal.pxf.api.onerow"></a>org.apache.hawq.pxf.api.OneRow
-
-``` java
-package org.apache.hawq.pxf.api;
-/*
- * Represents one row in the external system data store. Supports
- * the general case where one row contains both a record and a
- * separate key like in the HDFS key/value model for MapReduce
- * (Example: HDFS sequence file)
- */
-public class OneRow {
-    /*
-     * Default constructor
-     */
-    public OneRow();
-
-    /*
-     * Constructor sets key and data
-     */
-    public OneRow(Object inKey, Object inData);
-
-    /*
-     * Setter for key
-     */
-    public void setKey(Object inKey);
-    
-    /*
-     * Setter for data
-     */
-    public void setData(Object inData);
-
-    /*
-     * Accessor for key
-     */
-    public Object getKey();
-
-    /*
-     * Accessor for data
-     */
-    public Object getData();
-
-    /*
-     * Show content
-     */
-    public String toString();
-}
-```
-
-### <a id="resolver"></a>Resolver
-
-The Resolver deserializes records in the `OneRow` format and serializes them to a list of `OneField` objects. PXF converts a `OneField` object to a HAWQ-readable `GPDBWritable` format. PXF 1.x or higher contains the following implementations:
-
-<a id="resolver__table_nbd_d5z_4p"></a>
-
-<table>
-<caption><span class="tablecap">Table 4. Resolver base classes</span></caption>
-<colgroup>
-<col width="50%" />
-<col width="50%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th><p><code class="ph codeph">Resolver class</code></p></th>
-<th><p><code class="ph codeph">Description</code></p></th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td><p><code class="ph codeph">org.apache.hawq.pxf.plugins.hdfs.StringPassResolver</code></p></td>
-<td><p><code class="ph codeph">StringPassResolver</code> replaced the deprecated <code class="ph codeph">TextResolver</code>. It passes whole records (composed of any data types) as strings without parsing them</p></td>
-</tr>
-<tr class="even">
-<td><p><code class="ph codeph">org.apache.hawq.pxf.plugins.hdfs.WritableResolver</code></p></td>
-<td><p>Resolver for custom Hadoop Writable implementations. Custom class can be specified with the schema in DATA-SCHEMA. Supports the following types:</p>
-<pre class="pre codeblock"><code>DataType.BOOLEAN
-DataType.INTEGER
-DataType.BIGINT
-DataType.REAL
-DataType.FLOAT8
-DataType.VARCHAR
-DataType.BYTEA</code></pre></td>
-</tr>
-<tr class="odd">
-<td><p><code class="ph codeph">org.apache.hawq.pxf.plugins.hdfs.AvroResolver</code></p></td>
-<td><p>Supports the same field objects as <code class="ph codeph">WritableResolver</code>.</p></td>
-</tr>
-<tr class="even">
-<td><p><code class="ph codeph">org.apache.hawq.pxf.plugins.hbase.HBaseResolver</code></p></td>
-<td><p>Supports the same field objects as <code class="ph codeph">WritableResolver</code> and also supports the following:</p>
-<pre class="pre codeblock"><code>DataType.SMALLINT
-DataType.NUMERIC
-DataType.TEXT
-DataType.BPCHAR
-DataType.TIMESTAMP</code></pre></td>
-</tr>
-<tr class="odd">
-<td><p><code class="ph codeph">org.apache.hawq.pxf.plugins.hive.HiveResolver</code></p></td>
-<td><p>Supports the same field objects as <code class="ph codeph">WritableResolver</code> and also supports the following:</p>
-<pre class="pre codeblock"><code>DataType.SMALLINT
-DataType.TEXT
-DataType.TIMESTAMP</code></pre></td>
-</tr>
-<tr class="even">
-<td><p><code class="ph codeph">org.apache.hawq.pxf.plugins.hive.HiveStringPassResolver</code></p></td>
-<td>Specialized <code class="ph codeph">HiveResolver</code> for a Hive table stored as Text files. Should be used together with <code class="ph codeph">HiveInputFormatFragmenter</code>/<code class="ph codeph">HiveLineBreakAccessor</code>.</td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">org.apache.hawq.pxf.plugins.hive.HiveColumnarSerdeResolver</code></td>
-<td>Specialized <code class="ph codeph">HiveResolver</code> for a Hive table stored as RC file. Should be used together with <code class="ph codeph">HiveInputFormatFragmenter</code>/<code class="ph codeph">HiveRCFileAccessor</code>.</td>
-</tr>
-</tbody>
-</table>
-
-The class needs to extend the `org.apache.hawq.pxf.api.utilities.Plugin` class and implement one or both of the following interfaces:
-
--   `org.apache.hawq.pxf.api.ReadResolver`
--   `org.apache.hawq.pxf.api.WriteResolver`
-
-``` java
-package org.apache.hawq.pxf.api;
-/*
- * Interface that defines the deserialization of one record brought from
- * the data Accessor. Every implementation of a deserialization method
- * (e.g, Writable, Avro, ...) must implement this interface.
- */
-public interface ReadResolver {
-    public List<OneField> getFields(OneRow row) throws Exception;
-}
-```
-
-``` java
-package org.apache.hawq.pxf.api;
-/*
-* Interface that defines the serialization of data read from the DB
-* into a OneRow object.
-* Every implementation of a serialization method
-* (e.g, Writable, Avro, ...) must implement this interface.
-*/
-public interface WriteResolver {
-    public OneRow setFields(List<OneField> record) throws Exception;
-}
-```
-
-**Note:**
-
--   `getFields()` should return a `List<OneField>`, with each `OneField` representing a single field.
--   `setFields()` should return a single `OneRow` object, given a `List<OneField>`.
-
-#### <a id="com.pivotal.pxf.api.onefield"></a>org.apache.hawq.pxf.api.OneField
-
-``` java
-package org.apache.hawq.pxf.api;
-/*
- * Defines one field on a deserialized record.
- * 'type' is in OID values recognized by GPDBWritable
- * 'val' is the actual field value
- */
-public class OneField {
-    public OneField() {}
-    public OneField(int type, Object val) {
-        this.type = type;
-        this.val = val;
-    }
-
-    public int type;
-    public Object val;
-}
-```
-
-The value of `type` should follow the `org.apache.hawq.pxf.api.io.DataType` enum. `val` is the appropriate Java class. Supported types are as follows:
-
-<a id="com.pivotal.pxf.api.onefield__table_f4x_35z_4p"></a>
-
-<table>
-<caption><span class="tablecap">Table 5. Resolver supported types</span></caption>
-<colgroup>
-<col width="50%" />
-<col width="50%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th><p>DataType recognized OID</p></th>
-<th><p>Field value</p></th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td><p><code class="ph codeph">DataType.SMALLINT</code></p></td>
-<td><p><code class="ph codeph">Short</code></p></td>
-</tr>
-<tr class="even">
-<td><p><code class="ph codeph">DataType.INTEGER</code></p></td>
-<td><p><code class="ph codeph">Integer</code></p></td>
-</tr>
-<tr class="odd">
-<td><p><code class="ph codeph">DataType.BIGINT</code></p></td>
-<td><p><code class="ph codeph">Long</code></p></td>
-</tr>
-<tr class="even">
-<td><p><code class="ph codeph">DataType.REAL</code></p></td>
-<td><p><code class="ph codeph">Float</code></p></td>
-</tr>
-<tr class="odd">
-<td><p><code class="ph codeph">DataType.FLOAT8</code></p></td>
-<td><p><code class="ph codeph">Double</code></p></td>
-</tr>
-<tr class="even">
-<td><p><code class="ph codeph">DataType.NUMERIC</code></p></td>
-<td><p><code class="ph codeph">String (&quot;651687465135468432168421&quot;)</code></p></td>
-</tr>
-<tr class="odd">
-<td><p><code class="ph codeph">DataType.BOOLEAN</code></p></td>
-<td><p><code class="ph codeph">Boolean</code></p></td>
-</tr>
-<tr class="even">
-<td><p><code class="ph codeph">DataType.VARCHAR</code></p></td>
-<td><p><code class="ph codeph">String</code></p></td>
-</tr>
-<tr class="odd">
-<td><p><code class="ph codeph">DataType.BPCHAR</code></p></td>
-<td><p><code class="ph codeph">String</code></p></td>
-</tr>
-<tr class="even">
-<td><p><code class="ph codeph">DataType.TEXT</code></p></td>
-<td><p><code class="ph codeph">String</code></p></td>
-</tr>
-<tr class="odd">
-<td><p><code class="ph codeph">DataType.BYTEA</code></p></td>
-<td><p><code class="ph codeph">byte []</code></p></td>
-</tr>
-<tr class="even">
-<td><p><code class="ph codeph">DataType.TIMESTAMP</code></p></td>
-<td><p><code class="ph codeph">Timestamp</code></p></td>
-</tr>
-<tr class="odd">
-<td><p><code class="ph codeph">DataType.Date</code></p></td>
-<td><p><code class="ph codeph">Date</code></p></td>
-</tr>
-</tbody>
-</table>
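-
-As a small illustration of the mapping above, a Resolver might build its field list like this (a sketch only; the helper class is hypothetical):
-
-``` java
-import java.util.LinkedList;
-import java.util.List;
-
-import org.apache.hawq.pxf.api.OneField;
-
-import static org.apache.hawq.pxf.api.io.DataType.FLOAT8;
-import static org.apache.hawq.pxf.api.io.DataType.INTEGER;
-import static org.apache.hawq.pxf.api.io.DataType.TEXT;
-
-/*
- * Sketch: OneField values whose Java classes follow the table above
- * (INTEGER -> Integer, TEXT -> String, FLOAT8 -> Double).
- */
-public class OneFieldSketch {
-    public static List<OneField> sampleRecord() {
-        List<OneField> fields = new LinkedList<OneField>();
-        fields.add(new OneField(INTEGER.getOID(), Integer.valueOf(42)));
-        fields.add(new OneField(TEXT.getOID(), "some text"));
-        fields.add(new OneField(FLOAT8.getOID(), Double.valueOf(3.14)));
-        return fields;
-    }
-}
-```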
-
-### <a id="analyzer"></a>Analyzer
-
-The Analyzer has been deprecated. A new function in the Fragmenter API, `Fragmenter.getFragmentsStats()`, is used to gather initial statistics for the data source, and provides PXF statistical data for the HAWQ query optimizer. For a detailed explanation about HAWQ statistical data gathering, see `ANALYZE` in the SQL Commands Reference.
-
-Using the Analyzer API will result in an error message. Use the Fragmenter and `getFragmentsStats()` to gather advanced statistics.
-
-## <a id="aboutcustomprofiles"></a>About Custom Profiles
-
-Administrators can add new profiles or edit the built-in profiles in the `/etc/pxf/conf/pxf-profiles.xml` file. See [Using Profiles to Read and Write Data](ReadWritePXF.html#readingandwritingdatawithpxf) for information on how to add custom profiles.
-
-## <a id="aboutqueryfilterpush-down"></a>About Query Filter Push-Down
-
-If a query includes a number of `WHERE` clause filters, HAWQ may push all or some of those filters down to PXF. When filters are pushed down, the Accessor can use the filtering information when accessing the data source, fetching only the records that pass the filter evaluation conditions. This reduces data processing and network traffic from the SQL engine.
-
-This topic includes the following information:
-
--   Filter Availability and Ordering
--   Creating a Filter Builder class
--   Filter Operations
--   Sample Implementation
--   Using Filters
-
-### <a id="filteravailabilityandordering"></a>Filter Availability and Ordering
-
-PXF allows push-down filtering if the following rules are met:
-
--   Uses only single expressions or a group of AND'ed expressions - no OR'ed expressions.
--   Uses only expressions of supported data types and operators.
-
-FilterParser scans the pushed-down filter list and uses the user's `build()` implementation to build the filter.
-
--   For simple expressions (e.g., a &gt;= 5), FilterParser places column objects on the left of the expression and constants on the right.
--   For compound expressions (e.g., &lt;expression&gt; AND &lt;expression&gt;) it handles three cases in the `build()` function:
-    1.  Simple Expression: &lt;Column Index&gt; &lt;Operation&gt; &lt;Constant&gt;
-    2.  Compound Expression: &lt;Filter Object&gt; AND &lt;Filter Object&gt;
-    3.  Compound Expression: &lt;List of Filter Objects&gt; AND &lt;Filter Object&gt;
-
-### <a id="creatingafilterbuilderclass"></a>Creating a Filter Builder Class
-
-To check whether a filter was pushed down to PXF, call the `InputData.hasFilter()` function:
-
-``` java
-/*
- * Returns true if there is a filter string to parse
- */
-public boolean hasFilter()
-{
-   return filterStringValid;
-}
-```
-
-If `hasFilter()` returns `false`, there is no filter information. If it returns `true`, PXF parses the serialized filter string into a meaningful filter object to use later. To do so, create a filter builder class that implements the `FilterParser.FilterBuilder` interface:
-
-``` java
-package org.apache.hawq.pxf.api;
-/*
- * Interface a user of FilterParser should implement
- * This is used to let the user build filter expressions in the manner she
- * sees fit
- *
- * When an operator is parsed, this function is called to let the user decide
- * what to do with its operands.
- */
-interface FilterBuilder {
-   public Object build(Operation operation, Object left, Object right) throws Exception;
-}
-```
-
-While PXF parses the serialized filter string from the incoming HAWQ query, it calls the `build()` interface function. PXF calls this function for each condition or filter pushed down to PXF. Your implementation of this function returns a filter object or representation that the Fragmenter, Accessor, or Resolver uses at runtime to filter out records. The `build()` function accepts an Operation as input, plus the left and right operands.
-
-### <a id="filteroperations"></a>Filter Operations
-
-``` java
-/*
- * Operations supported by the parser
- */
-public enum Operation
-{
-    HDOP_LT, //less than
-    HDOP_GT, //greater than
-    HDOP_LE, //less than or equal
-    HDOP_GE, //greater than or equal
-    HDOP_EQ, //equal
-    HDOP_NE, //not equal
-    HDOP_AND //AND'ed conditions
-};
-```
-
-#### <a id="filteroperands"></a>Filter Operands
-
-There are three types of operands:
-
--   Column Index
--   Constant
--   Filter Object
-
-#### <a id="columnindex"></a>Column Index
-
-``` java
-/*
- * Represents a column index
- */
-public class ColumnIndex
-{
-   public ColumnIndex(int idx);
-
-   public int index();
-}
-```
-
-#### <a id="constant"></a>Constant
-
-``` java
-/*
- * The class represents a constant object (String, Long, ...)
- */
-public class Constant
-{
-    public Constant(Object obj);
-
-    public Object constant();
-}
-```
-
-#### <a id="filterobject"></a>Filter Object
-
-Filter Objects can be internal, such as those you define, or external, such as those the remote system uses. For example, for HBase you define the HBase `Filter` class (`org.apache.hadoop.hbase.filter.Filter`), while for Hive you use an internal default representation created by the PXF framework, called `BasicFilter`. You can decide which filter object to use, including writing a new one. `BasicFilter` is the most common:
-
-``` java
-/*
- * Basic filter provided for cases where the target storage system does not provide its own filter
- * For example: HBase storage provides its own filter but for a Writable based record in a SequenceFile
- * there is no filter provided and so we need to have a default
- */
-static public class BasicFilter
-{
-   /*
-    * C'tor
-    */
-   public BasicFilter(Operation inOper, ColumnIndex inColumn, Constant inConstant);
-
-   /*
-    * Returns oper field
-    */
-   public Operation getOperation();
-
-   /*
-    * Returns column field
-    */
-   public ColumnIndex getColumn();
-
-   /*
-    * Returns constant field
-    */
-   public Constant getConstant();
-}
-```
-
-### <a id="sampleimplementation"></a>Sample Implementation
-
-Let's look at the following sample implementation of the filter builder class and its `build()` function, which handles all three cases. Assume that `BasicFilter` is used to hold the filter operations.
-
-``` java
-import java.util.LinkedList;
-import java.util.List;
-
-import org.apache.hawq.pxf.api.FilterParser;
-import org.apache.hawq.pxf.api.utilities.InputData;
-
-public class MyDemoFilterBuilder implements FilterParser.FilterBuilder
-{
-    private InputData inputData;
-
-    public MyDemoFilterBuilder(InputData input)
-    {
-        inputData = input;
-    }
-
-    /*
-     * Translates a filterString into a FilterParser.BasicFilter or a list of such filters
-     */
-    public Object getFilterObject(String filterString) throws Exception
-    {
-        FilterParser parser = new FilterParser(this);
-        Object result = parser.parse(filterString);
-
-        if (!(result instanceof FilterParser.BasicFilter) && !(result instanceof List))
-            throw new Exception("String " + filterString + " resolved to no filter");
-
-        return result;
-    }
- 
-    public Object build(FilterParser.Operation opId,
-                        Object leftOperand,
-                        Object rightOperand) throws Exception
-    {
-        if (leftOperand instanceof FilterParser.BasicFilter || leftOperand instanceof List)
-        {
-            //sanity check
-            if (opId != FilterParser.Operation.HDOP_AND || !(rightOperand instanceof FilterParser.BasicFilter))
-                throw new Exception("Only AND is allowed between compound expressions");
-
-            //case 3
-            if (leftOperand instanceof List)
-                return handleCompoundOperations((List<FilterParser.BasicFilter>)leftOperand, (FilterParser.BasicFilter)rightOperand);
-            //case 2
-            else
-                return handleCompoundOperations((FilterParser.BasicFilter)leftOperand, (FilterParser.BasicFilter)rightOperand);
-        }
-
-        //sanity check
-        if (!(rightOperand instanceof FilterParser.Constant))
-            throw new Exception("expressions of column-op-column are not supported");
-
-        //case 1 (assume column is on the left)
-        return handleSimpleOperations(opId, (FilterParser.ColumnIndex)leftOperand, (FilterParser.Constant)rightOperand);
-    }
-
-    private FilterParser.BasicFilter handleSimpleOperations(FilterParser.Operation opId,
-                                                            FilterParser.ColumnIndex column,
-                                                            FilterParser.Constant constant)
-    {
-        return new FilterParser.BasicFilter(opId, column, constant);
-    }
-
-    private  List handleCompoundOperations(List<FilterParser.BasicFilter> left,
-                                       FilterParser.BasicFilter right)
-    {
-        left.add(right);
-        return left;
-    }
-
-    private List handleCompoundOperations(FilterParser.BasicFilter left,
-                                          FilterParser.BasicFilter right)
-    {
-        List<FilterParser.BasicFilter> result = new LinkedList<FilterParser.BasicFilter>();
-
-        result.add(left);
-        result.add(right);
-        return result;
-    }
-}
-```
-
-After you create a filter-builder class that implements the `FilterParser.FilterBuilder` interface and its `build()` function, the Accessor, the Resolver, or both can generate the filter object by calling the `getFilterObject()` function:
-
-``` java
-if (inputData.hasFilter())
-{
-    String filterStr = inputData.getFilterString();
-    MyDemoFilterBuilder demobuilder = new MyDemoFilterBuilder(inputData);
-    Object filter = demobuilder.getFilterObject(filterStr);
-    ...
-}
-```
-
-### <a id="usingfilters"></a>Using Filters
-
-Once you have built the Filter object(s), you can use them to read data and filter out records that do not meet the filter conditions:
-
-1.  Check whether you have a single or multiple filters.
-2.  Iterate over each filter in the list and evaluate it. Disqualify the record if the filter conditions fail.
-
-``` java
-if (filter instanceof List)
-{
-    for (Object f : (List)filter)
-        <evaluate f>; //may want to break if evaluation results in negative answer for any filter.
-}
-else
-{
-    <evaluate filter>;
-}
-```
-
-Example of evaluating a single filter:
-
-``` java
-//Get our BasicFilter Object
-FilterParser.BasicFilter bFilter = (FilterParser.BasicFilter)filter;
-
- 
-//Get operation and operator values
-FilterParser.Operation op = bFilter.getOperation();
-int colIdx = bFilter.getColumn().index();
-String val = bFilter.getConstant().constant().toString();
-
-//Get more info about the column if desired
-ColumnDescriptor col = input.getColumn(colIdx);
-String colName = col.columnName();
- 
-//Now evaluate it against the actual column value in the record...
-```
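-
-The trailing comment above leaves the actual comparison open. The following sketch shows one way a plug-in might finish the evaluation for a numeric column; it assumes the constant and the record value can be compared as longs, and a real implementation would dispatch on the column's `DataType`:
-
-``` java
-import org.apache.hawq.pxf.api.FilterParser;
-
-/*
- * Sketch: evaluate a single BasicFilter against a numeric value already
- * extracted from the record for the filter's column.
- */
-public class FilterEvalSketch {
-    public static boolean passes(FilterParser.BasicFilter bFilter, long recordValue) {
-        long constant = Long.parseLong(bFilter.getConstant().constant().toString());
-        switch (bFilter.getOperation()) {
-            case HDOP_LT: return recordValue <  constant;
-            case HDOP_GT: return recordValue >  constant;
-            case HDOP_LE: return recordValue <= constant;
-            case HDOP_GE: return recordValue >= constant;
-            case HDOP_EQ: return recordValue == constant;
-            case HDOP_NE: return recordValue != constant;
-            default:      return true; // HDOP_AND is handled at the list level
-        }
-    }
-}
-```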
-
-## <a id="reference"></a>Examples
-
-This�section contains the following information:
-
--   [External Table Examples](#externaltableexamples)
--   [Plug-in Examples](#pluginexamples)
-
--   **[External Table Examples](../pxf/PXFExternalTableandAPIReference.html#externaltableexamples)**
-
--   **[Plug-in Examples](../pxf/PXFExternalTableandAPIReference.html#pluginexamples)**
-
-### <a id="externaltableexamples"></a>External Table Examples
-
-#### <a id="example1"></a>Example 1
-
-Shows an external table that can analyze all `SequenceFiles` that are populated with `Writable` serialized records and exist inside the HDFS directory `sales/2012/01`. `SaleItem.class` is a Java class that implements the `Writable` interface and describes a Java record that includes three class members.
-
-**Note:** In this example, the class member names do not necessarily match the database attribute names, but the types match. `SaleItem.class` must exist in the classpath of every DataNode and NameNode.
-
-``` sql
-CREATE EXTERNAL TABLE jan_2012_sales (id int, total int, comments varchar)
-LOCATION ('pxf://10.76.72.26:51200/sales/2012/01/*.seq'
-          '?FRAGMENTER=org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter'
-          '&ACCESSOR=org.apache.hawq.pxf.plugins.hdfs.SequenceFileAccessor'
-          '&RESOLVER=org.apache.hawq.pxf.plugins.hdfs.WritableResolver'
-          '&DATA-SCHEMA=SaleItem')
-FORMAT 'custom' (formatter='pxfwritable_import');
-```
-
-#### <a id="example2"></a>Example 2
-
-Example 2 shows an external table that can analyze an HBase table called `sales`. It has 10 column families (`cf1` - `cf10`) and many qualifier names in each family. This example focuses on the `rowkey`, the qualifier `saleid` inside column family `cf1`, and the qualifier `comments` inside column family `cf8` and uses direct mapping:
-
-``` sql
-CREATE EXTERNAL TABLE hbase_sales
-  (hbaserowkey text, "cf1:saleid" int, "cf8:comments" varchar)
-LOCATION ('pxf://10.76.72.26:51200/sales?PROFILE=HBase')
-FORMAT 'custom' (formatter='pxfwritable_import');
-```
-
-#### <a id="example3"></a>Example 3
-
-This example uses indirect mapping. Note how the attribute names change and how they correspond to the HBase lookup table. When you run a `SELECT` query against `my_hbase_sales`, the attribute names automatically convert to their HBase correspondents.
-
-``` sql
-CREATE EXTERNAL TABLE my_hbase_sales (hbaserowkey text, id int, cmts varchar)
-LOCATION
-('pxf://10.76.72.26:51200/sales?PROFILE=HBase')
-FORMAT 'custom' (formatter='pxfwritable_import');
-```
-
-#### <a id="example4"></a>Example 4
-
-Shows an example of a writable table that writes compressed data.
-
-``` sql
-CREATE WRITABLE EXTERNAL TABLE sales_aggregated_2012
-    (id int, total int, comments varchar)
-LOCATION ('pxf://10.76.72.26:51200/sales/2012/aggregated'
-          '?PROFILE=HdfsTextSimple'
-          '&COMPRESSION_CODEC=org.apache.hadoop.io.compress.BZip2Codec')
-FORMAT 'TEXT';
-```
-
-#### <a id="example5"></a>Example 5
-
-Shows an example of a writable table that writes into a sequence file, using a schema file. For writable tables, the formatter is `pxfwritable_export`.
-
-``` sql
-CREATE WRITABLE EXTERNAL TABLE sales_max_2012
-    (id int, total int, comments varchar)
-LOCATION ('pxf://10.76.72.26:51200/sales/2012/max'
-          '?FRAGMENTER=org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter'
-          '&ACCESSOR=org.apache.hawq.pxf.plugins.hdfs.SequenceFileAccessor'
-          '&RESOLVER=org.apache.hawq.pxf.plugins.hdfs.WritableResolver'
-          '&DATA-SCHEMA=SaleItem')
-FORMAT 'custom' (formatter='pxfwritable_export');
-```
-
-### <a id="pluginexamples"></a>Plug-in Examples
-
-This section contains sample dummy implementations of all three plug-ins. It also contains a usage example.
-
-#### <a id="dummyfragmenter"></a>Dummy Fragmenter
-
-``` java
-import org.apache.hawq.pxf.api.Fragmenter;
-import org.apache.hawq.pxf.api.Fragment;
-import org.apache.hawq.pxf.api.utilities.InputData;
-import java.util.List;
-
-/*
- * Class that defines the splitting of a data resource into fragments that can
- * be processed in parallel
- * getFragments() returns the fragments information of a given path (source name and location of each fragment).
- * Used to get fragments of data that could be read in parallel from the different segments.
- * Dummy implementation, for documentation
- */
-public class DummyFragmenter extends Fragmenter {
-    public DummyFragmenter(InputData metaData) {
-        super(metaData);
-    }
-    /*
-     * path is a data source URI that can appear as a file name, a directory name or a wildcard
-     * returns the data fragments - identifiers of data and a list of available hosts
-     */
-    @Override
-    public List<Fragment> getFragments() throws Exception {
-        String localhostname = java.net.InetAddress.getLocalHost().getHostName();
-        String[] localHosts = new String[]{localhostname, localhostname};
-        fragments.add(new Fragment(inputData.getDataSource() + ".1" /* source name */,
-                localHosts /* available hosts list */,
-                "fragment1".getBytes()));
-        fragments.add(new Fragment(inputData.getDataSource() + ".2" /* source name */,
-                localHosts /* available hosts list */,
-                "fragment2".getBytes()));
-        fragments.add(new Fragment(inputData.getDataSource() + ".3" /* source name */,
-                localHosts /* available hosts list */,
-                "fragment3".getBytes()));
-        return fragments;
-    }
-}
-```
-
-#### <a id="dummyaccessor"></a>Dummy Accessor
-
-``` java
-import org.apache.hawq.pxf.api.ReadAccessor;
-import org.apache.hawq.pxf.api.WriteAccessor;
-import org.apache.hawq.pxf.api.OneRow;
-import org.apache.hawq.pxf.api.utilities.InputData;
-import org.apache.hawq.pxf.api.utilities.Plugin;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-
-/*
- * Internal interface that defines the access to a file on HDFS.  All classes
- * that implement actual access to an HDFS file (sequence file, avro file,...)
- * must respect this interface
- * Dummy implementation, for documentation
- */
-public class DummyAccessor extends Plugin implements ReadAccessor, WriteAccessor {
-    private static final Log LOG = LogFactory.getLog(DummyAccessor.class);
-    private int rowNumber;
-    private int fragmentNumber;
-    public DummyAccessor(InputData metaData) {
-        super(metaData);
-    }
-    @Override
-    public boolean openForRead() throws Exception {
-        /* fopen or similar */
-        return true;
-    }
-    @Override
-    public OneRow readNextObject() throws Exception {
-        /* return next row , <key=fragmentNo.rowNo, val=rowNo,text,fragmentNo>*/
-        /* check for EOF */
-        if (fragmentNumber > 0)
-            return null; /* signal EOF, close will be called */
-        int fragment = inputData.getDataFragment();
-        String fragmentMetadata = new String(inputData.getFragmentMetadata());
-        /* generate row */
-        OneRow row = new OneRow(fragment + "." + rowNumber, /* key */
-                rowNumber + "," + fragmentMetadata + "," + fragment /* value */);
-        /* advance */
-        rowNumber += 1;
-        if (rowNumber == 2) {
-            rowNumber = 0;
-            fragmentNumber += 1;
-        }
-        /* return data */
-        return row;
-    }
-    @Override
-    public void closeForRead() throws Exception {
-        /* fclose or similar */
-    }
-    @Override
-    public boolean openForWrite() throws Exception {
-        /* fopen or similar */
-        return true;
-    }
-    @Override
-    public boolean writeNextObject(OneRow onerow) throws Exception {
-        LOG.info(onerow.getData());
-        return true;
-    }
-    @Override
-    public void closeForWrite() throws Exception {
-        /* fclose or similar */
-    }
-}
-```
-
-#### <a id="dummyresolver"></a>Dummy Resolver
-
-``` java
-import org.apache.hawq.pxf.api.OneField;
-import org.apache.hawq.pxf.api.OneRow;
-import org.apache.hawq.pxf.api.ReadResolver;
-import org.apache.hawq.pxf.api.WriteResolver;
-import org.apache.hawq.pxf.api.utilities.InputData;
-import org.apache.hawq.pxf.api.utilities.Plugin;
-import java.util.LinkedList;
-import java.util.List;
-import static org.apache.hawq.pxf.api.io.DataType.INTEGER;
-import static org.apache.hawq.pxf.api.io.DataType.VARCHAR;
-
-/*
- * Class that defines the deserialization of one record brought from the external input data.
- * Every implementation of a deserialization method (Writable, Avro, BP, Thrift, ...)
- * must inherit this abstract class
- * Dummy implementation, for documentation
- */
-public class DummyResolver extends Plugin implements ReadResolver, WriteResolver {
-    private int rowNumber;
-    public DummyResolver(InputData metaData) {
-        super(metaData);
-        rowNumber = 0;
-    }
-    @Override
-    public List<OneField> getFields(OneRow row) throws Exception {
-        /* break up the row into fields */
-        List<OneField> output = new LinkedList<OneField>();
-        String[] fields = ((String) row.getData()).split(",");
-        output.add(new OneField(INTEGER.getOID() /* type */, Integer.parseInt(fields[0]) /* value */));
-        output.add(new OneField(VARCHAR.getOID(), fields[1]));
-        output.add(new OneField(INTEGER.getOID(), Integer.parseInt(fields[2])));
-        return output;
-    }
-    @Override
-    public OneRow setFields(List<OneField> record) throws Exception {
-        /* should read inputStream row by row */
-        return rowNumber > 5
-                ? null
-                : new OneRow(null, "row number " + rowNumber++);
-    }
-}
-```
-
-#### <a id="usageexample"></a>Usage Example
-
-``` sql
-psql=# CREATE EXTERNAL TABLE dummy_tbl
-    (int1 integer, word text, int2 integer)
-LOCATION ('pxf://localhost:51200/dummy_location'
-          '?FRAGMENTER=DummyFragmenter'
-          '&ACCESSOR=DummyAccessor'
-          '&RESOLVER=DummyResolver')
-FORMAT 'custom' (formatter = 'pxfwritable_import');
- 
-CREATE EXTERNAL TABLE
-psql=# SELECT * FROM dummy_tbl;
- int1 |   word    | int2
-------+-----------+------
-    0 | fragment1 |    0
-    1 | fragment1 |    0
-    0 | fragment2 |    0
-    1 | fragment2 |    0
-    0 | fragment3 |    0
-    1 | fragment3 |    0
-(6 rows)
-
-psql=# CREATE WRITABLE EXTERNAL TABLE dummy_tbl_write
-    (int1 integer, word text, int2 integer)
-LOCATION ('pxf://localhost:51200/dummy_location'
-          '?ACCESSOR=DummyAccessor'
-          '&RESOLVER=DummyResolver')
-FORMAT 'custom' (formatter = 'pxfwritable_export');
- 
-CREATE EXTERNAL TABLE
-psql=# INSERT INTO dummy_tbl_write VALUES (1, 'a', 11), (2, 'b', 22);
-INSERT 0 2
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/pxf/ReadWritePXF.html.md.erb
----------------------------------------------------------------------
diff --git a/pxf/ReadWritePXF.html.md.erb b/pxf/ReadWritePXF.html.md.erb
deleted file mode 100644
index 18f655d..0000000
--- a/pxf/ReadWritePXF.html.md.erb
+++ /dev/null
@@ -1,123 +0,0 @@
----
-title: Using Profiles to Read and Write Data
----
-
-PXF profiles are collections of common metadata attributes that can be used to simplify the reading and writing of data. You can use any of the built-in profiles that come with PXF or you can create your own.
-
-For example, if you are writing single line records to text files on HDFS, you could use the built-in HdfsTextSimple profile. You specify this profile when you create the PXF external table used to write the data to HDFS.
-
-## <a id="built-inprofiles"></a>Built-In Profiles
-
-PXF comes with a number of built-in profiles that group together a collection of metadata attributes. PXF built-in profiles simplify access to the following types of data storage systems:
-
--   HDFS File Data (Read + Write)
--   Hive (Read only)
--   HBase (Read only)
--   JSON (Read only)
-
-You can specify a built-in profile when you want to read data that exists inside HDFS files, Hive tables, HBase tables, and JSON files and for writing data into HDFS files.
-
-<table>
-<colgroup>
-<col width="33%" />
-<col width="33%" />
-<col width="33%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th>Profile</th>
-<th>Description</th>
-<th>Fragmenter/Accessor/Resolver</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td>HdfsTextSimple</td>
-<td>Read or write delimited single line records from or to plain text files on HDFS.</td>
-<td><ul>
-<li>org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter</li>
-<li>org.apache.hawq.pxf.plugins.hdfs.LineBreakAccessor</li>
-<li>org.apache.hawq.pxf.plugins.hdfs.StringPassResolver</li>
-</ul></td>
-</tr>
-<tr class="even">
-<td>HdfsTextMulti</td>
-<td>Read delimited single or multi-line records (with quoted linefeeds) from plain text files on HDFS. This profile is not splittable (non parallel); reading is slower than reading with HdfsTextSimple.</td>
-<td><ul>
-<li>org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter</li>
-<li>org.apache.hawq.pxf.plugins.hdfs.QuotedLineBreakAccessor</li>
-<li>org.apache.hawq.pxf.plugins.hdfs.StringPassResolver</li>
-</ul></td>
-</tr>
-<tr class="odd">
-<td>Hive</td>
-<td>Read a Hive table with any of the available storage formats: text, RC, ORC, Sequence, or Parquet.</td>
-<td><ul>
-<li>org.apache.hawq.pxf.plugins.hive.HiveDataFragmenter</li>
-<li>org.apache.hawq.pxf.plugins.hive.HiveAccessor</li>
-<li>org.apache.hawq.pxf.plugins.hive.HiveResolver</li>
-</ul></td>
-</tr>
-<tr class="even">
-<td>HiveRC</td>
-<td>Optimized read of a Hive table where each partition is stored as an RCFile. 
-<div class="note note">
-Note: The <code class="ph codeph">DELIMITER</code> parameter is mandatory.
-</div></td>
-<td><ul>
-<li>org.apache.hawq.pxf.plugins.hive.HiveInputFormatFragmenter</li>
-<li>org.apache.hawq.pxf.plugins.hive.HiveRCFileAccessor</li>
-<li>org.apache.hawq.pxf.plugins.hive.HiveColumnarSerdeResolver</li>
-</ul></td>
-</tr>
-<tr class="odd">
-<td>HiveText</td>
-<td>Optimized read of a Hive table where each partition is stored as a text file.
-<div class="note note">
-Note: The <code class="ph codeph">DELIMITER</code> parameter is mandatory.
-</div></td>
-<td><ul>
-<li>org.apache.hawq.pxf.plugins.hive.HiveInputFormatFragmenter</li>
-<li>org.apache.hawq.pxf.plugins.hive.HiveLineBreakAccessor</li>
-<li>org.apache.hawq.pxf.plugins.hive.HiveStringPassResolver</li>
-</ul></td>
-</tr>
-<tr class="even">
-<td>HBase</td>
-<td>Read an HBase data store engine.</td>
-<td><ul>
-<li>org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter</li>
-<li>org.apache.hawq.pxf.plugins.hbase.HBaseAccessor</li>
-<li>org.apache.hawq.pxf.plugins.hbase.HBaseResolver</li>
-</ul></td>
-</tr>
-<tr class="odd">
-<td>Avro</td>
-<td>Read Avro files (fileName.avro).</td>
-<td><ul>
-<li>org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter</li>
-<li>org.apache.hawq.pxf.plugins.hdfs.AvroFileAccessor</li>
-<li>org.apache.hawq.pxf.plugins.hdfs.AvroResolver</li>
-</ul></td>
-</tr>
-<tr class="odd">
-<td>JSON</td>
-<td>Read JSON files (fileName.json) from HDFS.</td>
-<td><ul>
-<li>org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter</li>
-<li>org.apache.hawq.pxf.plugins.json.JsonAccessor</li>
-<li>org.apache.hawq.pxf.plugins.json.JsonResolver</li>
-</ul></td>
-</tr>
-</tbody>
-</table>
-
-## <a id="addingandupdatingprofiles"></a>Adding and Updating Profiles
-
-Each profile has a mandatory unique name and an optional description. In addition, each profile contains a set of plug-ins, which are an extensible set of metadata attributes. Administrators can add new profiles or edit the built-in profiles defined in `/etc/pxf/conf/pxf-profiles.xml`.
-
-**Note:** Add the JAR files associated with custom PXF plug-ins to the `/etc/pxf/conf/pxf-public.classpath` configuration file.
-
-After you make changes in `pxf-profiles.xml` (or any other PXF configuration file), propagate the changes to all nodes with PXF installed, and then restart the PXF service on all nodes.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/pxf/TroubleshootingPXF.html.md.erb
----------------------------------------------------------------------
diff --git a/pxf/TroubleshootingPXF.html.md.erb b/pxf/TroubleshootingPXF.html.md.erb
deleted file mode 100644
index 9febe09..0000000
--- a/pxf/TroubleshootingPXF.html.md.erb
+++ /dev/null
@@ -1,273 +0,0 @@
----
-title: Troubleshooting PXF
----
-
-## <a id="pxerrortbl"></a>PXF Errors
-
-The following table lists some common errors encountered while using PXF:
-
-<table>
-<caption><span class="tablecap">Table 1. PXF Errors and Explanation</span></caption>
-<colgroup>
-<col width="50%" />
-<col width="50%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th>Error</th>
-<th>Common Explanation</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td>ERROR: invalid URI pxf://localhost:51200/demo/file1: missing options section</td>
-<td><code class="ph codeph">LOCATION</code> does not include options after the file name: <code class="ph codeph">&lt;path&gt;?&lt;key&gt;=&lt;value&gt;&amp;&lt;key&gt;=&lt;value&gt;...</code></td>
-</tr>
-<tr class="even">
-<td>ERROR: protocol &quot;pxf&quot; does not exist</td>
-<td>HAWQ is not compiled with PXF protocol. It requires the GPSQL version of HAWQ</td>
-</tr>
-<tr class="odd">
-<td>ERROR: remote component error (0) from '&lt;x&gt;': There is no pxf servlet listening on the host and port specified in the external table url.</td>
-<td>Wrong server or port, or the service is not started</td>
-</tr>
-<tr class="even">
-<td>ERROR: Missing FRAGMENTER option in the pxf uri: pxf://localhost:51200/demo/file1?a=a</td>
-<td>No <code class="ph codeph">FRAGMENTER</code> option was specified in <code class="ph codeph">LOCATION</code>.</td>
-</tr>
-<tr class="odd">
-<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message org.apache.hadoop.mapred.InvalidInputException:
-<p>Input path does not exist: hdfs://0.0.0.0:8020/demo/file1</p></td>
-<td>File or pattern given in <code class="ph codeph">LOCATION</code> doesn't exist on specified path.</td>
-</tr>
-<tr class="even">
-<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message org.apache.hadoop.mapred.InvalidInputException : Input Pattern hdfs://0.0.0.0:8020/demo/file* matches 0 files</td>
-<td>File or pattern given in <code class="ph codeph">LOCATION</code> doesn't exist on specified path.</td>
-</tr>
-<tr class="odd">
-<td>ERROR: remote component error (500) from '&lt;x&gt;': PXF not correctly installed in CLASSPATH</td>
-<td>Cannot find PXF Jar</td>
-</tr>
-<tr class="even">
-<td>ERROR: PXF API encountered a HTTP 404 error. Either the PXF service (tomcat) on the DataNode was not started or the PXF webapp was not started.</td>
-<td>Either the required DataNode does not exist or PXF service (tcServer) on the DataNode is not started or PXF webapp was not started</td>
-</tr>
-<tr class="odd">
-<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/client/HTableInterface</td>
-<td>One of the classes required for running PXF or one of its plug-ins is missing. Check that all resources in the PXF classpath files exist on the cluster nodes</td>
-</tr>
-<tr class="even">
-<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message java.io.IOException: Can't get Master Kerberos principal for use as renewer</td>
-<td>Secure PXF: YARN isn't properly configured for secure (Kerberized) HDFS installs</td>
-</tr>
-<tr class="odd">
-<td>ERROR: fail to get filesystem credential for uri hdfs://&lt;namenode&gt;:8020/</td>
-<td>Secure PXF: Wrong HDFS host or port is not 8020 (this is a limitation that will be removed in the next release)</td>
-</tr>
-<tr class="even">
-<td>ERROR: remote component error (413) from '&lt;x&gt;': HTTP status code is 413 but HTTP response string is empty</td>
-<td>The PXF table's number of attributes and their name sizes are too large for tcServer to accommodate in its request buffer. The solution is to increase the values of the maxHeaderCount and maxHttpHeaderSize parameters in server.xml on the tcServer instance on all nodes and then restart PXF:
-<p>&lt;Connector acceptCount=&quot;100&quot; connectionTimeout=&quot;20000&quot; executor=&quot;tomcatThreadPool&quot; maxKeepAliveRequests=&quot;15&quot; maxHeaderCount=&quot;&lt;some larger value&gt;&quot; maxHttpHeaderSize=&quot;&lt;some larger value in bytes&gt;&quot; port=&quot;${bio.http.port}&quot; protocol=&quot;org.apache.coyote.http11.Http11Protocol&quot; redirectPort=&quot;${bio.https.port}&quot;/&gt;</p></td>
-</tr>
-<tr class="odd">
-<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message java.lang.Exception: Class com.pivotal.pxf.&lt;plugin name&gt; does not appear in classpath. Plugins provided by PXF must start with &quot;org.apache.hawq.pxf&quot;</td>
-<td>Querying a PXF table that still uses the old package name (&quot;com.pivotal.pxf.*&quot;) results in an error message that recommends moving to the new package name (&quot;org.apache.hawq.pxf&quot;). </td>
-</tr>
-<tr class="even">
-<td><strong>HBase Specific Errors</strong></td>
-<td> </td>
-</tr>
-<tr class="odd">
-<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for t1,,99999999999999 after 10 tries.</td>
-<td>HBase service is down, probably HRegionServer</td>
-</tr>
-<tr class="even">
-<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message org.apache.hadoop.hbase.TableNotFoundException: nosuch</td>
-<td>HBase cannot find the requested table</td>
-</tr>
-<tr class="odd">
-<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/client/HTableInterface</td>
-<td>PXF cannot find a required JAR file, probably HBase's</td>
-</tr>
-<tr class="even">
-<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message java.lang.NoClassDefFoundError: org/apache/zookeeper/KeeperException</td>
-<td>PXF cannot find ZooKeeper's JAR</td>
-</tr>
-<tr class="odd">
-<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message java.lang.Exception: java.lang.IllegalArgumentException: Illegal HBase column name a, missing :</td>
-<td>PXF table has an illegal field name. Each field name must correspond to an HBase column in the syntax &lt;column family&gt;:&lt;field name&gt;</td>
-</tr>
-<tr class="even">
-<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family a does not exist in region t1,,1405517248353.85f4977bfa88f4d54211cb8ac0f4e644. in table 't1', {NAME =&gt; 'cf', DATA_BLOCK_ENCODING =&gt; 'NONE', BLOOMFILTER =&gt; 'ROW', REPLICATION_SCOPE =&gt; '0', COMPRESSION =&gt; 'NONE', VERSIONS =&gt; '1', TTL =&gt; '2147483647', MIN_VERSIONS =&gt; '0', KEEP_DELETED_CELLS =&gt; 'false', BLOCKSIZE =&gt; '65536', ENCODE_ON_DISK =&gt; 'true', IN_MEMORY =&gt; 'false', BLOCKCACHE =&gt; 'true'}</td>
-<td>Required HBase table does not contain the requested column</td>
-</tr>
-<tr class="odd">
-<td><strong>Hive-Specific Errors</strong></td>
-<td>&nbsp;</td>
-</tr>
-<tr class="even">
-<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message java.lang.RuntimeException: Failed to connect to Hive metastore: java.net.ConnectException: Connection refused</td>
-<td>Hive Metastore service is down</td>
-</tr>
-<tr class="odd">
-<td>ERROR: remote component error (500) from '&lt;x&gt;': type Exception report message
-<p>NoSuchObjectException(message:default.players table not found)</p></td>
-<td>Table doesn't exist in Hive</td>
-</tr>
-<tr class="even">
-<td><strong>JSON-Specific Errors</strong></td>
-<td>&nbsp;</td>
-</tr>
-<tr class="odd">
-<td>ERROR: No fields in record (seg0 slice1 host:&lt;n&gt; pid=&lt;n&gt;)
-<p>DETAIL: External table &lt;tablename&gt;</p></td>
-<td>Check your JSON file for empty lines; remove them and try again</td>
-</tr>
-<tr class="even">
-<td>ERROR:  remote component error (500) from host:51200:  type  Exception report   message   &lt;text&gt;[0] is not an array node    description   The server encountered an internal error that prevented it from fulfilling this request.    exception   java.io.IOException: &lt;text&gt;[0] is not an array node (libchurl.c:878)  (seg4 host:40000 pid=&lt;n&gt;)
-<p>DETAIL:  External table &lt;tablename&gt;</p></td>
-<td>JSON field assumed to be an array, but it is a scalar field.
-</td>
-</tr>
-
-</tbody>
-</table>
-
-
-## <a id="pxflogging"></a>PXF Logging
-Enabling more verbose logging may aid PXF troubleshooting efforts.
-
-PXF provides two categories of message logging - service-level and database-level.
-
-### <a id="pxfsvclogmsg"></a>Service-Level Logging
-
-PXF utilizes `log4j` for service-level logging. PXF-service-related log messages are captured in a log file specified by PXF's `log4j` properties file, `/etc/pxf/conf/pxf-log4j.properties`. The default PXF logging configuration will write `INFO` and more severe level logs to `/var/log/pxf/pxf-service.log`.
-
-PXF provides more detailed logging when the `DEBUG` level is enabled.  To configure PXF `DEBUG` logging, uncomment the following line in `pxf-log4j.properties`:
-
-``` shell
-#log4j.logger.org.apache.hawq.pxf=DEBUG
-```
-
-and restart the PXF service:
-
-``` shell
-$ sudo service pxf-service restart
-```
-
-With `DEBUG` level logging now enabled, perform your PXF operations; for example, creating and querying an external table. (Make note of the time; this will direct you to the relevant log messages in `/var/log/pxf/pxf-service.log`.)
-
-``` shell
-$ psql
-```
-
-``` sql
-gpadmin=# CREATE EXTERNAL TABLE hivetest(id int, newid int)
-    LOCATION ('pxf://namenode:51200/pxf_hive1?PROFILE=Hive')
-    FORMAT 'CUSTOM' (formatter='pxfwritable_import');
-gpadmin=# select * from hivetest;
-<select output>
-```
-
-Examine/collect the log messages from `pxf-service.log`.
-
-**Note**: `DEBUG` logging is verbose and has a performance impact.  Remember to turn off PXF service `DEBUG` logging after you have collected the desired information.
- 
-
-### <a id="pxfdblogmsg"></a>Database-Level Logging
-
-Enable HAWQ and PXF debug message logging during operations on PXF external tables by setting the `client_min_messages` server configuration parameter to `DEBUG2` in your `psql` session.
-
-``` shell
-$ psql
-```
-
-``` sql
-gpadmin=# SET client_min_messages=DEBUG2;
-gpadmin=# SELECT * FROM hivetest;
-...
-DEBUG2:  churl http header: cell #19: X-GP-URL-HOST: localhost
-DEBUG2:  churl http header: cell #20: X-GP-URL-PORT: 51200
-DEBUG2:  churl http header: cell #21: X-GP-DATA-DIR: pxf_hive1
-DEBUG2:  churl http header: cell #22: X-GP-profile: Hive
-DEBUG2:  churl http header: cell #23: X-GP-URI: pxf://namenode:51200/pxf_hive1?profile=Hive
-...
-```
-
-Examine/collect the log messages from `stdout`.
-
-**Note**: `DEBUG2` database session logging has a performance impact.  Remember to turn off `DEBUG2` logging after you have collected the desired information.
-
-``` sql
-gpadmin=# SET client_min_messages=NOTICE;
-```
-
-
-## <a id="pxf-memcfg"></a>Addressing PXF Memory Issues
-
-The Java heap size can be a limiting factor in PXF's ability to serve many concurrent requests or to run queries against large tables.
-
-You may run into situations where a query will hang or fail with an Out of Memory exception (OOM). This typically occurs when many threads are reading different data fragments from an external table and insufficient heap space exists to open all fragments at the same time. To avert or remedy this situation, Pivotal recommends first increasing the Java maximum heap size or decreasing the Tomcat maximum number of threads, depending upon what works best for your system configuration.
-
-**Note**: The configuration changes described in this topic require modifying config files on *each* PXF node in your HAWQ cluster. After performing the updates, be sure to verify that the configuration on all PXF nodes is the same.
-
-You will need to re-apply these configuration changes after any PXF version upgrades.
-
-### <a id="pxf-heapcfg"></a>Increasing the Maximum Heap Size
-
-Each PXF node is configured with a default Java heap size of 512MB. If the nodes in your cluster have an ample amount of memory, increasing the amount allocated to the PXF agents is the best approach. Pivotal recommends a heap size value between 1-2GB.
-
-Perform the following steps to increase the PXF agent heap size in your HAWQ  deployment. **You must perform the configuration changes on each PXF node in your HAWQ cluster.**
-
-1. Open `/var/pxf/pxf-service/bin/setenv.sh` in a text editor.
-
-    ``` shell
-    root@pxf-node$ vi /var/pxf/pxf-service/bin/setenv.sh
-    ```
-
-2. Update the `-Xmx` option to the desired value in the `JVM_OPTS` setting:
-
-    ``` shell
-    JVM_OPTS="-Xmx1024M -Xss256K"
-    ```
-
-3. Restart PXF:
-
-    1. If you use Ambari to manage your cluster, restart the PXF service via the Ambari console.
-    2. If you do not use Ambari, restart the PXF service from the command line on each node:
-
-        ``` shell
-        root@pxf-node$ service pxf-service restart
-        ```
-
-### <a id="pxf-threadcfg"></a>Decreasing the Maximum Number of Threads
-
-If increasing the maximum heap size is not suitable for your HAWQ cluster, try decreasing the number of concurrent working threads configured for the underlying Tomcat web application. A decrease in the number of running threads will prevent any PXF node from exhausting its memory, while ensuring that current queries run to completion (albeit a bit slower). As Tomcat's default behavior is to queue requests until a thread is free, decreasing this value will not result in denied requests.
-
-The Tomcat default maximum number of threads is 300. Pivotal recommends  decreasing the maximum number of threads to under 6. (If you plan to run large workloads on a large number of files using a Hive profile, Pivotal recommends you pick an even lower value.)
-
-Perform the following steps to decrease the maximum number of Tomcat threads in your HAWQ PXF deployment. **You must perform the configuration changes on each PXF node in your HAWQ cluster.**
-
-1. Open the `/var/pxf/pxf-service/conf/server.xml` file in a text editor.
-
-    ``` shell
-    root@pxf-node$ vi /var/pxf/pxf-service/conf/server.xml
-    ```
-
-2. Update the `Catalina` `Executor` block to identify the desired `maxThreads` value:
-
-    ``` xml
-    <Executor maxThreads="2"
-              minSpareThreads="50"
-              name="tomcatThreadPool"
-              namePrefix="tomcat-http--"/>
-    ```
-
-3. Restart PXF:
-
-    1. If you use Ambari to manage your cluster, restart the PXF service via the Ambari console.
-    2. If you do not use Ambari, restart the PXF service from the command line on each node:
-
-        ``` shell
-        root@pxf-node$ service pxf-service restart
-        ```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/query/HAWQQueryProcessing.html.md.erb
----------------------------------------------------------------------
diff --git a/query/HAWQQueryProcessing.html.md.erb b/query/HAWQQueryProcessing.html.md.erb
deleted file mode 100644
index 1d221f4..0000000
--- a/query/HAWQQueryProcessing.html.md.erb
+++ /dev/null
@@ -1,60 +0,0 @@
----
-title: About HAWQ Query Processing
----
-
-This topic provides an overview of how HAWQ processes queries. Understanding this process can be useful when writing and tuning queries.
-
-Users issue queries to HAWQ as they would to any database management system. They connect to the database instance on the HAWQ master host using a client application such as `psql` and submit SQL statements.
-
-## <a id="topic2"></a>Understanding Query Planning and Dispatch
-
-After a query is accepted on master, the master parses and analyzes the query. After completing its analysis, the master generates a query tree and provides the query tree to the query optimizer.
-
-The query optimizer generates a query plan. Given the cost information of the query plan, resources are requested from the HAWQ resource manager. After the resources are obtained, the dispatcher starts virtual segments and dispatches the query plan to virtual segments for execution.
-
-This diagram depicts basic query flow in HAWQ.
-
-<img src="../images/basic_query_flow.png" id="topic2__image_ezs_wbh_sv" class="image" width="672" />
-
-## <a id="topic3"></a>Understanding HAWQ Query Plans
-
-A query plan is the set of operations HAWQ will perform to produce the answer to a query. Each *node* or step in the plan represents a database operation such as a table scan, join, aggregation, or sort. Plans are read and executed from bottom to top.
-
-In addition to common database operations such as table scans, joins, and so on, HAWQ has an additional operation type called *motion*. A motion operation involves moving tuples between the segments during query processing. Note that not every query requires a motion. For example, a targeted query plan does not require data to move across the interconnect.
-
-To achieve maximum parallelism during query execution, HAWQ divides the work of the query plan into *slices*. A slice is a portion of the plan that segments can work on independently. A query plan is sliced wherever a *motion* operation occurs in the plan, with one slice on each side of the motion.
-
-For example, consider the following simple query involving a join between two tables:
-
-``` sql
-SELECT customer, amount
-FROM sales JOIN customer USING (cust_id)
-WHERE dateCol = '04-30-2008';
-```
-
-[Query Slice Plan](#topic3__iy140224) shows the query plan. Each segment receives a copy of the query plan and works on it in parallel.
-
-The query plan for this example has a *redistribute motion* that moves tuples between the segments to complete the join. The redistribute motion is necessary because the customer table is distributed across the segments by `cust_id`, but the sales table is distributed across the segments by `sale_id`. To perform the join, the `sales` tuples must be redistributed by `cust_id`. The plan is sliced on either side of the redistribute motion, creating *slice 1* and *slice 2*.
-
-This query plan has another type of motion operation called a *gather motion*. A gather motion is when the segments send results back up to the master for presentation to the client. Because a query plan is always sliced wherever a motion occurs, this plan also has an implicit slice at the very top of the plan (*slice 3*). Not all query plans involve a gather motion. For example, a `CREATE TABLE x AS SELECT...` statement would not have a gather motion because tuples are sent to the newly created table, not to the master.
-
-<a id="topic3__iy140224"></a>
-<span class="figtitleprefix">Figure: </span>Query Slice Plan
-
-<img src="../images/slice_plan.jpg" class="image" width="462" height="382" />
-
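-To see where HAWQ places the motion operations and slices for your own queries, you can inspect the generated plan with `EXPLAIN`. The following is a minimal sketch using the example join above; the exact plan shape depends on table distribution and statistics.
-
-``` sql
-EXPLAIN
-SELECT customer, amount
-FROM sales JOIN customer USING (cust_id)
-WHERE dateCol = '04-30-2008';
-```
-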
-## <a id="topic4"></a>Understanding Parallel Query Execution
-
-HAWQ creates a number of database processes to handle the work of a query. On the master, the query worker process is called the *query dispatcher* (QD). The QD is responsible for creating and dispatching the query plan. It also accumulates and presents the final results. On virtual segments, a query worker process is called a *query executor* (QE). A QE is responsible for completing its portion of work and communicating its intermediate results to the other worker processes.
-
-There is at least one worker process assigned to each *slice* of the query plan. A worker process works on its assigned portion of the query plan independently. During query execution, each virtual segment will have a number of processes working on the query in parallel.
-
-Related processes that are working on the same slice of the query plan but on different virtual segments are called *gangs*. As a portion of work is completed, tuples flow up the query plan from one gang of processes to the next. This inter-process communication between virtual segments is referred to as the *interconnect* component of HAWQ.
-
-[Query Worker Processes](#topic4__iy141495) shows the query worker processes on the master and two virtual segment instances for the query plan illustrated in [Query Slice Plan](#topic3__iy140224).
-
-<a id="topic4__iy141495"></a>
-<span class="figtitleprefix">Figure: </span>Query Worker Processes
-
-<img src="../images/gangs.jpg" class="image" width="318" height="288" />
-



http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/plext/using_plr.html.md.erb
----------------------------------------------------------------------
diff --git a/plext/using_plr.html.md.erb b/plext/using_plr.html.md.erb
deleted file mode 100644
index 367a1d0..0000000
--- a/plext/using_plr.html.md.erb
+++ /dev/null
@@ -1,229 +0,0 @@
----
-title: Using PL/R in HAWQ
----
-
-PL/R is a procedural language. With the HAWQ PL/R extension, you can write database functions in the R programming language and use R packages that contain R functions and data sets.
-
-**Note**: To use PL/R in HAWQ, R must be installed on each node in your HAWQ cluster. Additionally, you must install the PL/R package on an existing HAWQ deployment or have specified PL/R as a build option when compiling HAWQ.
-
-## <a id="plrexamples"></a>PL/R Examples 
-
-This section contains simple PL/R examples.
-
-### <a id="example1"></a>Example 1: Using PL/R for Single Row Operators 
-
-This function generates an array of numbers with a normal distribution using the R function `rnorm()`.
-
-```sql
-CREATE OR REPLACE FUNCTION r_norm(n integer, mean float8, 
-  std_dev float8) RETURNS float8[ ] AS
-$$
-  x<-rnorm(n,mean,std_dev)
-  return(x)
-$$
-LANGUAGE 'plr';
-```
-
-The following `CREATE TABLE` command uses the `r_norm` function to populate the table. The `r_norm` function creates an array of 10 numbers.
-
-```sql
-CREATE TABLE test_norm_var
-  AS SELECT id, r_norm(10,0,1) AS x
-  FROM (SELECT generate_series(1,30::bigint) AS id) foo
-  DISTRIBUTED BY (id);
-```
-
-### <a id="example2"></a>Example 2: Returning PL/R data.frames in Tabular Form 
-
-Assuming your PL/R function returns an R `data.frame` as its output \(unless you want to use arrays of arrays\), some work is required in order for HAWQ to see your PL/R `data.frame` as a simple SQL table:
-
-Create a TYPE in HAWQ with the same dimensions as your R `data.frame`:
-
-```sql
-CREATE TYPE t1 AS ...
-```
-
-Use this TYPE when defining your PL/R function:
-
-```sql
-... RETURNS SETOF t1 AS ...
-```
-
-Sample SQL for this situation is provided in the next example.
-
-### <a id="example3"></a>Example 3: Process Employee Information Using PL/R 
-
-The SQL below defines a TYPE and a function to process employee information with `data.frame` using PL/R:
-
-```sql
--- Create type to store employee information
-DROP TYPE IF EXISTS emp_type CASCADE;
-CREATE TYPE emp_type AS (name text, age int, salary numeric(10,2));
-
--- Create function to process employee information and return data.frame
-DROP FUNCTION IF EXISTS get_emps();
-CREATE OR REPLACE FUNCTION get_emps() RETURNS SETOF emp_type AS '
-    names <- c("Joe","Jim","Jon")
-    ages <- c(41,25,35)
-    salaries <- c(250000,120000,50000)
-    df <- data.frame(name = names, age = ages, salary = salaries)
-
-    return(df)
-' LANGUAGE 'plr';
-
--- Call the function
-SELECT * FROM get_emps();
-```
-
-
-## <a id="downloadinstallplrlibraries"></a>Downloading and Installing R Packages 
-
-R packages are modules that contain R functions and data sets. You can install R packages to extend R and PL/R functionality in HAWQ.
-
-**Note**: If you expand HAWQ and add segment hosts, you must install the R packages in the R installation of *each* of the new hosts.
-
-1. For an R package, identify all dependent R packages and each package web URL. The information can be found by selecting the given package from the following navigation page:
-
-	[http://cran.r-project.org/web/packages/available_packages_by_name.html](http://cran.r-project.org/web/packages/available_packages_by_name.html)
-
-	As an example, the page for the R package `arm` indicates that the package requires the following R libraries: `Matrix`, `lattice`, `lme4`, `R2WinBUGS`, `coda`, `abind`, `foreign`, and `MASS`.
-	
-	You can also try installing the package with `R CMD INSTALL` command to determine the dependent packages.
-	
-	For the R installation included with the HAWQ PL/R extension, the required R packages are installed with the PL/R extension. However, the Matrix package requires a newer version.
-	
-1. From the command line, use the `wget` utility to download the tar.gz files for the `arm` package to the HAWQ master host:
-
-	```shell
-	$ wget http://cran.r-project.org/src/contrib/Archive/arm/arm_1.5-03.tar.gz
-	$ wget http://cran.r-project.org/src/contrib/Archive/Matrix/Matrix_0.9996875-1.tar.gz
-	```
-
-1. Use the `hawq scp` utility and the `hawq_hosts` file to copy the tar.gz files to the same directory on all nodes of the HAWQ cluster. The `hawq_hosts` file contains a list of all of the HAWQ segment hosts. You might require root access to do this.
-
-	```shell
-	$ hawq scp -f hawq_hosts Matrix_0.9996875-1.tar.gz =:/home/gpadmin
-	$ hawq scp -f hawq_hosts arm_1.5-03.tar.gz =:/home/gpadmin
-	```
-
-1. Use the `hawq ssh` utility in interactive mode to log into each HAWQ segment host (`hawq ssh -f hawq_hosts`). Install the packages from the command prompt using the `R CMD INSTALL` command. Note that this may require root access. For example, this R install command installs the packages for the `arm` package.
-
-	```shell
-	$ R CMD INSTALL Matrix_0.9996875-1.tar.gz arm_1.5-03.tar.gz
-	```
-	**Note**: Some packages require compilation. Refer to the package documentation for possible build requirements.
-
-1. Ensure that the R package was installed in the `/usr/lib64/R/library` directory on all segments (you can use `hawq ssh` to check this across hosts). For example, this `hawq ssh` command lists the contents of the R library directory.
-
-	```shell
-	$ hawq ssh -f hawq_hosts "ls /usr/lib64/R/library"
-	```
-	
-1. Verify the R package can be loaded.
-
-	This function performs a simple test to determine if an R package can be loaded:
-	
-	```sql
-	CREATE OR REPLACE FUNCTION R_test_require(fname text)
-	RETURNS boolean AS
-	$BODY$
-    	return(require(fname,character.only=T))
-	$BODY$
-	LANGUAGE 'plr';
-	```
-
-	This SQL command calls the previous function to determine if the R package `arm` can be loaded:
-	
-	```sql
-	SELECT R_test_require('arm');
-	```
-
-## <a id="rlibrarydisplay"></a>Displaying R Library Information 
-
-You can use the R command line to display information about the installed libraries and functions on the HAWQ host. You can also add and remove libraries from the R installation. To start the R command line on the host, log in to the host as the `gpadmin` user and run `R`.
-
-``` shell
-$ R
-```
-
-This R function lists the available R packages from the R command line:
-
-```r
-> library()
-```
-
-Display the documentation for a particular R package:
-
-```r
-> library(help="package_name")
-> help(package="package_name")
-```
-
-Display the help file for an R function:
-
-```r
-> help("function_name")
-> ?function_name
-```
-
-To see which packages are installed, use the R command `installed.packages()`. This returns a matrix with a row for each package that has been installed.
-
-```r
-> installed.packages()
-```
-
-Any package that does not appear in the installed packages matrix must be installed and loaded before its functions can be used.
-
-An R package can be installed with `install.packages()`:
-
-```r
-> install.packages("package_name") 
-> install.packages("mypkg", dependencies = TRUE, type="source")
-```
-
-Load a package from the R command line.
-
-```r
-> library("package_name")
-```
-An R package can be removed with `remove.packages()`:
-
-```r
-> remove.packages("package_name")
-```
-
-You can use the R `-e` command-line option to run functions from the command line. For example, this command displays help on the R package named `MASS`.
-
-```shell
-$ R -e 'help("MASS")'
-```
-
-## <a id="plrreferences"></a>References 
-
-[http://www.r-project.org/](http://www.r-project.org/) - The R Project home page
-
-[https://github.com/pivotalsoftware/gp-r](https://github.com/pivotalsoftware/gp-r) - GitHub repository that contains information about using R.
-
-[https://github.com/pivotalsoftware/PivotalR](https://github.com/pivotalsoftware/PivotalR) - GitHub repository for PivotalR, a package that provides an R interface to operate on HAWQ tables and views that is similar to the R `data.frame`. PivotalR also supports using the machine learning package MADlib directly from R.
-
-R documentation is installed with the R package:
-
-```shell
-/usr/share/doc/R-N.N.N
-```
-
-where N.N.N corresponds to the version of R installed.
-
-### <a id="rfunctions"></a>R Functions and Arguments 
-
-See [http://www.joeconway.com/plr/doc/plr-funcs.html](http://www.joeconway.com/plr/doc/plr-funcs.html).
-
-### <a id="passdatavalues"></a>Passing Data Values in R 
-
-See [http://www.joeconway.com/plr/doc/plr-data.html](http://www.joeconway.com/plr/doc/plr-data.html).
-
-### <a id="aggregatefunctions"></a>Aggregate Functions in R 
-
-See [http://www.joeconway.com/plr/doc/plr-aggregate-funcs.html](http://www.joeconway.com/plr/doc/plr-aggregate-funcs.html).
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/pxf/ConfigurePXF.html.md.erb
----------------------------------------------------------------------
diff --git a/pxf/ConfigurePXF.html.md.erb b/pxf/ConfigurePXF.html.md.erb
deleted file mode 100644
index fec6b27..0000000
--- a/pxf/ConfigurePXF.html.md.erb
+++ /dev/null
@@ -1,69 +0,0 @@
----
-title: Configuring PXF
----
-
-This topic describes how to configure the PXF service.
-
-**Note:** After you make any changes to a PXF configuration file (such as `pxf-profiles.xml` for adding custom profiles), propagate the changes to all nodes with PXF installed, and then restart the PXF service on all nodes.
-
-## <a id="settingupthejavaclasspath"></a>Setting up the Java Classpath
-
-The classpath for the PXF service is set during the plug-in installation process. Administrators should only modify it when adding new PXF connectors. The classpath is defined in two files:
-
-1.  `/etc/pxf/conf/pxf-private.classpath` – contains all the required resources to run the PXF service, including the pxf-hdfs, pxf-hbase, and pxf-hive plug-ins. This file must not be edited or removed.
-2.  `/etc/pxf/conf/pxf-public.classpath` – add plug-in JAR files and any dependent JAR files for custom plug-ins and custom profiles here. Define the classpath resources one per line (see the example entries below). Wildcard characters can be used in the name of the resource, but not in the full path. See [Adding and Updating Profiles](ReadWritePXF.html#addingandupdatingprofiles) for information on adding custom profiles.
-
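-As a sketch, entries in `pxf-public.classpath` might look like the following; the JAR names and paths are hypothetical and depend on where your custom plug-in and its dependencies are installed:
-
-``` pre
-/usr/lib/pxf/my-custom-connector.jar
-/opt/my-connector/lib/*.jar
-```
-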
-After changing the classpath files, the PXF service must be restarted.
-
-## <a id="settingupthejvmcommandlineoptionsforpxfservice"></a>Setting up the JVM Command Line Options for the PXF Service
-
-The PXF service JVM command line options can be added or modified for each pxf-service instance in the `/var/pxf/pxf-service/bin/setenv.sh` file:
-
-Currently the `JVM_OPTS` parameter is set with the following values for maximum Java heap size and thread stack size:
-
-``` shell
-JVM_OPTS="-Xmx512M -Xss256K"
-```
-
-After adding or modifying the JVM command line options, the PXF service must be restarted.
-
-(Refer to [Addressing PXF Memory Issues](TroubleshootingPXF.html#pxf-memcfg) for a related discussion of the configuration options available to address memory issues in your PXF deployment.)
-
-## <a id="topic_i3f_hvm_ss"></a>Using PXF on a Secure HDFS Cluster
-
-You can use PXF on a secure HDFS cluster. Read, write, and analyze operations are enabled for PXF tables on HDFS files. No changes are required to preexisting PXF tables from a previous version.
-
-### <a id="requirements"></a>Requirements
-
--   Both HDFS and YARN principals are created and are properly configured.
--   HAWQ is correctly configured to work in secure mode.
-
-Please refer to [Troubleshooting PXF](TroubleshootingPXF.html) for common errors related to PXF security and their meaning.
-
-## <a id="credentialsforremoteservices"></a>Credentials for Remote Services
-
-Credentials for remote services allow a PXF plug-in to access a remote service that requires credentials.
-
-### <a id="inhawq"></a>In HAWQ
-
-Two parameters for credentials are implemented in HAWQ:
-
--   `pxf_remote_service_login` – a string of characters detailing login information (for example, a user name).
--   `pxf_remote_service_secret` – a string of characters detailing information that is considered secret (for example, a password).
-
-Currently, the contents of the two parameters are stored in memory, without any security, for the whole session. The contents of the parameters are dropped when the session ends.
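-
-As a minimal sketch, you could set these session-level parameters with `SET` before querying a PXF external table; the values shown are placeholders:
-
-``` sql
-SET pxf_remote_service_login = 'pxfuser';
-SET pxf_remote_service_secret = 'changeme';
-```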
-
-**Important:** These parameters are temporary and could soon be deprecated, in favor of a complete solution for managing credentials for remote services in PXF.
-
-### <a id="inapxfplugin"></a>In a PXF Plug-in
-
-In a PXF plug-in, the contents of the two credential parameters are available through the following InputData API functions:
-
-``` java
-string getLogin()
-string getSecret()
-```
-
-Both functions return `null` if the corresponding HAWQ parameter was set to an empty string or was not set at all.
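-
-The following is a minimal sketch of how a custom plug-in might read these values; the surrounding fragment and the `inputData` variable are hypothetical, and only the `getLogin()` and `getSecret()` calls come from the API described above:
-
-``` java
-// Hypothetical fragment inside a custom PXF plug-in that already holds an
-// InputData instance named inputData.
-String login  = inputData.getLogin();   // contents of pxf_remote_service_login, or null
-String secret = inputData.getSecret();  // contents of pxf_remote_service_secret, or null
-if (login != null && secret != null) {
-    // Pass the credentials to the remote service's client library here.
-}
-```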
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/pxf/HBasePXF.html.md.erb
----------------------------------------------------------------------
diff --git a/pxf/HBasePXF.html.md.erb b/pxf/HBasePXF.html.md.erb
deleted file mode 100644
index 8b89730..0000000
--- a/pxf/HBasePXF.html.md.erb
+++ /dev/null
@@ -1,105 +0,0 @@
----
-title: Accessing HBase Data
----
-
-## <a id="installingthepxfhbaseplugin"></a>Prerequisites
-
-Before trying to access HBase data with PXF, verify the following:
-
--   The `/etc/hbase/conf/hbase-env.sh` configuration file must reference the `pxf-hbase.jar`. For example, `/etc/hbase/conf/hbase-env.sh` should include the line:
-
-    ``` bash
-    export HBASE_CLASSPATH=${HBASE_CLASSPATH}:/usr/lib/pxf/pxf-hbase.jar
-    ```
-
-    **Note:** You must restart HBase after making any changes to the HBase configuration.
-
--   PXF HBase plug-in is installed on all cluster nodes.
--   HBase and ZooKeeper jars are installed on all cluster nodes.
-
-## <a id="syntax3"></a>Syntax
-
-To create an external HBase table, use the following syntax:
-
-``` sql
-CREATE [READABLE|WRITABLE] EXTERNAL TABLE table_name 
-    ( column_name data_type [, ...] | LIKE other_table )
-LOCATION ('pxf://namenode[:port]/hbase-table-name?Profile=HBase')
-FORMAT 'CUSTOM' (Formatter='pxfwritable_import');
-```
-
-The HBase profile is equivalent to the following PXF parameters:
-
--   Fragmenter=org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter
--   Accessor=org.apache.hawq.pxf.plugins.hbase.HBaseAccessor
--   Resolver=org.apache.hawq.pxf.plugins.hbase.HBaseResolver
-
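-As a point of reference, a location string that names these classes explicitly instead of using the profile might look like the following sketch; the table and host names are illustrative, and the profile form shown above is the usual way to write this:
-
-``` sql
-CREATE EXTERNAL TABLE hbase_explicit_classes (recordkey bytea, "cf1:saleid" int)
-LOCATION ('pxf://namenode:51200/hbase-table-name?Fragmenter=org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter&Accessor=org.apache.hawq.pxf.plugins.hbase.HBaseAccessor&Resolver=org.apache.hawq.pxf.plugins.hbase.HBaseResolver')
-FORMAT 'CUSTOM' (Formatter='pxfwritable_import');
-```
-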
-## <a id="columnmapping"></a>Column Mapping
-
-Most HAWQ external tables (PXF or others) require that the HAWQ table attributes match the source data record layout and include all the available attributes. With HAWQ, however, you use the PXF HBase plug-in to specify the subset of HBase qualifiers that define the HAWQ PXF table. To set up a clear mapping between each attribute in the PXF table and a specific qualifier in the HBase table, you can use either direct mapping or indirect mapping. In addition, the HBase row key is handled in a special way.
-
-### <a id="rowkey"></a>Row Key
-
-You can use the HBase table row key in several ways. For example, you can display row key values in query results, or you can filter on a range of row key values in a WHERE clause. To use the row key in a HAWQ query, define the HAWQ table with the reserved PXF attribute `recordkey`. This attribute name tells PXF to return the record key in any key-value based system, including HBase.
-
-**Note:** Because HBase is byte-based rather than character-based, you should define the recordkey as type bytea. Doing so can improve data filtering and performance.
-
-``` sql
-CREATE EXTERNAL TABLE <tname> (recordkey bytea, ... ) LOCATION ('pxf:// ...')
-```
-
-### <a id="directmapping"></a>Direct Mapping
-
-Use Direct Mapping to map HAWQ table attributes to HBase qualifiers. You can specify the HBase qualifier names of interest, with column family names included, as quoted values.
-
-For example, you have defined an HBase table called�`hbase_sales` with multiple column families and many qualifiers. To create a HAWQ table with these attributes:
-
--   `rowkey`
--   qualifier `saleid` in the�column family `cf1`
--   qualifier `comments` in the column family `cf8`
-
-use the following `CREATE EXTERNAL TABLE` syntax:
-
-``` sql
-CREATE EXTERNAL TABLE hbase_sales (
-  recordkey bytea,
-  "cf1:saleid" int,
-  "cf8:comments" varchar
-) ...
-```
-
-The PXF HBase plug-in uses these attribute names as-is and returns the values of these HBase qualifiers.
-
-### <a id="indirectmappingvialookuptable"></a>Indirect Mapping (via Lookup Table)
-
-The direct mapping method is fast and intuitive, but using indirect mapping helps to reconcile HBase qualifier names with HAWQ behavior:
-
--   HBase qualifier names may be longer than 32 characters. HAWQ has a 32-character limit on attribute name size.
--   HBase qualifier names can be binary or non-printable. HAWQ attribute names are character based.
-
-In either case, indirect mapping uses a lookup table in HBase. You can create the lookup table to store all necessary lookup information. This works as a template for any future queries. The name of the lookup table must be `pxflookup`, and it must include the column family named `mapping`.
-
-Using the sales example from Direct Mapping, the `rowkey` represents the HBase table name, and the `mapping` column family stores the actual attribute mapping in the key-value form `<hawq attr name>=<hbase cf:qualifier>`.
-
-#### <a id="example5"></a>Example
-
-This example maps the `saleid` qualifier in the `cf1` column family to the HAWQ `id` column and the `comments` qualifier in the `cf8` family to the HAWQ `cmts` column.
-
-| (row key) | mapping           |
-|-----------|-------------------|
-| sales     | id=cf1:saleid     |
-| sales     | cmts=cf8:comments |
-
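-One possible way to populate these lookup rows from the HBase shell is sketched below; it assumes the HAWQ attribute name is stored as the qualifier under the `mapping` column family, with the HBase `<column family>:<qualifier>` as the value:
-
-``` pre
-put 'pxflookup', 'sales', 'mapping:id', 'cf1:saleid'
-put 'pxflookup', 'sales', 'mapping:cmts', 'cf8:comments'
-```
-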
-The mapping assigns a new name to each qualifier. You can use these names in your HAWQ table definition:
-
-``` sql
-CREATE EXTERNAL TABLE hbase_sales (
-  recordkey bytea
-  id int,
-  cmts varchar
-) ...
-```
-
-PXF automatically matches HAWQ to HBase column names when a `pxflookup` table exists in HBase.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/pxf/HDFSFileDataPXF.html.md.erb
----------------------------------------------------------------------
diff --git a/pxf/HDFSFileDataPXF.html.md.erb b/pxf/HDFSFileDataPXF.html.md.erb
deleted file mode 100644
index 2021565..0000000
--- a/pxf/HDFSFileDataPXF.html.md.erb
+++ /dev/null
@@ -1,452 +0,0 @@
----
-title: Accessing HDFS File Data
----
-
-HDFS is the primary distributed storage mechanism used by Apache Hadoop applications. The PXF HDFS plug-in reads file data stored in HDFS.  The plug-in supports plain delimited and comma-separated-value format text files.  The HDFS plug-in also supports the Avro binary format.
-
-This section describes how to use PXF to access HDFS data, including how to create and query an external table from files in the HDFS data store.
-
-## <a id="hdfsplugin_prereq"></a>Prerequisites
-
-Before working with HDFS file data using HAWQ and PXF, ensure that:
-
--   The HDFS plug-in is installed on all cluster nodes. See [Installing PXF Plug-ins](InstallPXFPlugins.html) for PXF plug-in installation information.
--   All HDFS users have read permission to HDFS services, and write permission is restricted to specific users.
-
-## <a id="hdfsplugin_fileformats"></a>HDFS File Formats
-
-The PXF HDFS plug-in supports reading the following file formats:
-
-- Text File - comma-separated value (.csv) or delimited format plain text file
-- Avro - JSON-defined, schema-based data serialization format
-
-The PXF HDFS plug-in includes the following profiles to support the file formats listed above:
-
-- `HdfsTextSimple` - text files
-- `HdfsTextMulti` - text files with embedded line feeds
-- `Avro` - Avro files
-
-If you find that the pre-defined PXF HDFS profiles do not meet your needs, you may choose to create a custom HDFS profile from the existing HDFS serialization and deserialization classes. Refer to [Adding and Updating Profiles](ReadWritePXF.html#addingandupdatingprofiles) for information on creating a custom profile.
-
-## <a id="hdfsplugin_cmdline"></a>HDFS Shell Commands
-Hadoop includes command-line tools that interact directly with HDFS.  These tools support typical file system operations including copying and listing files, changing file permissions, and so forth.
-
-The HDFS file system command syntax is `hdfs dfs <options> [<file>]`. Invoked with no options, `hdfs dfs` lists the file system options supported by the tool.
-
-The user invoking the `hdfs dfs` command must have sufficient privileges to the HDFS data store to perform HDFS file system operations. Specifically, the user must have write permission to HDFS to create directories and files.
-
-`hdfs dfs` options used in this topic are:
-
-| Option  | Description |
-|-------|-------------------------------------|
-| `-cat`    | Display file contents. |
-| `-mkdir`    | Create directory in HDFS. |
-| `-put`    | Copy file from local file system to HDFS. |
-
-Examples:
-
-Create a directory in HDFS:
-
-``` shell
-$ hdfs dfs -mkdir -p /data/exampledir
-```
-
-Copy a text file to HDFS:
-
-``` shell
-$ hdfs dfs -put /tmp/example.txt /data/exampledir/
-```
-
-Display the contents of a text file in HDFS:
-
-``` shell
-$ hdfs dfs -cat /data/exampledir/example.txt
-```
-
-
-## <a id="hdfsplugin_queryextdata"></a>Querying External HDFS Data
-The PXF HDFS plug-in supports the `HdfsTextSimple`, `HdfsTextMulti`, and `Avro` profiles.
-
-Use the following syntax to create a HAWQ external table representing HDFS data:
-
-``` sql
-CREATE EXTERNAL TABLE <table_name> 
-    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
-LOCATION ('pxf://<host>[:<port>]/<path-to-hdfs-file>
-    ?PROFILE=HdfsTextSimple|HdfsTextMulti|Avro[&<custom-option>=<value>[...]]')
-FORMAT '[TEXT|CSV|CUSTOM]' (<formatting-properties>);
-```
-
-HDFS-plug-in-specific keywords and values used in the [CREATE EXTERNAL TABLE](../reference/sql/CREATE-EXTERNAL-TABLE.html) call are described in the table below.
-
-| Keyword  | Value |
-|-------|-------------------------------------|
-| \<host\>[:\<port\>]    | The HDFS NameNode and port. |
-| \<path-to-hdfs-file\>    | The path to the file in the HDFS data store. |
-| PROFILE    | The `PROFILE` keyword must specify one of the values `HdfsTextSimple`, `HdfsTextMulti`, or `Avro`. |
-| \<custom-option\>  | \<custom-option\> is profile-specific. Profile-specific options are discussed in the relevant profile topic later in this section.|
-| FORMAT 'TEXT' | Use '`TEXT`' `FORMAT` with the `HdfsTextSimple` profile when \<path-to-hdfs-file\> references a plain text delimited file.  |
-| FORMAT 'CSV' | Use '`CSV`' `FORMAT` with `HdfsTextSimple` and `HdfsTextMulti` profiles when \<path-to-hdfs-file\> references a comma-separated value file.  |
-| FORMAT 'CUSTOM' | Use the `CUSTOM` `FORMAT` with the `Avro` profile. The `Avro` '`CUSTOM`' `FORMAT` supports only the built-in `(formatter='pxfwritable_import')` \<formatting-property\>. |
-| \<formatting-properties\>    | \<formatting-properties\> are profile-specific. Profile-specific formatting options are discussed in the relevant profile topic later in this section. |
-
-*Note*: When creating PXF external tables, you cannot use the `HEADER` option in your `FORMAT` specification.
-
-## <a id="profile_hdfstextsimple"></a>HdfsTextSimple Profile
-
-Use the `HdfsTextSimple` profile when reading plain text delimited or .csv files where each row is a single record.
-
-\<formatting-properties\> supported by the `HdfsTextSimple` profile include:
-
-| Keyword  | Value |
-|-------|-------------------------------------|
-| delimiter    | The delimiter character in the file. Default value is a comma `,`.|
-
-### <a id="profile_hdfstextsimple_query"></a>Example: Using the HdfsTextSimple Profile
-
-Perform the following steps to create a sample data file, copy the file to HDFS, and use the `HdfsTextSimple` profile to create PXF external tables to query the data:
-
-1. Create an HDFS directory for PXF example data files:
-
-    ``` shell
-    $ hdfs dfs -mkdir -p /data/pxf_examples
-    ```
-
-2. Create a delimited plain text data file named `pxf_hdfs_simple.txt`:
-
-    ``` shell
-    $ echo 'Prague,Jan,101,4875.33
-Rome,Mar,87,1557.39
-Bangalore,May,317,8936.99
-Beijing,Jul,411,11600.67' > /tmp/pxf_hdfs_simple.txt
-    ```
-
-    Note the use of the comma `,` to separate the four data fields.
-
-4. Add the data file to HDFS:
-
-    ``` shell
-    $ hdfs dfs -put /tmp/pxf_hdfs_simple.txt /data/pxf_examples/
-    ```
-
-5. Display the contents of the `pxf_hdfs_simple.txt` file stored in HDFS:
-
-    ``` shell
-    $ hdfs dfs -cat /data/pxf_examples/pxf_hdfs_simple.txt
-    ```
-
-1. Use the `HdfsTextSimple` profile to create a queryable HAWQ external table from the `pxf_hdfs_simple.txt` file you previously created and added to HDFS:
-
-    ``` sql
-    gpadmin=# CREATE EXTERNAL TABLE pxf_hdfs_textsimple(location text, month text, num_orders int, total_sales float8)
-                LOCATION ('pxf://namenode:51200/data/pxf_examples/pxf_hdfs_simple.txt?PROFILE=HdfsTextSimple')
-              FORMAT 'TEXT' (delimiter=E',');
-    gpadmin=# SELECT * FROM pxf_hdfs_textsimple;          
-    ```
-
-    ``` pre
-       location    | month | num_orders | total_sales 
-    ---------------+-------+------------+-------------
-     Prague        | Jan   |        101 |     4875.33
-     Rome          | Mar   |         87 |     1557.39
-     Bangalore     | May   |        317 |     8936.99
-     Beijing       | Jul   |        411 |    11600.67
-    (4 rows)
-    ```
-
-2. Create a second external table from `pxf_hdfs_simple.txt`, this time using the `CSV` `FORMAT`:
-
-    ``` sql
-    gpadmin=# CREATE EXTERNAL TABLE pxf_hdfs_textsimple_csv(location text, month text, num_orders int, total_sales float8)
-                LOCATION ('pxf://namenode:51200/data/pxf_examples/pxf_hdfs_simple.txt?PROFILE=HdfsTextSimple')
-              FORMAT 'CSV';
-    gpadmin=# SELECT * FROM pxf_hdfs_textsimple_csv;          
-    ```
-
-    When specifying `FORMAT 'CSV'` for a comma-separated value file, no `delimiter` formatter option is required, as comma is the default.
-
-## <a id="profile_hdfstextmulti"></a>HdfsTextMulti Profile
-
-Use the `HdfsTextMulti` profile when reading plain text files with delimited single- or multi-line records that include embedded (quoted) linefeed characters.
-
-\<formatting-properties\> supported by the `HdfsTextMulti` profile include:
-
-| Keyword  | Value |
-|-------|-------------------------------------|
-| delimiter    | The delimiter character in the file. |
-
-### <a id="profile_hdfstextmulti_query"></a>Example: Using the HdfsTextMulti Profile
-
-Perform the following steps to create a sample data file, copy the file to HDFS, and use the `HdfsTextMulti` profile to create a PXF external table to query the data:
-
-1. Create a second delimited plain text file:
-
-    ``` shell
-    $ vi /tmp/pxf_hdfs_multi.txt
-    ```
-
-2. Copy/paste the following data into `pxf_hdfs_multi.txt`:
-
-    ``` pre
-    "4627 Star Rd.
-    San Francisco, CA  94107":Sept:2017
-    "113 Moon St.
-    San Diego, CA  92093":Jan:2018
-    "51 Belt Ct.
-    Denver, CO  90123":Dec:2016
-    "93114 Radial Rd.
-    Chicago, IL  60605":Jul:2017
-    "7301 Brookview Ave.
-    Columbus, OH  43213":Dec:2018
-    ```
-
-    Notice the use of the colon `:` to separate the three fields. Also notice the quotes around the first (address) field. This field includes an embedded line feed separating the street address from the city and state.
-
-3. Add the data file to HDFS:
-
-    ``` shell
-    $ hdfs dfs -put /tmp/pxf_hdfs_multi.txt /data/pxf_examples/
-    ```
-
-4. Use the `HdfsTextMulti` profile to create a queryable external table from the `pxf_hdfs_multi.txt` HDFS file, making sure to identify the `:` as the field separator:
-
-    ``` sql
-    gpadmin=# CREATE EXTERNAL TABLE pxf_hdfs_textmulti(address text, month text, year int)
-                LOCATION ('pxf://namenode:51200/data/pxf_examples/pxf_hdfs_multi.txt?PROFILE=HdfsTextMulti')
-              FORMAT 'CSV' (delimiter=E':');
-    ```
-    
-2. Query the `pxf_hdfs_textmulti` table:
-
-    ``` sql
-    gpadmin=# SELECT * FROM pxf_hdfs_textmulti;
-    ```
-
-    ``` pre
-             address          | month | year 
-    --------------------------+-------+------
-     4627 Star Rd.            | Sept  | 2017
-     San Francisco, CA  94107           
-     113 Moon St.             | Jan   | 2018
-     San Diego, CA  92093               
-     51 Belt Ct.              | Dec   | 2016
-     Denver, CO  90123                  
-     93114 Radial Rd.         | Jul   | 2017
-     Chicago, IL  60605                 
-     7301 Brookview Ave.      | Dec   | 2018
-     Columbus, OH  43213                
-    (5 rows)
-    ```
-
-## <a id="profile_hdfsavro"></a>Avro Profile
-
-Apache Avro is a data serialization framework where the data is serialized in a compact binary format. 
-
-Avro specifies that data types be defined in JSON. Avro format files have an independent schema, also defined in JSON. An Avro schema, together with its data, is fully self-describing.
-
-### <a id="profile_hdfsavrodatamap"></a>Data Type Mapping
-
-Avro supports both primitive and complex data types. 
-
-To represent Avro primitive data types in HAWQ, map data values to HAWQ columns of the same type. 
-
-Avro supports complex data types including arrays, maps, records, enumerations, and fixed types. Map top-level fields of these complex data types to the HAWQ `TEXT` type. While HAWQ does not natively support these types, you can create HAWQ functions or application code to extract or further process subcomponents of these complex data types.
-
-The following table summarizes external mapping rules for Avro data.
-
-<a id="topic_oy3_qwm_ss__table_j4s_h1n_ss"></a>
-
-| Avro Data Type                                                    | PXF/HAWQ Data Type                                                                                                                                                                                            |
-|-------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Primitive type (int, double, float, long, string, bytes, boolean) | Use the corresponding HAWQ built-in data type; see [Data Types](../reference/HAWQDataTypes.html). |
-| Complex type: Array, Map, Record, or Enum                         | TEXT, with delimiters inserted between collection items, mapped key-value pairs, and record data.                                                                                           |
-| Complex type: Fixed                                               | BYTEA                                                                                                                                                                                               |
-| Union                                                             | Follows the above conventions for primitive or complex data types, depending on the union; supports Null values.                                                                     |
-
-### <a id="profile_hdfsavroptipns"></a>Avro-Specific Custom Options
-
-For complex types, the PXF `Avro` profile inserts default delimiters between collection items and values. You can use non-default delimiter characters by identifying values for specific `Avro` custom options in the `CREATE EXTERNAL TABLE` call. 
-
-The `Avro` profile supports the following \<custom-options\>:
-
-| Option Name   | Description       
-|---------------|--------------------|                                                                                        
-| COLLECTION_DELIM | The delimiter character(s) to place between entries in a top-level array, map, or record field when PXF maps an Avro complex data type to a text column. The default is the comma `,` character. |
-| MAPKEY_DELIM | The delimiter character(s) to place between the key and value of a map entry when PXF maps an Avro complex data type to a text column. The default is the colon `:` character. |
-| RECORDKEY_DELIM | The delimiter character(s) to place between the field name and value of a record entry when PXF maps an Avro complex data type to a text column. The default is the colon `:` character. |
-
-
-### <a id="topic_tr3_dpg_ts__section_m2p_ztg_ts"></a>Avro Schemas and Data
-
-Avro schemas are defined using JSON, and composed of the same primitive and complex types identified in the data mapping section above. Avro schema files typically have a `.avsc` suffix.
-
-Fields in an Avro schema file are defined via an array of objects, each of which is specified by a name and a type.
-
-
-### <a id="topic_tr3_dpg_ts_example"></a>Example: Using the Avro Profile
-
-The examples in this section will operate on Avro data with the following record schema:
-
-- id - long
-- username - string
-- followers - array of string
-- fmap - map of long
-- address - record comprised of street number (int), street name (string), and city (string)
-- relationship - enumerated type
-
-
-#### <a id="topic_tr3_dpg_ts__section_m2p_ztg_ts_99"></a>Create Schema
-
-Perform the following operations to create an Avro schema to represent the example schema described above.
-
-1. Create a file named `avro_schema.avsc`:
-
-    ``` shell
-    $ vi /tmp/avro_schema.avsc
-    ```
-
-2. Copy and paste the following text into `avro_schema.avsc`:
-
-    ``` json
-    {
-    "type" : "record",
-      "name" : "example_schema",
-      "namespace" : "com.example",
-      "fields" : [ {
-        "name" : "id",
-        "type" : "long",
-        "doc" : "Id of the user account"
-      }, {
-        "name" : "username",
-        "type" : "string",
-        "doc" : "Name of the user account"
-      }, {
-        "name" : "followers",
-        "type" : {"type": "array", "items": "string"},
-        "doc" : "Users followers"
-      }, {
-        "name": "fmap",
-        "type": {"type": "map", "values": "long"}
-      }, {
-        "name": "relationship",
-        "type": {
-            "type": "enum",
-            "name": "relationshipEnum",
-            "symbols": ["MARRIED","LOVE","FRIEND","COLLEAGUE","STRANGER","ENEMY"]
-        }
-      }, {
-        "name": "address",
-        "type": {
-            "type": "record",
-            "name": "addressRecord",
-            "fields": [
-                {"name":"number", "type":"int"},
-                {"name":"street", "type":"string"},
-                {"name":"city", "type":"string"}]
-        }
-      } ],
-      "doc:" : "A basic schema for storing messages"
-    }
-    ```
-
-#### <a id="topic_tr3_dpgspk_15g_tsdata"></a>Create Avro Data File (JSON)
-
-Perform the following steps to create a sample Avro data file conforming to the above schema.
-
-1.  Create a text file named `pxf_hdfs_avro.txt`:
-
-    ``` shell
-    $ vi /tmp/pxf_hdfs_avro.txt
-    ```
-
-2. Enter the following data into `pxf_hdfs_avro.txt`:
-
-    ``` pre
-    {"id":1, "username":"john","followers":["kate", "santosh"], "relationship": "FRIEND", "fmap": {"kate":10,"santosh":4}, "address":{"number":1, "street":"renaissance drive", "city":"san jose"}}
-    
-    {"id":2, "username":"jim","followers":["john", "pam"], "relationship": "COLLEAGUE", "fmap": {"john":3,"pam":3}, "address":{"number":9, "street":"deer creek", "city":"palo alto"}}
-    ```
-
-    The sample data uses a comma `,` to separate top level records and a colon `:` to separate map/key values and record field name/values.
-
-3. Convert the text file to Avro format. There are various ways to perform the conversion, both programmatically and via the command line. In this example, we use the [Java Avro tools](http://avro.apache.org/releases.html); the jar file resides in the current directory:
-
-    ``` shell
-    $ java -jar ./avro-tools-1.8.1.jar fromjson --schema-file /tmp/avro_schema.avsc /tmp/pxf_hdfs_avro.txt > /tmp/pxf_hdfs_avro.avro
-    ```
-
-    The generated Avro binary data file is written to `/tmp/pxf_hdfs_avro.avro`. 
-    
-4. Copy the generated Avro file to HDFS:
-
-    ``` shell
-    $ hdfs dfs -put /tmp/pxf_hdfs_avro.avro /data/pxf_examples/
-    ```
-    
-#### <a id="topic_avro_querydata"></a>Query With Avro Profile
-
-Perform the following steps to create and query an external table accessing the `pxf_hdfs_avro.avro` file you added to HDFS in the previous section. When creating the table:
-
--  Map the top-level primitive fields, `id` (type long) and `username` (type string), to their equivalent HAWQ types (bigint and text). 
--  Map the remaining complex fields to type text.
--  Explicitly set the record, map, and collection delimiters using the Avro profile custom options.
-
-
-1. Use the `Avro` profile to create a queryable external table from the `pxf_hdfs_avro.avro` file:
-
-    ``` sql
-    gpadmin=# CREATE EXTERNAL TABLE pxf_hdfs_avro(id bigint, username text, followers text, fmap text, relationship text, address text)
-                LOCATION ('pxf://namenode:51200/data/pxf_examples/pxf_hdfs_avro.avro?PROFILE=Avro&COLLECTION_DELIM=,&MAPKEY_DELIM=:&RECORDKEY_DELIM=:')
-              FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
-    ```
-
-2. Perform a simple query of the `pxf_hdfs_avro` table:
-
-    ``` sql
-    gpadmin=# SELECT * FROM pxf_hdfs_avro;
-    ```
-
-    ``` pre
-     id | username |   followers    |        fmap         | relationship |                      address                      
-    ----+----------+----------------+--------------------+--------------+---------------------------------------------------
-      1 | john     | [kate,santosh] | {kate:10,santosh:4} | FRIEND       | {number:1,street:renaissance drive,city:san jose}
-      2 | jim      | [john,pam]     | {pam:3,john:3}      | COLLEAGUE    | {number:9,street:deer creek,city:palo alto}
-    (2 rows)
-    ```
-
-    The simple query of the external table shows the components of the complex type data separated with the delimiters identified in the `CREATE EXTERNAL TABLE` call.
-
-
-3. Process the delimited components in the text columns as necessary for your application. For example, the following command uses the HAWQ internal `string_to_array` function to convert entries in the `followers` field to a text array column in a new view.
-
-    ``` sql
-    gpadmin=# CREATE VIEW followers_view AS
-                SELECT username, address,
-                       string_to_array(substring(followers FROM 2 FOR (char_length(followers) - 2)), ',')::text[] AS followers
-                FROM pxf_hdfs_avro;
-    ```
-
-4. Query the view to filter rows based on whether a particular follower appears in the array:
-
-    ``` sql
-    gpadmin=# SELECT username, address FROM followers_view WHERE followers @> '{john}';
-    ```
-
-    ``` pre
-     username |                   address                   
-    ----------+---------------------------------------------
-     jim      | {number:9,street:deer creek,city:palo alto}
-    ```
-
-## <a id="accessdataonahavhdfscluster"></a>Accessing HDFS Data in a High Availability HDFS Cluster
-
-To access external HDFS data in a High Availability HDFS cluster, change the `CREATE EXTERNAL TABLE` `LOCATION` clause to use \<HA-nameservice\> rather than \<host\>[:\<port\>].
-
-``` sql
-gpadmin=# CREATE EXTERNAL TABLE <table_name> ( <column_name> <data_type> [, ...] | LIKE <other_table> )
-            LOCATION ('pxf://<HA-nameservice>/<path-to-hdfs-file>?PROFILE=HdfsTextSimple|HdfsTextMulti|Avro[&<custom-option>=<value>[...]]')
-         FORMAT '[TEXT|CSV|CUSTOM]' (<formatting-properties>);
-```
-
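-For example, a minimal sketch that re-creates the `HdfsTextSimple` table from earlier in this topic against a hypothetical HA nameservice named `hdfscluster` (substitute your own nameservice and path):
-
-``` sql
-gpadmin=# CREATE EXTERNAL TABLE pxf_hdfs_textsimple_ha(location text, month text, num_orders int, total_sales float8)
-            LOCATION ('pxf://hdfscluster/data/pxf_examples/pxf_hdfs_simple.txt?PROFILE=HdfsTextSimple')
-          FORMAT 'TEXT' (delimiter=E',');
-```
-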
-The opposite is true when a highly available HDFS cluster is reverted to a single NameNode configuration. In that case, any table definition that specified \<HA-nameservice\> should be changed to use the \<host\>[:\<port\>] syntax.
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/pxf/HawqExtensionFrameworkPXF.html.md.erb
----------------------------------------------------------------------
diff --git a/pxf/HawqExtensionFrameworkPXF.html.md.erb b/pxf/HawqExtensionFrameworkPXF.html.md.erb
deleted file mode 100644
index 578d13f..0000000
--- a/pxf/HawqExtensionFrameworkPXF.html.md.erb
+++ /dev/null
@@ -1,45 +0,0 @@
----
-title: Using PXF with Unmanaged Data
----
-
-HAWQ Extension Framework (PXF) is an extensible framework that allows HAWQ to query external system data.
-
-PXF includes built-in connectors for accessing data inside HDFS files, Hive tables, and HBase tables. PXF also integrates with HCatalog to query Hive tables directly.
-
-PXF allows users to create custom connectors to access other parallel data stores or processing engines. To create these connectors using Java plug-ins, see the [PXF External Tables and API](PXFExternalTableandAPIReference.html).
-
--   **[Installing PXF Plug-ins](../pxf/InstallPXFPlugins.html)**
-
-    This topic describes how to install the built-in PXF service plug-ins that are required to connect PXF to HDFS, Hive, and HBase. You should install the appropriate RPMs on each node in your cluster.
-
--   **[Configuring PXF](../pxf/ConfigurePXF.html)**
-
-    This topic describes how to configure the PXF service.
-
--   **[Accessing HDFS File Data](../pxf/HDFSFileDataPXF.html)**
-
-    This topic describes how to access HDFS file data using PXF.
-
--   **[Accessing Hive Data](../pxf/HivePXF.html)**
-
-    This topic describes how to access Hive data using PXF. You have several options for querying data stored in Hive. You can create external tables in PXF and then query those tables, or you can easily query Hive tables by using HAWQ and PXF's integration with HCatalog. HAWQ accesses Hive table metadata stored in HCatalog.
-
--   **[Accessing HBase Data](../pxf/HBasePXF.html)**
-
-    This topic describes how to access HBase data using PXF.
-
--   **[Accessing JSON Data](../pxf/JsonPXF.html)**
-
-    This topic describes how to access JSON data using PXF.
-
--   **[Using Profiles to Read and Write Data](../pxf/ReadWritePXF.html)**
-
-    PXF profiles are collections of common metadata attributes that can be used to simplify the reading and writing of data. You can use any of the built-in profiles that come with PXF or you can create your own.
-
--   **[PXF External Tables and API](../pxf/PXFExternalTableandAPIReference.html)**
-
-    You can use the PXF API to create your own connectors to access any other type of parallel data store or processing engine.
-
--   **[Troubleshooting PXF](../pxf/TroubleshootingPXF.html)**
-
-    This topic provides guidance for troubleshooting common PXF issues.
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/pxf/HivePXF.html.md.erb
----------------------------------------------------------------------
diff --git a/pxf/HivePXF.html.md.erb b/pxf/HivePXF.html.md.erb
deleted file mode 100644
index 199c7a1..0000000
--- a/pxf/HivePXF.html.md.erb
+++ /dev/null
@@ -1,700 +0,0 @@
----
-title: Accessing Hive Data
----
-
-Apache Hive is a distributed data warehousing infrastructure. Hive facilitates managing large data sets and supports multiple data formats, including comma-separated value (CSV), RCFile, ORC, and Parquet. The PXF Hive plug-in reads data stored in Hive, as well as in HDFS and HBase.
-
-This section describes how to use PXF to access Hive data. Options for querying data stored in Hive include:
-
--  Creating an external table in PXF and querying that table
--  Querying Hive tables via PXF's integration with HCatalog
-
-## <a id="installingthepxfhiveplugin"></a>Prerequisites
-
-Before accessing Hive data with HAWQ and PXF, ensure that:
-
--   The PXF HDFS plug-in is installed on all cluster nodes. See [Installing PXF Plug-ins](InstallPXFPlugins.html) for PXF plug-in installation information.
--   The PXF Hive plug-in is installed on all cluster nodes.
--   The Hive JAR files and conf directory are installed on all cluster nodes.
--   You have tested PXF on HDFS.
--   You are running the Hive Metastore service on a machine in your cluster.
--   You have set the `hive.metastore.uris` property in the `hive-site.xml` file on the NameNode.
-
-## <a id="topic_p2s_lvl_25"></a>Hive File Formats
-
-The PXF Hive plug-in supports several file formats and profiles for accessing these formats:
-
-| File Format  | Description | Profile |
-|-------|---------------------------|-------|
-| TextFile | Flat file with data in comma-, tab-, or space-separated value format or JSON notation. | Hive, HiveText |
-| SequenceFile | Flat file consisting of binary key/value pairs. | Hive |
-| RCFile | Record columnar data consisting of binary key/value pairs; high row compression rate. | Hive, HiveRC |
-| ORCFile | Optimized row columnar data with stripe, footer, and postscript sections; reduces data size. | Hive |
-| Parquet | Compressed columnar data representation. | Hive |
-| Avro | JSON-defined, schema-based data serialization format. | Hive |
-
-Refer to [File Formats](https://cwiki.apache.org/confluence/display/Hive/FileFormats) for detailed information about the file formats supported by Hive.
-
-## <a id="topic_p2s_lvl_29"></a>Data Type Mapping
-
-### <a id="hive_primdatatypes"></a>Primitive Data Types
-
-To represent Hive data in HAWQ, map data values that use a primitive data type to HAWQ columns of the equivalent type.
-
-The following table summarizes external mapping rules for Hive primitive types.
-
-| Hive Data Type  | HAWQ Data Type |
-|-------|---------------------------|
-| boolean    | bool |
-| int   | int4 |
-| smallint   | int2 |
-| tinyint   | int2 |
-| bigint   | int8 |
-| float   | float4 |
-| double   | float8 |
-| string   | text |
-| binary   | bytea |
-| timestamp   | timestamp |
-
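-For example, the following minimal sketch uses a hypothetical Hive table named `typemap_example` (illustrative only) to show how several primitive types map:
-
-``` sql
-hive> -- illustrative table exercising the mapping rules above
-hive> CREATE TABLE typemap_example (id bigint, age tinyint, balance double, payload binary)
-        STORED AS textfile;
-```
-
-``` sql
-postgres=# -- corresponding HAWQ column types per the mapping table; adjust host and port for your deployment
-postgres=# CREATE EXTERNAL TABLE pxf_typemap_example (id int8, age int2, balance float8, payload bytea)
-             LOCATION ('pxf://namenode:51200/default.typemap_example?PROFILE=Hive')
-           FORMAT 'CUSTOM' (formatter='pxfwritable_import');
-```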
-
-### <a id="topic_b4v_g3n_25"></a>Complex Data Types
-
-Hive supports complex data types including array, struct, map, and union. PXF maps each of these complex types to `text`.  While HAWQ does not natively support these types, you can create HAWQ functions or application code to extract subcomponents of these complex data types.
-
-An example using complex data types is provided later in this topic.
-
-
-## <a id="hive_sampledataset"></a>Sample Data Set
-
-Examples used in this topic will operate on a common data set. This simple data set models a retail sales operation and includes fields with the following names and data types:
-
-| Field Name  | Data Type |
-|-------|---------------------------|
-| location | text |
-| month | text |
-| number\_of\_orders | integer |
-| total\_sales | double |
-
-Prepare the sample data set for use:
-
-1. First, create a text file:
-
-    ```
-    $ vi /tmp/pxf_hive_datafile.txt
-    ```
-
-2. Add the following data to `pxf_hive_datafile.txt`; notice the use of the comma `,` to separate the four field values:
-
-    ```
-    Prague,Jan,101,4875.33
-    Rome,Mar,87,1557.39
-    Bangalore,May,317,8936.99
-    Beijing,Jul,411,11600.67
-    San Francisco,Sept,156,6846.34
-    Paris,Nov,159,7134.56
-    San Francisco,Jan,113,5397.89
-    Prague,Dec,333,9894.77
-    Bangalore,Jul,271,8320.55
-    Beijing,Dec,100,4248.41
-    ```
-
-Make note of the path to `pxf_hive_datafile.txt`; you will use it in later exercises.
-
-
-## <a id="hivecommandline"></a>Hive Command Line
-
-The Hive command line is a client interface similar to `psql`. To start the Hive command line:
-
-``` shell
-$ HADOOP_USER_NAME=hdfs hive
-```
-
-The default Hive database is named `default`. 
-
-### <a id="hivecommandline_createdb"></a>Example: Create a Hive Database
-
-Create a Hive table to expose our sample data set.
-
-1. Create a Hive table named `sales_info` in the `default` database:
-
-    ``` sql
-    hive> CREATE TABLE sales_info (location string, month string,
-            number_of_orders int, total_sales double)
-            ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
-            STORED AS textfile;
-    ```
-
-    Notice that:
-    - The `STORED AS textfile` subclause instructs Hive to create the table in Textfile (the default) format.  Hive Textfile format supports comma-, tab-, and space-separated values, as well as data specified in JSON notation.
-    - The `DELIMITED FIELDS TERMINATED BY` subclause identifies the field delimiter within a data record (line). The `sales_info` table field delimiter is a comma (`,`).
-
-2. Load the `pxf_hive_datafile.txt` sample data file into the `sales_info` table you just created:
-
-    ``` sql
-    hive> LOAD DATA LOCAL INPATH '/tmp/pxf_hive_datafile.txt'
-            INTO TABLE sales_info;
-    ```
-
-3. Perform a query on `sales_info` to verify that the data was loaded successfully:
-
-    ``` sql
-    hive> SELECT * FROM sales_info;
-    ```
-
-In examples later in this section, you will access the `sales_info` Hive table directly via PXF. You will also insert `sales_info` data into tables of other Hive file format types, and use PXF to access those directly as well.
-
-## <a id="topic_p2s_lvl_28"></a>Querying External Hive Data
-
-The PXF Hive plug-in supports several Hive-related profiles. These include `Hive`, `HiveText`, and `HiveRC`.
-
-Use the following syntax to create a HAWQ external table representing Hive data:
-
-``` sql
-CREATE EXTERNAL TABLE <table_name>
-    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
-LOCATION ('pxf://<host>[:<port>]/<hive-db-name>.<hive-table-name>
-    ?PROFILE=Hive|HiveText|HiveRC[&DELIMITER=<delim>])
-FORMAT 'CUSTOM|TEXT' (formatter='pxfwritable_import' | delimiter='<delim>')
-```
-
-Hive-plug-in-specific keywords and values used in the [CREATE EXTERNAL TABLE](../reference/sql/CREATE-EXTERNAL-TABLE.html) call are described below.
-
-| Keyword  | Value |
-|-------|-------------------------------------|
-| \<host\>[:\<port\>]    | The HDFS NameNode and port. |
-| \<hive-db-name\>    | The name of the Hive database. If omitted, defaults to the Hive database named `default`. |
-| \<hive-table-name\>    | The name of the Hive table. |
-| PROFILE    | The `PROFILE` keyword must specify one of the values `Hive`, `HiveText`, or `HiveRC`. |
-| DELIMITER    | The `DELIMITER` clause is required for both the `HiveText` and `HiveRC` profiles and identifies the field delimiter used in the Hive data set. \<delim\> must be a single ASCII character or specified in hexadecimal representation. |
-| FORMAT (`Hive` profile)   | The `FORMAT` clause must specify `CUSTOM`. The `CUSTOM` format supports only the built-in `pxfwritable_import` `formatter`.   |
-| FORMAT (`HiveText` and `HiveRC` profiles) | The `FORMAT` clause must specify `TEXT`. The field delimiter \<delim\> must be specified a second time in the `delimiter` formatting option. |
-
-
-## <a id="profile_hive"></a>Hive Profile
-
-The `Hive` profile works with any Hive file format. It can access heterogeneous format data in a single table where each partition may be stored in a different file format.
-
-While you can use the `Hive` profile to access any file format, the more specific profiles perform better for those single file format types.
-
-
-### <a id="profile_hive_using"></a>Example: Using the Hive Profile
-
-This example uses the `Hive` profile to access the Hive `sales_info` textfile format table created earlier.
-
-1. Create a queryable HAWQ external table referencing the Hive `sales_info` table:
-
-    ``` sql
-    postgres=# CREATE EXTERNAL TABLE salesinfo_hiveprofile(location text, month text, num_orders int, total_sales float8)
-                LOCATION ('pxf://namenode:51200/default.sales_info?PROFILE=Hive')
-              FORMAT 'custom' (formatter='pxfwritable_import');
-    ```
-
-2. Query the table:
-
-    ``` sql
-    postgres=# SELECT * FROM salesinfo_hiveprofile;
-    ```
-
-    ``` shell
-       location    | month | num_orders | total_sales
-    ---------------+-------+------------+-------------
-     Prague        | Jan   |        101 |     4875.33
-     Rome          | Mar   |         87 |     1557.39
-     Bangalore     | May   |        317 |     8936.99
-     ...
-
-    ```
-
-## <a id="profile_hivetext"></a>HiveText Profile
-
-Use the `HiveText` profile to query text format files. The `HiveText` profile is more performant than the `Hive` profile.
-
-**Note**: When using the `HiveText` profile, you *must* specify a delimiter option in *both* the `LOCATION` and `FORMAT` clauses.
-
-### <a id="profile_hivetext_using"></a>Example: Using the HiveText Profile
-
-Use the PXF `HiveText` profile to create a queryable HAWQ external table from the Hive `sales_info` textfile format table created earlier.
-
-1. Create the external table:
-
-    ``` sql
-    postgres=# CREATE EXTERNAL TABLE salesinfo_hivetextprofile(location text, month text, num_orders int, total_sales float8)
-                 LOCATION ('pxf://namenode:51200/default.sales_info?PROFILE=HiveText&DELIMITER=\x2c')
-               FORMAT 'TEXT' (delimiter=E',');
-    ```
-
-    (You can safely ignore the "nonstandard use of escape in a string literal" warning and related messages.)
-
-    Notice that:
-    - The `LOCATION` subclause `DELIMITER` value is specified in hexadecimal format. `\x` is a prefix that instructs PXF to interpret the following characters as hexadecimal. `2c` is the hex value for the comma character.
-    - The `FORMAT` subclause `delimiter` value is specified as the single ASCII comma character `','`. The `E` prefix identifies `','` as an escape string constant.
-
-2. Query the external table:
-
-    ``` sql
-    postgres=# SELECT * FROM salesinfo_hivetextprofile WHERE location='Beijing';
-    ```
-
-    ``` shell
-     location | month | num_orders | total_sales
-    ----------+-------+------------+-------------
-     Beijing  | Jul   |        411 |    11600.67
-     Beijing  | Dec   |        100 |     4248.41
-    (2 rows)
-    ```
-
-## <a id="profile_hiverc"></a>HiveRC Profile
-
-The RCFile Hive format is used for row columnar formatted data. The `HiveRC` profile provides access to RCFile data.
-
-### <a id="profile_hiverc_rcfiletbl_using"></a>Example: Using the HiveRC Profile
-
-Use the `HiveRC` profile to query RCFile-formatted data in Hive tables. The `HiveRC` profile is more performant than the `Hive` profile for this file format type.
-
-1. Create a Hive table with RCFile format:
-
-    ``` shell
-    $ HADOOP_USER_NAME=hdfs hive
-    ```
-
-    ``` sql
-    hive> CREATE TABLE sales_info_rcfile (location string, month string,
-            number_of_orders int, total_sales double)
-          ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
-          STORED AS rcfile;
-    ```
-
-2. Insert the data from the `sales_info` table into `sales_info_rcfile`:
-
-    ``` sql
-    hive> INSERT INTO TABLE sales_info_rcfile SELECT * FROM sales_info;
-    ```
-
-    A copy of the sample data set is now stored in RCFile format in `sales_info_rcfile`. 
-    
-3. Perform a Hive query on `sales_info_rcfile` to verify that the data was loaded successfully:
-
-    ``` sql
-    hive> SELECT * FROM sales_info_rcfile;
-    ```
-
-4. Use the PXF `HiveRC` profile to create a queryable HAWQ external table from the Hive `sales_info_rcfile` table created in the previous step. When using the `HiveRC` profile, you **must** specify a delimiter option in *both* the `LOCATION` and `FORMAT` clauses:
-
-    ``` sql
-    postgres=# CREATE EXTERNAL TABLE salesinfo_hivercprofile(location text, month text, num_orders int, total_sales float8)
-                 LOCATION ('pxf://namenode:51200/default.sales_info_rcfile?PROFILE=HiveRC&DELIMITER=\x2c')
-               FORMAT 'TEXT' (delimiter=E',');
-    ```
-
-    (Again, you can safely ignore the "nonstandard use of escape in a string literal" warning and related messages.)
-
-5. Query the external table:
-
-    ``` sql
-    postgres=# SELECT location, total_sales FROM salesinfo_hivercprofile;
-    ```
-
-    ``` shell
-       location    | total_sales
-    ---------------+-------------
-     Prague        |     4875.33
-     Rome          |     1557.39
-     Bangalore     |     8936.99
-     Beijing       |    11600.67
-     ...
-    ```
-
-## <a id="topic_dbb_nz3_ts"></a>Accessing Parquet-Format Hive Tables
-
-The PXF `Hive` profile supports both non-partitioned and partitioned Hive tables that use the Parquet storage format in HDFS. Simply map the table columns using equivalent HAWQ data types. For example, if a Hive table is created using:
-
-``` sql
-hive> CREATE TABLE hive_parquet_table (fname string, lname string, custid int, acctbalance double)
-        STORED AS parquet;
-```
-
-Define the HAWQ external table using:
-
-``` sql
-postgres=# CREATE EXTERNAL TABLE pxf_parquet_table (fname text, lname text, custid int, acctbalance double precision)
-    LOCATION ('pxf://namenode:51200/hive-db-name.hive_parquet_table?profile=Hive')
-    FORMAT 'CUSTOM' (formatter='pxfwritable_import');
-```
-
-And query the HAWQ external table using:
-
-``` sql
-postgres=# SELECT fname,lname FROM pxf_parquet_table;
-```
-
-
-## <a id="profileperf"></a>Profile Performance Considerations
-
-The `HiveRC` and `HiveText` profiles are faster than the generic `Hive` profile.
-
-
-## <a id="complex_dt_example"></a>Complex Data Type Example
-
-This example will employ the array and map complex types, specifically an array of integers and a string key/value pair map.
-
-The data schema for this example includes fields with the following names and data types:
-
-| Field Name  | Data Type |
-|-------|---------------------------|
-| index | int |
-| name | string
-| intarray | array of integers |
-| propmap | map of string key and value pairs |
-
-When specifying an array field in a Hive table, you must identify the terminator for each item in the collection. Similarly, the map key termination character must also be specified.
-
-1. Create a text file from which you will load the data set:
-
-    ```
-    $ vi /tmp/pxf_hive_complex.txt
-    ```
-
-2. Add the following data to `pxf_hive_complex.txt`. The data uses a comma `,` to separate field values, the percent symbol `%` to separate collection items, and a `:` to separate map keys from their values:
-
-    ```
-    3,Prague,1%2%3,zone:euro%status:up
-    89,Rome,4%5%6,zone:euro
-    400,Bangalore,7%8%9,zone:apac%status:pending
-    183,Beijing,0%1%2,zone:apac
-    94,Sacramento,3%4%5,zone:noam%status:down
-    101,Paris,6%7%8,zone:euro%status:up
-    56,Frankfurt,9%0%1,zone:euro
-    202,Jakarta,2%3%4,zone:apac%status:up
-    313,Sydney,5%6%7,zone:apac%status:pending
-    76,Atlanta,8%9%0,zone:noam%status:down
-    ```
-
-3. Create a Hive table to represent this data:
-
-    ``` shell
-    $ HADOOP_USER_NAME=hdfs hive
-    ```
-
-    ``` sql
-    hive> CREATE TABLE table_complextypes( index int, name string, intarray ARRAY<int>, propmap MAP<string, string>)
-             ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
-             COLLECTION ITEMS TERMINATED BY '%'
-             MAP KEYS TERMINATED BY ':'
-             STORED AS TEXTFILE;
-    ```
-
-    Notice that:
-    - `FIELDS TERMINATED BY` identifies a comma as the field terminator.
-    - The `COLLECTION ITEMS TERMINATED BY` subclause specifies the percent sign as the collection items (array item, map key/value pair) terminator.
-    - `MAP KEYS TERMINATED BY` identifies a colon as the terminator for map keys.
-
-4. Load the `pxf_hive_complex.txt` sample data file into the `table_complextypes` table you just created:
-
-    ``` sql
-    hive> LOAD DATA LOCAL INPATH '/tmp/pxf_hive_complex.txt' INTO TABLE table_complextypes;
-    ```
-
-5. Perform a query on Hive table `table_complextypes` to verify that the data was loaded successfully:
-
-    ``` sql
-    hive> SELECT * FROM table_complextypes;
-    ```
-
-    ``` shell
-    3	Prague	[1,2,3]	{"zone":"euro","status":"up"}
-    89	Rome	[4,5,6]	{"zone":"euro"}
-    400	Bangalore	[7,8,9]	{"zone":"apac","status":"pending"}
-    ...
-    ```
-
-6. Use the PXF `Hive` profile to create a queryable HAWQ external table representing the Hive `table_complextypes`:
-
-    ``` sql
-    postgres=# CREATE EXTERNAL TABLE complextypes_hiveprofile(index int, name text, intarray text, propmap text)
-                 LOCATION ('pxf://namenode:51200/table_complextypes?PROFILE=Hive')
-               FORMAT 'CUSTOM' (formatter='pxfwritable_import');
-    ```
-
-    Notice that the integer array and map complex types are mapped to type text.
-
-7. Query the external table:
-
-    ``` sql
-    postgres=# SELECT * FROM complextypes_hiveprofile;
-    ```
-
-    ``` shell     
-     index |    name    | intarray |              propmap
-    -------+------------+----------+------------------------------------
-         3 | Prague     | [1,2,3]  | {"zone":"euro","status":"up"}
-        89 | Rome       | [4,5,6]  | {"zone":"euro"}
-       400 | Bangalore  | [7,8,9]  | {"zone":"apac","status":"pending"}
-       183 | Beijing    | [0,1,2]  | {"zone":"apac"}
-        94 | Sacramento | [3,4,5]  | {"zone":"noam","status":"down"}
-       101 | Paris      | [6,7,8]  | {"zone":"euro","status":"up"}
-        56 | Frankfurt  | [9,0,1]  | {"zone":"euro"}
-       202 | Jakarta    | [2,3,4]  | {"zone":"apac","status":"up"}
-       313 | Sydney     | [5,6,7]  | {"zone":"apac","status":"pending"}
-        76 | Atlanta    | [8,9,0]  | {"zone":"noam","status":"down"}
-    (10 rows)
-    ```
-
-    `intarray` and `propmap` are each text strings.
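-
-    To work with these text representations in HAWQ, you can apply string functions. The following is a minimal sketch (assuming the bracketed, comma-separated form shown above) that converts the `intarray` text column into a native HAWQ integer array:
-
-    ``` sql
-    postgres=# -- assumes intarray values of the form [n,n,n], as displayed above
-    postgres=# SELECT index, name,
-                 string_to_array(substring(intarray FROM 2 FOR (char_length(intarray) - 2)), ',')::int[] AS int_array
-               FROM complextypes_hiveprofile;
-    ```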
-
-## <a id="hcatalog"></a>Using PXF and HCatalog to Query Hive
-
-Hive tables can be queried directly through HCatalog integration with HAWQ and PXF, regardless of the underlying file storage format.
-
-In previous sections, you created an external table in PXF that described the target table's Hive metadata. Another option for querying Hive tables is to take advantage of HAWQ's integration with HCatalog. This integration allows HAWQ to directly use table metadata stored in HCatalog.
-
-HCatalog is built on top of the Hive metastore and incorporates Hive's DDL. This provides several advantages:
-
--   You do not need to know the table schema of your Hive tables
--   You do not need to manually enter information about Hive table location or format
--   If Hive table metadata changes, HCatalog provides updated metadata. This is in contrast to the use of static external PXF tables to define Hive table metadata for HAWQ.
-
-The following diagram depicts how HAWQ integrates with HCatalog to query Hive tables:
-
-<img src="../images/hawq_hcatalog.png" id="hcatalog__image_ukw_h2v_c5" class="image" width="672" />
-
-1.  HAWQ retrieves table metadata from HCatalog using PXF.
-2.  HAWQ creates in-memory catalog tables from the retrieved metadata. If a table is referenced multiple times in a transaction, HAWQ uses its in-memory metadata to reduce external calls to HCatalog.
-3.  PXF queries Hive using table metadata that is stored in the HAWQ in-memory catalog tables. Table metadata is dropped at the end of the transaction.
-
-
-### <a id="topic_j1l_enabling"></a>Enabling HCatalog Integration
-
-To enable HCatalog query integration in HAWQ, perform the following steps:
-
-1.  Make sure your deployment meets the requirements listed in [Prerequisites](#installingthepxfhiveplugin).
-2.  If necessary, set the `pxf_service_address` global configuration property to the hostname or IP address and port where you have installed the PXF Hive plug-in. By default, the value is set to `localhost:51200`.
-
-    ``` sql
-    postgres=# SET pxf_service_address TO '<hivenode>:51200';
-    ```
-
-3.  HCatalog internally uses the `pxf` protocol to query.  Grant this protocol privilege to all roles requiring access:
-
-    ``` sql
-    postgres=# GRANT ALL ON PROTOCOL pxf TO <role>;
-    ```
-
-4. It is not recommended to create a HAWQ table using the `WITH (OIDS)` clause. If any user tables were created using the `WITH (OIDS)` clause, additional operations are required to enable HCatalog integration. To access a Hive table via HCatalog when user tables were created using `WITH (OIDS)`, HAWQ users must have `SELECT` permission to query every user table within the same schema that was created using the `WITH (OIDS)` clause. 
-
-    1. Determine which user tables were created using the `WITH (OIDS)` clause:
-
-        ``` sql
-        postgres=# SELECT oid, relname FROM pg_class 
-                     WHERE relhasoids = true 
-                       AND relnamespace <> (SELECT oid FROM pg_namespace WHERE nspname = 'pg_catalog');
-        ```
-
-    2. Grant `SELECT` privilege on all returned tables to all roles to which you chose to provide HCatalog query access. For example:
-
-        ``` sql
-        postgres=# GRANT SELECT ON <table-created-WITH-OIDS> TO <role>;
-        ``` 
-
-### <a id="topic_j1l_y55_c5"></a>Usage    
-
-To query a Hive table with HCatalog integration, query HCatalog directly from HAWQ. The query syntax is:
-
-``` sql
-postgres=# SELECT * FROM hcatalog.hive-db-name.hive-table-name;
-```
-
-For example:
-
-``` sql
-postgres=# SELECT * FROM hcatalog.default.sales_info;
-```
-
-To obtain a description of a Hive table with HCatalog integration, you can use the `psql` client interface.
-
--   Within HAWQ, use either the `\d hcatalog.hive-db-name.hive-table-name` or `\d+ hcatalog.hive-db-name.hive-table-name` commands to describe a single table.  `\d` displays only HAWQ's interpretation of the underlying source (Hive in this case) data type, while `\d+` displays both the HAWQ interpreted and Hive source data types. For example, from the `psql` client interface:
-
-    ``` shell
-    $ psql -d postgres
-    ```
-
-    ``` sql
-    postgres=# \d+ hcatalog.default.sales_info_rcfile;
-    ```
-
-    ``` shell
-    PXF Hive Table "default.sales_info_rcfile"
-          Column      |  Type  | Source type 
-    ------------------+--------+-------------
-     location         | text   | string
-     month            | text   | string
-     number_of_orders | int4   | int
-     total_sales      | float8 | double
-    ```
--   Use `\d hcatalog.hive-db-name.*` to describe the whole database schema, i.e. all tables in `hive-db-name`.
--   Use `\d hcatalog.*.*` to describe the whole schema, i.e. all databases and tables.
-
-When using `\d` or `\d+` commands in the `psql` HAWQ client, `hcatalog` will not be listed as a database. If you use other `psql` compatible clients, `hcatalog` will be listed as a database with a size value of `-1` since `hcatalog` is not a real database in HAWQ.
-
-Alternatively, you can use the `pxf_get_item_fields` user-defined function (UDF) to obtain Hive table descriptions from other client interfaces or third-party applications. The UDF takes a PXF profile and a table pattern string as its input parameters.  **Note:** The only supported input profile at this time is `'Hive'`.
-
-- The following statement returns a description of a specific table. The description includes path, itemname (table), fieldname, and fieldtype.
-
-    ``` sql
-    postgres=# SELECT * FROM pxf_get_item_fields('Hive','default.sales_info_rcfile');
-    ```
-
-    ``` pre
-      path   |     itemname      |    fieldname     | fieldtype
-    ---------+-------------------+------------------+-----------
-     default | sales_info_rcfile | location         | text
-     default | sales_info_rcfile | month            | text
-     default | sales_info_rcfile | number_of_orders | int4
-     default | sales_info_rcfile | total_sales      | float8
-    ```
-
-- The following statement returns table descriptions from the default database.
-
-    ``` sql
-    postgres=# SELECT * FROM pxf_get_item_fields('Hive','default.*');
-    ```
-
-- The following statement returns a description of the entire schema.
-
-    ``` sql
-    postgres=# SELECT * FROM pxf_get_item_fields('Hive', '*.*');
-    ```
-
-### <a id="topic_r5k_pst_25"></a>Limitations
-
-HCatalog integration has the following limitations:
-
--   HCatalog integration queries and describe commands do not support complex types; only primitive types are supported. Use PXF external tables to query complex types in Hive. (See the [Complex Data Type Example](#complex_dt_example).)
--   Even for primitive types, HCatalog metadata descriptions produced by `\d` are HAWQ's interpretation of the underlying Hive data types. For example, the Hive type `tinyint` is converted to HAWQ type `int2`. (See [Data Type Mapping](#hive_primdatatypes).)
--   HAWQ reserves the database name `hcatalog` for system use. You cannot connect to or alter the system `hcatalog` database.
-
-## <a id="partitionfiltering"></a>Partition Filtering
-
-The PXF Hive plug-in supports the Hive partitioning feature and directory structure. This enables partition exclusion on selected HDFS files comprising the Hive table. To use the partition filtering feature to reduce network traffic and I/O, run a PXF query using a `WHERE` clause that refers to a specific partition in the partitioned Hive table.
-
-To take advantage of PXF partition filtering push-down, the Hive and PXF partition field names should be the same. Otherwise, PXF ignores partition filtering and the filtering is performed on the HAWQ side, impacting performance.
-
-**Note:** The Hive plug-in filters only on partition columns, not on other table attributes.
-
-### <a id="partitionfiltering_pushdowncfg"></a>Configure Partition Filtering Push-Down
-
-PXF partition filtering push-down is enabled by default. To disable PXF partition filtering push-down, set the `pxf_enable_filter_pushdown` HAWQ server configuration parameter to `off`:
-
-``` sql
-postgres=# SHOW pxf_enable_filter_pushdown;
- pxf_enable_filter_pushdown
------------------------------
- on
-(1 row)
-postgres=# SET pxf_enable_filter_pushdown=off;
-```
-
-### <a id="example2"></a>Create Partitioned Hive Table
-
-Create a Hive table `sales_part` with two partition columns, `delivery_state` and `delivery_city`:
-
-``` sql
-hive> CREATE TABLE sales_part (name string, type string, supplier_key int, price double)
-        PARTITIONED BY (delivery_state string, delivery_city string)
-        ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
-```
-
-Load data into this Hive table and add some partitions:
-
-``` sql
-hive> INSERT INTO TABLE sales_part 
-        PARTITION(delivery_state = 'CALIFORNIA', delivery_city = 'Fresno') 
-        VALUES ('block', 'widget', 33, 15.17);
-hive> INSERT INTO TABLE sales_part 
-        PARTITION(delivery_state = 'CALIFORNIA', delivery_city = 'Sacramento') 
-        VALUES ('cube', 'widget', 11, 1.17);
-hive> INSERT INTO TABLE sales_part 
-        PARTITION(delivery_state = 'NEVADA', delivery_city = 'Reno') 
-        VALUES ('dowel', 'widget', 51, 31.82);
-hive> INSERT INTO TABLE sales_part 
-        PARTITION(delivery_state = 'NEVADA', delivery_city = 'Las Vegas') 
-        VALUES ('px49', 'pipe', 52, 99.82);
-```
-
-The Hive storage directory structure for the `sales_part` table appears as follows:
-
-``` pre
-$ sudo -u hdfs hdfs dfs -ls -R /apps/hive/warehouse/sales_part
-/apps/hive/warehouse/sales_part/delivery_state=CALIFORNIA/delivery_city=Fresno/
-/apps/hive/warehouse/sales_part/delivery_state=CALIFORNIA/delivery_city=Sacramento/
-/apps/hive/warehouse/sales_part/delivery_state=NEVADA/delivery_city=Reno/
-/apps/hive/warehouse/sales_part/delivery_state=NEVADA/delivery_city=Las Vegas/
-```
-
-To define a HAWQ PXF table that will read this Hive table and take advantage of partition filter push-down, define the fields corresponding to the Hive partition fields at the end of the `CREATE EXTERNAL TABLE` attribute list. In HiveQL, a `SELECT *` statement on a partitioned table shows the partition fields at the end of the record.
-
-``` sql
-postgres=# CREATE EXTERNAL TABLE pxf_sales_part(
-  item_name TEXT, 
-  item_type TEXT, 
-  supplier_key INTEGER, 
-  item_price DOUBLE PRECISION, 
-  delivery_state TEXT, 
-  delivery_city TEXT
-)
-LOCATION ('pxf://namenode:51200/sales_part?Profile=Hive')
-FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
-
-postgres=# SELECT * FROM pxf_sales_part;
-```
-
-### <a id="example3"></a>Query Without Pushdown
-
-In the following example, the HAWQ query filters on the `delivery_city` partition `Sacramento`. The filter on `item_name` is not pushed down, since it is not a partition column; it is applied on the HAWQ side after all the data in the `Sacramento` partition is transferred for processing.
-
-``` sql
-postgres=# SELECT * FROM pxf_sales_part WHERE delivery_city = 'Sacramento' AND item_name = 'cube';
-```
-
-### <a id="example4"></a>Query With Pushdown
-
-The following HAWQ query reads all the data under the `delivery_state` partition `CALIFORNIA`, regardless of the city.
-
-``` sql
-postgres=# SET pxf_enable_filter_pushdown=on;
-postgres=# SELECT * FROM pxf_sales_part WHERE delivery_state = 'CALIFORNIA';
-```
-
-## <a id="topic_fdm_zrh_1s"></a>Using PXF with Hive Default Partitions
-
-This topic describes a difference in query results between Hive and PXF queries when Hive tables use a default partition. When dynamic partitioning is enabled in Hive, a partitioned table may store data in a default partition. Hive creates a default partition when the value of a partitioning column does not match the defined type of the column (for example, when a NULL value is used for any partitioning column). In Hive, any query that includes a filter on a partition column *excludes* any data that is stored in the table's default partition.
-
-Similar to Hive, PXF represents a table's partitioning columns as columns that are appended to the end of the table. However, PXF translates any column value in a default partition to a NULL value. This means that a HAWQ query that includes an IS NULL filter on a partitioning column can return different results than the same Hive query.
-
-Consider a Hive partitioned table that is created with the statement:
-
-``` sql
-hive> CREATE TABLE sales (order_id bigint, order_amount float) PARTITIONED BY (xdate date);
-```
-
-The table is loaded with five rows that contain the following data:
-
-``` pre
-1.0    1900-01-01
-2.2    1994-04-14
-3.3    2011-03-31
-4.5    NULL
-5.0    2013-12-06
-```
-
-The insertion of row 4 creates a Hive default partition, because the partition column `xdate` contains a null value.
-
-In Hive, any query that filters on the partition column omits data in the default partition. For example, the following query returns no rows:
-
-``` sql
-hive> SELECT * FROM sales WHERE xdate is null;
-```
-
-However, if you map this table as a PXF external table in HAWQ, all default partition values are translated into actual NULL values. In HAWQ, executing the same query against the PXF table returns row 4 as the result, because the filter matches the NULL value.
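-
-The following is a minimal sketch of such a query. The external table definition is illustrative only; adjust the NameNode host, port, and column types for your deployment:
-
-``` sql
-postgres=# -- illustrative mapping of the Hive sales table; xdate is the partition column
-postgres=# CREATE EXTERNAL TABLE pxf_sales_default_part (order_id int8, order_amount float4, xdate date)
-             LOCATION ('pxf://namenode:51200/default.sales?PROFILE=Hive')
-           FORMAT 'CUSTOM' (formatter='pxfwritable_import');
-postgres=# SELECT * FROM pxf_sales_default_part WHERE xdate IS NULL;
-```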
-
-Keep this behavior in mind when executing IS NULL queries on Hive partitioned tables.
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/pxf/InstallPXFPlugins.html.md.erb
----------------------------------------------------------------------
diff --git a/pxf/InstallPXFPlugins.html.md.erb b/pxf/InstallPXFPlugins.html.md.erb
deleted file mode 100644
index 4ae4101..0000000
--- a/pxf/InstallPXFPlugins.html.md.erb
+++ /dev/null
@@ -1,81 +0,0 @@
----
-title: Installing PXF Plug-ins
----
-
-This topic describes how to install the built-in PXF service plug-ins that are required to connect PXF to HDFS, Hive, HBase, and JSON. 
-
-**Note:** PXF requires that you run Tomcat on the host machine. Tomcat reserves ports 8005, 8080, and 8009. If you have configured Oozie JMX reporting on a host that will run PXF, make sure that the reporting service uses a port other than 8005. This helps to prevent port conflict errors from occurring when you start the PXF service.
-
-## <a id="directories_and_logs"></a>PXF Installation and Log File Directories
-
-Installing PXF plug-ins, regardless of method, creates directories and log files on each node receiving the plug-in installation:
-
-| Directory                      | Description                                                                                                                                                                                                                                |
-|--------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `/usr/lib/pxf`                 | PXF library location                                                                                                                                                                                                                       |
-| `/etc/pxf/conf`                | PXF configuration directory. This directory contains the `pxf-public.classpath` and `pxf-private.classpath` configuration files. See [Setting up the Java Classpath](ConfigurePXF.html#settingupthejavaclasspath). |
-| `/var/pxf/pxf-service`         | PXF service instance location                                                                                                                                                                                                              |
-| `/var/log/pxf` | This directory includes `pxf-service.log` and all Tomcat-related logs including `catalina.out`. Logs are owned by user:group `pxf`:`pxf`. Other users have read access.                                                                          |
-| `/var/run/pxf/catalina.pid`    | PXF Tomcat container PID location                                                                                                                                                                                                          |
-
-
-## <a id="install_pxf_plug_ambari"></a>Installing PXF Using Ambari
-
-If you are using Ambari to install and manage your HAWQ cluster, you do *not* need to follow the manual installation steps in this topic. Installing using the Ambari web interface installs all of the necessary PXF plug-in components.
-
-## <a id="install_pxf_plug_cmdline"></a>Installing PXF from the Command Line
-
-Each PXF service plug-in resides in its own RPM.  You may have built these RPMs in the Apache HAWQ open source project repository (see [PXF Build Instructions](https://github.com/apache/incubator-hawq/blob/master/pxf/README.md)), or these RPMs may have been included in a commercial product download package.
-
-Perform the following steps on **_each_** node in your cluster to install PXF:
-
-1. Install the PXF software, including Apache Tomcat, the PXF service, and all PXF plug-ins (HDFS, HBase, Hive, and JSON):
-
-    ```shell
-    $ sudo yum install -y pxf
-    ```
-
-    Installing PXF in this manner:
-    * installs the required version of `apache-tomcat`
-    * creates a `/etc/pxf/pxf-n.n.n` directory, adding a softlink from `/etc/pxf` to this directory
-    * sets up the PXF service configuration files in `/etc/pxf`
-    * creates a `/usr/lib/pxf-n.n.n` directory, adding a softlink from `/usr/lib/pxf` to this directory
-    * copies the PXF service JAR file `pxf-service-n.n.n.jar` to `/usr/lib/pxf-n.n.n/`
-    * copies JAR files for each of the PXF plug-ins to `/usr/lib/pxf-n.n.n/`
-    * creates softlinks, such as `pxf-hbase.jar`, to the versioned plug-in JAR files in `/usr/lib/pxf-n.n.n/`
-
-2. Initialize the PXF service:
-
-    ```shell
-    $ sudo service pxf-service init
-    ```
-
-2. Start the PXF service:
-
-    ```shell
-    $ sudo service pxf-service start
-    ```
-    
-    Additional `pxf-service` command options include `stop`, `restart`, and `status`.
-
-2. If you choose to use the HBase plug-in, perform the following configuration:
-
-    1. Add the PXF HBase plug-in JAR file to the HBase `CLASSPATH` by updating the `HBASE_CLASSPATH` environment variable setting in the HBase environment file `/etc/hbase/conf/hbase-env.sh`:
-
-        ``` shell
-        export HBASE_CLASSPATH=${HBASE_CLASSPATH}:/usr/lib/pxf/pxf-hbase.jar
-        ```
-
-    2. Restart the HBase service after making this update to HBase configuration.
-
-        On the HBase Master node:
-
-        ``` shell
-        $ su -l hbase -c "/usr/hdp/current/hbase-master/bin/hbase-daemon.sh restart master; sleep 25"
-        ```
-
-        On an HBase Region Server node:
-
-        ```shell
-        $ su -l hbase -c "/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh restart regionserver"
-        ```


[34/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/svg/hawq_hcatalog.svg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/svg/hawq_hcatalog.svg b/markdown/mdimages/svg/hawq_hcatalog.svg
new file mode 100644
index 0000000..4a99830
--- /dev/null
+++ b/markdown/mdimages/svg/hawq_hcatalog.svg
@@ -0,0 +1,3 @@
+<?xml version="1.0" encoding="utf-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xl="http://www.w3.org/1999/xlink" version="1.1" viewBox="144 110 600 195" width="50pc" height="195pt" xmlns:dc="http://purl.org/dc/elements/1.1/"><metadata> Produced by OmniGraffle 6.0.5 <dc:date>2015-11-30 20:39Z</dc:date></metadata><defs><filter id="Shadow" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" result="blur" stdDeviation="1.308"/><feOffset in="blur" result="offset" dx="0" dy="2"/><feFlood flood-color="black" flood-opacity=".5" result="flood"/><feComposite in="flood" in2="offset" operator="in"/></filter><filter id="Shadow_2" filterUnits="userSpaceOnUse"><feGaussianBlur in="SourceAlpha" result="blur" stdDeviation="1.3030978"/><feOffset in="blur" result="offset" dx="0" dy="2"/><feFlood flood-color="black" flood-opacity=".2" result="flood"/><feComposite in="flood" in2="offset" operator="in" result="color"/><feMerge><feMergeNode in="color"/><feMergeNode in="SourceGraphic"/></feMerge></filter><font-face font-family="H
 elvetica" font-size="14" units-per-em="1000" underline-position="-75.683594" underline-thickness="49.316406" slope="0" x-height="532.22656" cap-height="719.72656" ascent="770.01953" descent="-229.98047" font-weight="bold"><font-face-src><font-face-name name="Helvetica-Bold"/></font-face-src></font-face><font-face font-family="Helvetica" font-size="12" units-per-em="1000" underline-position="-75.683594" underline-thickness="49.316406" slope="0" x-height="532.22656" cap-height="719.72656" ascent="770.01953" descent="-229.98047" font-weight="bold"><font-face-src><font-face-name name="Helvetica-Bold"/></font-face-src></font-face><font-face font-family="Helvetica" font-size="9" units-per-em="1000" underline-position="-75.683594" underline-thickness="49.316406" slope="0" x-height="522.94922" cap-height="717.28516" ascent="770.01953" descent="-229.98047" font-weight="500"><font-face-src><font-face-name name="Helvetica"/></font-face-src></font-face><marker orient="auto" overflow="visible" m
 arkerUnits="strokeWidth" id="FilledArrow_Marker" viewBox="-1 -4 10 8" markerWidth="10" markerHeight="8" color="black"><g><path d="M 8 0 L 0 -3 L 0 3 Z" fill="currentColor" stroke="currentColor" stroke-width="1"/></g></marker><font-face font-family="Helvetica" font-size="11" units-per-em="1000" underline-position="-75.683594" underline-thickness="49.316406" slope="0" x-height="532.22656" cap-height="719.72656" ascent="770.01953" descent="-229.98047" font-weight="bold"><font-face-src><font-face-name name="Helvetica-Bold"/></font-face-src></font-face><font-face font-family="Helvetica" font-size="11" units-per-em="1000" underline-position="-75.683594" underline-thickness="49.316406" slope="0" x-height="522.94922" cap-height="717.28516" ascent="770.01953" descent="-229.98047" font-weight="500"><font-face-src><font-face-name name="Helvetica"/></font-face-src></font-face></defs><g stroke="none" stroke-opacity="1" stroke-dasharray="none" fill="none" fill-opacity="1"><title>Canvas 1</title><
 g><title>Layer 1</title><g><xl:use xl:href="#id24_Graphic" filter="url(#Shadow)"/><xl:use xl:href="#id10_Graphic" filter="url(#Shadow)"/></g><g filter="url(#Shadow_2)"><path d="M 594 183 L 627.75 123 L 695.25 123 L 729 183 L 695.25 243 L 627.75 243 Z" fill="white"/><path d="M 594 183 L 627.75 123 L 695.25 123 L 729 183 L 695.25 243 L 627.75 243 Z" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(599 174.5)" fill="black"><tspan font-family="Helvetica" font-size="14" font-weight="bold" x="46.16211" y="14" textLength="32.675781">HIVE</tspan></text></g><path d="M 540.59675 203 L 540.59675 183 L 583 183 L 583 173 L 603 193 L 583 213 L 583 203 Z" fill="#a9b7c2"/><path d="M 540.59675 203 L 540.59675 183 L 583 183 L 583 173 L 603 193 L 583 213 L 583 203 Z" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(553.39716 186)" fill="black"><tspan font-family="Helvetica" font-size="12
 " font-weight="bold" x="6.7322738" y="11" textLength="23.33789">PXF</tspan></text><path d="M 540.59675 243 L 540.59675 223 L 583 223 L 583 213 L 603 233 L 583 253 L 583 243 Z" fill="#a9b7c2"/><path d="M 540.59675 243 L 540.59675 223 L 583 223 L 583 213 L 603 233 L 583 253 L 583 243 Z" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(553.39716 226)" fill="black"><tspan font-family="Helvetica" font-size="12" font-weight="bold" x="6.7322738" y="11" textLength="23.33789">PXF</tspan></text><path d="M 540.59675 163 L 540.59675 143 L 583 143 L 583 133 L 603 153 L 583 173 L 583 163 Z" fill="#a9b7c2"/><path d="M 540.59675 163 L 540.59675 143 L 583 143 L 583 133 L 603 153 L 583 173 L 583 163 Z" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(553.39716 146)" fill="black"><tspan font-family="Helvetica" font-size="12" font-weight="bold" x="6.7322738" y="11" textLength="23.33789">P
 XF</tspan></text><g filter="url(#Shadow_2)"><rect x="414" y="234" width="81" height="45" fill="#a9b7c1"/><rect x="414" y="234" width="81" height="45" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(419 251)" fill="black"><tspan font-family="Helvetica" font-size="9" font-weight="500" x="5.729248" y="9" textLength="59.541504">table metadata</tspan></text></g><line x1="358" y1="211.5" x2="442.10014" y2="211.94734" marker-end="url(#FilledArrow_Marker)" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><g filter="url(#Shadow_2)"><circle cx="400.5" cy="184.5" r="13.5000216" fill="#dbdbdb"/><circle cx="400.5" cy="184.5" r="13.5000216" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(394.7 177.5)" fill="black"><tspan font-family="Helvetica" font-size="12" font-weight="bold" x="2.463086" y="11" textLength="6.673828">1</tspan></text></g><line x1="4
 14" y1="243" x2="367.9" y2="243" marker-end="url(#FilledArrow_Marker)" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><g id="id24_Graphic"><path d="M 452 240.8 L 452 183.2 C 452 179.2256 468.128 176 488 176 C 507.872 176 524 179.2256 524 183.2 L 524 240.8 C 524 244.7744 507.872 248 488 248 C 468.128 248 452 244.7744 452 240.8" fill="#a9b7c1"/><path d="M 452 240.8 L 452 183.2 C 452 179.2256 468.128 176 488 176 C 507.872 176 524 179.2256 524 183.2 L 524 240.8 C 524 244.7744 507.872 248 488 248 C 468.128 248 452 244.7744 452 240.8 M 452 183.2 C 452 187.1744 468.128 190.4 488 190.4 C 507.872 190.4 524 187.1744 524 183.2" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(457 207.1)" fill="black"><tspan font-family="Helvetica" font-size="14" font-weight="bold" x=".2758789" y="14" textLength="61.448242">HCatalog</tspan></text></g><line x1="360" y1="153" x2="530.69675" y2="153" marker-end="url(#FilledA
 rrow_Marker)" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><path d="M 254 261 L 225 261 L 198 261 L 198 243.9" marker-end="url(#FilledArrow_Marker)" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><g id="id10_Graphic"><path d="M 250 272.7 L 250 150.3 C 250 141.8544 274.192 135 304 135 C 333.808 135 358 141.8544 358 150.3 L 358 272.7 C 358 281.1456 333.808 288 304 288 C 274.192 288 250 281.1456 250 272.7" fill="white"/><path d="M 250 272.7 L 250 150.3 C 250 141.8544 274.192 135 304 135 C 333.808 135 358 141.8544 358 150.3 L 358 272.7 C 358 281.1456 333.808 288 304 288 C 274.192 288 250 281.1456 250 272.7 M 250 150.3 C 250 158.7456 274.192 165.6 304 165.6 C 333.808 165.6 358 158.7456 358 150.3" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(255 210.65)" fill="black"><tspan font-family="Helvetica" font-size="14" font-weight="bold" x="27.220703" y="14" textLengt
 h="20.220703">HA</tspan><tspan font-family="Helvetica" font-size="14" font-weight="bold" x="46.67578" y="14" textLength="24.103516">WQ</tspan></text></g><g filter="url(#Shadow_2)"><path d="M 172.29774 210.86712 C 155.8125 207 162.3864 174.4452 188.68404 180 C 191.12388 169.17192 221.7045 170.92944 221.50458 180 C 240.67956 168.39864 265.18404 191.53152 248.74776 203.13288 C 268.47048 208.75752 248.49888 239.06232 232.3125 234 C 231.0171 242.43768 202.08072 245.3904 199.54092 234 C 183.15564 246.1644 148.98972 227.46096 172.29774 210.86712 Z" fill="white"/><path d="M 172.29774 210.86712 C 155.8125 207 162.3864 174.4452 188.68404 180 C 191.12388 169.17192 221.7045 170.92944 221.50458 180 C 240.67956 168.39864 265.18404 191.53152 248.74776 203.13288 C 268.47048 208.75752 248.49888 239.06232 232.3125 234 C 231.0171 242.43768 202.08072 245.3904 199.54092 234 C 183.15564 246.1644 148.98972 227.46096 172.29774 210.86712 Z" stroke="black" stroke-linecap="round" stroke-linejoin="round" strok
 e-width="1"/><text transform="translate(179.3 187.5)" fill="black"><tspan font-family="Helvetica" font-size="11" font-weight="bold" x=".75078125" y="10" textLength="62.95459">in-memory: </tspan><tspan font-family="Helvetica" font-size="11" font-weight="500" x="2.2600586" y="23" textLength="56.879883">pg_exttable</tspan><tspan font-family="Helvetica" font-size="11" font-weight="500" x="3.4927246" y="36" textLength="54.41455">pg_class\u2026</tspan></text></g><g filter="url(#Shadow_2)"><circle cx="220.5" cy="265.5" r="13.5000216" fill="#dbdbdb"/><circle cx="220.5" cy="265.5" r="13.5000216" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(214.7 258.5)" fill="black"><tspan font-family="Helvetica" font-size="12" font-weight="bold" x="2.463086" y="11" textLength="6.673828">2</tspan></text></g><g filter="url(#Shadow_2)"><circle cx="431.1501" cy="153" r="13.5000216" fill="#dbdbdb"/><circle cx="431.1501" cy="153" r="13.5000216" stroke="bl
 ack" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(425.3501 146)" fill="black"><tspan font-family="Helvetica" font-size="12" font-weight="bold" x="2.463086" y="11" textLength="6.673828">3</tspan></text></g></g><g><title>Layer 2</title><path d="M 369.59675 221 L 369.59675 201 L 412 201 L 412 191 L 432 211 L 412 231 L 412 221 Z" fill="#a9b7c2"/><path d="M 369.59675 221 L 369.59675 201 L 412 201 L 412 191 L 432 211 L 412 231 L 412 221 Z" stroke="black" stroke-linecap="round" stroke-linejoin="round" stroke-width="1"/><text transform="translate(382.39716 204)" fill="black"><tspan font-family="Helvetica" font-size="12" font-weight="bold" x="6.7322738" y="11" textLength="23.33789">PXF</tspan></text></g></g></svg>


[19/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/guc/guc_category-list.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/guc/guc_category-list.html.md.erb b/markdown/reference/guc/guc_category-list.html.md.erb
new file mode 100644
index 0000000..c42bba9
--- /dev/null
+++ b/markdown/reference/guc/guc_category-list.html.md.erb
@@ -0,0 +1,418 @@
+---
+title: Configuration Parameter Categories
+---
+
+Configuration parameters affect categories of server behaviors, such as resource consumption, query tuning, and authentication. The following sections describe HAWQ configuration parameter categories.
+
+## <a id="topic_hfd_1tl_zp"></a>Append-Only Table Parameters
+
+The following parameters configure the <span class="ph">append-only</span> tables feature of HAWQ.
+
+-   [max\_appendonly\_tables](parameter_definitions.html#max_appendonly_tables)
+-   [optimizer\_parts\_to\_force\_sort\_on\_insert](parameter_definitions.html#optimizer_parts_to_force_sort_on_insert)
+
+## <a id="topic39"></a>Client Connection Default Parameters
+
+These configuration parameters set defaults that are used for client connections.
+
+### <a id="topic40"></a>Statement Behavior Parameters
+
+-   [check\_function\_bodies](parameter_definitions.html#check_function_bodies)
+-   [default\_tablespace](parameter_definitions.html#default_tablespace)
+-   [default\_transaction\_isolation](parameter_definitions.html#default_transaction_isolation)
+-   [default\_transaction\_read\_only](parameter_definitions.html#default_transaction_read_only)
+-   [search\_path](parameter_definitions.html#search_path)
+-   [statement\_timeout](parameter_definitions.html#statement_timeout)
+-   [vacuum\_freeze\_min\_age](parameter_definitions.html#vacuum_freeze_min_age)
+
+### <a id="topic41"></a>Locale and Formatting Parameters
+
+-   [client\_encoding](parameter_definitions.html#client_encoding)
+-   [DateStyle](parameter_definitions.html#DateStyle)
+-   [extra\_float\_digits](parameter_definitions.html#extra_float_digits)
+-   [IntervalStyle](parameter_definitions.html#IntervalStyle)
+-   [lc\_collate](parameter_definitions.html#lc_collate)
+-   [lc\_ctype](parameter_definitions.html#lc_ctype)
+-   [lc\_messages](parameter_definitions.html#lc_messages)
+-   [lc\_monetary](parameter_definitions.html#lc_monetary)
+-   [lc\_numeric](parameter_definitions.html#lc_numeric)
+-   [lc\_time](parameter_definitions.html#lc_time)
+-   [TimeZone](parameter_definitions.html#TimeZone)
+
+### <a id="topic42"></a>Other Client Default Parameters
+
+-   [dynamic\_library\_path](parameter_definitions.html#dynamic_library_path)
+-   [explain\_memory\_verbosity](parameter_definitions.html#explain_memory_verbosity)
+-   [explain\_pretty\_print](parameter_definitions.html#explain_pretty_print)
+-   [local\_preload\_libraries](parameter_definitions.html#local_preload_libraries)
+
+## <a id="topic12"></a>Connection and Authentication Parameters
+
+These parameters control how clients connect and authenticate to HAWQ.
+
+### <a id="topic13"></a>Connection Parameters
+
+-   [listen\_addresses](parameter_definitions.html#listen_addresses)
+-   [max\_connections](parameter_definitions.html#max_connections)
+-   [max\_prepared\_transactions](parameter_definitions.html#max_prepared_transactions)
+-   [superuser\_reserved\_connections](parameter_definitions.html#superuser_reserved_connections)
+-   [tcp\_keepalives\_count](parameter_definitions.html#tcp_keepalives_count)
+-   [tcp\_keepalives\_idle](parameter_definitions.html#tcp_keepalives_idle)
+-   [tcp\_keepalives\_interval](parameter_definitions.html#tcp_keepalives_interval)
+-   [unix\_socket\_directory](parameter_definitions.html#unix_socket_directory)
+-   [unix\_socket\_group](parameter_definitions.html#unix_socket_group)
+-   [unix\_socket\_permissions](parameter_definitions.html#unix_socket_permissions)
+
+### <a id="topic14"></a>Security and Authentication Parameters
+
+-   [authentication\_timeout](parameter_definitions.html#authentication_timeout)
+-   [db\_user\_namespace](parameter_definitions.html#db_user_namespace)
+-   [enable\_secure\_filesystem](parameter_definitions.html#enable_secure_filesystem)
+-   [krb\_caseins\_users](parameter_definitions.html#krb_caseins_users)
+-   [krb\_server\_keyfile](parameter_definitions.html#krb_server_keyfile)
+-   [krb\_srvname](parameter_definitions.html#krb_srvname)
+-   [password\_encryption](parameter_definitions.html#password_encryption)
+-   [password\_hash\_algorithm](parameter_definitions.html#password_hash_algorithm)
+-   [ssl](parameter_definitions.html#ssl)
+-   [ssl\_ciphers](parameter_definitions.html#ssl_ciphers)
+
+## <a id="topic47"></a>Database and Tablespace/Filespace Parameters
+
+The following parameters configure the maximum number of databases, tablespaces, and filespaces allowed in a system.
+
+-   [gp\_max\_tablespaces](parameter_definitions.html#gp_max_tablespaces)
+-   [gp\_max\_filespaces](parameter_definitions.html#gp_max_filespaces)
+-   [gp\_max\_databases](parameter_definitions.html#gp_max_databases)
+
+## <a id="topic29"></a>Error Reporting and Logging Parameters
+
+These configuration parameters control HAWQ logging.
+
+### <a id="topic30"></a>Log Rotation
+
+-   [log\_rotation\_age](parameter_definitions.html#log_rotation_age)
+-   [log\_rotation\_size](parameter_definitions.html#log_rotation_size)
+-   [log\_truncate\_on\_rotation](parameter_definitions.html#log_truncate_on_rotation)
+
+### <a id="topic31"></a>When to Log
+
+-   [client\_min\_messages](parameter_definitions.html#client_min_messages)
+-   [log\_error\_verbosity](parameter_definitions.html#log_error_verbosity)
+-   [log\_min\_duration\_statement](parameter_definitions.html#log_min_duration_statement)
+-   [log\_min\_error\_statement](parameter_definitions.html#log_min_error_statement)
+-   [log\_min\_messages](parameter_definitions.html#log_min_messages)
+-   [optimizer\_minidump](parameter_definitions.html#optimizer_minidump)
+
+### <a id="topic32"></a>What to Log
+
+-   [debug\_pretty\_print](parameter_definitions.html#debug_pretty_print)
+-   [debug\_print\_parse](parameter_definitions.html#debug_print_parse)
+-   [debug\_print\_plan](parameter_definitions.html#debug_print_plan)
+-   [debug\_print\_prelim\_plan](parameter_definitions.html#debug_print_prelim_plan)
+-   [debug\_print\_rewritten](parameter_definitions.html#debug_print_rewritten)
+-   [debug\_print\_slice\_table](parameter_definitions.html#debug_print_slice_table)
+-   [log\_autostats](parameter_definitions.html#log_autostats)
+-   [log\_connections](parameter_definitions.html#log_connections)
+-   [log\_disconnections](parameter_definitions.html#log_disconnections)
+-   [log\_dispatch\_stats](parameter_definitions.html#log_dispatch_stats)
+-   [log\_duration](parameter_definitions.html#log_duration)
+-   [log\_executor\_stats](parameter_definitions.html#log_executor_stats)
+-   [log\_hostname](parameter_definitions.html#log_hostname)
+-   [log\_parser\_stats](parameter_definitions.html#log_parser_stats)
+-   [log\_planner\_stats](parameter_definitions.html#log_planner_stats)
+-   [log\_statement](parameter_definitions.html#log_statement)
+-   [log\_statement\_stats](parameter_definitions.html#log_statement_stats)
+-   [log\_timezone](parameter_definitions.html#log_timezone)
+-   [gp\_debug\_linger](parameter_definitions.html#gp_debug_linger)
+-   [gp\_log\_format](parameter_definitions.html#gp_log_format)
+-   [gp\_max\_csv\_line\_length](parameter_definitions.html#gp_max_csv_line_length)
+-   [gp\_reraise\_signal](parameter_definitions.html#gp_reraise_signal)
+
+## <a id="topic45"></a>External Table Parameters
+
+The following parameters configure the external tables feature of HAWQ.
+
+-   [gp\_external\_enable\_exec](parameter_definitions.html#gp_external_enable_exec)
+-   [gp\_external\_grant\_privileges](parameter_definitions.html#gp_external_grant_privileges)
+-   [gp\_external\_max\_segs](parameter_definitions.html#gp_external_max_segs)
+-   [gp\_reject\_percent\_threshold](parameter_definitions.html#gp_reject_percent_threshold)
+
+## <a id="topic57"></a>GPORCA Parameters
+
+These parameters control the usage of GPORCA by HAWQ. For information about GPORCA, see [About GPORCA](../../query/gporca/query-gporca-optimizer.html#topic_i4y_prl_vp). A brief session-level example follows this list.
+
+-   [optimizer](parameter_definitions.html#optimizer)
+-   [optimizer\_analyze\_root\_partition](parameter_definitions.html#optimizer_analyze_root_partition)
+-   [optimizer\_minidump](parameter_definitions.html#optimizer_minidump)
+-   [optimizer\_parts\_to\_force\_sort\_on\_insert](parameter_definitions.html#optimizer_parts_to_force_sort_on_insert)
+-   [optimizer\_prefer\_scalar\_dqa\_multistage\_agg](parameter_definitions.html#optimizer_prefer_scalar_dqa_multistage_agg)
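+
+For example, GPORCA can be enabled or disabled for the current session with `SET`. This is a minimal sketch; the table name `sales` is hypothetical:
+
+``` pre
+-- Enable GPORCA for the current session only
+SET optimizer = on;
+
+-- Confirm the current setting
+SHOW optimizer;
+
+-- Subsequent statements in this session are planned by GPORCA
+EXPLAIN SELECT count(*) FROM sales;
+```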
+
+## <a id="topic49"></a>HAWQ Array Configuration Parameters
+
+The parameters in this topic control the configuration of the HAWQ array and its components: segments, master, distributed transaction manager, master mirror, and interconnect.
+
+### <a id="topic50"></a>Interconnect Configuration Parameters
+
+-   [gp\_interconnect\_fc\_method](parameter_definitions.html#gp_interconnect_fc_method)
+-   [gp\_interconnect\_hash\_multiplier](parameter_definitions.html#gp_interconnect_hash_multiplier)
+-   [gp\_interconnect\_queue\_depth](parameter_definitions.html#gp_interconnect_queue_depth)
+-   [gp\_interconnect\_snd\_queue\_depth](parameter_definitions.html#gp_interconnect_snd_queue_depth)
+-   [gp\_interconnect\_setup\_timeout](parameter_definitions.html#gp_interconnect_setup_timeout)
+-   [gp\_interconnect\_type](parameter_definitions.html#gp_interconnect_type)
+-   [gp\_max\_packet\_size](parameter_definitions.html#gp_max_packet_size)
+
+### <a id="topic51"></a>Dispatch Configuration Parameters
+
+-   [gp\_cached\_segworkers\_threshold](parameter_definitions.html#gp_cached_segworkers_threshold)
+-   [gp\_connections\_per\_thread](parameter_definitions.html#gp_connections_per_thread)
+-   [gp\_enable\_direct\_dispatch](parameter_definitions.html#gp_enable_direct_dispatch)
+-   [gp\_segment\_connect\_timeout](parameter_definitions.html#gp_segment_connect_timeout)
+-   [gp\_set\_proc\_affinity](parameter_definitions.html#gp_set_proc_affinity)
+-   [gp\_vmem\_idle\_resource\_timeout](parameter_definitions.html#gp_vmem_idle_resource_timeout)
+
+### <a id="topic52"></a>Fault Operation Parameters
+
+-   [gp\_set\_read\_only](parameter_definitions.html#gp_set_read_only)
+
+### <a id="topic_ctl_sww_vv"></a>Filespace Parameters
+
+-   [hawq\_dfs\_url](parameter_definitions.html#hawq_dfs_url)
+
+### <a id="topic_r4m_5ww_vv"></a>Master Configuration Parameters
+
+-   [hawq\_master\_address\_host](parameter_definitions.html#hawq_master_address_host)
+-   [hawq\_master\_address\_port](parameter_definitions.html#hawq_master_address_port)
+-   [hawq\_master\_directory](parameter_definitions.html#hawq_master_directory)
+-   [hawq\_master\_temp\_directory](parameter_definitions.html#hawq_master_temp_directory)
+
+### <a id="topic54"></a>Read-Only Parameters
+
+-   [gp\_command\_count](parameter_definitions.html#gp_command_count)
+-   [gp\_role](parameter_definitions.html#gp_role)
+-   [gp\_session\_id](parameter_definitions.html#gp_session_id)
+
+### <a id="topic_zgm_vww_vv"></a>Segment Configuration Parameters
+
+-   [hawq\_segment\_address\_port](parameter_definitions.html#hawq_segment_address_port)
+-   [hawq\_segment\_directory](parameter_definitions.html#hawq_segment_directory)
+-   [hawq\_segment\_temp\_directory](parameter_definitions.html#hawq_segment_temp_directory)
+
+## <a id="topic_pxfparam"></a>HAWQ Extension Framework (PXF) Parameters
+
+The parameters in this topic control configuration, query analysis, and statistics collection in the HAWQ Extension Framework (PXF). An example of changing one of these parameters for a session follows this list.
+
+-   [pxf\_enable\_filter\_pushdown](parameter_definitions.html#pxf_enable_filter_pushdown)
+-   [pxf\_enable\_stat\_collection](parameter_definitions.html#pxf_enable_stat_collection)
+-   [pxf\_remote\_service\_login](parameter_definitions.html#pxf_remote_service_login)
+-   [pxf\_remote\_service\_secret](parameter_definitions.html#pxf_remote_service_secret)
+-   [pxf\_service\_address](parameter_definitions.html#pxf_service_address)
+-   [pxf\_service\_port](parameter_definitions.html#pxf_service_port)
+-   [pxf\_stat\_max\_fragments](parameter_definitions.html#pxf_stat_max_fragments)
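+
+For example, filter pushdown is typically adjusted per session. This is a sketch only, assuming the parameter is session-settable in your HAWQ release:
+
+``` pre
+-- Check the current setting
+SHOW pxf_enable_filter_pushdown;
+
+-- Turn filter pushdown off for the current session while troubleshooting a query
+SET pxf_enable_filter_pushdown = off;
+```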
+
+## <a id="topic56"></a>HAWQ PL/Java Extension Parameters
+
+The parameters in this topic control the configuration of HAWQ PL/Java extensions.
+
+-   [pljava\_classpath](parameter_definitions.html#pljava_classpath)
+-   [pljava\_statement\_cache\_size](parameter_definitions.html#pljava_statement_cache_size)
+-   [pljava\_release\_lingering\_savepoints](parameter_definitions.html#pljava_release_lingering_savepoints)
+-   [pljava\_vmoptions](parameter_definitions.html#pljava_vmoptions)
+
+## <a id="hawq_resource_management"></a>HAWQ Resource Management Parameters
+
+The following parameters configure the HAWQ resource management feature.
+
+-   [hawq\_global\_rm\_type](parameter_definitions.html#hawq_global_rm_type)
+-   [hawq\_re\_memory\_overcommit\_max](parameter_definitions.html#hawq_re_memory_overcommit_max)
+-   [hawq\_rm\_cluster\_report\_period](parameter_definitions.html#hawq_rm_cluster_report)
+-   [hawq\_rm\_force\_alterqueue\_cancel\_queued\_request](parameter_definitions.html#hawq_rm_force_alterqueue_cancel_queued_request)
+-   [hawq\_rm\_master\_port](parameter_definitions.html#hawq_rm_master_port)
+-   [hawq\_rm\_memory\_limit\_perseg](parameter_definitions.html#hawq_rm_memory_limit_perseg)
+-   [hawq\_rm\_min\_resource\_perseg](parameter_definitions.html#hawq_rm_min_resource_perseg)
+-   [hawq\_rm\_nresqueue\_limit](parameter_definitions.html#hawq_rm_nresqueue_limit)
+-   [hawq\_rm\_nslice\_perseg\_limit](parameter_definitions.html#hawq_rm_nslice_perseg_limit)
+-   [hawq\_rm\_nvcore\_limit\_perseg](parameter_definitions.html#hawq_rm_nvcore_limit_perseg)
+-   [hawq\_rm\_nvseg\_perquery\_limit](parameter_definitions.html#hawq_rm_nvseg_perquery_limit)
+-   [hawq\_rm\_nvseg\_perquery\_perseg\_limit](parameter_definitions.html#hawq_rm_nvseg_perquery_perseg_limit)
+-   [hawq\_rm\_nvseg\_variance\_amon\_seg\_limit](parameter_definitions.html#hawq_rm_nvseg_variance_amon_seg_limit)
+-   [hawq\_rm\_rejectrequest\_nseg\_limit](parameter_definitions.html#hawq_rm_rejectrequest_nseg_limit)
+-   [hawq\_rm\_resource\_idle\_timeout](parameter_definitions.html#hawq_rm_resource_idle_timeout)
+-   [hawq\_rm\_return\_percent\_on\_overcommit](parameter_definitions.html#hawq_rm_return_percent_on_overcommit)
+-   [hawq\_rm\_segment\_heartbeat\_interval](parameter_definitions.html#hawq_rm_segment_heartbeat_interval)
+-   [hawq\_rm\_segment\_port](parameter_definitions.html#hawq_rm_segment_port)
+-   [hawq\_rm\_stmt\_nvseg](parameter_definitions.html#hawq_rm_stmt_nvseg)
+-   [hawq\_rm\_stmt\_vseg\_memory](parameter_definitions.html#hawq_rm_stmt_vseg_memory)
+-   [hawq\_rm\_tolerate\_nseg\_limit](parameter_definitions.html#hawq_rm_tolerate_nseg_limit)
+-   [hawq\_rm\_yarn\_address](parameter_definitions.html#hawq_rm_yarn_address)
+-   [hawq\_rm\_yarn\_app\_name](parameter_definitions.html#hawq_rm_yarn_app_name)
+-   [hawq\_rm\_yarn\_queue\_name](parameter_definitions.html#hawq_rm_yarn_queue_name)
+-   [hawq\_rm\_yarn\_scheduler\_address](parameter_definitions.html#hawq_rm_yarn_scheduler_address)
+
+## <a id="topic43"></a>Lock Management Parameters
+
+These configuration parameters set limits for locks and deadlocks.
+
+-   [deadlock\_timeout](parameter_definitions.html#deadlock_timeout)
+-   [max\_locks\_per\_transaction](parameter_definitions.html#max_locks_per_transaction)
+
+## <a id="topic48"></a>Past PostgreSQL Version Compatibility Parameters
+
+The following parameters provide compatibility with older PostgreSQL versions. You do not need to change these parameters in HAWQ.
+
+-   [add\_missing\_from](parameter_definitions.html#add_missing_from)
+-   [array\_nulls](parameter_definitions.html#array_nulls)
+-   [backslash\_quote](parameter_definitions.html#backslash_quote)
+-   [escape\_string\_warning](parameter_definitions.html#escape_string_warning)
+-   [regex\_flavor](parameter_definitions.html#regex_flavor)
+-   [standard\_conforming\_strings](parameter_definitions.html#standard_conforming_strings)
+-   [transform\_null\_equals](parameter_definitions.html#transform_null_equals)
+
+## <a id="topic21"></a>Query Tuning Parameters
+
+These parameters control aspects of SQL query processing such as query operators, operator settings, and statistics sampling.
+
+### <a id="topic22"></a>Legacy Query Optimizer Operator Control Parameters
+
+The following parameters control the types of plan operations the legacy query optimizer can use. Enable or disable plan operations to force the legacy optimizer to choose a different plan. This is useful for testing and comparing query performance using different plan types. An example follows this list.
+
+-   [enable\_bitmapscan](parameter_definitions.html#enable_bitmapscan)
+-   [enable\_groupagg](parameter_definitions.html#enable_groupagg)
+-   [enable\_hashagg](parameter_definitions.html#enable_hashagg)
+-   [enable\_hashjoin](parameter_definitions.html#enable_hashjoin)
+-   [enable\_indexscan](parameter_definitions.html#enable_indexscan)
+-   [enable\_mergejoin](parameter_definitions.html#enable_mergejoin)
+-   [enable\_nestloop](parameter_definitions.html#enable_nestloop)
+-   [enable\_seqscan](parameter_definitions.html#enable_seqscan)
+-   [enable\_sort](parameter_definitions.html#enable_sort)
+-   [enable\_tidscan](parameter_definitions.html#enable_tidscan)
+-   [gp\_enable\_agg\_distinct](parameter_definitions.html#gp_enable_agg_distinct)
+-   [gp\_enable\_agg\_distinct\_pruning](parameter_definitions.html#gp_enable_agg_distinct_pruning)
+-   [gp\_enable\_direct\_dispatch](parameter_definitions.html#gp_enable_direct_dispatch)
+-   [gp\_enable\_fallback\_plan](parameter_definitions.html#gp_enable_fallback_plan)
+-   [gp\_enable\_fast\_sri](parameter_definitions.html#gp_enable_fast_sri)
+-   [gp\_enable\_groupext\_distinct\_gather](parameter_definitions.html#gp_enable_groupext_distinct_gather)
+-   [gp\_enable\_groupext\_distinct\_pruning](parameter_definitions.html#gp_enable_groupext_distinct_pruning)
+-   [gp\_enable\_multiphase\_agg](parameter_definitions.html#gp_enable_multiphase_agg)
+-   [gp\_enable\_predicate\_propagation](parameter_definitions.html#gp_enable_predicate_propagation)
+-   [gp\_enable\_preunique](parameter_definitions.html#gp_enable_preunique)
+-   [gp\_enable\_sequential\_window\_plans](parameter_definitions.html#gp_enable_sequential_window_plans)
+-   [gp\_enable\_sort\_distinct](parameter_definitions.html#gp_enable_sort_distinct)
+-   [gp\_enable\_sort\_limit](parameter_definitions.html#gp_enable_sort_limit)
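+
+As an illustrative sketch (the tables `sales` and `customers` are hypothetical), you might disable one plan type for the current session and compare the resulting plans:
+
+``` pre
+-- Discourage the legacy optimizer from using hash joins in this session
+SET enable_hashjoin = off;
+EXPLAIN SELECT * FROM sales s JOIN customers c ON s.cust_id = c.id;
+
+-- Restore the default behavior
+SET enable_hashjoin = on;
+```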
+
+### <a id="topic23"></a>Legacy Query Optimizer Costing Parameters
+
+**Warning:** Do not adjust these query costing parameters. They are tuned to reflect HAWQ hardware configurations and typical workloads. All of these parameters are related. Changing one without changing the others can have adverse effects on performance.
+
+-   [cpu\_index\_tuple\_cost](parameter_definitions.html#cpu_index_tuple_cost)
+-   [cpu\_operator\_cost](parameter_definitions.html#cpu_operator_cost)
+-   [cpu\_tuple\_cost](parameter_definitions.html#cpu_tuple_cost)
+-   [cursor\_tuple\_fraction](parameter_definitions.html#cursor_tuple_fraction)
+-   [effective\_cache\_size](parameter_definitions.html#effective_cache_size)
+-   [gp\_motion\_cost\_per\_row](parameter_definitions.html#gp_motion_cost_per_row)
+-   [gp\_segments\_for\_planner](parameter_definitions.html#gp_segments_for_planner)
+-   [random\_page\_cost](parameter_definitions.html#random_page_cost)
+-   [seq\_page\_cost](parameter_definitions.html#seq_page_cost)
+
+### <a id="topic24"></a>Database Statistics Sampling Parameters
+
+These parameters adjust the amount of data sampled by an `ANALYZE` operation. Adjusting these parameters affects statistics collection system-wide. You can configure statistics collection on particular tables and columns by using the `ALTER TABLE` `SET STATISTICS` clause, as shown in the example after this list. See [About Database Statistics](../../datamgmt/about_statistics.html).
+
+-   [default\_statistics\_target](parameter_definitions.html#default_statistics_target)
+-   [gp\_analyze\_relative\_error](parameter_definitions.html#gp_analyze_relative_error)
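+
+For instance, the statistics target for a single column can be tuned with `ALTER TABLE`. This is a minimal sketch; the table `sales` and column `region` are hypothetical:
+
+``` pre
+-- Raise the per-column statistics target; the next ANALYZE samples more data for this column
+ALTER TABLE sales ALTER COLUMN region SET STATISTICS 250;
+
+-- Collect statistics using the new target
+ANALYZE sales;
+```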
+
+### <a id="topic25"></a>Sort Operator Configuration Parameters
+
+-   [gp\_enable\_sort\_distinct](parameter_definitions.html#gp_enable_sort_distinct)
+-   [gp\_enable\_sort\_limit](parameter_definitions.html#gp_enable_sort_limit)
+
+### <a id="topic26"></a>Aggregate Operator Configuration Parameters
+
+-   [gp\_enable\_agg\_distinct](parameter_definitions.html#gp_enable_agg_distinct)
+-   [gp\_enable\_agg\_distinct\_pruning](parameter_definitions.html#gp_enable_agg_distinct_pruning)
+-   [gp\_enable\_multiphase\_agg](parameter_definitions.html#gp_enable_multiphase_agg)
+-   [gp\_enable\_preunique](parameter_definitions.html#gp_enable_preunique)
+-   [gp\_enable\_groupext\_distinct\_gather](parameter_definitions.html#gp_enable_groupext_distinct_gather)
+-   [gp\_enable\_groupext\_distinct\_pruning](parameter_definitions.html#gp_enable_groupext_distinct_pruning)
+-   [gp\_workfile\_compress\_algorithm](parameter_definitions.html#gp_workfile_compress_algorithm)
+
+### <a id="topic27"></a>Join Operator Configuration Parameters
+
+-   [join\_collapse\_limit](parameter_definitions.html#join_collapse_limit)
+-   [gp\_adjust\_selectivity\_for\_outerjoins](parameter_definitions.html#gp_adjust_selectivity_for_outerjoins)
+-   [gp\_hashjoin\_tuples\_per\_bucket](parameter_definitions.html#gp_hashjoin_tuples_per_bucket)
+-   [gp\_statistics\_use\_fkeys](parameter_definitions.html#gp_statistics_use_fkeys)
+-   [gp\_workfile\_compress\_algorithm](parameter_definitions.html#gp_workfile_compress_algorithm)
+
+### <a id="topic28"></a>Other Legacy Query Optimizer Configuration Parameters
+
+-   [from\_collapse\_limit](parameter_definitions.html#from_collapse_limit)
+-   [gp\_enable\_predicate\_propagation](parameter_definitions.html#gp_enable_predicate_propagation)
+-   [gp\_max\_plan\_size](parameter_definitions.html#gp_max_plan_size)
+-   [gp\_statistics\_pullup\_from\_child\_partition](parameter_definitions.html#gp_statistics_pullup_from_child_partition)
+
+## <a id="statistics_collection"></a>Statistics Collection Parameters
+
+### <a id="topic_qvz_nz3_yv"></a>Automatic Statistics Collection
+
+When automatic statistics collection is enabled, you can run `ANALYZE` automatically in the same transaction as an `INSERT`, `COPY` or `CREATE TABLE...AS SELECT` statement when a certain threshold of rows is affected (`on_change`), or when a newly generated table has no statistics (`on_no_stats`). To enable this feature, set the following server configuration parameters in your HAWQ `hawq-site.xml` file by using the `hawq config` utility and restart HAWQ:
+
+-   [gp\_autostats\_mode](parameter_definitions.html#gp_autostats_mode)
+-   [log\_autostats](parameter_definitions.html#log_autostats)
+
+**Note:** If you install and manage HAWQ using Ambari, be aware that property changes made using `hawq config` could be overwritten by Ambari. For Ambari-managed HAWQ clusters, always use the Ambari administration interface to set or change HAWQ configuration properties.
+
+### <a id="topic37"></a>Runtime Statistics Collection Parameters
+
+These parameters control the server statistics collection feature. When statistics collection is enabled, you can access the statistics data using the *pg\_stat* and *pg\_statio* families of system catalog views, as shown in the query after this list.
+
+-   [track\_activities](parameter_definitions.html#track_activities)
+-   [track\_counts](parameter_definitions.html#track_counts)
+-   [update\_process\_title](parameter_definitions.html#update_process_title)
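+
+For example, a quick look at current session activity (a sketch; the columns available in the view can vary by release):
+
+``` pre
+-- List active sessions and the statement each one is currently running
+SELECT procpid, usename, current_query
+FROM pg_stat_activity;
+```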
+
+## <a id="topic15"></a>System Resource Consumption Parameters
+
+These parameters set the limits for system resources consumed by HAWQ.
+
+### <a id="topic16"></a>Memory Consumption Parameters
+
+These parameters control system memory usage. You can adjust `hawq_re_memory_overcommit_max` to avoid running out of memory at the segment hosts during query processing. See also [HAWQ Resource Management](#hawq_resource_management).
+
+-   [hawq\_re\_memory\_overcommit\_max](parameter_definitions.html#hawq_re_memory_overcommit_max)
+-   [gp\_vmem\_protect\_segworker\_cache\_limit](parameter_definitions.html#gp_vmem_protect_segworker_cache_limit)
+-   [gp\_workfile\_limit\_files\_per\_query](parameter_definitions.html#gp_workfile_limit_files_per_query)
+-   [gp\_workfile\_limit\_per\_query](parameter_definitions.html#gp_workfile_limit_per_query)
+-   [gp\_workfile\_limit\_per\_segment](parameter_definitions.html#gp_workfile_limit_per_segment)
+-   [maintenance\_work\_mem](parameter_definitions.html#maintenance_work_mem)
+-   [max\_stack\_depth](parameter_definitions.html#max_stack_depth)
+-   [shared\_buffers](parameter_definitions.html#shared_buffers)
+-   [temp\_buffers](parameter_definitions.html#temp_buffers)
+
+### <a id="topic17"></a>Free Space Map Parameters
+
+These parameters control the sizing of the *free space map*, which contains expired rows. Use `VACUUM` to reclaim the free space map disk space.
+
+-   [max\_fsm\_pages](parameter_definitions.html#max_fsm_pages)
+-   [max\_fsm\_relations](parameter_definitions.html#max_fsm_relations)
+
+### <a id="topic18"></a>OS Resource Parameters
+
+-   [max\_files\_per\_process](parameter_definitions.html#max_files_per_process)
+-   [shared\_preload\_libraries](parameter_definitions.html#shared_preload_libraries)
+
+### <a id="topic19"></a>Cost-Based Vacuum Delay Parameters
+
+**Warning:** Avoid using cost-based vacuum delay because it runs asynchronously among the segment instances. The vacuum cost limit and delay is invoked at the segment level without taking into account the state of the entire HAWQ array.
+
+You can configure the execution cost of `VACUUM` and `ANALYZE` commands to reduce the I/O impact on concurrent database activity. When the accumulated cost of I/O operations reaches the limit, the process performing the operation sleeps for a while, then resets the counter and continues execution.
+
+-   [vacuum\_cost\_delay](parameter_definitions.html#vacuum_cost_delay)
+-   [vacuum\_cost\_limit](parameter_definitions.html#vacuum_cost_limit)
+-   [vacuum\_cost\_page\_dirty](parameter_definitions.html#vacuum_cost_page_dirty)
+-   [vacuum\_cost\_page\_miss](parameter_definitions.html#vacuum_cost_page_miss)
+
+### <a id="topic20"></a>Transaction ID Management Parameters
+
+-   [xid\_stop\_limit](parameter_definitions.html#xid_stop_limit)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/guc/guc_config.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/guc/guc_config.html.md.erb b/markdown/reference/guc/guc_config.html.md.erb
new file mode 100644
index 0000000..56a2456
--- /dev/null
+++ b/markdown/reference/guc/guc_config.html.md.erb
@@ -0,0 +1,77 @@
+---
+title: About Server Configuration Parameters
+---
+
+There are many HAWQ server configuration parameters that affect the behavior of the HAWQ system. Many of these configuration parameters have the same names, settings, and behaviors as in a regular PostgreSQL database system.
+
+-   [Parameter Types and Values](#topic_vsn_22l_z4) describes the parameter data types and values.
+-   [Setting Parameters](#topic_cyz_p2l_z4) describes limitations on who can change them and where or when they can be set.
+-   [Configuration Parameter Categories](guc_category-list.html#guc-cat-list) organizes parameters by functionality.
+-   [Configuration Parameters](parameter_definitions.html) lists the parameter descriptions in alphabetic order.
+
+## <a id="topic_vsn_22l_z4"></a>Parameter Types and Values
+
+All parameter names are case-insensitive. Every parameter takes a value of one of four types: `Boolean`, `integer`, `floating point`, or `string`. Boolean values may be written as `ON`, `OFF`, `TRUE`, `FALSE`, `YES`, `NO`, `1`, `0` (all case-insensitive).
+
+Some settings specify a memory size or time value. Each of these has an implicit unit, which is either kilobytes, blocks (typically eight kilobytes), milliseconds, seconds, or minutes. Valid memory size units are `kB` (kilobytes), `MB` (megabytes), and `GB` (gigabytes). Valid time units are `ms` (milliseconds), `s` (seconds), `min` (minutes), `h` (hours), and `d` (days). Note that the multiplier for memory units is 1024, not 1000. A valid time expression contains a number and a unit. When specifying a memory or time unit using the `SET` command, enclose the value in quotes. For example:
+
+``` pre
+SET hawq_rm_stmt_vseg_memory TO '4GB';
+```
+
+**Note:** Do not include a space between the value and the unit name.
+
+## <a id="topic_cyz_p2l_z4"></a>Setting Parameters
+
+Many of the configuration parameters have limitations on who can change them and where or when they can be set. For example, to change certain parameters, you must be a HAWQ superuser. Other parameters require a restart of the system for the changes to take effect. A parameter that is classified as *session* can be set at the system level (in the `hawq-site.xml` file), at the database level (using `ALTER DATABASE`), at the role level (using `ALTER ROLE`), or at the session level (using `SET`). System parameters can only be set by using the `hawq config` utility or by directly modifying a `hawq-site.xml` file.
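+
+For example, a session parameter can be set at several of these levels. This is a sketch only; the database `testdb`, the role `analyst`, and the choice of parameter are illustrative:
+
+``` pre
+-- Database-level default, applied to new sessions in this database
+ALTER DATABASE testdb SET client_min_messages TO 'notice';
+
+-- Role-level default, applied when this role connects
+ALTER ROLE analyst SET client_min_messages TO 'warning';
+
+-- Session-level setting, which overrides the database- and role-level defaults
+SET client_min_messages TO 'debug1';
+```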
+
+By design, all HAWQ instances (including master and segments) host identical `hawq-site.xml` files. Using a common `hawq-site.xml` file across all HAWQ instances simplifies configuration of the cluster. Within each `hawq-site.xml` configuration file, some parameters are considered *segment* parameters, meaning that each segment instance looks to its own `hawq-site.xml` file to get the value of that parameter. By convention, these parameter names begin with the string `hawq_segment`. Other parameters are considered *master* parameters. By convention, these parameter names begin with the string `hawq_master`. Master parameters are only applied at the master instance and ignored by segments.
+
+**Note:** If you use the `hawq config` utility to set configuration parameter values in `hawq-site.xml`, the utility synchronizes all configuration files. Any manual modifications that you made to individual `hawq-site.xml` files may be lost. Additionally, if you install and manage HAWQ using Ambari, do not use `hawq config` to configure HAWQ properties. If the cluster is restarted, Ambari will overwrite any changes made by `hawq config`. For Ambari-managed HAWQ clusters, only use the Ambari administration interface to set or change HAWQ configuration properties.
+
+This table describes the values in the Set Classifications column of the table in the description of a server configuration parameter.
+
+<a id="topic_cyz_p2l_z4__ih389119"></a>
+
+<table>
+<caption><span class="tablecap">Table 1. Set Classifications</span></caption>
+<colgroup>
+<col width="50%" />
+<col width="50%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Set Classification</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>master or local</td>
+<td>A <em>master</em> parameter must be set in the <code class="ph codeph">hawq-site.xml</code> file of the HAWQ master instance. The value for this parameter is then either passed to (or ignored by) the segments at run time.
+<p>A <em>local</em> parameter is also defined in the <code class="ph codeph">hawq-site.xml</code> file of the master AND every segment instance. Each HAWQ instance looks to its own configuration to get the value for the parameter.</p>
+<p>We recommend that you use the same configuration parameter values across all HAWQ instances to maintain a single, consistent <code class="ph codeph">hawq-site.xml</code> configuration file. The <code class="ph codeph">hawq config</code> utility will enforce this consistency.</p></td>
+<p>Changes to master or local parameters always require a system restart for changes to take effect.</p></td>
+</tr>
+<tr class="even">
+<td>session or system</td>
+<td><em>Session</em> parameters can be changed on the fly within a database session, and can have a hierarchy of settings: at the system level (through <code class="ph codeph">hawq-site.xml</code> or the <code class="ph codeph">hawq config</code> utility), at the database level (<code class="ph codeph">ALTER DATABASE...SET</code>), at the role level (<code class="ph codeph">ALTER ROLE...SET</code>), or at the session level (<code class="ph codeph">SET</code>). If the parameter is set at multiple levels, then the most granular setting takes precedence (for example, session overrides role, role overrides database, and database overrides system).
+<p>A <em>system</em> parameter can only be changed via <code class="ph codeph">hawq config</code> utility or the <code class="ph codeph">hawq-site.xml</code> file(s).</p></td>
+</tr>
+<tr class="odd">
+<td>restart or reload</td>
+<td>When changing parameter values in the <code class="ph codeph">hawq-site.xml</code> file(s), some require a <em>restart</em> of HAWQ for the change to take effect. Other parameter values can be refreshed by just reloading the configuration file (using <code class="ph codeph">hawq stop object -u</code>), and do not require stopping the system.</td>
+</tr>
+<tr class="even">
+<td>superuser</td>
+<td>These session parameters can only be set by a database superuser. Regular database users cannot set this parameter.</td>
+</tr>
+<tr class="odd">
+<td>read only</td>
+<td>These parameters are not settable by database users or superusers. The current value of the parameter can be shown but not altered.</td>
+</tr>
+</tbody>
+</table>
+
+
+


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-OPERATOR-CLASS.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-OPERATOR-CLASS.html.md.erb b/markdown/reference/sql/CREATE-OPERATOR-CLASS.html.md.erb
new file mode 100644
index 0000000..9c093c1
--- /dev/null
+++ b/markdown/reference/sql/CREATE-OPERATOR-CLASS.html.md.erb
@@ -0,0 +1,153 @@
+---
+title: CREATE OPERATOR CLASS
+---
+
+Defines a new operator class.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE OPERATOR CLASS <name> [DEFAULT] FOR TYPE <data_type>
+  USING <index_method> AS
+  {
+  OPERATOR <strategy_number>
+            <op_name> [(<op_type>, <op_type>)] [RECHECK]
+  | FUNCTION <support_number>
+            <funcname> (<argument_type> [, ...] )
+  | STORAGE <storage_type>
+  } [, ... ]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE OPERATOR CLASS` creates a new operator class. An operator class defines how a particular data type can be used with an index. The operator class specifies that certain operators will fill particular roles or strategies for this data type and this index method. The operator class also specifies the support procedures to be used by the index method when the operator class is selected for an index column. All the operators and functions used by an operator class must be defined before the operator class is created. Any functions used to implement the operator class must be defined as `IMMUTABLE`.
+
+`CREATE OPERATOR CLASS` does not presently check whether the operator class definition includes all the operators and functions required by the index method, nor whether the operators and functions form a self-consistent set. It is the user's responsibility to define a valid operator class.
+
+You must be a superuser to create an operator class.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>The (optionally schema-qualified) name of the operator class to be defined. Two operator classes in the same schema can have the same name only if they are for different index methods.</dd>
+
+<dt>DEFAULT  </dt>
+<dd>Makes the operator class the default operator class for its data type. At most one operator class can be the default for a specific data type and index method.</dd>
+
+<dt> \<data\_type\>   </dt>
+<dd>The column data type that this operator class is for.</dd>
+
+<dt> \<index\_method\>   </dt>
+<dd>The name of the index method this operator class is for. Choices are `btree`, `bitmap`, and `gist`.</dd>
+
+<dt> \<strategy\_number\>   </dt>
+<dd>The operators associated with an operator class are identified by \<strategy number\>s, which serve to identify the semantics of each operator within the context of its operator class. For example, B-trees impose a strict ordering on keys, lesser to greater, and so operators like *less than* and *greater than or equal to* are interesting with respect to a B-tree. These strategies can be thought of as generalized operators. Each operator class specifies which actual operator corresponds to each strategy for a particular data type and interpretation of the index semantics. The corresponding strategy numbers for each index method are as follows: <a id="topic1__bx145491"></a>
+
+<span class="tablecap">Table 1. B-tree and Bitmap Strategies</span>
+
+| Operation             | Strategy Number |
+|-----------------------|-----------------|
+| less than             | 1               |
+| less than or equal    | 2               |
+| equal                 | 3               |
+| greater than or equal | 4               |
+| greater than          | 5               |
+
+<span class="tablecap">Table 2. GiST Two-Dimensional Strategies (R-Tree)</span>
+
+<a id="topic1__bx145491a"></a>
+
+| Operation                   | Strategy Number |
+|-----------------------------|-----------------|
+| strictly left of            | 1               |
+| does not extend to right of | 2               |
+| overlaps                    | 3               |
+| does not extend to left of  | 4               |
+| strictly right of           | 5               |
+| same                        | 6               |
+| contains                    | 7               |
+| contained by                | 8               |
+| does not extend above       | 9               |
+| strictly below              | 10              |
+| strictly above              | 11              |
+</dd>
+
+<dt> \<op\_name\>   </dt>
+<dd>The name (optionally schema-qualified) of an operator associated with the operator class.</dd>
+
+<dt> \<op\_type\>   </dt>
+<dd>The operand data type(s) of an operator, or `NONE` to signify a left-unary or right-unary operator. The operand data types may be omitted in the normal case where they are the same as the operator class data type.</dd>
+
+<dt>RECHECK  </dt>
+<dd>If present, the index is "lossy" for this operator, and so the rows retrieved using the index must be rechecked to verify that they actually satisfy the qualification clause involving this operator.</dd>
+
+<dt> \<support\_number\>   </dt>
+<dd>Index methods require additional support routines in order to work. These operations are administrative routines used internally by the index methods. As with strategies, the operator class identifies which specific functions should play each of these roles for a given data type and semantic interpretation. The index method defines the set of functions it needs, and the operator class identifies the correct functions to use by assigning them to the *support function numbers* as follows: <a id="topic1__bx145974"></a>
+
+<span class="tablecap">Table 3. B-tree and Bitmap Support Functions</span>
+
+| Function                                                                                                                                                                | Support Number |
+|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|
+| Compare two keys and return an integer less than zero, zero, or greater than zero, indicating whether the first key is less than, equal to, or greater than the second. | 1              |
+
+
+<span class="tablecap">Table 4. GiST Support Functions</span>
+
+<a id="topic1__bx145974a"></a>
+
+| Function                                                                                                                      | Support Number |
+|-------------------------------------------------------------------------------------------------------------------------------|----------------|
+| consistent - determine whether key satisfies the query qualifier.                                                             | 1              |
+| union - compute union of a set of keys.                                                                                       | 2              |
+| compress - compute a compressed representation of a key or value to be indexed.                                               | 3              |
+| decompress - compute a decompressed representation of a compressed key.                                                       | 4              |
+| penalty - compute penalty for inserting new key into subtree with given subtree's key.                                        | 5              |
+| picksplit - determine which entries of a page are to be moved to the new page and compute the union keys for resulting pages. | 6              |
+| equal - compare two keys and return true if they are equal.                                                                   | 7              |
+</dd>
+
+<dt> \<funcname\>   </dt>
+<dd>The name (optionally schema-qualified) of a function that is an index method support procedure for the operator class.</dd>
+
+<dt> \<argument\_type\>   </dt>
+<dd>The parameter data type(s) of the function.</dd>
+
+<dt> \<storage\_type\>   </dt>
+<dd>The data type actually stored in the index. Normally this is the same as the column data type, but the GiST index method allows it to be different. The `STORAGE` clause must be omitted unless the index method allows a different type to be used.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Because the index machinery does not check access permissions on functions before using them, including a function or operator in an operator class is the same as granting public execute permission on it. This is usually not an issue for the sorts of functions that are useful in an operator class.
+
+The operators should not be defined by SQL functions. A SQL function is likely to be inlined into the calling query, which will prevent the optimizer from recognizing that the query matches an index.
+
+Any functions used to implement the operator class must be defined as `IMMUTABLE`.
+
+## <a id="topic1__section6"></a>Examples
+
+The following example command defines a GiST index operator class for the data type `_int4` (array of int4):
+
+``` pre
+CREATE OPERATOR CLASS gist__int_ops
+    DEFAULT FOR TYPE _int4 USING gist AS
+        OPERATOR 3 &&,
+        OPERATOR 6 = RECHECK,
+        OPERATOR 7 @>,
+        OPERATOR 8 <@,
+        OPERATOR 20 @@ (_int4, query_int),
+        FUNCTION 1 g_int_consistent (internal, _int4, int4),
+        FUNCTION 2 g_int_union (bytea, internal),
+        FUNCTION 3 g_int_compress (internal),
+        FUNCTION 4 g_int_decompress (internal),
+        FUNCTION 5 g_int_penalty (internal, internal, internal),
+        FUNCTION 6 g_int_picksplit (internal, internal),
+        FUNCTION 7 g_int_same (_int4, _int4, internal);
+```
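+
+As a complementary sketch (not taken from the example above), the following shows the shape of a B-tree operator class for a hypothetical type `mytype`, assuming its comparison operators and a `mytype_cmp` support function have already been created:
+
+``` pre
+CREATE OPERATOR CLASS mytype_ops
+    DEFAULT FOR TYPE mytype USING btree AS
+        OPERATOR 1 < ,                          -- strategy 1: less than
+        OPERATOR 2 <= ,                         -- strategy 2: less than or equal
+        OPERATOR 3 = ,                          -- strategy 3: equal
+        OPERATOR 4 >= ,                         -- strategy 4: greater than or equal
+        OPERATOR 5 > ,                          -- strategy 5: greater than
+        FUNCTION 1 mytype_cmp(mytype, mytype);  -- support 1: three-way comparison
+```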
+
+## <a id="topic1__section7"></a>Compatibility
+
+`CREATE OPERATOR CLASS` is a HAWQ extension. There is no `CREATE OPERATOR CLASS` statement in the SQL standard.
+
+## <a id="topic1__section8"></a>See Also
+
+[ALTER OPERATOR CLASS](ALTER-OPERATOR-CLASS.html), [DROP OPERATOR CLASS](DROP-OPERATOR-CLASS.html), [CREATE FUNCTION](CREATE-FUNCTION.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-OPERATOR.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-OPERATOR.html.md.erb b/markdown/reference/sql/CREATE-OPERATOR.html.md.erb
new file mode 100644
index 0000000..570d226
--- /dev/null
+++ b/markdown/reference/sql/CREATE-OPERATOR.html.md.erb
@@ -0,0 +1,171 @@
+---
+title: CREATE OPERATOR
+---
+
+Defines a new operator.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE OPERATOR <name> (
+       PROCEDURE = <funcname>
+       [, LEFTARG = <lefttype>] [, RIGHTARG = <righttype>]
+       [, COMMUTATOR = <com_op>] [, NEGATOR = <neg_op>]
+       [, RESTRICT = <res_proc>] [, JOIN = <join_proc>]
+       [, HASHES] [, MERGES]
+       [, SORT1 = <left_sort_op>] [, SORT2 = <right_sort_op>]
+       [, LTCMP = <less_than_op>] [, GTCMP = <greater_than_op>] )
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE OPERATOR` defines a new operator. The user who defines an operator becomes its owner.
+
+The operator name is a sequence of up to `NAMEDATALEN`-1 (63 by default) characters from the following list: `` + - * / < > = ~ ! @ # % ^ & | ` ? ``
+
+There are a few restrictions on your choice of name:
+
+-   `--` and `/*` cannot appear anywhere in an operator name, since they will be taken as the start of a comment.
+-   A multicharacter operator name cannot end in `+` or `-`, unless the name also contains at least one of these characters: `` ~ ! @ # % ^ & | ` ? ``
+
+For example, `@-` is an allowed operator name, but `*-` is not. This restriction allows HAWQ to parse SQL-compliant commands without requiring spaces between tokens.
+
+The operator `!=` is mapped to `<>` on input, so these two names are always equivalent.
+
+At least one of `LEFTARG` and `RIGHTARG` must be defined. For binary operators, both must be defined. For right unary operators, only `LEFTARG` should be defined, while for left unary operators only `RIGHTARG` should be defined.
+
+The \<funcname\> procedure must have been previously defined using `CREATE FUNCTION`, must be `IMMUTABLE`, and must be defined to accept the correct number of arguments (either one or two) of the indicated types.
+
+The other clauses specify optional operator optimization clauses. These clauses should be provided whenever appropriate to speed up queries that use the operator. But if you provide them, you must be sure that they are correct. Incorrect use of an optimization clause can result in server process crashes, subtly wrong output, or other unexpected results. You can always leave out an optimization clause if you are not sure about it.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>The (optionally schema-qualified) name of the operator to be defined. Two operators in the same schema can have the same name if they operate on different data types.</dd>
+
+<dt> \<funcname\>   </dt>
+<dd>The function used to implement this operator (must be an `IMMUTABLE` function).</dd>
+
+<dt> \<lefttype\>   </dt>
+<dd>The data type of the operator's left operand, if any. This option would be omitted for a left-unary operator.</dd>
+
+<dt> \<righttype\>   </dt>
+<dd>The data type of the operator's right operand, if any. This option would be omitted for a right-unary operator.</dd>
+
+<dt> \<com\_op\>   </dt>
+<dd>The optional `COMMUTATOR` clause names an operator that is the commutator of the operator being defined. We say that operator A is the commutator of operator B if (x A y) equals (y B x) for all possible input values x, y. Notice that B is also the commutator of A. For example, operators `<` and `>` for a particular data type are usually each other's commutators, and operator `+` is usually commutative with itself. But operator `-` is usually not commutative with anything. The left operand type of a commutable operator is the same as the right operand type of its commutator, and vice versa. So the name of the commutator operator is all that needs to be provided in the `COMMUTATOR` clause.</dd>
+
+<dt> \<neg\_op\>   </dt>
+<dd>The optional `NEGATOR` clause names an operator that is the negator of the operator being defined. We say that operator A is the negator of operator B if both return Boolean results and (x A y) equals NOT (x B y) for all possible inputs x, y. Notice that B is also the negator of A. For example, `<` and `>=` are a negator pair for most data types. An operator's negator must have the same left and/or right operand types as the operator to be defined, so only the operator name need be given in the `NEGATOR` clause.</dd>
+
+<dt> \<res\_proc\>   </dt>
+<dd>The optional `RESTRICT` clause names a restriction selectivity estimation function for the operator. Note that this is a function name, not an operator name. `RESTRICT` clauses only make sense for binary operators that return `boolean`. The idea behind a restriction selectivity estimator is to guess what fraction of the rows in a table will satisfy a `WHERE`-clause condition of the form:
+
+``` pre
+column OP constant
+```
+
+for the current operator and a particular constant value. This assists the optimizer by giving it some idea of how many rows will be eliminated by `WHERE` clauses that have this form.
+
+You can usually just use one of the following system standard estimator functions for many of your own operators:
+
+`eqsel` for =
+
+`neqsel` for &lt;&gt;
+
+`scalarltsel` for &lt; or &lt;=
+
+`scalargtsel` for &gt; or &gt;=
+</dd>
+
+<dt> \<join\_proc\>   </dt>
+<dd>The optional `JOIN` clause names a join selectivity estimation function for the operator. Note that this is a function name, not an operator name. `JOIN` clauses only make sense for binary operators that return `boolean`. The idea behind a join selectivity estimator is to guess what fraction of the rows in a pair of tables will satisfy a `WHERE`-clause condition of the form:
+
+``` pre
+table1.column1 OP table2.column2
+```
+
+for the current operator. This helps the optimizer by letting it figure out which of several possible join sequences is likely to take the least work.
+
+You can usually just use one of the following system standard join selectivity estimator functions for many of your own operators:
+
+`eqjoinsel` for =
+
+`neqjoinsel` for &lt;&gt;
+
+`scalarltjoinsel` for &lt; or &lt;=
+
+`scalargtjoinsel` for &gt; or &gt;=
+
+`areajoinsel` for 2D area-based comparisons
+
+`positionjoinsel` for 2D position-based comparisons
+
+`contjoinsel` for 2D containment-based comparisons
+</dd>
+
+<dt>HASHES  </dt>
+<dd>The optional `HASHES` clause tells the system that it is permissible to use the hash join method for a join based on this operator. `HASHES` only makes sense for a binary operator that returns `boolean`. The hash join operator can only return true for pairs of left and right values that hash to the same hash code. If two values get put in different hash buckets, the join will never compare them at all, implicitly assuming that the result of the join operator must be false. So it never makes sense to specify `HASHES` for operators that do not represent equality.
+
+To be marked `HASHES`, the join operator must appear in a hash index operator class. Attempts to use the operator in hash joins will fail at run time if no such operator class exists. The system needs the operator class to find the data-type-specific hash function for the operator's input data type. You must also supply a suitable hash function before you can create the operator class. Care should be exercised when preparing a hash function, because there are machine-dependent ways in which it might fail to do the right thing.</dd>
+
+<dt>MERGES  </dt>
+<dd>The `MERGES` clause, if present, tells the system that it is permissible to use the merge-join method for a join based on this operator. `MERGES` only makes sense for a binary operator that returns `boolean`, and in practice the operator must represent equality for some data type or pair of data types.
+
+Merge join is based on the idea of sorting the left- and right-hand tables into order and then scanning them in parallel. So, both data types must be capable of being fully ordered, and the join operator must be one that can only succeed for pairs of values that fall at the same place in the sort order. In practice this means that the join operator must behave like equality. It is possible to merge-join two distinct data types so long as they are logically compatible. For example, the smallint-versus-integer equality operator is merge-joinable. We only need sorting operators that will bring both data types into a logically compatible sequence.
+
+Execution of a merge join requires that the system be able to identify four operators related to the merge-join equality operator: less-than comparison for the left operand data type, less-than comparison for the right operand data type, less-than comparison between the two data types, and greater-than comparison between the two data types. It is possible to specify these operators individually by name, as the `SORT1`, `SORT2`, `LTCMP`, and `GTCMP` options respectively. The system will fill in the default names if any of these are omitted when `MERGES` is specified.</dd>
+
+<dt> \<left\_sort\_op\>   </dt>
+<dd>If this operator can support a merge join, the less-than operator that sorts the left-hand data type of this operator. `<` is the default if not specified.</dd>
+
+<dt> \<right\_sort\_op\>   </dt>
+<dd>If this operator can support a merge join, the less-than operator that sorts the right-hand data type of this operator. `<` is the default if not specified.</dd>
+
+<dt> \<less\_than\_op\>   </dt>
+<dd>If this operator can support a merge join, the less-than operator that compares the input data types of this operator. `<` is the default if not specified.</dd>
+
+<dt> \<greater\_than\_op\>   </dt>
+<dd>If this operator can support a merge join, the greater-than operator that compares the input data types of this operator. `>` is the default if not specified.
+
+To give a schema-qualified operator name in optional arguments, use the `OPERATOR()` syntax, for example:
+
+``` pre
+COMMUTATOR = OPERATOR(myschema.===) ,
+```
+</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Any functions used to implement the operator must be defined as `IMMUTABLE`.
+
+## <a id="topic1__section6"></a>Examples
+
+Here is an example of creating an operator for adding two complex numbers, assuming we have already created the definition of type `complex`. First define the function that does the work, then define the operator:
+
+``` pre
+CREATE FUNCTION complex_add(complex, complex)
+    RETURNS complex
+    AS 'filename', 'complex_add'
+    LANGUAGE C IMMUTABLE STRICT;
+CREATE OPERATOR + (
+    leftarg = complex,
+    rightarg = complex,
+    procedure = complex_add,
+    commutator = +
+);
+```
+
+To use this operator in a query:
+
+``` pre
+SELECT (a + b) AS c FROM test_complex;
+```
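+
+A further sketch (not part of the example above) shows the optional optimization clauses on an equality operator for a hypothetical type `mytype`, assuming an `IMMUTABLE` function `mytype_eq` and a matching `<>` operator already exist:
+
+``` pre
+CREATE OPERATOR = (
+    leftarg    = mytype,
+    rightarg   = mytype,
+    procedure  = mytype_eq,
+    commutator = = ,         -- (x = y) equals (y = x)
+    negator    = <> ,        -- (x = y) equals NOT (x <> y)
+    restrict   = eqsel,      -- standard restriction selectivity estimator for equality
+    join       = eqjoinsel,  -- standard join selectivity estimator for equality
+    hashes,                  -- equality may be used in hash joins
+    merges                   -- equality may be used in merge joins
+);
+```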
+
+## <a id="topic1__section7"></a>Compatibility
+
+`CREATE OPERATOR` is a HAWQ language extension. The SQL standard does not provide for user-defined operators.
+
+## <a id="topic1__section8"></a>See Also
+
+[CREATE FUNCTION](CREATE-FUNCTION.html), [CREATE TYPE](CREATE-TYPE.html), [ALTER OPERATOR](ALTER-OPERATOR.html), [DROP OPERATOR](DROP-OPERATOR.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-RESOURCE-QUEUE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-RESOURCE-QUEUE.html.md.erb b/markdown/reference/sql/CREATE-RESOURCE-QUEUE.html.md.erb
new file mode 100644
index 0000000..8f9fe93
--- /dev/null
+++ b/markdown/reference/sql/CREATE-RESOURCE-QUEUE.html.md.erb
@@ -0,0 +1,139 @@
+---
+title: CREATE RESOURCE QUEUE
+---
+
+Defines a new resource queue.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE RESOURCE QUEUE <name> WITH (<queue_attribute>=<value> [, ... ])
+```
+
+where \<queue\_attribute\> is:
+
+``` pre
+    PARENT=<queue_name>
+    MEMORY_LIMIT_CLUSTER=<percentage>
+    CORE_LIMIT_CLUSTER=<percentage>
+    [ACTIVE_STATEMENTS=<integer>]
+    [ALLOCATION_POLICY='even']
+    [VSEG_RESOURCE_QUOTA='mem:<memory_units>']
+    [RESOURCE_OVERCOMMIT_FACTOR=<double>]
+    [NVSEG_UPPER_LIMIT=<integer>]
+    [NVSEG_LOWER_LIMIT=<integer>]
+    [NVSEG_UPPER_LIMIT_PERSEG=<double>]
+    [NVSEG_LOWER_LIMIT_PERSEG=<double>]
+```
+```
+    <memory_units> ::= {128mb|256mb|512mb|1024mb|2048mb|4096mb|
+                        8192mb|16384mb|1gb|2gb|4gb|8gb|16gb}
+    <percentage> ::= <integer>%
+```
+
+## <a id="topic1__section3"></a>Description
+
+Creates a new resource queue for HAWQ workload management. A resource queue must specify a parent queue. Only a superuser can create a resource queue.
+
+Resource queues with an `ACTIVE_STATEMENTS` threshold set a maximum limit on the number of queries that can be executed by roles assigned to that queue. It controls the number of active queries that are allowed to run at the same time. The value for `ACTIVE_STATEMENTS` should be an integer greater than 0. If not specified, the default value is 20.
+
+When creating the resource queue, use the MEMORY\_LIMIT\_CLUSTER and CORE\_LIMIT\_CLUSTER queue attributes to tune the allowed resource usage of the resource queue. MEMORY\_LIMIT\_CLUSTER and CORE\_LIMIT\_CLUSTER must be set to the same value for a resource queue. In addition, the sum of the percentages of MEMORY\_LIMIT\_CLUSTER (and CORE\_LIMIT\_CLUSTER) for resource queues that share the same parent cannot exceed 100%.
+
+You can optionally configure the maximum or minimum number of virtual segments to use when executing a query by setting NVSEG\_UPPER\_LIMIT/NVSEG\_LOWER\_LIMIT or NVSEG\_UPPER\_LIMIT\_PERSEG/NVSEG\_LOWER\_LIMIT\_PERSEG attributes for the resource queue.
+
+After defining a resource queue, you can assign a role to the queue by using the [ALTER ROLE](ALTER-ROLE.html) or [CREATE ROLE](CREATE-ROLE.html) command. You can only assign roles to the leaf-level resource queues (resource queues that do not have any children).
+
+See also [Best Practices for Using Resource Queues](../../bestpractices/managing_resources_bestpractices.html#topic_hvd_pls_wv).
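+
+As an illustrative sketch (the queue name, percentages, and statement limit are arbitrary), a leaf queue under `pg_root` might be created as follows:
+
+``` pre
+CREATE RESOURCE QUEUE dev_queue WITH (
+    PARENT='pg_root',
+    MEMORY_LIMIT_CLUSTER=20%,
+    CORE_LIMIT_CLUSTER=20%,
+    ACTIVE_STATEMENTS=10);
+```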
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>\<name\> </dt>
+<dd>Required. The name of the resource queue. The name must not already be in use and must not be `pg_default` or `pg_root`.</dd>
+
+<dt>PARENT=\<queue\_name\> </dt>
+<dd>Required. The parent queue of the new resource queue. The parent queue must already exist. This attribute is used to organize resource queues into a tree structure. You cannot specify `pg_default` as a parent queue. Resource queues that are parents to other resource queues are also called branch queues. Resource queues without any children are also called leaf queues. If you do not have any existing resource queues, use `pg_root` as the starting point for new resource queues.
+
+The parent queue cannot have any roles assigned.</dd>
+
+<dt>MEMORY\_LIMIT\_CLUSTER=\<percentage\>  </dt>
+<dd>Required. Defines how much memory a resource queue can consume from its parent resource queue and consequently dispatch to the execution of parallel statements. Since a resource queue obtains its memory from its parent, the actual memory limit is based on that of its parent queue. The valid values are 1% to 100%. The value of MEMORY\_LIMIT\_CLUSTER must be identical to the value of CORE\_LIMIT\_CLUSTER. The sum of values for MEMORY\_LIMIT\_CLUSTER of this queue plus other queues that share the same parent cannot exceed 100%. The HAWQ resource manager periodically validates this restriction.</dd>
+
+<dt>CORE\_LIMIT\_CLUSTER=\<percentage\> </dt>
+<dd>Required. The percentage of consumable CPU (virtual core) resources that the resource queue can take from its parent resource queue. The valid values are 1% to 100%. The value of MEMORY\_LIMIT\_CLUSTER must be identical to the value of CORE\_LIMIT\_CLUSTER. The sum of values for CORE\_LIMIT\_CLUSTER of this queue and queues that share the same parent cannot exceed 100%.</dd>
+
+<dt>ACTIVE\_STATEMENTS=\<integer\> </dt>
+<dd>Optional. Defines the limit of the number of parallel active statements in one leaf queue. The maximum number of connections cannot exceed this limit. If this limit is reached, the HAWQ resource manager queues more query allocation requests. Note that a single session can have several concurrent statement executions that occupy multiple connection resources. The value for `ACTIVE_STATEMENTS` should be an integer greater than 0. The default value is 20.</dd>
+
+<dt>ALLOCATION\_POLICY=\<string\> </dt>
+<dd>Optional. Defines the resource allocation policy for parallel statement execution. The default value is `even`.
+
+**Note:** This release only supports an `even` allocation policy. Even if you do not specify this attribute, the resource queue still applies an `even` allocation policy. Future releases will support alternative allocation policies.
+
+Setting the allocation policy to `even` means resources are always evenly dispatched based on current concurrency. When multiple query resource allocation requests are queued, the resource queue tries to evenly dispatch resources to queued requests until one of the following conditions is encountered:
+
+-   There are no more allocated resources in this queue to dispatch, or
+-   The ACTIVE\_STATEMENTS limit has been reached
+
+For each query resource allocation request, the HAWQ resource manager determines the minimum and maximum size of a virtual segment based on multiple factors including query cost, user configuration, table properties, and so on. For example, a hash distributed table requires fixed size of virtual segments. With an even allocation policy, the HAWQ resource manager uses the minimum virtual segment size requirement and evenly dispatches resources to each query resource allocation request in the resource queue.</dd>
+
+<dt>VSEG\_RESOURCE\_QUOTA='mem:{128mb | 256mb | 512mb | 1024mb | 2048mb | 4096mb | 8192mb | 16384mb | 1gb | 2gb | 4gb | 8gb | 16gb}' </dt>
+<dd>Optional. This quota defines how resources are split across multiple virtual segments. For example, when the HAWQ resource manager determines that 256GB memory and 128 vcores should be allocated to the current resource queue, there are multiple solutions on how to divide the resources across virtual segments. For example, you could use a) 2GB/1 vcore \* 128 virtual segments or b) 1GB/0.5 vcore \* 256 virtual segments. Therefore, you can use this attribute to make the HAWQ resource manager calculate the number of virtual segments based on how to divide the memory. For example, if `VSEG_RESOURCE_QUOTA='mem:512mb'`, then the resource queue will use 512MB/0.25 vcore \* 512 virtual segments. The default value is '`mem:256mb`'.
+
+**Note:** To avoid resource fragmentation, make sure that the segment resource capacity configured for HAWQ (in HAWQ standalone mode, `hawq_rm_memory_limit_perseg`; in YARN mode, `yarn.nodemanager.resource.memory-mb`) is a multiple of the resource quotas for all virtual segments, and that the CPU-to-memory ratio is a multiple of the amount configured for `yarn.scheduler.minimum-allocation-mb`.</dd>
+
+<dt>RESOURCE\_OVERCOMMIT\_FACTOR=\<double\> </dt>
+<dd>Optional. This factor defines how much a resource can be overcommitted. For example, if RESOURCE\_OVERCOMMIT\_FACTOR is set to 3.0 and MEMORY\_LIMIT\_CLUSTER is set to 30%, then the maximum possible resource allocation in this queue is 90% (30% x 3.0). If the resulting maximum is bigger than 100%, then 100% is adopted. The minimum value that this attribute can be set to is `1.0`. The default value is `2.0`.</dd>
+
+<dt>NVSEG\_UPPER\_LIMIT=\<integer\> / NVSEG\_UPPER\_LIMIT\_PERSEG=\<double\>  </dt>
+<dd>Optional. These limits restrict the range of number of virtual segments allocated in this resource queue for executing one query statement. NVSEG\_UPPER\_LIMIT defines an upper limit of virtual segments for one statement execution regardless of actual cluster size, while NVSEG\_UPPER\_LIMIT\_PERSEG defines the same limit by using the average number of virtual segments in one physical segment. Therefore, the limit defined by NVSEG\_UPPER\_LIMIT\_PERSEG varies dynamically according to the changing size of the HAWQ cluster.
+
+For example, if you set `NVSEG_UPPER_LIMIT=10`, all query resource requests are strictly allocated no more than 10 virtual segments. If you set `NVSEG_UPPER_LIMIT_PERSEG=2` and there are currently 5 available HAWQ segments in the cluster, query resource requests are allocated 10 virtual segments at most.
+
+NVSEG\_UPPER\_LIMIT cannot be set to a lower value than NVSEG\_LOWER\_LIMIT if both limits are enabled. In addition, the upper limit cannot be set to a value larger than the values set in the global configuration parameters `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit`.
+
+By default, both limits are set to **-1**, which means the limits are disabled. `NVSEG_UPPER_LIMIT` has higher priority than `NVSEG_UPPER_LIMIT_PERSEG`. If both limits are set, then `NVSEG_UPPER_LIMIT_PERSEG` is ignored. If you have enabled resource quotas for the query statement, then these limits are ignored.
+
+**Note:** If the actual lower limit of the number of virtual segments becomes greater than the upper limit, then the lower limit is automatically reduced to be equal to the upper limit. This situation is possible when a user sets both `NVSEG_UPPER_LIMIT` and `NVSEG_LOWER_LIMIT_PERSEG`. After expanding the cluster, the dynamic lower limit may become greater than the value set for the fixed upper limit.</dd>
+
+<dt>NVSEG\_LOWER\_LIMIT=\<integer\> / NVSEG\_LOWER\_LIMIT\_PERSEG=\<double\>   </dt>
+<dd>Optional. These limits specify the minimum number of virtual segments allocated for one statement execution in order to guarantee query performance. NVSEG\_LOWER\_LIMIT defines the lower limit of virtual segments for one statement execution regardless of the actual cluster size, while NVSEG\_LOWER\_LIMIT\_PERSEG defines the same limit as the average number of virtual segments per physical segment. Therefore, the limit defined by NVSEG\_LOWER\_LIMIT\_PERSEG varies dynamically along with the size of the HAWQ cluster.
+
+NVSEG\_UPPER\_LIMIT\_PERSEG cannot be less than NVSEG\_LOWER\_LIMIT\_PERSEG if both limits are enabled.
+
+For example, if you set NVSEG\_LOWER\_LIMIT=10, and one statement execution potentially needs no fewer than 10 virtual segments, then the request has at least 10 virtual segments allocated. Similarly, if you set NVSEG\_LOWER\_LIMIT\_PERSEG=2 and there are currently 5 available HAWQ segments in the cluster, a statement execution that potentially needs no fewer than 10 virtual segments is allocated at least 10 virtual segments. If a statement execution needs at most 4 virtual segments, the resource manager allocates at most 4 virtual segments instead of 10, because the request does not require more.
+
+By default, both limits are set to **-1**, which means the limits are disabled. `NVSEG_LOWER_LIMIT` has higher priority than `NVSEG_LOWER_LIMIT_PERSEG`. If both limits are set, then `NVSEG_LOWER_LIMIT_PERSEG` is ignored. If you have enabled resource quotas for the query statement, then these limits are ignored.
+
+**Note:** If the actual lower limit of the number of virtual segments becomes greater than the upper limit, then the lower limit is automatically reduced to be equal to the upper limit. This situation is possible when a user sets both `NVSEG_UPPER_LIMIT` and `NVSEG_LOWER_LIMIT_PERSEG`. After expanding the cluster, the dynamic lower limit may become greater than the value set for the fixed upper limit.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+To check the configuration of a resource queue, you can query the `pg_resqueue` catalog table. To see the runtime status of all resource queues, you can use the `pg_resqueue_status` system view. See [Checking Existing Resource Queues](../../resourcemgmt/ResourceQueues.html#topic_lqy_gls_zt).
+
+`CREATE RESOURCE QUEUE` cannot be run within a transaction.
+
+To see the status of a resource queue, see [Checking Existing Resource Queues](../../resourcemgmt/ResourceQueues.html#topic_lqy_gls_zt).
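+
+For example, the following queries (a minimal sketch; the exact columns returned depend on your HAWQ version) list the defined resource queues and their runtime status:
+
+``` pre
+SELECT * FROM pg_resqueue;
+SELECT * FROM pg_resqueue_status;
+```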
+
+## <a id="topic1__section6"></a>Examples
+
+Create a resource queue as a child of `pg_root` with an active query limit of 20 and memory and core limits of 50%:
+
+``` pre
+CREATE RESOURCE QUEUE myqueue WITH (PARENT='pg_root', ACTIVE_STATEMENTS=20,
+MEMORY_LIMIT_CLUSTER=50%, CORE_LIMIT_CLUSTER=50%);
+```
+
+Create a resource queue as a child of pg\_root with memory and CPU limits and a resource overcommit factor:
+
+``` pre
+CREATE RESOURCE QUEUE test_queue_1 WITH (PARENT='pg_root', 
+MEMORY_LIMIT_CLUSTER=50%, CORE_LIMIT_CLUSTER=50%, RESOURCE_OVERCOMMIT_FACTOR=2);
+```
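+
+Create a resource queue that also bounds the number of virtual segments used per statement and sets a virtual segment resource quota. This is a sketch combining the optional attributes described above; the queue name and values are illustrative:
+
+``` pre
+CREATE RESOURCE QUEUE test_queue_2 WITH (PARENT='pg_root',
+MEMORY_LIMIT_CLUSTER=30%, CORE_LIMIT_CLUSTER=30%,
+VSEG_RESOURCE_QUOTA='mem:512mb', NVSEG_UPPER_LIMIT=10, NVSEG_LOWER_LIMIT=2);
+```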
+
+## <a id="topic1__section7"></a>Compatibility
+
+`CREATE RESOURCE QUEUE` is a HAWQ extension. There is no provision for resource queues or workload management in the SQL standard.
+
+## <a id="topic1__section8"></a>See Also
+
+[ALTER RESOURCE QUEUE](ALTER-RESOURCE-QUEUE.html), [ALTER ROLE](ALTER-ROLE.html), [CREATE ROLE](CREATE-ROLE.html), [DROP RESOURCE QUEUE](DROP-RESOURCE-QUEUE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-ROLE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-ROLE.html.md.erb b/markdown/reference/sql/CREATE-ROLE.html.md.erb
new file mode 100644
index 0000000..ec7ac7c
--- /dev/null
+++ b/markdown/reference/sql/CREATE-ROLE.html.md.erb
@@ -0,0 +1,196 @@
+---
+title: CREATE ROLE
+---
+
+Defines a new database role (user or group).
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE ROLE <name> [[WITH] <option> [ ... ]]
+```
+
+where \<option\> can be:
+
+``` pre
+      SUPERUSER | NOSUPERUSER
+    | CREATEDB | NOCREATEDB
+    | CREATEROLE | NOCREATEROLE
+    | CREATEEXTTABLE | NOCREATEEXTTABLE
+      [ ( <attribute>='<value>'[, ...] ) ]
+      where <attribute> and <value> are:
+      type='readable'|'writable'
+      protocol='gpfdist'|'http'
+    | INHERIT | NOINHERIT
+    | LOGIN | NOLOGIN
+    | CONNECTION LIMIT <connlimit>
+    | [ ENCRYPTED | UNENCRYPTED ] PASSWORD '<password>'
+    | VALID UNTIL '<timestamp>'
+    | IN ROLE <rolename> [, ...]
+    | ROLE <rolename> [, ...]
+    | ADMIN <rolename> [, ...]
+    | RESOURCE QUEUE <queue_name>
+    | [ DENY <deny_point> ]
+    | [ DENY BETWEEN <deny_point> AND <deny_point> ]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE ROLE` adds a new role to a HAWQ system. A role is an entity that can own database objects and have database privileges. A role can be considered a user, a group, or both depending on how it is used. You must have `CREATEROLE` privilege or be a database superuser to use this command.
+
+Note that roles are defined at the system-level and are valid for all databases in your HAWQ system.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<name\>  </dt>
+<dd>The name of the new role.</dd>
+
+<dt>SUPERUSER,  
+NOSUPERUSER  </dt>
+<dd>If `SUPERUSER` is specified, the role being defined will be a superuser, who can override all access restrictions within the database. Superuser status is dangerous and should be used only when really needed. You must yourself be a superuser to create a new superuser. `NOSUPERUSER` is the default.</dd>
+
+<dt>CREATEDB,  
+NOCREATEDB  </dt>
+<dd>If `CREATEDB` is specified, the role being defined will be allowed to create new databases. `NOCREATEDB` (the default) will deny a role the ability to create databases.</dd>
+
+<dt>CREATEROLE,  
+NOCREATEROLE  </dt>
+<dd>If `CREATEROLE` is specified, the role being defined will be allowed to create new roles, alter other roles, and drop other roles. `NOCREATEROLE` (the default) will deny a role the ability to create roles or modify roles other than their own.</dd>
+
+<dt>CREATEEXTTABLE,  
+NOCREATEEXTTABLE  </dt>
+<dd>If `CREATEEXTTABLE` is specified, the role being defined is allowed to create external tables. The default \<type\> is `readable` and the default `protocol` is `gpfdist` if not specified. `NOCREATEEXTTABLE` (the default) denies the role the ability to create external tables. Using the `file` protocol when creating external tables is not supported because HAWQ cannot guarantee scheduling executors on a specific host. Likewise, you cannot use the `EXECUTE` command with `ON ALL` and `ON HOST` for the same reason. Use `ON MASTER/<number>/SEGMENT <segment_id>` to specify which segment instances are to execute the command.</dd>
+
+<dt>INHERIT,  
+NOINHERIT  </dt>
+<dd>If specified, `INHERIT` (the default) allows the role to use whatever database privileges have been granted to all roles it is directly or indirectly a member of. With `NOINHERIT`, membership in another role only grants the ability to `SET ROLE` to that other role.</dd>
+
+<dt>LOGIN,  
+NOLOGIN  </dt>
+<dd>If specified, `LOGIN` allows a role to log in to a database. A role having the `LOGIN` attribute can be thought of as a user. Roles with `NOLOGIN` (the default) are useful for managing database privileges, and can be thought of as groups.</dd>
+
+<dt>CONNECTION LIMIT \<connlimit\>  </dt>
+<dd>The maximum number of concurrent connections this role can make. The default of -1 means there is no limit.</dd>
+
+<!-- -->
+
+<dt>PASSWORD \<password\>  </dt>
+<dd>Sets the user password for roles with the `LOGIN` attribute. If you do not plan to use password authentication you can omit this option. If no \<password\> is specified, the password will be set to null and password authentication will always fail for that user. A null \<password\> can optionally be written explicitly as `PASSWORD NULL`.</dd>
+
+<dt>ENCRYPTED,  
+UNENCRYPTED  </dt>
+<dd>These key words control whether the password is stored encrypted in the system catalogs. (If neither is specified, the default behavior is determined by the configuration parameter `password_encryption`.) If the presented password string is already in MD5-encrypted format, then it is stored encrypted as-is, regardless of whether `ENCRYPTED` or `UNENCRYPTED` is specified (since the system cannot decrypt the specified encrypted password string). This allows reloading of encrypted passwords during dump/restore.
+
+Note that older clients may lack support for the MD5 authentication mechanism that is needed to work with passwords that are stored encrypted.</dd>
+
+<dt>VALID UNTIL '\<timestamp\>'  </dt>
+<dd>The VALID UNTIL clause sets a date and time after which the role's password is no longer valid. If this clause is omitted the password will never expire.</dd>
+
+<dt>IN ROLE \<rolename\>  </dt>
+<dd>Adds the new role as a member of the named roles. Note that there is no option to add the new role as an administrator; use a separate `GRANT` command to do that.</dd>
+
+<dt>ROLE \<rolename\>  </dt>
+<dd>Adds the named roles as members of this role, making this new role a group.</dd>
+
+<dt>ADMIN \<rolename\>  </dt>
+<dd>The `ADMIN` clause is like `ROLE`, but the named roles are added to the new role `WITH ADMIN OPTION`, giving them the right to grant membership in this role to others.</dd>
+
+<dt>RESOURCE QUEUE \<queue\_name\>  </dt>
+<dd>The name of the resource queue to which the new user-level role is to be assigned. Only roles with `LOGIN` privilege can be assigned to a resource queue. The special keyword `NONE` means that the role is assigned to the default resource queue. A role can only belong to one resource queue.</dd>
+
+<dt>DENY \<deny\_point\>,  
+DENY BETWEEN \<deny\_point\> AND \<deny\_point\>   </dt>
+<dd>The `DENY` and `DENY BETWEEN` keywords set time-based constraints that are enforced at login. `DENY` sets a day or a day and time to deny access. `DENY BETWEEN` sets an interval during which access is denied. Both use the parameter \<deny\_point\> that has the following format:
+
+``` pre
+DAY <day> [ TIME '<time>' ]
+```
+
+The two parts of the \<deny_point\> parameter use the following formats:
+
+For \<day\>:
+
+``` pre
+{'Sunday' | 'Monday' | 'Tuesday' |'Wednesday' | 'Thursday' | 'Friday' |
+'Saturday' | 0-6 }
+```
+
+For \<time\>:
+
+``` pre
+{ 00-23 : 00-59 | 01-12 : 00-59 { AM | PM }}
+```
+
+The `DENY BETWEEN` clause uses two \<deny\_point\> parameters:
+
+``` pre
+DENY BETWEEN <deny_point> AND <deny_point>
+```
+</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+The preferred way to add and remove role members (manage groups) is to use [GRANT](GRANT.html) and [REVOKE](REVOKE.html).
+
+The `VALID UNTIL` clause defines an expiration time for a password only, not for the role. The expiration time is not enforced when logging in using a non-password-based authentication method.
+
+The `INHERIT` attribute governs inheritance of grantable privileges (access privileges for database objects and role memberships). It does not apply to the special role attributes set by `CREATE ROLE` and `ALTER ROLE`. For example, being a member of a role with `CREATEDB` privilege does not immediately grant the ability to create databases, even if `INHERIT` is set.
+
+The `INHERIT` attribute is the default for reasons of backwards compatibility. In prior releases of HAWQ, users always had access to all privileges of groups they were members of. However, `NOINHERIT` provides a closer match to the semantics specified in the SQL standard.
+
+Be careful with the `CREATEROLE` privilege. There is no concept of inheritance for the privileges of a `CREATEROLE`-role. That means that even if a role does not have a certain privilege but is allowed to create other roles, it can easily create another role with different privileges than its own (except for creating roles with superuser privileges). For example, if a role has the `CREATEROLE` privilege but not the `CREATEDB` privilege, it can create a new role with the `CREATEDB` privilege. Therefore, regard roles that have the `CREATEROLE` privilege as almost-superuser roles.
+
+The `CONNECTION LIMIT` option is never enforced for superusers.
+
+Caution must be exercised when specifying an unencrypted password with this command. The password will be transmitted to the server in clear-text, and it might also be logged in the client's command history or the server log. The client program `createuser`, however, transmits the password encrypted. Also, psql contains a command `\password` that can be used to safely change the password later.
+
+## <a id="topic1__section6"></a>Examples
+
+Create a role that can log in, but don't give it a password:
+
+``` pre
+CREATE ROLE jonathan LOGIN;
+```
+
+Create a role that belongs to a resource queue:
+
+``` pre
+CREATE ROLE jonathan LOGIN RESOURCE QUEUE poweruser;
+```
+
+Create a role with a password that is valid until the end of 2009 (`CREATE USER` is the same as `CREATE ROLE` except that it implies `LOGIN`):
+
+``` pre
+CREATE USER joelle WITH PASSWORD 'jw8s0F4' VALID UNTIL '2010-01-01';
+```
+
+Create a role that can create databases and manage other roles:
+
+``` pre
+CREATE ROLE admin WITH CREATEDB CREATEROLE;
+```
+
+Create a role that does not allow login access on Sundays:
+
+``` pre
+CREATE ROLE user3 DENY DAY 'Sunday';
+```
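+
+Create a role that is denied access during a weekend window, based on the `DENY BETWEEN` syntax described above (the role name and times are illustrative):
+
+``` pre
+CREATE ROLE batch_user LOGIN DENY BETWEEN DAY 'Saturday' TIME '00:00' AND DAY 'Sunday' TIME '23:59';
+```
+
+Create a role that can create writable external tables, as a sketch of the `CREATEEXTTABLE` attributes described above (the role name is illustrative):
+
+``` pre
+CREATE ROLE etl_user LOGIN CREATEEXTTABLE (type='writable', protocol='gpfdist');
+```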
+
+## <a id="topic1__section7"></a>Compatibility
+
+The SQL standard defines the concepts of users and roles, but it regards them as distinct concepts and leaves all commands defining users to be specified by the database implementation. In HAWQ, users and roles are unified into a single type of object. Roles therefore have many more optional attributes than they do in the standard.
+
+`CREATE ROLE` is in the SQL standard, but the standard only requires the syntax:
+
+``` pre
+CREATE ROLE <name> [WITH ADMIN <rolename>]
+```
+
+Allowing multiple initial administrators, and all the other options of `CREATE ROLE`, are HAWQ extensions.
+
+The behavior specified by the SQL standard is most closely approximated by giving users the `NOINHERIT` attribute, while roles are given the `INHERIT` attribute.
+
+## <a id="topic1__section8"></a>See Also
+
+[SET ROLE](SET-ROLE.html), [ALTER ROLE](ALTER-ROLE.html), [DROP ROLE](DROP-ROLE.html), [GRANT](GRANT.html), [REVOKE](REVOKE.html), [CREATE RESOURCE QUEUE](CREATE-RESOURCE-QUEUE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-SCHEMA.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-SCHEMA.html.md.erb b/markdown/reference/sql/CREATE-SCHEMA.html.md.erb
new file mode 100644
index 0000000..f24e0cc
--- /dev/null
+++ b/markdown/reference/sql/CREATE-SCHEMA.html.md.erb
@@ -0,0 +1,63 @@
+---
+title: CREATE SCHEMA
+---
+
+Defines a new schema.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE SCHEMA <schema_name> [AUTHORIZATION <username>] 
+   [<schema_element> [ ... ]]
+
+CREATE SCHEMA AUTHORIZATION <rolename> [<schema_element> [ ... ]]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE SCHEMA` enters a new schema into the current database. The schema name must be distinct from the name of any existing schema in the current database.
+
+A schema is essentially a namespace: it contains named objects (tables, data types, functions, and operators) whose names may duplicate those of other objects existing in other schemas. Named objects are accessed either by qualifying their names with the schema name as a prefix, or by setting a search path that includes the desired schema(s). A `CREATE` command specifying an unqualified object name creates the object in the current schema (the one at the front of the search path, which can be determined with the function `current_schema`).
+
+Optionally, `CREATE SCHEMA` can include subcommands to create objects within the new schema. The subcommands are treated essentially the same as separate commands issued after creating the schema, except that if the `AUTHORIZATION` clause is used, all the created objects will be owned by that role.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<schema\_name\>   </dt>
+<dd>The name of a schema to be created. If this is omitted, the user name is used as the schema name. The name cannot begin with `pg_`, as such names are reserved for system catalog schemas.</dd>
+
+<dt> \<rolename\>   </dt>
+<dd>The name of the role who will own the schema. If omitted, defaults to the role executing the command. Only superusers may create schemas owned by roles other than themselves.</dd>
+
+<dt> \<schema\_element\>   </dt>
+<dd>An SQL statement defining an object to be created within the schema. Currently, only `CREATE TABLE`, `CREATE VIEW`, `CREATE INDEX`, `CREATE SEQUENCE`, and `GRANT` are accepted as clauses within `CREATE SCHEMA`. Other kinds of objects may be created in separate commands after the schema is created.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+To create a schema, the invoking user must have the `CREATE` privilege for the current database or be a superuser.
+
+## <a id="topic1__section6"></a>Examples
+
+Create a schema:
+
+``` pre
+CREATE SCHEMA myschema;
+```
+
+Create a schema for role `joe` (the schema will also be named `joe`):
+
+``` pre
+CREATE SCHEMA AUTHORIZATION joe;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+The SQL standard allows a `DEFAULT CHARACTER SET` clause in `CREATE SCHEMA`, as well as more subcommand types than are presently accepted by HAWQ.
+
+The SQL standard specifies that the subcommands in `CREATE SCHEMA` may appear in any order. The present HAWQ implementation does not handle all cases of forward references in subcommands; it may sometimes be necessary to reorder the subcommands in order to avoid forward references.
+
+According to the SQL standard, the owner of a schema always owns all objects within it. HAWQ allows schemas to contain objects owned by users other than the schema owner. This can happen only if the schema owner grants the `CREATE` privilege on the schema to someone else.
+
+## <a id="topic1__section8"></a>See Also
+
+[DROP SCHEMA](DROP-SCHEMA.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-SEQUENCE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-SEQUENCE.html.md.erb b/markdown/reference/sql/CREATE-SEQUENCE.html.md.erb
new file mode 100644
index 0000000..b2557c6
--- /dev/null
+++ b/markdown/reference/sql/CREATE-SEQUENCE.html.md.erb
@@ -0,0 +1,135 @@
+---
+title: CREATE SEQUENCE
+---
+
+Defines a new sequence generator.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE [TEMPORARY | TEMP] SEQUENCE <name>
+       [INCREMENT [BY] <value>]
+       [MINVALUE <minvalue> | NO MINVALUE]
+       [MAXVALUE <maxvalue> | NO MAXVALUE]
+       [START [ WITH ] <start>]
+       [CACHE <cache>]
+       [[NO] CYCLE]
+       [OWNED BY { <table>.<column> | NONE }]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE SEQUENCE` creates a new sequence number generator. This involves creating and initializing a new special single-row table. The generator will be owned by the user issuing the command.
+
+If a schema name is given, then the sequence is created in the specified schema. Otherwise it is created in the current schema. Temporary sequences exist in a special schema, so a schema name may not be given when creating a temporary sequence. The sequence name must be distinct from the name of any other sequence, table, or view in the same schema.
+
+After a sequence is created, you use the `nextval` function to operate on the sequence. For example, to insert a row into a table that gets the next value of a sequence:
+
+``` pre
+INSERT INTO distributors VALUES (nextval('myserial'), 'acme');
+```
+
+You can also use the function `setval` to operate on a sequence, but only for queries that do not operate on distributed data. For example, the following query is allowed because it resets the sequence counter value for the sequence generator process on the master:
+
+``` pre
+SELECT setval('myserial', 201);
+```
+
+But the following query will be rejected in HAWQ because it operates on distributed data:
+
+``` pre
+INSERT INTO product VALUES (setval('myserial', 201), 'gizmo');
+```
+
+In a regular (non-distributed) database, functions that operate on the sequence go to the local sequence table to get values as they are needed. In HAWQ, however, keep in mind that each segment is its own distinct database process. Therefore the segments need a single point of truth for sequence values so that all segments get incremented correctly and the sequence moves forward in the right order. A sequence server process runs on the master and is the point of truth for a sequence in a HAWQ distributed database. Segments get sequence values at runtime from the master.
+
+Because of this distributed sequence design, there are some limitations on the functions that operate on a sequence in HAWQ:
+
+-   `lastval` and `currval` functions are not supported.
+-   `setval` can only be used to set the value of the sequence generator on the master, it cannot be used in subqueries to update records on distributed table data.
+-   `nextval` sometimes grabs a block of values from the master for a segment to use, depending on the query. So values may sometimes be skipped in the sequence if all of the block turns out not to be needed at the segment level. Note that a regular PostgreSQL database does this too, so this is not something unique to HAWQ.
+
+Although you cannot update a sequence directly, you can use a query like:
+
+``` pre
+SELECT * FROM <sequence_name>;
+```
+
+to examine the parameters and current state of a sequence. In particular, the `last_value` field of the sequence shows the last value allocated by any session.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>TEMPORARY | TEMP  </dt>
+<dd>If specified, the sequence object is created only for this session, and is automatically dropped on session exit. Existing permanent sequences with the same name are not visible (in this session) while the temporary sequence exists, unless they are referenced with schema-qualified names.</dd>
+
+<dt> \<name\>  </dt>
+<dd>The name (optionally schema-qualified) of the sequence to be created.</dd>
+
+<dt> \<increment\>  </dt>
+<dd>Specifies which value is added to the current sequence value to create a new value. A positive value will make an ascending sequence, a negative one a descending sequence. The default value is 1.</dd>
+
+<dt> \<minvalue\>  
+NO MINVALUE  </dt>
+<dd>Determines the minimum value a sequence can generate. If this clause is not supplied or `NO MINVALUE` is specified, then defaults will be used. The defaults are 1 and -2^63-1 for ascending and descending sequences, respectively.</dd>
+
+<dt> \<maxvalue\>  
+NO MAXVALUE  </dt>
+<dd>Determines the maximum value for the sequence. If this clause is not supplied or `NO MAXVALUE` is specified, then default values will be used. The defaults are 2^63-1 and -1 for ascending and descending sequences, respectively.</dd>
+
+<dt> \<start\>  </dt>
+<dd>Allows the sequence to begin anywhere. The default starting value is \<minvalue\> for ascending sequences and \<maxvalue\> for descending ones.</dd>
+
+<dt> \<cache\>  </dt>
+<dd>Specifies how many sequence numbers are to be preallocated and stored in memory for faster access. The minimum (and default) value is 1 (no cache).</dd>
+
+<dt>CYCLE  
+NO CYCLE  </dt>
+<dd>Allows the sequence to wrap around when the \<maxvalue\> (for ascending) or \<minvalue\> (for descending) has been reached. If the limit is reached, the next number generated will be the \<minvalue\> (for ascending) or \<maxvalue\> (for descending). If `NO CYCLE` is specified, any calls to `nextval` after the sequence has reached its maximum value will return an error. If not specified, `NO CYCLE` is the default.</dd>
+
+<dt>OWNED BY \<table\>.\<column\>  
+OWNED BY NONE  </dt>
+<dd>Causes the sequence to be associated with a specific table column, such that if that column (or its whole table) is dropped, the sequence will be automatically dropped as well. The specified table must have the same owner and be in the same schema as the sequence. `OWNED BY NONE`, the default, specifies that there is no such association.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Sequences are based on bigint arithmetic, so the range cannot exceed the range of an eight-byte integer (-9223372036854775808 to 9223372036854775807).
+
+Although multiple sessions are guaranteed to allocate distinct sequence values, the values may be generated out of sequence when all the sessions are considered. For example, session A might reserve values 1..10 and return `nextval=1`, then session B might reserve values 11..20 and return `nextval=11` before session A has generated `nextval=2`. Thus, you should only assume that the `nextval` values are all distinct, not that they are generated purely sequentially. Also, `last_value` will reflect the latest value reserved by any session, whether or not it has yet been returned by `nextval`.
+
+## <a id="topic1__section6"></a>Examples
+
+Create a sequence named `myseq`:
+
+``` pre
+CREATE SEQUENCE myseq START 101;
+```
+
+Insert a row into a table that gets the next value:
+
+``` pre
+INSERT INTO distributors VALUES (nextval('myseq'), 'acme');
+```
+
+Reset the sequence counter value on the master:
+
+``` pre
+SELECT setval('myseq', 201);
+```
+
+Illegal use of `setval` in HAWQ (setting sequence values on distributed data):
+
+``` pre
+INSERT INTO product VALUES (setval('myseq', 201), 'gizmo');
+```
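+
+Create a sequence that cycles within a fixed range and is tied to a table column, as a sketch of the optional clauses described above (the table and column names are illustrative, and the owning table must already exist in the same schema with the same owner):
+
+``` pre
+CREATE SEQUENCE order_code_seq
+    INCREMENT BY 1
+    MINVALUE 1
+    MAXVALUE 100000
+    START WITH 1
+    CACHE 20
+    CYCLE
+    OWNED BY orders.order_code;
+```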
+
+## <a id="topic1__section7"></a>Compatibility
+
+`CREATE SEQUENCE` conforms to the SQL standard, with the following exceptions:
+
+-   The `AS data_type` expression specified in the SQL standard is not supported.
+-   Obtaining the next value is done using the `nextval()` function instead of the `NEXT VALUE FOR` expression specified in the SQL standard.
+-   The `OWNED BY` clause is a HAWQ extension.
+
+## <a id="topic1__section8"></a>See Also
+
+[DROP SEQUENCE](DROP-SEQUENCE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-TABLE-AS.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-TABLE-AS.html.md.erb b/markdown/reference/sql/CREATE-TABLE-AS.html.md.erb
new file mode 100644
index 0000000..1979af4
--- /dev/null
+++ b/markdown/reference/sql/CREATE-TABLE-AS.html.md.erb
@@ -0,0 +1,126 @@
+---
+title: CREATE TABLE AS
+---
+
+Defines a new table from the results of a query.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE [ [GLOBAL | LOCAL] {TEMPORARY | TEMP} ] TABLE <table_name>
+   [(<column_name> [, ...] )]
+   [ WITH ( <storage_parameter>=<value> [, ... ] ) ]
+   [ON COMMIT {PRESERVE ROWS | DELETE ROWS | DROP}]
+   [TABLESPACE <tablespace>]
+   AS <query>
+   [DISTRIBUTED BY (<column> [, ...] ) | DISTRIBUTED RANDOMLY]
+```
+
+where \<storage\_parameter\> is:
+
+``` pre
+   APPENDONLY={TRUE}
+   BLOCKSIZE={8192-2097152}
+   bucketnum={<x>}
+   ORIENTATION={ROW | PARQUET}
+   COMPRESSTYPE={ZLIB | SNAPPY | GZIP | NONE}
+   COMPRESSLEVEL={0-9 | 1}
+   FILLFACTOR={10-100}
+   OIDS=[TRUE | FALSE]
+   PAGESIZE={1024-1073741823}
+   ROWGROUPSIZE={1024-1073741823}
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE TABLE AS` creates a table and fills it with data computed by a [SELECT](SELECT.html) command. The table columns have the names and data types associated with the output columns of the `SELECT`, however you can override the column names by giving an explicit list of new column names.
+
+`CREATE TABLE AS` creates a new table and evaluates the query just once to fill the new table initially. The new table will not track subsequent changes to the source tables of the query.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>GLOBAL | LOCAL  </dt>
+<dd>These keywords are present for SQL standard compatibility, but have no effect in HAWQ.</dd>
+
+<dt>TEMPORARY | TEMP  </dt>
+<dd>If specified, the new table is created as a temporary table. Temporary tables are automatically dropped at the end of a session, or optionally at the end of the current transaction (see `ON COMMIT`). Existing permanent tables with the same name are not visible to the current session while the temporary table exists, unless they are referenced with schema-qualified names. Any indexes created on a temporary table are automatically temporary as well.</dd>
+
+<dt> \<table\_name\>   </dt>
+<dd>The name (optionally schema-qualified) of the new table to be created.</dd>
+
+<dt> \<column\_name\>   </dt>
+<dd>The name of a column in the new table. If column names are not provided, they are taken from the output column names of the query. If the table is created from an `EXECUTE` command, a column name list cannot be specified.</dd>
+
+<dt>WITH (\<storage\_parameter\>=\<value\> )  </dt>
+<dd>The `WITH` clause can be used to set storage options for the table or its indexes. Note that you can also set different storage parameters on a particular partition or subpartition by declaring the `WITH` clause in the partition specification. The following storage options are available:
+
+**APPENDONLY** - Set to `TRUE` to create the table as an append-only table. If `FALSE`, an error message displays stating that heap tables are not supported.
+
+**BLOCKSIZE** - Set to the size, in bytes, for each block in a table. The `BLOCKSIZE` must be between 8192 and 2097152 bytes, and be a multiple of 8192. The default is 32768.
+
+**bucketnum** - Set to the number of hash buckets to be used in creating a hash-distributed table. If changing the number of hash buckets, use `WITH` to specify `bucketnum` when creating a hash-distributed table. If distribution is specified by column, the table will inherit the value.
+
+**ORIENTATION** - Set to `row` (the default) for row-oriented storage, or `parquet`. This option is only valid if `APPENDONLY=TRUE`. Heap-storage tables can only be row-oriented.
+
+**COMPRESSTYPE** - Set to `ZLIB`, `SNAPPY`, or `GZIP` to specify the type of compression used. `ZLIB` provides more compact compression ratios at lower speeds. Parquet tables support `SNAPPY` and `GZIP` compression. Append-only tables support `SNAPPY` and `ZLIB` compression. This option is valid only if `APPENDONLY=TRUE`.
+
+**COMPRESSLEVEL** - Set to an integer value from 1 (fastest compression) to 9 (highest compression ratio). If not declared, the default is 1. This option is valid only if `APPENDONLY=TRUE` and `COMPRESSTYPE=[ZLIB|GZIP]`.
+
+**OIDS** - Set to `OIDS=FALSE` (the default) so that rows do not have object identifiers assigned to them. Do not enable OIDS when creating a table. On large tables, such as those in a typical HAWQ system, using OIDs for table rows can cause wrap-around of the 32-bit OID counter. Once the counter wraps around, OIDs can no longer be assumed to be unique, which not only makes them useless to user applications, but can also cause problems in the HAWQ system catalog tables. In addition, excluding OIDs from a table reduces the space required to store the table on disk by 4 bytes per row, slightly improving performance.</dd>
+
+<dt>ON COMMIT  </dt>
+<dd>The behavior of temporary tables at the end of a transaction block can be controlled using `ON COMMIT`. The three options are:
+
+**PRESERVE ROWS** - No special action is taken at the ends of transactions for temporary tables. This is the default behavior.
+
+**DELETE ROWS** - All rows in the temporary table will be deleted at the end of each transaction block. Essentially, an automatic `TRUNCATE` is done at each commit.
+
+**DROP** - The temporary table will be dropped at the end of the current transaction block.</dd>
+
+<dt>TABLESPACE \<tablespace\>   </dt>
+<dd>The tablespace is the name of the tablespace in which the new table is to be created. If not specified, the database's default tablespace is used.</dd>
+
+<dt>AS \<query\>   </dt>
+<dd>A [SELECT](SELECT.html) command, or an [EXECUTE](EXECUTE.html) command that runs a prepared `SELECT` query.</dd>
+
+<dt>DISTRIBUTED BY (\<column\>, \[ ... \] )  
+DISTRIBUTED RANDOMLY  </dt>
+<dd>Used to declare the HAWQ distribution policy for the table. The default is `RANDOM` distribution. `DISTRIBUTED BY` can use hash distribution with one or more columns declared as the distribution key. If hash distribution is desired, it can also be specified using the `bucketnum` attribute, in which case the first eligible column of the table is used as the distribution key.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+This command is functionally similar to [SELECT INTO](SELECT-INTO.html), but it is preferred since it is less likely to be confused with other uses of the `SELECT INTO` syntax. Furthermore, `CREATE TABLE AS` offers a superset of the functionality offered by `SELECT INTO`.
+
+`CREATE TABLE AS` can be used for fast data loading from external table data sources. See [CREATE EXTERNAL TABLE](CREATE-EXTERNAL-TABLE.html).
+
+## <a id="topic1__section6"></a>Examples
+
+Create a new table `films_recent` consisting of only recent entries from the table `films`:
+
+``` pre
+CREATE TABLE films_recent AS SELECT * FROM films WHERE 
+date_prod >= '2007-01-01';
+```
+
+Create a new temporary table `films_recent`, consisting of only recent entries from the table films, using a prepared statement. The new table has OIDs and will be dropped at commit:
+
+``` pre
+PREPARE recentfilms(date) AS SELECT * FROM films WHERE 
+date_prod > $1;
+CREATE TEMP TABLE films_recent WITH (OIDS) ON COMMIT DROP AS 
+EXECUTE recentfilms('2007-01-01');
+```
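+
+Create a compressed Parquet table from a query and distribute it by hash, as a sketch of the storage and distribution options described above (this assumes the `films` table from the previous examples has a `title` column):
+
+``` pre
+CREATE TABLE films_recent_parquet
+WITH (APPENDONLY=TRUE, ORIENTATION=PARQUET, COMPRESSTYPE=SNAPPY)
+AS SELECT * FROM films WHERE date_prod >= '2007-01-01'
+DISTRIBUTED BY (title);
+```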
+
+## <a id="topic1__section7"></a>Compatibility
+
+`CREATE TABLE AS` conforms to the SQL standard, with the following exceptions:
+
+-   The standard requires parentheses around the subquery clause; in HAWQ, these parentheses are optional.
+-   The standard defines a `WITH [NO] DATA` clause; this is not currently implemented by HAWQ. The behavior provided by HAWQ is equivalent to the standard's `WITH DATA` case. `WITH NO DATA` can be simulated by appending `LIMIT 0` to the query.
+-   HAWQ handles temporary tables differently from the standard; see `CREATE TABLE` for details.
+-   The `WITH` clause is a HAWQ extension; neither storage parameters nor `OIDs` are in the standard.
+-   The HAWQ concept of tablespaces is not part of the standard. The `TABLESPACE` clause is an extension.
+
+## <a id="topic1__section8"></a>See Also
+
+[CREATE EXTERNAL TABLE](CREATE-EXTERNAL-TABLE.html), [EXECUTE](EXECUTE.html), [SELECT](SELECT.html), [SELECT INTO](SELECT-INTO.html)


[40/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/admin/startstop.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/admin/startstop.html.md.erb b/markdown/admin/startstop.html.md.erb
new file mode 100644
index 0000000..7aac723
--- /dev/null
+++ b/markdown/admin/startstop.html.md.erb
@@ -0,0 +1,146 @@
+---
+title: Starting and Stopping HAWQ
+---
+
+In a HAWQ DBMS, the database server instances \(the master and all segments\) are started or stopped across all of the hosts in the system in such a way that they can work together as a unified DBMS.
+
+Because a HAWQ system is distributed across many machines, the process for starting and stopping a HAWQ system is different than the process for starting and stopping a regular PostgreSQL DBMS.
+
+Use the `hawq start` *`object`* and `hawq stop` *`object`* commands to start and stop HAWQ, respectively. These management tools are located in the `$GPHOME/bin` directory on your HAWQ master host.
+
+Initializing a HAWQ system also starts the system.
+
+**Important:**
+
+Do not issue a `KILL` command to end any Postgres process. Instead, use the database command `pg_cancel_backend()`.
+
+For information about [hawq start](../reference/cli/admin_utilities/hawqstart.html) and [hawq stop](../reference/cli/admin_utilities/hawqstop.html), see the appropriate pages in the HAWQ Management Utility Reference or enter `hawq start -h` or `hawq stop -h` on the command line.
+
+
+## <a id="task_hkd_gzv_fp"></a>Starting HAWQ 
+
+When a HAWQ system is first initialized, it is also started. For more information about initializing HAWQ, see [hawq init](../reference/cli/admin_utilities/hawqinit.html). 
+
+To start a stopped HAWQ system that was previously initialized, run the `hawq start` command on the master instance.
+
+You can also use the `hawq start master` command to start only the HAWQ master, without segment nodes, and then add segments later using `hawq start segment`. If you want HAWQ to ignore hosts that fail ssh validation, use the `hawq start --ignore-bad-hosts` option.
+
+Use the `hawq start cluster` command to start a HAWQ system that has already been initialized by the `hawq init cluster` command, but has been stopped by the `hawq stop cluster` command. The `hawq start cluster` command starts a HAWQ system on the master host and starts all its segments. The command orchestrates this process and performs the process in parallel.
+
+
+## <a id="task_gpdb_restart"></a>Restarting HAWQ 
+
+Stop the HAWQ system and then restart it.
+
+The `hawq restart` command with the appropriate `cluster` or node-type option will stop and then restart HAWQ after the shutdown completes. If the master or segments are already stopped, restart will have no effect.
+
+-   To restart a HAWQ cluster, enter the following command on the master host:
+
+    ```shell
+    $ hawq restart cluster
+    ```
+
+
+## <a id="task_upload_config"></a>Reloading Configuration File Changes Only 
+
+Reload changes to the HAWQ configuration files without interrupting the system.
+
+The `hawq stop` command can reload changes to the `pg_hba.conf` configuration file and to *runtime* parameters in the `hawq-site.xml` file without service interruption. Active sessions pick up changes when they reconnect to the database. Many server configuration parameters require a full system restart \(`hawq restart cluster`\) to activate. For information about server configuration parameters, see the [Server Configuration Parameter Reference](../reference/guc/guc_config.html).
+
+-   Reload configuration file changes without shutting down the system using the `hawq stop` command:
+
+    ```shell
+    $ hawq stop cluster --reload
+    ```
+    
+    Or:
+
+    ```shell
+    $ hawq stop cluster -u
+    ```
+    
+
+## <a id="task_maint_mode"></a>Starting the Master in Maintenance Mode 
+
+Start only the master to perform maintenance or administrative tasks without affecting data on the segments.
+
+Maintenance mode is a superuser-only mode that should only be used when required for a particular maintenance task. For example, you can connect to a database only on the master instance in maintenance mode and edit system catalog settings.
+
+1.  Run `hawq start` on the `master` using the `-m` option:
+
+    ```shell
+    $ hawq start master -m
+    ```
+
+2.  Connect to the master in maintenance mode to do catalog maintenance. For example:
+
+    ```shell
+    $ PGOPTIONS='-c gp_session_role=utility' psql template1
+    ```
+3.  After completing your administrative tasks, restart the master in production mode. 
+
+    ```shell
+    $ hawq restart master 
+    ```
+
+    **Warning:**
+
+    Incorrect use of maintenance mode connections can result in an inconsistent HAWQ system state. Only expert users should perform this operation.
+
+
+## <a id="task_gpdb_stop"></a>Stopping HAWQ 
+
+The `hawq stop cluster` command stops or restarts your HAWQ system and always runs on the master host. When activated, `hawq stop cluster` stops all `postgres` processes in the system, including the master and all segment instances. The `hawq stop cluster` command uses a default of up to 64 parallel worker threads to bring down the segments that make up the HAWQ cluster. The system waits for any active transactions to finish before shutting down. To stop HAWQ immediately, use fast mode. The commands `hawq stop master`, `hawq stop segment`, `hawq stop standby`, or `hawq stop allsegments` can be used to stop the master, the local segment node, standby, or all segments in the cluster. Stopping the master will stop only the master segment, and will not shut down a cluster.
+
+-   To stop HAWQ:
+
+    ```shell
+    $ hawq stop cluster
+    ```
+
+-   To stop HAWQ in fast mode:
+
+    ```shell
+    $ hawq stop cluster -M fast
+    ```
+
+
+## <a id="task_tx4_bl3_h5"></a>Best Practices to Start/Stop HAWQ Cluster Members 
+
+For best results in using `hawq start` and `hawq stop` to manage your HAWQ system, the following best practices are recommended.
+
+-   Issue the `CHECKPOINT` command to update and flush all data files to disk and update the log file before stopping the cluster. A checkpoint ensures that, in the event of a crash, files can be restored from the checkpoint snapshot.
+
+-   Stop the entire HAWQ system by stopping the cluster on the master host. 
+
+    ```shell
+    $ hawq stop cluster
+    ```
+
+-   To stop segments and kill any running queries without causing data loss or inconsistency issues, use `fast` or `immediate` mode on the cluster:
+
+    ```shell
+    $ hawq stop cluster -M fast
+    $ hawq stop cluster -M immediate
+    ```
+
+-   Use `hawq stop master` to stop the master only. If you cannot stop the master due to running transactions, try using `fast` shutdown. If `fast` shutdown does not work, use `immediate` shutdown. Use `immediate` shutdown with caution, as it will result in a crash-recovery run when the system is restarted.
+
+	```shell
+    $ hawq stop master -M fast
+    $ hawq stop master -M immediate
+    ```
+-   If you have changed or want to reload server parameter settings on a HAWQ database where there are active connections, use the command:
+
+
+	```shell
+    $ hawq stop master -u -M fast 
+    ```   
+
+-   When stopping a segment or all segments, use `smart` mode, which is the default. Using `fast` or `immediate` mode on segments will have no effect since segments are stateless.
+
+    ```shell
+    $ hawq stop segment
+    $ hawq stop allsegments
+    ```
+-	You should typically use `hawq start cluster` or `hawq restart cluster` to start the cluster. If you do end up starting nodes individually with `hawq start standby|master|segment`, make sure to always start the standby *before* the active master. Otherwise, the standby can become unsynchronized with the active master.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/bestpractices/HAWQBestPracticesOverview.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/bestpractices/HAWQBestPracticesOverview.html.md.erb b/markdown/bestpractices/HAWQBestPracticesOverview.html.md.erb
new file mode 100644
index 0000000..13b4dca
--- /dev/null
+++ b/markdown/bestpractices/HAWQBestPracticesOverview.html.md.erb
@@ -0,0 +1,28 @@
+---
+title: Best Practices
+---
+
+This chapter provides best practices on using the components and features that are part of a HAWQ system.
+
+
+-   **[Best Practices for Operating HAWQ](../bestpractices/operating_hawq_bestpractices.html)**
+
+    This topic provides best practices for operating HAWQ, including recommendations for stopping, starting and monitoring HAWQ.
+
+-   **[Best Practices for Securing HAWQ](../bestpractices/secure_bestpractices.html)**
+
+    To secure your HAWQ deployment, review the recommendations listed in this topic.
+
+-   **[Best Practices for Managing Resources](../bestpractices/managing_resources_bestpractices.html)**
+
+    This topic describes best practices for managing resources in HAWQ.
+
+-   **[Best Practices for Managing Data](../bestpractices/managing_data_bestpractices.html)**
+
+    This topic describes best practices for creating databases, loading data, partitioning data, and recovering data in HAWQ.
+
+-   **[Best Practices for Querying Data](../bestpractices/querying_data_bestpractices.html)**
+
+    To obtain the best results when querying data in HAWQ, review the best practices described in this topic.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/bestpractices/general_bestpractices.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/bestpractices/general_bestpractices.html.md.erb b/markdown/bestpractices/general_bestpractices.html.md.erb
new file mode 100644
index 0000000..503887b
--- /dev/null
+++ b/markdown/bestpractices/general_bestpractices.html.md.erb
@@ -0,0 +1,26 @@
+---
+title: HAWQ Best Practices
+---
+
+This topic addresses general best practices for users who are new to HAWQ.
+
+When using HAWQ, adhere to the following guidelines for best results:
+
+-   **Use a consistent `hawq-site.xml` file to configure your entire cluster**:
+
+    Configuration parameters (GUCs) are located in `$GPHOME/etc/hawq-site.xml`. This configuration file resides on all HAWQ instances and can be modified by using the `hawq config` utility. You can use the same configuration file cluster-wide across both master and segments.
+    
+    If you install and manage HAWQ using Ambari, do not use `hawq config` to set or change HAWQ configuration properties. Use the Ambari interface for all configuration changes. Configuration changes to `hawq-site.xml` made outside the Ambari interface will be overwritten when you restart or reconfigure HAWQ using Ambari.
+
+    **Note:** While `postgresql.conf` still exists in HAWQ, any parameters defined in `hawq-site.xml` will overwrite configurations in `postgresql.conf`. For this reason, we recommend that you only use `hawq-site.xml` to configure your HAWQ cluster.
+
+-   **Keep in mind the factors that impact the number of virtual segments used for queries. The number of virtual segments used directly impacts the query's performance.** The degree of parallelism achieved by a query is determined by multiple factors, including the following:
+    -   **Cost of the query**. Small queries use fewer segments and larger queries use more segments. Note that there are some techniques you can use when defining resource queues to influence the number of virtual segments and general resources that are allocated to queries. See [Best Practices for Using Resource Queues](managing_resources_bestpractices.html#topic_hvd_pls_wv).
+    -   **Available resources**. Resources available at query time. If more resources are available in the resource queue, the resources will be used.
+    -   **Hash table and bucket number**. If the query involves only hash-distributed tables, and either the bucket number (`bucketnum`) configured for all of the hash tables is the same, or the size of any randomly distributed table in the query is no more than 1.5 times the size of the hash tables, then the query's parallelism is fixed (equal to the hash table bucket number). Otherwise, the number of virtual segments depends on the query's cost, and hash-distributed table queries behave like queries on randomly distributed tables.
+    -   **Query Type**: For queries with some user-defined functions, or for external tables where calculating resource costs is difficult, the number of virtual segments is controlled by the `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit` parameters, as well as by the ON clause and the location list of external tables. If the query has a hash result table (for example, `INSERT INTO hash_table`), then the number of virtual segments must be equal to the bucket number of the resulting hash table. If the query is performed in utility mode, such as for `COPY` and `ANALYZE` operations, the virtual segment number is calculated by different policies, which will be explained later in this section.
+    -   **PXF**: PXF external tables use the `default_hash_table_bucket_number` parameter, not the `hawq_rm_nvseg_perquery_perseg_limit` parameter, to control the number of virtual segments. 
+
+    See [Query Performance](../query/query-performance.html#topic38) for more details.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/bestpractices/managing_data_bestpractices.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/bestpractices/managing_data_bestpractices.html.md.erb b/markdown/bestpractices/managing_data_bestpractices.html.md.erb
new file mode 100644
index 0000000..11d6e02
--- /dev/null
+++ b/markdown/bestpractices/managing_data_bestpractices.html.md.erb
@@ -0,0 +1,47 @@
+---
+title: Best Practices for Managing Data
+---
+
+This topic describes best practices for creating databases, loading data, partitioning data, and recovering data in HAWQ.
+
+## <a id="topic_xhy_v2j_1v"></a>Best Practices for Loading Data
+
+Loading data into HDFS is challenging due to the limit on the number of files that can be opened concurrently for write on both NameNodes and DataNodes.
+
+To obtain the best performance during data loading, observe the following best practices:
+
+-   Typically the number of concurrent connections to a NameNode should not exceed 50,000, and the number of open files per DataNode should not exceed 10,000. If you exceed these limits, NameNode and DataNode may become overloaded and slow.
+-   If the number of partitions in a table is large, the recommended way to load data into the partitioned table is to load the data partition by partition. For example, you can use a query such as the following to load data into only one partition:
+
+    ```sql
+    INSERT INTO target_partitioned_table_part1 SELECT * FROM source_table WHERE filter
+    ```
+
+    where *filter* selects only the data in the target partition.
+
+-   To alleviate the load on the NameNode, you can reduce the number of virtual segments used per node. You can do this at the statement level or at the resource queue level. See [Configuring the Maximum Number of Virtual Segments](../resourcemgmt/ConfigureResourceManagement.html#topic_tl5_wq1_f5) for more information.
+-   Use resource queues to limit load query and read query concurrency.
+
+The best practice for loading data into partitioned tables is to create an intermediate staging table, load it, and then exchange it into your partition design. See [Exchanging a Partition](../ddl/ddl-partition.html#topic83).
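+
+The following is a minimal sketch of that pattern; the table, data source, and partition key value are hypothetical, and the linked topic shows the exact `ALTER TABLE ... EXCHANGE PARTITION` syntax:
+
+```sql
+-- Stage the incoming data in a table that matches the partitioned table's structure
+CREATE TABLE sales_staging (LIKE sales);
+INSERT INTO sales_staging SELECT * FROM external_sales_source;
+-- Swap the loaded staging table into the target partition
+ALTER TABLE sales EXCHANGE PARTITION FOR ('2017-01-01') WITH TABLE sales_staging;
+```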
+
+## <a id="topic_s23_52j_1v"></a>Best Practices for Partitioning Data
+
+### <a id="topic65"></a>Deciding on a Table Partitioning Strategy
+
+Not all tables are good candidates for partitioning. If the answer is *yes* to all or most of the following questions, table partitioning is a viable database design strategy for improving query performance. If the answer is *no* to most of the following questions, table partitioning is not the right solution for that table. Test your design strategy to ensure that query performance improves as expected.
+
+-   **Is the table large enough?** Large fact tables are good candidates for table partitioning. If you have millions or billions of records in a table, you may see performance benefits from logically breaking that data up into smaller chunks. For smaller tables with only a few thousand rows or less, the administrative overhead of maintaining the partitions will outweigh any performance benefits you might see.
+-   **Are you experiencing unsatisfactory performance?** As with any performance tuning initiative, a table should be partitioned only if queries against that table are producing slower response times than desired.
+-   **Do your query predicates have identifiable access patterns?** Examine the `WHERE` clauses of your query workload and look for table columns that are consistently used to access data. For example, if most of your queries tend to look up records by date, then a monthly or weekly date-partitioning design might be beneficial. Or if you tend to access records by region, consider a list-partitioning design to divide the table by region.
+-   **Does your data warehouse maintain a window of historical data?** Another consideration for partition design is your organization's business requirements for maintaining historical data. For example, your data warehouse may require that you keep data for the past twelve months. If the data is partitioned by month, you can easily drop the oldest monthly partition from the warehouse and load current data into the most recent monthly partition.
+-   **Can the data be divided into somewhat equal parts based on some defining criteria?** Choose partitioning criteria that will divide your data as evenly as possible. If the partitions contain a relatively equal number of records, query performance improves based on the number of partitions created. For example, by dividing a large table into 10 partitions, a query will execute 10 times faster than it would against the unpartitioned table, provided that the partitions are designed to support the query's criteria.
+
+Do not create more partitions than are needed. Creating too many partitions can slow down management and maintenance jobs, such as vacuuming, recovering segments, expanding the cluster, checking disk usage, and others.
+
+Partitioning does not improve query performance unless the query optimizer can eliminate partitions based on the query predicates. Queries that scan every partition run slower than if the table were not partitioned, so avoid partitioning if few of your queries achieve partition elimination. Check the explain plan for queries to make sure that partitions are eliminated. See [Query Profiling](../query/query-profiling.html#topic39) for more about partition elimination.
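+
+For example, assuming a `sales` table partitioned on a `sale_date` column (both names are illustrative), a quick check for partition elimination might look like:
+
+```sql
+-- the resulting plan should scan only the child partition(s) covering 2017-01-01
+EXPLAIN SELECT count(*) FROM sales WHERE sale_date = date '2017-01-01';
+```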
+
+Be very careful with multi-level partitioning because the number of partition files can grow very quickly. For example, if a table is partitioned by both day and city, and there are 1,000 days of data and 1,000 cities, the total number of partitions is one million. Column-oriented tables store each column in a physical table, so if this table has 100 columns, the system would be required to manage 100 million files for the table.
+
+Before settling on a multi-level partitioning strategy, consider a single-level partition with bitmap indexes. Indexes slow down data loads, so consider performance testing with your data and schema to decide on the best strategy.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/bestpractices/managing_resources_bestpractices.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/bestpractices/managing_resources_bestpractices.html.md.erb b/markdown/bestpractices/managing_resources_bestpractices.html.md.erb
new file mode 100644
index 0000000..f770611
--- /dev/null
+++ b/markdown/bestpractices/managing_resources_bestpractices.html.md.erb
@@ -0,0 +1,144 @@
+---
+title: Best Practices for Managing Resources
+---
+
+This topic describes best practices for managing resources in HAWQ.
+
+## <a id="topic_ikz_ndx_15"></a>Best Practices for Configuring Resource Management
+
+When configuring resource management, you can apply certain best practices to ensure that resources are managed both efficiently and for best system performance.
+
+The following is a list of high-level best practices for optimal resource management:
+
+-   Make sure segments do not have identical IP addresses. See [Segments Do Not Appear in gp\_segment\_configuration](../troubleshooting/Troubleshooting.html#topic_hlj_zxx_15) for an explanation of this problem.
+-   Configure all segments to have the same resource capacity. See [Configuring Segment Resource Capacity](../resourcemgmt/ConfigureResourceManagement.html#topic_htk_fxh_15).
+-   To prevent resource fragmentation, ensure that your deployment's segment resource capacity (standalone mode) or YARN node resource capacity (YARN mode) is a multiple of all virtual segment resource quotas. See [Configuring Segment Resource Capacity](../resourcemgmt/ConfigureResourceManagement.html#topic_htk_fxh_15) (HAWQ standalone mode) and [Setting HAWQ Segment Resource Capacity in YARN](../resourcemgmt/YARNIntegration.html#topic_pzf_kqn_c5).
+-   Ensure that enough registered segments are available and usable for query resource requests. If the number of unavailable or unregistered segments is higher than a set limit, then query resource requests are rejected. Also ensure that the variance of dispatched virtual segments across physical segments is not greater than the configured limit. See [Rejection of Query Resource Requests](../troubleshooting/Troubleshooting.html#topic_vm5_znx_15).
+-   Use multiple master and segment temporary directories on separate, large disks (2TB or greater) to load balance writes to temporary files (for example, `/disk1/tmp` and `/disk2/tmp`). For a given query, HAWQ will use a separate temp directory (if available) for each virtual segment to store spill files. Multiple HAWQ sessions will also use separate temp directories where available to avoid disk contention. If you configure too few temp directories, or you place multiple temp directories on the same disk, you increase the risk of disk contention or running out of disk space when multiple virtual segments target the same disk.
+-   Configure minimum resource levels in YARN, and tune the timeout of when idle resources are returned to YARN. See [Tune HAWQ Resource Negotiations with YARN](../resourcemgmt/YARNIntegration.html#topic_wp3_4bx_15).
+-   Make sure that the property `yarn.scheduler.minimum-allocation-mb` in `yarn-site.xml` divides evenly into 1 GB (1024 MB); for example, 1024 or 512.
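+
+    For example, a `yarn-site.xml` entry that satisfies this recommendation (512 is one possible value):
+
+    ```xml
+    <property>
+        <name>yarn.scheduler.minimum-allocation-mb</name>
+        <value>512</value>
+    </property>
+    ```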
+
+## <a id="topic_hvd_pls_wv"></a>Best Practices for Using Resource Queues
+
+Design and configure your resource queues depending on the operational needs of your deployment. This topic describes the best practices for creating and modifying resource queues within the context of different operational scenarios.
+
+### Modifying Resource Queues for Overloaded HDFS
+
+A high number of concurrent HAWQ queries can cause HDFS to overload, especially when querying partitioned tables. Use the `ACTIVE_STATEMENTS` attribute to restrict statement concurrency in a resource queue. For example, if an external application is executing more than 100 concurrent queries, then limiting the number of active statements in your resource queues will instruct the HAWQ resource manager to restrict actual statement concurrency within HAWQ. You might want to modify an existing resource queue as follows:
+
+```sql
+ALTER RESOURCE QUEUE sampleque1 WITH (ACTIVE_STATEMENTS=20);
+```
+
+In this case, when this DDL is applied to queue `sampleque1`, the roles using this queue will have to wait until no more than 20 statements are running to execute their queries. Therefore, 80 queries will be waiting in the queue for later execution. Restricting the number of active query statements helps limit the usage of HDFS resources and protects HDFS. You can alter concurrency even when the resource queue is busy. For example, if a queue already has 40 concurrent statements running, and you apply a DDL statement that specifies `ACTIVE_STATEMENTS=20`, then the resource queue pauses the allocation of resources to queries until more than 20 statements have returned their resources.
+
+### Isolating and Protecting Production Workloads
+
+Another best practice is using resource queues to isolate your workloads. Workload isolation prevents your production workload from being starved of resources. To create this isolation, divide your workload by creating roles for specific purposes. For example, you could create one role for production online verification and another role for the regular running of production processes.
+
+In this scenario, let us assign `role1` for the production workload and `role2` for production software verification. We can define the following resource queues under the same parent queue `dept1que`, which is the resource queue defined for the entire department.
+
+```sql
+CREATE RESOURCE QUEUE dept1product
+   WITH (PARENT='dept1que', MEMORY_LIMIT_CLUSTER=90%, CORE_LIMIT_CLUSTER=90%, RESOURCE_OVERCOMMIT_FACTOR=2);
+
+CREATE RESOURCE QUEUE dept1verification 
+   WITH (PARENT='dept1que', MEMORY_LIMIT_CLUSTER=10%, CORE_LIMIT_CLUSTER=10%, RESOURCE_OVERCOMMIT_FACTOR=10);
+
+ALTER ROLE role1 RESOURCE QUEUE dept1product;
+
+ALTER ROLE role2 RESOURCE QUEUE dept1verification;
+```
+
+With these resource queues defined, workload is spread across the resource queues as follows:
+
+-   When both `role1` and `role2` have workloads, the test verification workload gets only 10% of the total available `dept1que` resources, leaving 90% of the `dept1que` resources available for running the production workload.
+-   When `role1` has a workload but `role2` is idle, then 100% of all `dept1que` resources can be consumed by the production workload.
+-   When only `role2` has a workload (for example, during a scheduled testing window), then 100% of all `dept1que` resources can also be utilized for testing.
+
+Even when the resource queues are busy, you can alter the resource queue's memory and core limits to change resource allocation policies before switching workloads.
+
+In addition, you can use resource queues to isolate workloads for different departments or different applications. For example, we can use the following DDL statements to define 3 departments, and an administrator can arbitrarily redistribute resource allocations among the departments according to usage requirements.
+
+```sql
+ALTER RESOURCE QUEUE pg_default 
+   WITH (MEMORY_LIMIT_CLUSTER=10%, CORE_LIMIT_CLUSTER=10%);
+
+CREATE RESOURCE QUEUE dept1 
+   WITH (PARENT='pg_root', MEMORY_LIMIT_CLUSTER=30%, CORE_LIMIT_CLUSTER=30%);
+
+CREATE RESOURCE QUEUE dept2 
+   WITH (PARENT='pg_root', MEMORY_LIMIT_CLUSTER=30%, CORE_LIMIT_CLUSTER=30%);
+
+CREATE RESOURCE QUEUE dept3 
+   WITH (PARENT='pg_root', MEMORY_LIMIT_CLUSTER=30%, CORE_LIMIT_CLUSTER=30%);
+
+CREATE RESOURCE QUEUE dept11
+   WITH (PARENT='dept1', MEMORY_LIMIT_CLUSTER=50%,CORE_LIMIT_CLUSTER=50%);
+
+CREATE RESOURCE QUEUE dept12
+   WITH (PARENT='dept1', MEMORY_LIMIT_CLUSTER=50%, CORE_LIMIT_CLUSTER=50%);
+```
+
+### Querying Parquet Tables with Large Table Size
+
+You can use resource queues to improve query performance on Parquet tables with a large page size. This type of query requires a large memory quota for virtual segments. Therefore, if one role mostly queries Parquet tables with a large page size, alter the resource queue associated with the role to increase its virtual segment resource quota. For example:
+
+```sql
+ALTER RESOURCE QUEUE queue1 WITH (VSEG_RESOURCE_QUOTA='mem:2gb');
+```
+
+If there are only occasional queries on Parquet tables with a large page size, use a statement level specification instead of altering the resource queue. For example:
+
+```sql
+SET HAWQ_RM_STMT_NVSEG=10;
+SET HAWQ_RM_STMT_VSEG_MEMORY='2gb';
+query1;
+SET HAWQ_RM_STMT_NVSEG=0;
+```
+
+### Restricting Resource Consumption for Specific Queries
+
+In general, the HAWQ resource manager attempts to provide as many resources as possible to the current query to achieve high query performance. When a query is complex and large, however, the associated resource queue can consume many virtual segments, causing other resource queues (and queries) to starve. Under these circumstances, you should enable nvseg limits on the resource queue associated with the large query. For example, you can specify that all queries can use no more than 200 virtual segments. To achieve this limit, alter the resource queue as follows:
+
+``` sql
+ALTER RESOURCE QUEUE queue1 WITH (NVSEG_UPPER_LIMIT=200);
+```
+
+To make this limit vary with the dynamic cluster size, use the following statement instead:
+
+```sql
+ALTER RESOURCE QUEUE queue1 WITH (NVSEG_UPPER_LIMIT_PERSEG=10);
+```
+
+After setting the limit in the above example, the actual limit will be 100 if you have a 10-node cluster. If the cluster is expanded to 20 nodes, then the limit increases automatically to 200.
+
+### Guaranteeing Resource Allocations for Individual Statements
+
+In general, the minimum number of virtual segments allocated to a statement is decided by the resource queue's actual capacity and its concurrency setting. For example, if there are 10 nodes in a cluster and the total resource capacity of the cluster is 640GB and 160 cores, then a resource queue having 20% capacity has a capacity of 128GB (640GB \* .20) and 32 cores (160 \*.20). If the virtual segment quota is set to 256MB, then this queue has 512 virtual segments allocated (128GB/256MB=512). If the `ACTIVE_STATEMENTS` concurrency setting for the resource queue is 20, then the minimum number of allocated virtual segments for each query is **25** (*trunc*(512/20)=25). However, this minimum number of virtual segments is a soft restriction. If a query statement requires only 5 virtual segments, then this minimum number of 25 is ignored since it is not necessary to allocate 25 for this statement.
+
+In order to raise the minimum number of virtual segments available for a query statement, there are two options.
+
+-   *Option 1*: Alter the resource queue to reduce concurrency. This is the recommended way to achieve the goal. For example:
+
+    ```sql
+    ALTER RESOURCE QUEUE queue1 WITH (ACTIVE_STATEMENTS=10);
+    ```
+
+    If the original concurrency setting is 20, then the minimum number of virtual segments is doubled.
+
+-   *Option 2*: Alter the nvseg limits of the resource queue. For example:
+
+    ```sql
+    ALTER RESOURCE QUEUE queue1 WITH (NVSEG_LOWER_LIMIT=50);
+    ```
+
+    or, alternately:
+
+    ```sql
+    ALTER RESOURCE QUEUE queue1 WITH (NVSEG_LOWER_LIMIT_PERSEG=5);
+    ```
+
+    In the second DDL, if there are 10 nodes in the cluster, the actual minimum number of virtual segments is 50 (5 \* 10 = 50).
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/bestpractices/operating_hawq_bestpractices.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/bestpractices/operating_hawq_bestpractices.html.md.erb b/markdown/bestpractices/operating_hawq_bestpractices.html.md.erb
new file mode 100644
index 0000000..9dc56e9
--- /dev/null
+++ b/markdown/bestpractices/operating_hawq_bestpractices.html.md.erb
@@ -0,0 +1,298 @@
+---
+title: Best Practices for Operating HAWQ
+---
+
+This topic provides best practices for operating HAWQ, including recommendations for stopping, starting and monitoring HAWQ.
+
+## <a id="best_practice_config"></a>Best Practices for Configuring HAWQ Parameters
+
+The HAWQ server configuration parameters (GUCs) are located in `$GPHOME/etc/hawq-site.xml`. This configuration file resides on all HAWQ instances and can be modified either through the Ambari interface or from the command line.
+
+If you install and manage HAWQ using Ambari, use the Ambari interface for all configuration changes. Do not use command line utilities such as `hawq config` to set or change HAWQ configuration properties for Ambari-managed clusters. Configuration changes to `hawq-site.xml` made outside the Ambari interface will be overwritten when you restart or reconfigure HAWQ using Ambari.
+
+If you manage your cluster using command line tools instead of Ambari, use a consistent `hawq-site.xml` file to configure your entire cluster. 
+
+**Note:** While `postgresql.conf` still exists in HAWQ, any parameters defined in `hawq-site.xml` will overwrite configurations in `postgresql.conf`. For this reason, we recommend that you only use `hawq-site.xml` to configure your HAWQ cluster. For Ambari clusters, always use Ambari for configuring `hawq-site.xml` parameters.
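+
+For example, on a command-line-managed cluster, a typical `hawq config` sequence looks like the following sketch; the parameter and value shown are examples only:
+
+```shell
+$ hawq config -s max_connections          # show the current value of a parameter
+$ hawq config -c max_connections -v 200   # set a new value in hawq-site.xml cluster-wide
+$ hawq restart cluster                    # load the new configuration
+```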
+
+## <a id="task_qgk_bz3_1v"></a>Best Practices to Start/Stop HAWQ Cluster Members
+
+For best results in using `hawq start` and `hawq stop` to manage your HAWQ system, the following best practices are recommended.
+
+-   Issue the `CHECKPOINT` command to update and flush all data files to disk and update the log file before stopping the cluster. A checkpoint ensures that, in the event of a crash, files can be restored from the checkpoint snapshot. (A minimal example follows this list.)
+-   Stop the entire HAWQ system by stopping the cluster on the master host:
+    ```shell
+    $ hawq stop cluster
+    ```
+
+-   To stop segments and kill any running queries without causing data loss or inconsistency issues, use `fast` or `immediate` mode on the cluster:
+
+    ```shell
+    $ hawq stop cluster -M fast
+    ```
+    ```shell
+    $ hawq stop cluster -M immediate
+    ```
+
+-   Use `hawq stop master` to stop the master only. If you cannot stop the master due to running transactions, try using fast shutdown. If fast shutdown does not work, use immediate shutdown. Use immediate shutdown with caution, as it will result in a crash-recovery run when the system is restarted. 
+
+    ```shell
+    $ hawq stop master -M fast
+    ```
+    ```shell
+    $ hawq stop master -M immediate
+    ```
+
+-   When stopping a segment or all segments, you can use the default (smart) mode. Using fast or immediate mode on segments has no effect because segments are stateless.
+
+    ```shell
+    $ hawq stop segment
+    ```
+    ```shell
+    $ hawq stop allsegments
+    ```
+
+-   Typically you should always use `hawq start cluster` or `hawq restart cluster` to start the cluster. If you do end up using `hawq start standby|master|segment` to start nodes individually, make sure you always start the standby before the active master. Otherwise, the standby can become unsynchronized with the active master.
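+
+For example, a minimal stop sequence that first forces a checkpoint might look like the following sketch; the `postgres` database is used only as a convenient connection target:
+
+```shell
+$ psql -d postgres -c 'CHECKPOINT;'
+$ hawq stop cluster
+```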
+
+## <a id="id_trr_m1j_1v"></a>Guidelines for Cluster Expansion
+
+This topic provides some guidelines around expanding your HAWQ cluster.
+
+There are several recommendations to keep in mind when modifying the size of your running HAWQ cluster:
+
+-   When you add a new node, install both a DataNode and a physical segment on the new node.
+-   After adding a new node, you should always rebalance HDFS data to maintain cluster performance.
+-   Adding or removing a node also necessitates an update to the HDFS metadata cache. This update will happen eventually, but can take some time. To speed the update of the metadata cache, execute **`select gp_metadata_cache_clear();`**.
+-   Note that for hash distributed tables, expanding the cluster will not immediately improve performance since hash distributed tables use a fixed number of virtual segments. In order to obtain better performance with hash distributed tables, you must redistribute the table to the updated cluster by either the [ALTER TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE AS](../reference/sql/CREATE-TABLE-AS.html#topic1) command.
+-   If you are using hash tables, consider updating the `default_hash_table_bucket_number` server configuration parameter to a larger value after expanding the cluster but before redistributing the hash tables.
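+
+For a command-line-managed cluster, the post-expansion follow-up described above might look like the following sketch. The bucket number is an example only, and Ambari-managed clusters should make the configuration change through the Ambari interface instead:
+
+```shell
+$ sudo -u hdfs hdfs balancer                                # rebalance HDFS data (run as the HDFS superuser)
+$ psql -d postgres -c 'SELECT gp_metadata_cache_clear();'   # refresh the HDFS metadata cache
+$ hawq config -c default_hash_table_bucket_number -v 96     # example value; size it to the new cluster
+$ hawq restart cluster
+```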
+
+## <a id="id_o5n_p1j_1v"></a>Database State Monitoring Activities
+
+<a id="id_o5n_p1j_1v__d112e31"></a>
+
+<table>
+<caption><span class="tablecap">Table 1. Database State Monitoring Activities</span></caption>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Activity</th>
+<th>Procedure</th>
+<th>Corrective Actions</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>List segments that are currently down. If any rows are returned, this should generate a warning or alert.
+<p>Recommended frequency: run every 5 to 10 minutes</p>
+<p>Severity: IMPORTANT</p></td>
+<td>Run the following query in the <code class="ph codeph">postgres</code> database:
+<pre class="pre codeblock"><code>SELECT * FROM gp_segment_configuration
+WHERE status &lt;&gt; &#39;u&#39;;</code></pre></td>
+<td>If the query returns any rows, follow these steps to correct the problem:
+<ol>
+<li>Verify that the hosts with down segments are responsive.</li>
+<li>If hosts are OK, check the <span class="ph filepath">pg_log</span> files for the down segments to discover the root cause of the segments going down.</li>
+</ol></td>
+</tr>
+</tbody>
+</table>
+
+
+## <a id="id_d3w_p1j_1v"></a>Hardware and Operating System Monitoring
+
+<a id="id_d3w_p1j_1v__d112e111"></a>
+
+<table>
+<caption><span class="tablecap">Table 2. Hardware and Operating System Monitoring Activities</span></caption>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Activity</th>
+<th>Procedure</th>
+<th>Corrective Actions</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>Underlying platform check for maintenance required or system down of the hardware.
+<p>Recommended frequency: real-time, if possible, or every 15 minutes</p>
+<p>Severity: CRITICAL</p></td>
+<td>Set up system check for hardware and OS errors.</td>
+<td>If required, remove a machine from the HAWQ cluster to resolve hardware and OS issues, then add it back to the cluster after the issues are resolved.</td>
+</tr>
+<tr class="even">
+<td>Check disk space usage on volumes used for HAWQ data storage and the OS.
+<p>Recommended frequency: every 5 to 30 minutes</p>
+<p>Severity: CRITICAL</p></td>
+<td><div class="p">
+Set up a disk space check.
+<ul>
+<li>Set a threshold to raise an alert when a disk reaches a percentage of capacity. The recommended threshold is 75% full.</li>
+<li>It is not recommended to run the system with capacities approaching 100%.</li>
+</ul>
+</div></td>
+<td>Free space on the system by removing some data or files.</td>
+</tr>
+<tr class="odd">
+<td>Check for errors or dropped packets on the network interfaces.
+<p>Recommended frequency: hourly</p>
+<p>Severity: IMPORTANT</p></td>
+<td>Set up network interface checks.</td>
+<td><p>Work with network and OS teams to resolve errors.</p></td>
+</tr>
+<tr class="even">
+<td>Check for RAID errors or degraded RAID performance.
+<p>Recommended frequency: every 5 minutes</p>
+<p>Severity: CRITICAL</p></td>
+<td>Set up a RAID check.</td>
+<td><ul>
+<li>Replace failed disks as soon as possible.</li>
+<li>Work with system administration team to resolve other RAID or controller errors as soon as possible.</li>
+</ul></td>
+</tr>
+<tr class="odd">
+<td>Check for adequate I/O bandwidth and I/O skew.
+<p>Recommended frequency: when creating a cluster or when hardware issues are suspected.</p></td>
+<td>Run the <code class="ph codeph">hawq checkperf</code> utility.</td>
+<td><div class="p">
+The cluster may be under-specified if data transfer rates are not similar to the following:
+<ul>
+<li>2 GB per second disk read</li>
+<li>1 GB per second disk write</li>
+<li>10 Gigabit per second network read and write</li>
+</ul>
+If transfer rates are lower than expected, consult with your data architect regarding performance expectations.
+</div>
+<p>If the machines on the cluster display an uneven performance profile, work with the system administration team to fix faulty machines.</p></td>
+</tr>
+</tbody>
+</table>
+
+
+## <a id="id_khd_q1j_1v"></a>Data Maintenance
+
+<a id="id_khd_q1j_1v__d112e279"></a>
+
+<table>
+<caption><span class="tablecap">Table 3. Data Maintenance Activities</span></caption>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Activity</th>
+<th>Procedure</th>
+<th>Corrective Actions</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>Check for missing statistics on tables.</td>
+<td>Check the <code class="ph codeph">hawq_stats_missing</code> view in each database:
+<pre class="pre codeblock"><code>SELECT * FROM hawq_toolkit.hawq_stats_missing;</code></pre></td>
+<td>Run <code class="ph codeph">ANALYZE</code> on tables that are missing statistics.</td>
+</tr>
+</tbody>
+</table>
+
+
+## <a id="id_lx4_q1j_1v"></a>Database Maintenance
+
+<a id="id_lx4_q1j_1v__d112e343"></a>
+
+<table>
+<caption><span class="tablecap">Table 4. Database Maintenance Activities</span></caption>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Activity</th>
+<th>Procedure</th>
+<th>Corrective Actions</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>Mark deleted rows in HAWQ system catalogs (tables in the <code class="ph codeph">pg_catalog</code> schema) so that the space they occupy can be reused.
+<p>Recommended frequency: daily</p>
+<p>Severity: CRITICAL</p></td>
+<td>Vacuum each system catalog:
+<pre class="pre codeblock"><code>VACUUM &lt;table&gt;;</code></pre></td>
+<td>Vacuum system catalogs regularly to prevent bloating.</td>
+</tr>
+<tr class="even">
+<td>Update table statistics.
+<p>Recommended frequency: after loading data and before executing queries</p>
+<p>Severity: CRITICAL</p></td>
+<td>Analyze user tables:
+<pre class="pre codeblock"><code>ANALYZEDB -d &lt;database&gt; -a</code></pre></td>
+<td>Analyze updated tables regularly so that the optimizer can produce efficient query execution plans.</td>
+</tr>
+<tr class="odd">
+<td>Backup the database data.
+<p>Recommended frequency: daily, or as required by your backup plan</p>
+<p>Severity: CRITICAL</p></td>
+<td>See <a href="../admin/BackingUpandRestoringHAWQDatabases.html">Backing up and Restoring HAWQ Databases</a> for a discussion of backup procedures</td>
+<td>Best practice is to have a current backup ready in case the database must be restored.</td>
+</tr>
+<tr class="even">
+<td>Reindex system catalogs (tables in the <code class="ph codeph">pg_catalog</code> schema) to maintain an efficient catalog.
+<p>Recommended frequency: weekly, or more often if database objects are created and dropped frequently</p></td>
+<td>Run <code class="ph codeph">REINDEX SYSTEM</code> in each database.
+<pre class="pre codeblock"><code>REINDEXDB -s</code></pre></td>
+<td>The optimizer retrieves information from the system tables to create query plans. If system tables and indexes are allowed to become bloated over time, scanning the system tables increases query execution time.</td>
+</tr>
+</tbody>
+</table>
+
+
+## <a id="id_blv_q1j_1v"></a>Patching and Upgrading
+
+<a id="id_blv_q1j_1v__d112e472"></a>
+
+<table>
+<caption><span class="tablecap">Table 5. Patch and Upgrade Activities</span></caption>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Activity</th>
+<th>Procedure</th>
+<th>Corrective Actions</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>Ensure any bug fixes or enhancements are applied to the kernel.
+<p>Recommended frequency: at least every 6 months</p>
+<p>Severity: IMPORTANT</p></td>
+<td>Follow the vendor's instructions to update the Linux kernel.</td>
+<td>Keep the kernel current to include bug fixes and security fixes, and to avoid difficult future upgrades.</td>
+</tr>
+<tr class="even">
+<td>Install HAWQ minor releases.
+<p>Recommended frequency: quarterly</p>
+<p>Severity: IMPORTANT</p></td>
+<td>Always upgrade to the latest in the series.</td>
+<td>Keep the HAWQ software current to incorporate bug fixes, performance enhancements, and feature enhancements into your HAWQ cluster.</td>
+</tr>
+</tbody>
+</table>
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/bestpractices/querying_data_bestpractices.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/bestpractices/querying_data_bestpractices.html.md.erb b/markdown/bestpractices/querying_data_bestpractices.html.md.erb
new file mode 100644
index 0000000..3efe569
--- /dev/null
+++ b/markdown/bestpractices/querying_data_bestpractices.html.md.erb
@@ -0,0 +1,43 @@
+---
+title: Best Practices for Querying Data
+---
+
+To obtain the best results when querying data in HAWQ, review the best practices described in this topic.
+
+## <a id="virtual_seg_performance"></a>Factors Impacting Query Performance
+
+The number of virtual segments used for a query directly impacts the query's performance. The following factors can impact the degree of parallelism of a query:
+
+-   **Cost of the query**. Small queries use fewer segments and larger queries use more segments. Some techniques used in defining resource queues can influence the number of both virtual segments and general resources allocated to queries. For more information, see [Best Practices for Using Resource Queues](managing_resources_bestpractices.html#topic_hvd_pls_wv).
+-   **Available resources at query time**. If more resources are available in the resource queue, those resources will be used.
+-   **Hash table and bucket number**. If the query involves only hash-distributed tables, the query's parallelism is fixed (equal to the hash table bucket number) under the following conditions:
+
+    - The bucket number (`bucketnum`) configured for all of the hash tables in the query is the same.
+    - The size of any randomly-distributed table in the query is no more than 1.5 times the size allotted for the hash tables.
+
+  Otherwise, the number of virtual segments depends on the query's cost: hash-distributed table queries behave like queries on randomly-distributed tables.
+  
+-   **Query Type**: It can be difficult to calculate resource costs for queries with some user-defined functions or for queries to external tables. With these queries, the number of virtual segments is controlled by the `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit` parameters, as well as by the `ON` clause and the location list of external tables. If the query has a hash result table (e.g. `INSERT INTO hash_table`), the number of virtual segments must be equal to the bucket number of the resulting hash table. If the query is performed in utility mode, such as for `COPY` and `ANALYZE` operations, the virtual segment number is calculated by different policies.
+
+  ***Note:*** PXF external tables use the `default_hash_table_bucket_number` parameter, not the `hawq_rm_nvseg_perquery_perseg_limit` parameter, to control the number of virtual segments.
+
+See [Query Performance](../query/query-performance.html#topic38) for more details.
+
+## <a id="id_xtk_jmq_1v"></a>Examining Query Plans to Solve Problems
+
+If a query performs poorly, examine its query plan and ask the following questions:
+
+-   **Do operations in the plan take an exceptionally long time?** Look for an operation that consumes the majority of query processing time. For example, if a scan on a hash table takes longer than expected, the data locality may be low; reloading the data can increase the data locality and speed up the query. Or, adjust `enable_<operator>` parameters to see if you can force the legacy query optimizer (planner) to choose a different plan by disabling a particular query plan operator for that query.
+-   **Are the optimizer's estimates close to reality?** Run `EXPLAIN ANALYZE` and see if the number of rows the optimizer estimates is close to the number of rows the query operation actually returns. If there is a large discrepancy, collect more statistics on the relevant columns.
+-   **Are selective predicates applied early in the plan?** Apply the most selective filters early in the plan so fewer rows move up the plan tree. If the query plan does not correctly estimate query predicate selectivity, collect more statistics on the relevant columns. You can also try reordering the `WHERE` clause of your SQL statement.
+-   **Does the optimizer choose the best join order?** When you have a query that joins multiple tables, make sure that the optimizer chooses the most selective join order. Joins that eliminate the largest number of rows should be done earlier in the plan so fewer rows move up the plan tree.
+
+    If the plan is not choosing the optimal join order, set `join_collapse_limit=1` and use explicit `JOIN` syntax in your SQL statement to force the legacy query optimizer (planner) to the specified join order; a sketch follows this list. You can also collect more statistics on the relevant join columns.
+
+-   **Does the optimizer selectively scan partitioned tables?** If you use table partitioning, is the optimizer selectively scanning only the child tables required to satisfy the query predicates? Scans of the parent tables should return 0 rows since the parent tables do not contain any data. See [Verifying Your Partition Strategy](../ddl/ddl-partition.html#topic74) for an example of a query plan that shows a selective partition scan.
+-   **Does the optimizer choose hash aggregate and hash join operations where applicable?** Hash operations are typically much faster than other types of joins or aggregations. Row comparison and sorting are done in memory rather than reading/writing from disk. To enable the query optimizer to choose hash operations, there must be sufficient memory available to hold the estimated number of rows. Run an `EXPLAIN ANALYZE` for the query to show which plan operations spilled to disk, how much work memory they used, and how much memory was required to avoid spilling to disk. For example:
+
+    `Work_mem used: 23430K bytes avg, 23430K bytes max (seg0). Work_mem wanted: 33649K bytes avg, 33649K bytes max (seg0) to lessen workfile I/O affecting 2 workers.`
+
+  **Note:** The "bytes wanted" (*work\_mem* property) is based on the amount of data written to work files and is not exact. This property is not configurable. Use resource queues to manage memory use. For more information on resource queues, see [Configuring Resource Management](../resourcemgmt/ConfigureResourceManagement.html) and [Working with Hierarchical Resource Queues](../resourcemgmt/ResourceQueues.html).
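+
+The following is a sketch of the join-order technique mentioned in the list above; the table and column names are illustrative:
+
+```sql
+SET join_collapse_limit = 1;
+EXPLAIN ANALYZE
+SELECT *
+FROM small_dim d
+  JOIN large_fact f ON f.dim_id = d.id
+  JOIN other_fact o ON o.fact_id = f.id;
+```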
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/bestpractices/secure_bestpractices.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/bestpractices/secure_bestpractices.html.md.erb b/markdown/bestpractices/secure_bestpractices.html.md.erb
new file mode 100644
index 0000000..04c5343
--- /dev/null
+++ b/markdown/bestpractices/secure_bestpractices.html.md.erb
@@ -0,0 +1,11 @@
+---
+title: Best Practices for Securing HAWQ
+---
+
+To secure your HAWQ deployment, review the recommendations listed in this topic.
+
+-   Set up SSL to encrypt your client server communication channel. See [Encrypting Client/Server Connections](../clientaccess/client_auth.html#topic5).
+-   Configure `pg_hba.conf` only on the HAWQ master. Do not configure it on segments.
+    **Note:** For a more secure system, consider removing all connections that use trust authentication from your master `pg_hba.conf`. Trust authentication means the role is granted access without any authentication, therefore bypassing all security. Replace trust entries with ident authentication if your system has an ident service available.
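+
+    For example, assuming an illustrative address range, a `trust` entry might be replaced as follows (use `ident` only if an ident service is available):
+
+    ```
+    # before:  host  all  all  192.168.0.0/16  trust
+    # after:
+    host  all  all  192.168.0.0/16  ident
+    ```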
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/clientaccess/client_auth.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/clientaccess/client_auth.html.md.erb b/markdown/clientaccess/client_auth.html.md.erb
new file mode 100644
index 0000000..a13f4e1
--- /dev/null
+++ b/markdown/clientaccess/client_auth.html.md.erb
@@ -0,0 +1,193 @@
+---
+title: Configuring Client Authentication
+---
+
+When a HAWQ system is first initialized, the system contains one predefined *superuser* role. This role will have the same name as the operating system user who initialized the HAWQ system. This role is referred to as `gpadmin`. By default, the system is configured to only allow local connections to the database from the `gpadmin` role. To allow any other roles to connect, or to allow connections from remote hosts, you configure HAWQ to allow such connections.
+
+## <a id="topic2"></a>Allowing Connections to HAWQ 
+
+Client access and authentication is controlled by the standard PostgreSQL host-based authentication file, `pg_hba.conf`. In HAWQ, the `pg_hba.conf` file of the master instance controls client access and authentication to your HAWQ system. HAWQ segments have `pg_hba.conf` files that are configured to allow only client connections from the master host and never accept client connections. Do not alter the `pg_hba.conf` file on your segments.
+
+See [The pg\_hba.conf File](http://www.postgresql.org/docs/9.0/interactive/auth-pg-hba-conf.html) in the PostgreSQL documentation for more information.
+
+The general format of the `pg_hba.conf` file is a set of records, one per line. HAWQ ignores blank lines and any text after the `#` comment character. A record consists of a number of fields that are separated by spaces and/or tabs. Fields can contain white space if the field value is quoted. Records cannot be continued across lines. Each remote client access record has the following format:
+
+```
+host|hostssl|hostnossl   <database>   <role>   <CIDR-address>|<IP-address>,<IP-mask>   <authentication-method>
+```
+
+Each UNIX-domain socket access record has the following format:
+
+```
+local   <database>   <role>   <authentication-method>
+```
+
+The following table describes the meaning of each field.
+
+|Field|Description|
+|-----|-----------|
+|local|Matches connection attempts using UNIX-domain sockets. Without a record of this type, UNIX-domain socket connections are disallowed.|
+|host|Matches connection attempts made using TCP/IP. Remote TCP/IP connections will not be possible unless the server is started with an appropriate value for the listen\_addresses server configuration parameter.|
+|hostssl|Matches connection attempts made using TCP/IP, but only when the connection is made with SSL encryption. SSL must be enabled at server start time by setting the ssl configuration parameter.|
+|hostnossl|Matches connection attempts made over TCP/IP that do not use SSL.|
+|\<database\>|Specifies which database names this record matches. The value `all` specifies that it matches all databases. Multiple database names can be supplied by separating them with commas. A separate file containing database names can be specified by preceding the file name with @.|
+|\<role\>|Specifies which database role names this record matches. The value `all` specifies that it matches all roles. If the specified role is a group and you want all members of that group to be included, precede the role name with a +. Multiple role names can be supplied by separating them with commas. A separate file containing role names can be specified by preceding the file name with @.|
+|\<CIDR-address\>|Specifies the client machine IP address range that this record matches. It contains an IP address in standard dotted decimal notation and a CIDR mask length. IP addresses can only be specified numerically, not as domain or host names. The mask length indicates the number of high-order bits of the client IP address that must match. Bits to the right of this must be zero in the given IP address. There must not be any white space between the IP address, the /, and the CIDR mask length. Typical examples of a CIDR-address are 192.0.2.0/32 for a single host, 192.0.2.0/24 for a small network, or 192.0.0.0/16 for a larger one. To specify a single host, use a CIDR mask of 32 for IPv4 or 128 for IPv6. In a network address, do not omit trailing zeroes.|
+|\<IP-address\>, \<IP-mask\>|These fields can be used as an alternative to the CIDR-address notation. Instead of specifying the mask length, the actual mask is specified in a separate column. For example, 255.255.255.255 represents a CIDR mask length of 32. These fields only apply to host, hostssl, and hostnossl records.|
+|\<authentication-method\>|Specifies the authentication method to use when connecting. HAWQ supports the [authentication methods](http://www.postgresql.org/docs/9.0/static/auth-methods.html) supported by PostgreSQL 9.0.|
+
+### <a id="topic3"></a>Editing the pg\_hba.conf File 
+
+This example shows how to edit the `pg_hba.conf` file of the master to allow remote client access to all databases from all roles using encrypted password authentication.
+
+**Note:** For a more secure system, consider removing all connections that use trust authentication from your master `pg_hba.conf`. Trust authentication means the role is granted access without any authentication, therefore bypassing all security. Replace trust entries with ident authentication if your system has an ident service available.
+
+#### <a id="ip144328"></a>Editing pg\_hba.conf 
+
+1.  Obtain the master data directory location from the `hawq_master_directory` property value in `hawq-site.xml` and use a text editor to open the `pg_hba.conf` file in this directory.
+2.  Add a line to the file for each type of connection you want to allow. Records are read sequentially, so the order of the records is significant. Typically, earlier records will have tight connection match parameters and weaker authentication methods, while later records will have looser match parameters and stronger authentication methods. For example:
+
+    ```
+    # allow the gpadmin user local access to all databases
+    # using ident authentication
+    local   all   gpadmin   ident         sameuser
+    host    all   gpadmin   127.0.0.1/32  ident
+    host    all   gpadmin   ::1/128       ident
+    # allow the 'dba' role access to any database from any
+    # host with IP address 192.168.x.x and use md5 encrypted
+    # passwords to authenticate the user
+    # Note that to use SHA-256 encryption, replace *md5* with
+    # password in the line below
+    host    all   dba   192.168.0.0/32  md5
+    # allow all roles access to any database from any
+    # host and use ldap to authenticate the user. HAWQ role
+    # names must match the LDAP common name.
+    host    all   all   192.168.0.0/32  ldap ldapserver=usldap1
+    ldapport=1389 ldapprefix="cn="
+    ldapsuffix=",ou=People,dc=company,dc=com"
+    ```
+
+3.  Save and close the file.
+4.  Reload the `pg_hba.conf` configuration file for your changes to take effect. Include the `-M fast` option if you have active/open database connections:
+
+    ``` bash
+    $ hawq stop cluster -u [-M fast]
+    ```
+    
+
+
+## <a id="topic4"></a>Limiting Concurrent Connections 
+
+HAWQ allocates some resources on a per-connection basis, so setting the maximum number of connections allowed is recommended.
+
+To limit the number of active concurrent sessions to your HAWQ system, you can configure the `max_connections` server configuration parameter on master or the `seg_max_connections` server configuration parameter on segments. These parameters are *local* parameters, meaning that you must set them in the `hawq-site.xml` file of all HAWQ instances.
+
+When you set `max_connections`, you must also set the dependent parameter `max_prepared_transactions`. This value must be at least as large as the value of `max_connections`, and all HAWQ instances should be set to the same value.
+
+Example `$GPHOME/etc/hawq-site.xml` configuration:
+
+``` xml
+  <property>
+      <name>max_connections</name>
+      <value>500</value>
+  </property>
+  <property>
+      <name>max_prepared_transactions</name>
+      <value>1000</value>
+  </property>
+  <property>
+      <name>seg_max_connections</name>
+      <value>3200</value>
+  </property>
+```
+
+**Note:** Raising the values of these parameters may cause HAWQ to request more shared memory. To mitigate this effect, consider decreasing other memory-related server configuration parameters such as [gp\_cached\_segworkers\_threshold](../reference/guc/parameter_definitions.html#gp_cached_segworkers_threshold).
+
+
+### <a id="ip142411"></a>Setting the number of allowed connections
+
+You will perform different procedures to set connection-related server configuration parameters for your HAWQ cluster depending upon whether you manage your cluster from the command line or use Ambari. If you use Ambari to manage your HAWQ cluster, you must ensure that you update server configuration parameters only via the Ambari Web UI. If you manage your HAWQ cluster from the command line, you will use the `hawq config` command line utility to set server configuration parameters.
+
+If you use Ambari to manage your cluster:
+
+1. Set the `max_connections`, `seg_max_connections`, and `max_prepared_transactions` configuration properties via the HAWQ service **Configs > Advanced > Custom hawq-site** drop down.
+2. Select **Service Actions > Restart All** to load the updated configuration.
+
+If you manage your cluster from the command line:
+
+1.  Log in to the HAWQ master host as a HAWQ administrator and source the file `/usr/local/hawq/greenplum_path.sh`.
+
+    ``` shell
+    $ source /usr/local/hawq/greenplum_path.sh
+    ```
+    
+2.  Use the `hawq config` utility to set the values of the `max_connections`, `seg_max_connections`, and `max_prepared_transactions` parameters to values appropriate for your deployment. For example: 
+
+    ``` bash
+    $ hawq config -c max_connections -v 100
+    $ hawq config -c seg_max_connections -v 6400
+    $ hawq config -c max_prepared_transactions -v 200
+    ```
+
+    The value of `max_prepared_transactions` must be greater than or equal to `max_connections`.
+
+3.  Load the new configuration values by restarting your HAWQ cluster:
+
+    ``` bash
+    $ hawq restart cluster
+    ```
+
+4.  Use the `-s` option to `hawq config` to display server configuration parameter values:
+
+    ``` bash
+    $ hawq config -s max_connections
+    $ hawq config -s seg_max_connections
+    ```
+
+
+## <a id="topic5"></a>Encrypting Client/Server Connections 
+
+Enable SSL for client connections to HAWQ to encrypt the data passed over the network between the client and the database.
+
+HAWQ has native support for SSL connections between the client and the master server. SSL connections prevent third parties from snooping on the packets, and also prevent man-in-the-middle attacks. SSL should be used whenever the client connection goes through an insecure link, and must be used whenever client certificate authentication is used.
+
+Enabling SSL requires that OpenSSL be installed on both the client and the master server systems. HAWQ can be started with SSL enabled by setting the server configuration parameter `ssl` to `on` in the master `hawq-site.xml`. When starting in SSL mode, the server will look for the files `server.key` \(server private key\) and `server.crt` \(server certificate\) in the master data directory. These files must be set up correctly before an SSL-enabled HAWQ system can start.
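+
+For example, the corresponding entry in the master's `hawq-site.xml` might look like this:
+
+``` xml
+  <property>
+      <name>ssl</name>
+      <value>on</value>
+  </property>
+```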
+
+**Important:** Do not protect the private key with a passphrase. The server does not prompt for a passphrase for the private key, and the database startup fails with an error if one is required.
+
+A self-signed certificate can be used for testing, but a certificate signed by a certificate authority \(CA\) should be used in production, so the client can verify the identity of the server. Either a global or local CA can be used. If all the clients are local to the organization, a local CA is recommended.
+
+### <a id="topic6"></a>Creating a Self-signed Certificate without a Passphrase for Testing Only 
+
+To create a quick self-signed certificate for the server for testing, use the following OpenSSL command:
+
+```
+# openssl req -new -text -out server.req
+```
+
+Enter the information requested by the prompts. Be sure to enter the local host name as *Common Name*. The challenge password can be left blank.
+
+The program generates a key that is passphrase protected; it does not accept a passphrase that is less than four characters long.
+
+To use this certificate with HAWQ, remove the passphrase with the following commands:
+
+```
+# openssl rsa -in privkey.pem -out server.key
+# rm privkey.pem
+```
+
+Enter the old passphrase when prompted to unlock the existing key.
+
+Then, enter the following command to turn the certificate into a self-signed certificate and to copy the key and certificate to a location where the server will look for them.
+
+``` 
+# openssl req -x509 -in server.req -text -key server.key -out server.crt
+```
+
+Finally, change the permissions on the key with the following command. The server will reject the file if the permissions are less restrictive than these.
+
+```
+# chmod og-rwx server.key
+```
+
+For more details on how to create your server private key and certificate, refer to the [OpenSSL documentation](https://www.openssl.org/docs/).

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/clientaccess/disable-kerberos.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/clientaccess/disable-kerberos.html.md.erb b/markdown/clientaccess/disable-kerberos.html.md.erb
new file mode 100644
index 0000000..5646eec
--- /dev/null
+++ b/markdown/clientaccess/disable-kerberos.html.md.erb
@@ -0,0 +1,85 @@
+---
+title: Disabling Kerberos Security
+---
+
+Follow these steps to disable Kerberos security for HAWQ and PXF for manual installations.
+
+**Note:** If you install or manage your cluster using Ambari, then the HAWQ Ambari plug-in automatically disables security for HAWQ and PXF when you disable security for Hadoop. The following instructions are only necessary for manual installations, or when Hadoop security is disabled outside of Ambari.
+
+1.  Disable Kerberos on the Hadoop cluster on which you use HAWQ.
+2.  Disable security for HAWQ:
+    1.  Log in to the HAWQ database master server as the `gpadmin` user:
+
+        ``` bash
+        $ ssh hawq_master_fqdn
+        ```
+
+    2.  Run the following command to set up HAWQ environment variables:
+
+        ``` bash
+        $ source /usr/local/hawq/greenplum_path.sh
+        ```
+
+    3.  Start HAWQ if necessary:
+
+        ``` bash
+        $ hawq start -a
+        ```
+
+    4.  Run the following command to disable security:
+
+        ``` bash
+        $ hawq config --masteronly -c enable_secure_filesystem -v off
+        ```
+
+    5.  Change the permission of the HAWQ HDFS data directory:
+
+        ``` bash
+        $ sudo -u hdfs hdfs dfs -chown -R gpadmin:gpadmin /hawq_data
+        ```
+
+    6.  On the HAWQ master node and on all segment server nodes, edit the `/usr/local/hawq/etc/hdfs-client.xml` file to disable Kerberos security. Comment or remove the following properties in each file:
+
+        ``` xml
+        <!--
+        <property>
+          <name>hadoop.security.authentication</name>
+          <value>kerberos</value>
+        </property>
+
+        <property>
+          <name>dfs.namenode.kerberos.principal</name>
+          <value>nn/_HOST@LOCAL.DOMAIN</value>
+        </property>
+        -->
+        ```
+
+    7.  Restart HAWQ:
+
+        ``` bash
+        $ hawq restart -a -M fast
+        ```
+
+3.  Disable security for PXF:
+    1.  On each PXF node, edit the `/etc/gphd/pxf/conf/pxf-site.xml` to comment or remove the properties:
+
+        ``` xml
+        <!--
+        <property>
+            <name>pxf.service.kerberos.keytab</name>
+            <value>/etc/security/phd/keytabs/pxf.service.keytab</value>
+            <description>path to keytab file owned by pxf service
+            with permissions 0400</description>
+        </property>
+
+        <property>
+            <name>pxf.service.kerberos.principal</name>
+            <value>pxf/_HOST@PHD.LOCAL</value>
+            <description>Kerberos principal pxf service should use.
+            _HOST is replaced automatically with hostnames
+            FQDN</description>
+        </property>
+        -->
+        ```
+
+    2.  Restart the PXF service.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/clientaccess/g-connecting-with-psql.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/clientaccess/g-connecting-with-psql.html.md.erb b/markdown/clientaccess/g-connecting-with-psql.html.md.erb
new file mode 100644
index 0000000..0fa501c
--- /dev/null
+++ b/markdown/clientaccess/g-connecting-with-psql.html.md.erb
@@ -0,0 +1,35 @@
+---
+title: Connecting with psql
+---
+
+Depending on the default values used or the environment variables you have set, the following examples show how to access a database via `psql`:
+
+``` bash
+$ psql -d gpdatabase -h master_host -p 5432 -U gpadmin
+```
+
+``` bash
+$ psql gpdatabase
+```
+
+``` bash
+$ psql
+```
+
+If a user-defined database has not yet been created, you can access the system by connecting to the `template1` database. For example:
+
+``` bash
+$ psql template1
+```
+
+After connecting to a database, `psql` provides a prompt with the name of the database to which `psql` is currently connected, followed by the string `=>` \(or `=#` if you are the database superuser\). For example:
+
+``` sql
+gpdatabase=>
+```
+
+At the prompt, you may type in SQL commands. A SQL command must end with a `;` \(semicolon\) in order to be sent to the server and executed. For example:
+
+``` sql
+=> SELECT * FROM mytable;
+```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/clientaccess/g-database-application-interfaces.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/clientaccess/g-database-application-interfaces.html.md.erb b/markdown/clientaccess/g-database-application-interfaces.html.md.erb
new file mode 100644
index 0000000..29e22c5
--- /dev/null
+++ b/markdown/clientaccess/g-database-application-interfaces.html.md.erb
@@ -0,0 +1,96 @@
+---
+title: HAWQ Database Drivers and APIs
+---
+
+You may want to connect your existing Business Intelligence (BI) or Analytics applications with HAWQ. The database application programming interfaces most commonly used with HAWQ are the PostgreSQL `libpq` API and the ODBC and JDBC APIs.
+
+HAWQ provides the following connectivity tools for connecting to the database:
+
+  - ODBC driver
+  - JDBC driver
+  - `libpq` - PostgreSQL C API
+
+## <a id="dbdriver"></a>HAWQ Drivers
+
+ODBC and JDBC drivers for HAWQ are available as a separate download from [Pivotal Network](https://network.pivotal.io/products/pivotal-hdb).
+
+### <a id="odbc_driver"></a>ODBC Driver
+
+The ODBC API specifies a standard set of C interfaces for accessing database management systems.  For additional information on using the ODBC API, refer to the [ODBC Programmer's Reference](https://msdn.microsoft.com/en-us/library/ms714177(v=vs.85).aspx) documentation.
+
+HAWQ supports the DataDirect ODBC Driver. Installation instructions for this driver are provided on the Pivotal Network driver download page. Refer to [HAWQ ODBC Driver](http://media.datadirect.com/download/docs/odbc/allodbc/#page/odbc%2Fthe-greenplum-wire-protocol-driver.html%23) for HAWQ-specific ODBC driver information.
+
+#### <a id="odbc_driver_connurl"></a>Connection Data Source
+The information required by the HAWQ ODBC driver to connect to a database is typically stored in a named data source. Depending on your platform, you may use [GUI](http://media.datadirect.com/download/docs/odbc/allodbc/index.html#page/odbc%2FData_Source_Configuration_through_a_GUI_14.html%23) or [command line](http://media.datadirect.com/download/docs/odbc/allodbc/index.html#page/odbc%2FData_Source_Configuration_in_the_UNIX_2fLinux_odbc_13.html%23) tools to create your data source definition. On Linux, ODBC data sources are typically defined in a file named `odbc.ini`. 
+
+Commonly-specified HAWQ ODBC data source connection properties include:
+
+| Property Name                                                    | Value Description                                                                                                                                                                                         |
+|-------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Database | Name of the database to which you want to connect. |
+| Driver   | Full path to the ODBC driver library file.                                                                                           |
+| HostName              | HAWQ master host name.                                                                                     |
+| MaxLongVarcharSize      | Maximum size of columns of type long varchar.                                                                                      |
+| Password              | Password used to connect to the specified database.                                                                                       |
+| PortNumber              | HAWQ master database port number.                                                                                      |
+
+Refer to [Connection Option Descriptions](http://media.datadirect.com/download/docs/odbc/allodbc/#page/odbc%2Fgreenplum-connection-option-descriptions.html%23) for a list of ODBC connection properties supported by the HAWQ DataDirect ODBC driver.
+
+Example HAWQ DataDirect ODBC driver data source definition:
+
+``` shell
+[HAWQ-201]
+Driver=/usr/local/hawq_drivers/odbc/lib/ddgplm27.so
+Description=DataDirect 7.1 Greenplum Wire Protocol - for HAWQ
+Database=getstartdb
+HostName=hdm1
+PortNumber=5432
+Password=changeme
+MaxLongVarcharSize=8192
+```
+
+The first line, `[HAWQ-201]`, identifies the name of the data source.
+
+ODBC connection properties may also be specified in a connection string identifying either a data source name, the name of a file data source, or the name of a driver.  A HAWQ ODBC connection string has the following format:
+
+``` shell
+([DSN=<data_source_name>]|[FILEDSN=<filename.dsn>]|[DRIVER=<driver_name>])[;<attribute=<value>[;...]]
+```
+
+For additional information on specifying a HAWQ ODBC connection string, refer to [Using a Connection String](http://media.datadirect.com/download/docs/odbc/allodbc/index.html#page/odbc%2FUsing_a_Connection_String_16.html%23).
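+
+For example, a connection string that references the data source defined above might look like the following sketch; `UID` and `PWD` are common ODBC attribute aliases, and the exact attribute names supported depend on the driver:
+
+``` shell
+DSN=HAWQ-201;UID=gpadmin;PWD=changeme
+```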
+
+### <a id="jdbc_driver"></a>JDBC Driver
+The JDBC API specifies a standard set of Java interfaces to SQL-compliant databases. For additional information on using the JDBC API, refer to the [Java JDBC API](https://docs.oracle.com/javase/8/docs/technotes/guides/jdbc/) documentation.
+
+HAWQ supports the DataDirect JDBC Driver. Installation instructions for this driver are provided on the Pivotal Network driver download page. Refer to [HAWQ JDBC Driver](http://media.datadirect.com/download/docs/jdbc/alljdbc/help.html#page/jdbcconnect%2Fgreenplum-driver.html%23) for HAWQ-specific JDBC driver information.
+
+#### <a id="jdbc_driver_connurl"></a>Connection URL
+Connection URLs for accessing the HAWQ DataDirect JDBC driver must be in the following format:
+
+``` shell
+jdbc:pivotal:greenplum://host:port[;<property>=<value>[;...]]
+```
+
+Commonly-specified HAWQ JDBC connection properties include:
+
+| Property Name                                                    | Value Description                                                                                                                                                                                         |
+|-------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| DatabaseName | Name of the database to which you want to connect. |
+| User                         | Username used to connect to the specified database.                                                                                           |
+| Password              | Password used to connect to the specified database.                                                                                       |
+
+Refer to [Connection Properties](http://media.datadirect.com/download/docs/jdbc/alljdbc/help.html#page/jdbcconnect%2FConnection_Properties_10.html%23) for a list of JDBC connection properties supported by the HAWQ DataDirect JDBC driver.
+
+Example HAWQ JDBC connection string:
+
+``` shell
+jdbc:pivotal:greenplum://hdm1:5432;DatabaseName=getstartdb;User=hdbuser;Password=hdbpass
+```
+
+## <a id="libpq_api"></a>libpq API
+`libpq` is the C API to PostgreSQL/HAWQ. This API provides a set of library functions enabling client programs to pass queries to the PostgreSQL backend server and to receive the results of those queries.
+
+`libpq` is installed in the `lib/` directory of your HAWQ distribution. `libpq-fe.h`, the header file required for developing front-end PostgreSQL applications, can be found in the `include/` directory.
+
+For additional information on using the `libpq` API, refer to [libpq - C Library](https://www.postgresql.org/docs/8.2/static/libpq.html) in the PostgreSQL documentation.
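+
+As a minimal sketch (the source file name is hypothetical, the paths assume HAWQ is installed under `$GPHOME`, and the host and database names are the example values used earlier), a `libpq` client program is built against the headers and library shipped with HAWQ:
+
+``` shell
+# Compile a hypothetical libpq client (test_conn.c includes libpq-fe.h)
+# against the headers and library shipped with HAWQ.
+gcc -o test_conn test_conn.c -I$GPHOME/include -L$GPHOME/lib -lpq
+
+# Run it; libpq reads connection defaults from the standard environment variables.
+export LD_LIBRARY_PATH=$GPHOME/lib:$LD_LIBRARY_PATH
+PGHOST=hdm1 PGPORT=5432 PGDATABASE=getstartdb ./test_conn
+```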
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/clientaccess/g-establishing-a-database-session.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/clientaccess/g-establishing-a-database-session.html.md.erb b/markdown/clientaccess/g-establishing-a-database-session.html.md.erb
new file mode 100644
index 0000000..a1c5f1c
--- /dev/null
+++ b/markdown/clientaccess/g-establishing-a-database-session.html.md.erb
@@ -0,0 +1,17 @@
+---
+title: Establishing a Database Session
+---
+
+Users can connect to HAWQ using a PostgreSQL-compatible client program, such as `psql`. Users and administrators *always* connect to HAWQ through the *master*; the segments cannot accept client connections.
+
+To establish a connection to the HAWQ master, you need the following connection information; configure your client program accordingly.
+
+|Connection Parameter|Description|Environment Variable|
+|--------------------|-----------|--------------------|
+|Application name|The application name that is connecting to the database. The default value, held in the `application_name` connection parameter, is *psql*.|`$PGAPPNAME`|
+|Database name|The name of the database to which you want to connect. For a newly initialized system, use the `template1` database to connect for the first time.|`$PGDATABASE`|
+|Host name|The host name of the HAWQ master. The default host is the local host.|`$PGHOST`|
+|Port|The port number that the HAWQ master instance is running on. The default is 5432.|`$PGPORT`|
+|User name|The database user \(role\) name to connect as. This is not necessarily the same as your OS user name. Check with your HAWQ administrator if you are not sure what your database user name is. Note that every HAWQ system has one superuser account that is created automatically at initialization time. This account has the same name as the OS name of the user who initialized the HAWQ system \(typically `gpadmin`\).|`$PGUSER`|
+
+[Connecting with psql](g-connecting-with-psql.html) provides example commands for connecting to HAWQ.
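+
+For illustration, the same connection information can be supplied either as `psql` command-line options or through the environment variables listed above (the host name `hdm1` is an example only):
+
+``` shell
+# Specify each connection parameter explicitly ...
+psql -h hdm1 -p 5432 -U gpadmin -d template1
+
+# ... or export the corresponding environment variables and connect with defaults.
+export PGHOST=hdm1 PGPORT=5432 PGUSER=gpadmin PGDATABASE=template1
+psql
+```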

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/clientaccess/g-hawq-database-client-applications.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/clientaccess/g-hawq-database-client-applications.html.md.erb b/markdown/clientaccess/g-hawq-database-client-applications.html.md.erb
new file mode 100644
index 0000000..a1e8ff3
--- /dev/null
+++ b/markdown/clientaccess/g-hawq-database-client-applications.html.md.erb
@@ -0,0 +1,23 @@
+---
+title: HAWQ Client Applications
+---
+
+HAWQ comes installed with a number of client utility applications located in the `$GPHOME/bin` directory of your HAWQ master host installation. The following are the most commonly used client utility applications:
+
+|Name|Usage|
+|----|-----|
+|`createdb`|create a new database|
+|`createlang`|define a new procedural language|
+|`createuser`|define a new database role|
+|`dropdb`|remove a database|
+|`droplang`|remove a procedural language|
+|`dropuser`|remove a role|
+|`psql`|PostgreSQL interactive terminal|
+|`reindexdb`|reindex a database|
+|`vacuumdb`|garbage-collect and analyze a database|
+
+When using these client applications, you must connect to a database through the HAWQ master instance. You need to know the name of your target database, the host name and port number of the master, and the database user name to connect as. This information can be provided on the command line with the `-d`, `-h`, `-p`, and `-U` options, respectively. The first command-line argument that does not belong to any option is interpreted as the database name.
+
+All of these options have default values that are used if the option is not specified. The default host is the local host. The default port number is 5432. The default user name is your OS user name, as is the default database name. Note that OS user names and HAWQ user names are not necessarily the same.
+
+If the default values are not correct, you can set the environment variables `PGDATABASE`, `PGHOST`, `PGPORT`, and `PGUSER` to the appropriate values, or use a `~/.pgpass` file to store frequently used passwords.
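+
+For example, assuming the master runs on host `hdm1` at the default port (the host and database names here are examples only), the following commands connect through the master to create and then query a new database as the `gpadmin` role:
+
+``` shell
+# Create a new database, connecting through the HAWQ master.
+createdb -h hdm1 -p 5432 -U gpadmin mytestdb
+
+# Open an interactive psql session against the new database.
+psql -h hdm1 -p 5432 -U gpadmin -d mytestdb
+```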

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/clientaccess/g-supported-client-applications.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/clientaccess/g-supported-client-applications.html.md.erb b/markdown/clientaccess/g-supported-client-applications.html.md.erb
new file mode 100644
index 0000000..202f625
--- /dev/null
+++ b/markdown/clientaccess/g-supported-client-applications.html.md.erb
@@ -0,0 +1,8 @@
+---
+title: Supported Client Applications
+---
+
+Users can connect to HAWQ using various client applications:
+
+-   A number of [HAWQ Client Applications](g-hawq-database-client-applications.html) are provided with your HAWQ installation. The `psql` client application provides an interactive command-line interface to HAWQ.
+-   Using standard database application interfaces, such as ODBC and JDBC, users can connect their own client applications to HAWQ.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/clientaccess/g-troubleshooting-connection-problems.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/clientaccess/g-troubleshooting-connection-problems.html.md.erb b/markdown/clientaccess/g-troubleshooting-connection-problems.html.md.erb
new file mode 100644
index 0000000..0328606
--- /dev/null
+++ b/markdown/clientaccess/g-troubleshooting-connection-problems.html.md.erb
@@ -0,0 +1,13 @@
+---
+title: Troubleshooting Connection Problems
+---
+
+A number of things can prevent a client application from successfully connecting to HAWQ. This topic explains some of the common causes of connection problems and how to correct them.
+
+|Problem|Solution|
+|-------|--------|
+|No pg\_hba.conf entry for host or user|To enable HAWQ to accept remote client connections, you must configure your HAWQ master instance so that connections are allowed from the client hosts and database users that will be connecting to HAWQ. This is done by adding the appropriate entries to the pg\_hba.conf configuration file \(located in the master instance's data directory\). For more detailed information, see [Allowing Connections to HAWQ](client_auth.html).|
+|HAWQ is not running|If the HAWQ master instance is down, users will not be able to connect. You can verify that the HAWQ system is up by running the `hawq state` utility on the HAWQ master host.|
+|Network problems<br/><br/>Interconnect timeouts|If users connect to the HAWQ master host from a remote client, network problems can prevent a connection \(for example, DNS host name resolution problems or the host system is down\). To ensure that network problems are not the cause, connect to the HAWQ master host from the remote client host. For example: `ping hostname`. <br/><br/>If the system cannot resolve the host names and IP addresses of the hosts involved in HAWQ, queries and connections will fail. For some operations, connections to the HAWQ master use `localhost` and others use the actual host name, so you must be able to resolve both. If you encounter this error, first make sure you can connect to each host in your HAWQ array from the master host over the network. In the `/etc/hosts` file of the master and all segments, make sure you have the correct host names and IP addresses for all hosts involved in the HAWQ array. The `127.0.0.1` IP must resolve to `localhost`.|
+|Too many clients already|By default, HAWQ is configured to allow a maximum of 200 concurrent user connections on the master and 1280 connections on a segment. A connection attempt that causes that limit to be exceeded will be refused. This limit is controlled by the `max_connections` parameter on the master instance and by the `seg_max_connections` parameter on segment instances. If you change this setting for the master, you must also make appropriate changes at the segments.|
+|Query failure|Reverse DNS must be configured in your HAWQ cluster network. In cases where reverse DNS has not been configured, failing queries will generate "Failed to reverse DNS lookup for ip \<ip-address\>" warning messages to the HAWQ master node log file. |
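+
+The following commands cover the most common first checks described above (the host name `hdm1` is an example only):
+
+``` shell
+# On the HAWQ master host: verify that the HAWQ system is up.
+hawq state
+
+# On the client host: confirm that the master host name resolves and is reachable.
+ping -c 3 hdm1
+
+# Confirm that a basic connection to the master succeeds.
+psql -h hdm1 -p 5432 -d template1 -c 'SELECT version();'
+```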

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/clientaccess/index.md.erb
----------------------------------------------------------------------
diff --git a/markdown/clientaccess/index.md.erb b/markdown/clientaccess/index.md.erb
new file mode 100644
index 0000000..c88adeb
--- /dev/null
+++ b/markdown/clientaccess/index.md.erb
@@ -0,0 +1,17 @@
+---
+title: Managing Client Access
+---
+
+This section explains how to configure client connections and authentication for HAWQ:
+
+*  <a class="subnav" href="./client_auth.html">Configuring Client Authentication</a>
+*  <a class="subnav" href="./ldap.html">Using LDAP Authentication with TLS/SSL</a>
+*  <a class="subnav" href="./kerberos.html">Using Kerberos Authentication</a>
+*  <a class="subnav" href="./disable-kerberos.html">Disabling Kerberos Security</a>
+*  <a class="subnav" href="./roles_privs.html">Managing Roles and Privileges</a>
+*  <a class="subnav" href="./g-establishing-a-database-session.html">Establishing a Database Session</a>
+*  <a class="subnav" href="./g-supported-client-applications.html">Supported Client Applications</a>
+*  <a class="subnav" href="./g-hawq-database-client-applications.html">HAWQ Client Applications</a>
+*  <a class="subnav" href="./g-connecting-with-psql.html">Connecting with psql</a>
+*  <a class="subnav" href="./g-database-application-interfaces.html">Database Application Interfaces</a>
+*  <a class="subnav" href="./g-troubleshooting-connection-problems.html">Troubleshooting Connection Problems</a>



[08/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/svg/hawq_resource_management.svg
----------------------------------------------------------------------
diff --git a/mdimages/svg/hawq_resource_management.svg b/mdimages/svg/hawq_resource_management.svg
deleted file mode 100644
index 064a3ef..0000000
--- a/mdimages/svg/hawq_resource_management.svg
+++ /dev/null
@@ -1,621 +0,0 @@
[621 lines of deleted SVG markup (Inkscape path data for mdimages/svg/hawq_resource_management.svg) omitted.]
 6875 0.35937,0.79687 0.23438,0.32813 0.57813,0.48438 0.34375,0.15625 0.78125,0.15625 0.42187,0 0.73437,-0.125 0.3125,-0.14063 0.54688,-0.29688 0.23437,-0.15625 0.39062,-0.28125 0.15625,-0.14062 0.23438,-0.14062 0.0625,0 0.0937,0.0312 0.0312,0.0312 0.0625,0.10937 0.0312,0.0625 0.0312,0.17188 0.0156,0.10937 0.0156,0.25 z m 7.08805,-2.57813 q 0,0.28125 -0.15625,0.40625 -0.14063,0.125 -0.3125,0.125 l -4.32813,0 q 0,0.54688 0.10938,0.98438 0.10937,0.4375 0.35937,0.76562 0.26563,0.3125 0.67188,0.48438 0.42187,0.15625 1.01562,0.15625 0.46875,0 0.82813,-0.0781 0.35937,-0.0781 0.625,-0.17187 0.28125,-0.0937 0.45312,-0.17188 0.17188,-0.0781 0.25,-0.0781 0.0625,0 0.0937,0.0312 0.0469,0.0156 0.0625,0.0781 0.0312,0.0469 0.0312,0.14063 0.0156,0.0937 0.0156,0.21875 0,0.0937 -0.0156,0.17187 0,0.0625 -0.0156,0.125 0,0.0469 -0.0312,0.0937 -0.0312,0.0469 -0.0781,0.0937 -0.0312,0.0312 -0.23437,0.125 -0.1875,0.0937 -0.5,0.1875 -0.3125,0.0781 -0.73438,0.14063 -0.40625,0.0781 -0.875,0.0781 -0.8125,0 -1.43
 75,-0.21875 -0.60937,-0.23437 -1.03125,-0.67187 -0.40625,-0.45313 -0.625,-1.125 -0.20312,-0.6875 -0.20312,-1.57813 0,-0.84374 0.21875,-1.51562 0.21875,-0.6875 0.625,-1.15625 0.42187,-0.46875 1,-0.71875 0.59375,-0.26562 1.3125,-0.26562 0.78125,0 1.32812,0.25 0.54688,0.25 0.89063,0.67187 0.35937,0.42188 0.51562,1 0.17188,0.5625 0.17188,1.20313 l 0,0.21874 z m -1.21875,-0.35937 q 0.0156,-0.95312 -0.42188,-1.48437 -0.4375,-0.54688 -1.3125,-0.54688 -0.45312,0 -0.79687,0.17188 -0.32813,0.15625 -0.5625,0.4375 -0.21875,0.28125 -0.34375,0.65625 -0.125,0.35937 -0.14063,0.76562 l 3.57813,0 z"
-       id="path23"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:nonzero" />
-    <path
-       d="m 39.249344,161.14174 148.125976,0 0,101.38582 -148.125976,0 z"
-       id="path25"
-       inkscape:connector-curvature="0"
-       style="fill:#95f3ef;fill-rule:nonzero" />
-    <path
-       d="m 39.249344,161.14174 148.125976,0 0,101.38582 -148.125976,0 z"
-       id="path27"
-       inkscape:connector-curvature="0"
-       style="fill-rule:nonzero;stroke:#34ebe4;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 44.55643,164.66554 148.12599,0 0,104.03148 -148.12599,0 z"
-       id="path29"
-       inkscape:connector-curvature="0"
-       style="fill:#95f3ef;fill-rule:nonzero" />
-    <path
-       d="m 44.55643,164.66554 148.12599,0 0,104.03148 -148.12599,0 z"
-       id="path31"
-       inkscape:connector-curvature="0"
-       style="fill-rule:nonzero;stroke:#34ebe4;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 49.863518,167.77983 148.125982,0 0,106.3307 -148.125982,0 z"
-       id="path33"
-       inkscape:connector-curvature="0"
-       style="fill:#95f3ef;fill-rule:nonzero" />
-    <path
-       d="m 49.863518,167.77983 148.125982,0 0,106.3307 -148.125982,0 z"
-       id="path35"
-       inkscape:connector-curvature="0"
-       style="fill-rule:nonzero;stroke:#34ebe4;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 39.249344,292.16254 148.125976,0 0,94.67715 -148.125976,0 z"
-       id="path37"
-       inkscape:connector-curvature="0"
-       style="fill:#c7ed6f;fill-opacity:0.70619998;fill-rule:nonzero" />
-    <path
-       d="m 39.249344,292.16254 148.125976,0 0,94.67715 -148.125976,0 z"
-       id="path39"
-       inkscape:connector-curvature="0"
-       style="fill-rule:nonzero;stroke:#adce60;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 44.55643,298.35687 148.12599,0 0,94.67716 -148.12599,0 z"
-       id="path41"
-       inkscape:connector-curvature="0"
-       style="fill:#c7ed6f;fill-opacity:0.70619998;fill-rule:nonzero" />
-    <path
-       d="m 44.55643,298.35687 148.12599,0 0,94.67716 -148.12599,0 z"
-       id="path43"
-       inkscape:connector-curvature="0"
-       style="fill-rule:nonzero;stroke:#adce60;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 49.863518,303.83142 148.125982,0 0,94.67715 -148.125982,0 z"
-       id="path45"
-       inkscape:connector-curvature="0"
-       style="fill:#c7ed6f;fill-opacity:0.70619998;fill-rule:nonzero" />
-    <path
-       d="m 49.863518,303.83142 148.125982,0 0,94.67715 -148.125982,0 z"
-       id="path47"
-       inkscape:connector-curvature="0"
-       style="fill-rule:nonzero;stroke:#adce60;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 96.85993,355.73438 q 0,0.14062 -0.04687,0.25 -0.04687,0.0937 -0.125,0.17187 -0.07813,0.0625 -0.171875,0.0937 -0.09375,0.0156 -0.1875,0.0156 l -0.40625,0 q -0.1875,0 -0.328125,-0.0312 -0.140625,-0.0469 -0.28125,-0.14063 -0.125,-0.10937 -0.25,-0.29687 -0.125,-0.1875 -0.28125,-0.46875 l -2.98438,-5.3906 q -0.234375,-0.42187 -0.484375,-0.875 -0.234375,-0.45312 -0.4375,-0.89062 l -0.01563,0 q 0.01563,0.53125 0.01563,1.07812 0.01563,0.54688 0.01563,1.09375 l 0,5.71875 q 0,0.0469 -0.03125,0.0937 -0.03125,0.0469 -0.109375,0.0781 -0.0625,0.0156 -0.171875,0.0312 -0.109375,0.0312 -0.28125,0.0312 -0.1875,0 -0.296875,-0.0312 -0.109375,-0.0156 -0.1875,-0.0312 -0.0625,-0.0312 -0.09375,-0.0781 -0.01563,-0.0469 -0.01563,-0.0937 l 0,-8.75 q 0,-0.29687 0.15625,-0.42187 0.15625,-0.125 0.34375,-0.125 l 0.609375,0 q 0.203125,0 0.34375,0.0469 0.15625,0.0312 0.265625,0.125 0.109375,0.0781 0.21875,0.23438 0.109375,0.14062 0.234375,0.375 l 2.296875,4.15625 q 0.21875,0.375 0.40625,0.75 0.203125,0.
 35937 0.375,0.71875 0.1875,0.34375 0.359375,0.6875 0.1875,0.32812 0.375,0.67187 l 0,0 q -0.01563,-0.57812 -0.01563,-1.20312 0,-0.625 0,-1.20313 l 0,-5.14062 q 0,-0.0469 0.01563,-0.0937 0.03125,-0.0469 0.09375,-0.0781 0.07813,-0.0312 0.1875,-0.0469 0.125,-0.0156 0.3125,-0.0156 0.15625,0 0.265625,0.0156 0.125,0.0156 0.1875,0.0469 0.0625,0.0312 0.09375,0.0781 0.03125,0.0469 0.03125,0.0937 l 0,8.75 z m 7.1326,0.34375 q 0,0.0781 -0.0625,0.125 -0.0625,0.0469 -0.17188,0.0625 -0.0937,0.0312 -0.29687,0.0312 -0.1875,0 -0.29688,-0.0312 -0.10937,-0.0156 -0.17187,-0.0625 -0.0469,-0.0469 -0.0469,-0.125 l 0,-0.65625 q -0.4375,0.45312 -0.96875,0.71875 -0.53125,0.25 -1.125,0.25 -0.51562,0 -0.937496,-0.14063 -0.421875,-0.125 -0.71875,-0.375 -0.296875,-0.26562 -0.46875,-0.64062 -0.15625,-0.375 -0.15625,-0.85938 0,-0.54687 0.21875,-0.95312 0.234375,-0.42188 0.65625,-0.6875 0.4375,-0.26563 1.046876,-0.40625 0.60937,-0.14063 1.39062,-0.14063 l 0.90625,0 0,-0.51562 q 0,-0.375 -0.0937,-0.65625 -0.0781,-0.2
 9688 -0.26562,-0.48438 -0.17188,-0.20312 -0.45313,-0.29687 -0.28125,-0.10938 -0.70312,-0.10938 -0.4375,0 -0.79688,0.10938 -0.35937,0.10937 -0.624996,0.23437 -0.265625,0.125 -0.453125,0.23438 -0.171875,0.10937 -0.265625,0.10937 -0.0625,0 -0.109375,-0.0312 -0.03125,-0.0312 -0.0625,-0.0937 -0.03125,-0.0625 -0.04687,-0.14062 -0.01563,-0.0937 -0.01563,-0.20313 0,-0.1875 0.01563,-0.29687 0.03125,-0.10938 0.125,-0.20313 0.109375,-0.0937 0.34375,-0.21875 0.25,-0.125 0.5625,-0.23437 0.312501,-0.10938 0.687501,-0.17188 0.375,-0.0781 0.75,-0.0781 0.71875,0 1.20313,0.17187 0.5,0.15625 0.8125,0.46875 0.3125,0.3125 0.45312,0.78125 0.14063,0.45313 0.14063,1.0625 l 0,4.45313 z m -1.20313,-3.01563 -1.03125,0 q -0.5,0 -0.875,0.0937 -0.35937,0.0781 -0.60937,0.25 -0.23438,0.15625 -0.359376,0.39063 -0.109375,0.21875 -0.109375,0.53125 0,0.51562 0.328121,0.8125 0.32813,0.29687 0.92188,0.29687 0.46875,0 0.875,-0.23437 0.40625,-0.25 0.85937,-0.73438 l 0,-1.40625 z m 13.03603,3 q 0,0.0625 -0.0312,0.10938 -0.
 0312,0.0312 -0.10937,0.0625 -0.0625,0.0312 -0.1875,0.0469 -0.10938,0.0156 -0.28125,0.0156 -0.1875,0 -0.3125,-0.0156 -0.10938,-0.0156 -0.1875,-0.0469 -0.0625,-0.0312 -0.0937,-0.0625 -0.0156,-0.0469 -0.0156,-0.10938 l 0,-4 q 0,-0.42187 -0.0781,-0.76562 -0.0781,-0.34375 -0.23438,-0.59375 -0.15625,-0.25 -0.40625,-0.375 -0.25,-0.14063 -0.59375,-0.14063 -0.40625,0 -0.82812,0.32813 -0.42188,0.32812 -0.9375,0.9375 l 0,4.60937 q 0,0.0625 -0.0312,0.10938 -0.0156,0.0312 -0.0937,0.0625 -0.0625,0.0312 -0.1875,0.0469 -0.10938,0.0156 -0.29688,0.0156 -0.15625,0 -0.28125,-0.0156 -0.125,-0.0156 -0.20312,-0.0469 -0.0625,-0.0312 -0.0937,-0.0625 -0.0156,-0.0469 -0.0156,-0.10938 l 0,-4 q 0,-0.42187 -0.0781,-0.76562 -0.0781,-0.34375 -0.25,-0.59375 -0.15625,-0.25 -0.40625,-0.375 -0.23438,-0.14063 -0.57813,-0.14063 -0.42187,0 -0.84375,0.32813 -0.42187,0.32812 -0.92187,0.9375 l 0,4.60937 q 0,0.0625 -0.0312,0.10938 -0.0312,0.0312 -0.0937,0.0625 -0.0625,0.0312 -0.1875,0.0469 -0.10938,0.0156 -0.29688,0.0156 -0.
 17187,0 -0.29687,-0.0156 -0.10938,-0.0156 -0.1875,-0.0469 -0.0625,-0.0312 -0.0937,-0.0625 -0.0156,-0.0469 -0.0156,-0.10938 l 0,-6.59375 q 0,-0.0469 0.0156,-0.0937 0.0312,-0.0469 0.0937,-0.0781 0.0625,-0.0312 0.15625,-0.0312 0.10937,-0.0156 0.28125,-0.0156 0.15625,0 0.26562,0.0156 0.10938,0 0.15625,0.0312 0.0625,0.0312 0.0937,0.0781 0.0312,0.0469 0.0312,0.0937 l 0,0.875 q 0.54688,-0.625 1.0625,-0.90625 0.53125,-0.29687 1.0625,-0.29687 0.42188,0 0.73438,0.0937 0.32812,0.0937 0.57812,0.28125 0.25,0.17187 0.42188,0.40625 0.1875,0.23437 0.29687,0.53125 0.32813,-0.35938 0.625,-0.60938 0.29688,-0.25 0.57813,-0.40625 0.28125,-0.15625 0.53125,-0.21875 0.26562,-0.0781 0.53125,-0.0781 0.625,0 1.0625,0.23437 0.4375,0.21875 0.70312,0.59375 0.26563,0.375 0.375,0.875 0.125,0.5 0.125,1.0625 l 0,4.15625 z m 7.55157,-3.57812 q 0,0.28125 -0.15625,0.40625 -0.14062,0.125 -0.3125,0.125 l -4.32812,0 q 0,0.54687 0.10937,0.98437 0.10938,0.4375 0.35938,0.76563 0.26562,0.3125 0.67187,0.48437 0.42188,0.15625 1
 .01563,0.15625 0.46875,0 0.82812,-0.0781 0.35938,-0.0781 0.625,-0.17188 0.28125,-0.0937 0.45313,-0.17187 0.17187,-0.0781 0.25,-0.0781 0.0625,0 0.0937,0.0312 0.0469,0.0156 0.0625,0.0781 0.0312,0.0469 0.0312,0.14062 0.0156,0.0937 0.0156,0.21875 0,0.0937 -0.0156,0.17188 0,0.0625 -0.0156,0.125 0,0.0469 -0.0312,0.0937 -0.0312,0.0469 -0.0781,0.0937 -0.0312,0.0312 -0.23438,0.125 -0.1875,0.0937 -0.5,0.1875 -0.3125,0.0781 -0.73437,0.14062 -0.40625,0.0781 -0.875,0.0781 -0.8125,0 -1.4375,-0.21875 -0.60938,-0.23438 -1.03125,-0.67188 -0.40625,-0.45312 -0.625,-1.125 -0.20313,-0.6875 -0.20313,-1.57812 0,-0.84375 0.21875,-1.51563 0.21875,-0.6875 0.625,-1.15625 0.42188,-0.46875 1,-0.71875 0.59375,-0.26562 1.3125,-0.26562 0.78125,0 1.32813,0.25 0.54687,0.25 0.89062,0.67187 0.35938,0.42188 0.51563,1 0.17187,0.5625 0.17187,1.20313 l 0,0.21875 z m -1.21875,-0.35938 q 0.0156,-0.95312 -0.42187,-1.48437 -0.4375,-0.54688 -1.3125,-0.54688 -0.45313,0 -0.79688,0.17188 -0.32812,0.15625 -0.5625,0.4375 -0.21875,0
 .28125 -0.34375,0.65625 -0.125,0.35937 -0.14062,0.76562 l 3.57812,0 z m 13.49637,3.60938 q 0,0.14062 -0.0469,0.25 -0.0469,0.0937 -0.125,0.17187 -0.0781,0.0625 -0.17187,0.0937 -0.0937,0.0156 -0.1875,0.0156 l -0.40625,0 q -0.1875,0 -0.32813,-0.0312 -0.14062,-0.0469 -0.28125,-0.14063 -0.125,-0.10937 -0.25,-0.29687 -0.125,-0.1875 -0.28125,-0.46875 l -2.98437,-5.39063 q -0.23438,-0.42187 -0.48438,-0.875 -0.23437,-0.45312 -0.4375,-0.89062 l -0.0156,0 q 0.0156,0.53125 0.0156,1.07812 0.0156,0.54688 0.0156,1.09375 l 0,5.71875 q 0,0.0469 -0.0312,0.0937 -0.0312,0.0469 -0.10938,0.0781 -0.0625,0.0156 -0.17187,0.0312 -0.10938,0.0312 -0.28125,0.0312 -0.1875,0 -0.29688,-0.0312 -0.10937,-0.0156 -0.1875,-0.0312 -0.0625,-0.0312 -0.0937,-0.0781 -0.0156,-0.0469 -0.0156,-0.0937 l 0,-8.75 q 0,-0.29687 0.15625,-0.42187 0.15625,-0.125 0.34375,-0.125 l 0.60937,0 q 0.20313,0 0.34375,0.0469 0.15625,0.0312 0.26563,0.125 0.10937,0.0781 0.21875,0.23438 0.10937,0.14062 0.23437,0.375 l 2.29688,4.15625 q 0.21875,0.3
 75 0.40625,0.75 0.20312,0.35937 0.375,0.71875 0.1875,0.34375 0.35937,0.6875 0.1875,0.32812 0.375,0.67187 l 0,0 q -0.0156,-0.57812 -0.0156,-1.20312 0,-0.625 0,-1.20313 l 0,-5.14062 q 0,-0.0469 0.0156,-0.0937 0.0312,-0.0469 0.0937,-0.0781 0.0781,-0.0312 0.1875,-0.0469 0.125,-0.0156 0.3125,-0.0156 0.15625,0 0.26563,0.0156 0.125,0.0156 0.1875,0.0469 0.0625,0.0312 0.0937,0.0781 0.0312,0.0469 0.0312,0.0937 l 0,8.75 z m 8.28884,-3.03125 q 0,0.79687 -0.21875,1.48437 -0.20312,0.67188 -0.625,1.17188 -0.42187,0.48437 -1.0625,0.76562 -0.625,0.26563 -1.46875,0.26563 -0.8125,0 -1.42187,-0.23438 -0.59375,-0.25 -1,-0.70312 -0.39063,-0.46875 -0.59375,-1.125 -0.20313,-0.65625 -0.20313,-1.5 0,-0.79688 0.20313,-1.46875 0.21875,-0.6875 0.64062,-1.17188 0.42188,-0.5 1.04688,-0.76562 0.625,-0.28125 1.46875,-0.28125 0.8125,0 1.42187,0.25 0.60938,0.23437 1,0.70312 0.40625,0.45313 0.60938,1.125 0.20312,0.65625 0.20312,1.48438 z m -1.26562,0.0781 q 0,-0.53125 -0.10938,-1 -0.0937,-0.48437 -0.32812,-0.84375 -0.
 21875,-0.35937 -0.60938,-0.5625 -0.39062,-0.21875 -0.96875,-0.21875 -0.53125,0 -0.92187,0.1875 -0.375,0.1875 -0.625,0.54688 -0.25,0.34375 -0.375,0.82812 -0.125,0.46875 -0.125,1.03125 0,0.54688 0.0937,1.03125 0.10937,0.46875 0.32812,0.82813 0.23438,0.34375 0.625,0.5625 0.39063,0.20312 0.96875,0.20312 0.53125,0 0.92188,-0.1875 0.39062,-0.20312 0.64062,-0.54687 0.25,-0.34375 0.35938,-0.8125 0.125,-0.48438 0.125,-1.04688 z m 8.51013,3.28125 q 0,0.0625 -0.0312,0.10938 -0.0156,0.0469 -0.0781,0.0781 -0.0625,0.0156 -0.17188,0.0312 -0.0937,0.0156 -0.25,0.0156 -0.14062,0 -0.25,-0.0156 -0.10937,-0.0156 -0.17187,-0.0312 -0.0625,-0.0312 -0.0937,-0.0781 -0.0312,-0.0469 -0.0312,-0.10938 l 0,-0.875 q -0.51563,0.57813 -1.07813,0.89063 -0.5625,0.3125 -1.21875,0.3125 -0.73437,0 -1.25,-0.28125 -0.5,-0.28125 -0.82812,-0.76563 -0.3125,-0.48437 -0.46875,-1.125 -0.14063,-0.65625 -0.14063,-1.375 0,-0.84375 0.17188,-1.53125 0.1875,-0.6875 0.54687,-1.15625 0.35938,-0.48437 0.89063,-0.75 0.53125,-0.26562 1.234
 37,-0.26562 0.57813,0 1.04688,0.26562 0.48437,0.25 0.95312,0.73438 l 0,-3.82813 q 0,-0.0469 0.0312,-0.0937 0.0312,-0.0469 0.0937,-0.0781 0.0781,-0.0312 0.1875,-0.0469 0.125,-0.0156 0.29688,-0.0156 0.17187,0 0.28125,0.0156 0.125,0.0156 0.1875,0.0469 0.0781,0.0312 0.10937,0.0781 0.0312,0.0469 0.0312,0.0937 l 0,9.75 z m -1.21875,-4.625 q -0.48437,-0.60937 -0.95312,-0.92187 -0.45313,-0.32813 -0.95313,-0.32813 -0.45312,0 -0.78125,0.21875 -0.3125,0.21875 -0.51562,0.57813 -0.20313,0.35937 -0.29688,0.8125 -0.0937,0.45312 -0.0937,0.92187 0,0.5 0.0781,0.98438 0.0781,0.46875 0.26562,0.84375 0.1875,0.35937 0.5,0.59375 0.32813,0.21875 0.79688,0.21875 0.25,0 0.46875,-0.0625 0.21875,-0.0781 0.45312,-0.21875 0.23438,-0.15625 0.48438,-0.40625 0.26562,-0.25 0.54687,-0.60938 l 0,-2.625 z m 8.90338,1.04688 q 0,0.28125 -0.15625,0.40625 -0.14062,0.125 -0.3125,0.125 l -4.32812,0 q 0,0.54687 0.10937,0.98437 0.10938,0.4375 0.35938,0.76563 0.26562,0.3125 0.67187,0.48437 0.42188,0.15625 1.01563,0.15625 0.4687
 5,0 0.82812,-0.0781 0.35938,-0.0781 0.625,-0.17188 0.28125,-0.0937 0.45313,-0.17187 0.17187,-0.0781 0.25,-0.0781 0.0625,0 0.0937,0.0312 0.0469,0.0156 0.0625,0.0781 0.0312,0.0469 0.0312,0.14062 0.0156,0.0937 0.0156,0.21875 0,0.0937 -0.0156,0.17188 0,0.0625 -0.0156,0.125 0,0.0469 -0.0312,0.0937 -0.0312,0.0469 -0.0781,0.0937 -0.0312,0.0312 -0.23438,0.125 -0.1875,0.0937 -0.5,0.1875 -0.3125,0.0781 -0.73437,0.14062 -0.40625,0.0781 -0.875,0.0781 -0.8125,0 -1.4375,-0.21875 -0.60938,-0.23438 -1.03125,-0.67188 -0.40625,-0.45312 -0.625,-1.125 -0.20313,-0.6875 -0.20313,-1.57812 0,-0.84375 0.21875,-1.51563 0.21875,-0.6875 0.625,-1.15625 0.42188,-0.46875 1,-0.71875 0.59375,-0.26562 1.3125,-0.26562 0.78125,0 1.32813,0.25 0.54687,0.25 0.89062,0.67187 0.35938,0.42188 0.51563,1 0.17187,0.5625 0.17187,1.20313 l 0,0.21875 z m -1.21875,-0.35938 q 0.0156,-0.95312 -0.42187,-1.48437 -0.4375,-0.54688 -1.3125,-0.54688 -0.45313,0 -0.79688,0.17188 -0.32812,0.15625 -0.5625,0.4375 -0.21875,0.28125 -0.34375,0.656
 25 -0.125,0.35937 -0.14062,0.76562 l 3.57812,0 z"
-       id="path49"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:nonzero" />
-    <path
-       d="m 56.750656,216.88708 133.259844,0 0,45.63779 -133.259844,0 z"
-       id="path51"
-       inkscape:connector-curvature="0"
-       style="fill:#95f3ef;fill-rule:nonzero" />
-    <path
-       d="m 56.750656,216.88708 133.259844,0 0,45.63779 -133.259844,0 z"
-       id="path53"
-       inkscape:connector-curvature="0"
-       style="fill-rule:nonzero;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:round;stroke-dasharray:1, 3" />
-    <path
-       d="m 114.59029,230.30534 q 0,0.0469 -0.0312,0.0781 -0.0156,0.0312 -0.0781,0.0469 -0.0469,0.0156 -0.14062,0.0312 -0.0781,0.0156 -0.21875,0.0156 -0.14063,0 -0.23438,-0.0156 -0.0781,-0.0156 -0.14062,-0.0312 -0.0469,-0.0156 -0.0625,-0.0469 -0.0156,-0.0312 -0.0156,-0.0781 l 0,-3.07813 -3.17187,0 0,3.07813 q 0,0.0469 -0.0156,0.0781 -0.0156,0.0312 -0.0781,0.0469 -0.0469,0.0156 -0.14063,0.0312 -0.0937,0.0156 -0.21875,0.0156 -0.14062,0 -0.23437,-0.0156 -0.0781,-0.0156 -0.14063,-0.0312 -0.0469,-0.0156 -0.0781,-0.0469 -0.0156,-0.0312 -0.0156,-0.0781 l 0,-6.67188 q 0,-0.0469 0.0156,-0.0781 0.0312,-0.0312 0.0781,-0.0469 0.0625,-0.0156 0.14063,-0.0312 0.0937,-0.0156 0.23437,-0.0156 0.125,0 0.21875,0.0156 0.0937,0.0156 0.14063,0.0312 0.0625,0.0156 0.0781,0.0469 0.0156,0.0312 0.0156,0.0781 l 0,2.78125 3.17187,0 0,-2.78125 q 0,-0.0469 0.0156,-0.0781 0.0156,-0.0312 0.0625,-0.0469 0.0625,-0.0156 0.14062,-0.0312 0.0937,-0.0156 0.23438,-0.0156 0.14062,0 0.21875,0.0156 0.0937,0.0156 0.14062,0.0312
  0.0625,0.0156 0.0781,0.0469 0.0312,0.0312 0.0312,0.0781 l 0,6.67188 z m 6.82684,-0.1875 q 0.0469,0.125 0.0469,0.20312 0,0.0625 -0.0469,0.10938 -0.0312,0.0312 -0.14062,0.0312 -0.0937,0.0156 -0.26563,0.0156 -0.15625,0 -0.26562,-0.0156 -0.0937,0 -0.14063,-0.0156 -0.0469,-0.0156 -0.0781,-0.0469 -0.0312,-0.0469 -0.0469,-0.0937 l -0.59375,-1.6875 -2.89062,0 -0.5625,1.67187 q -0.0156,0.0469 -0.0469,0.0937 -0.0312,0.0312 -0.0781,0.0625 -0.0469,0.0156 -0.14063,0.0156 -0.0937,0.0156 -0.25,0.0156 -0.15625,0 -0.26562,-0.0156 -0.0937,-0.0156 -0.14063,-0.0469 -0.0312,-0.0312 -0.0312,-0.10937 0,-0.0781 0.0469,-0.1875 l 2.32812,-6.46875 q 0.0312,-0.0469 0.0625,-0.0781 0.0312,-0.0469 0.0937,-0.0625 0.0781,-0.0312 0.17188,-0.0312 0.10937,-0.0156 0.26562,-0.0156 0.17188,0 0.28125,0.0156 0.125,0 0.1875,0.0312 0.0781,0.0156 0.10938,0.0625 0.0469,0.0312 0.0625,0.0937 l 2.32812,6.45313 z m -2.98437,-5.70313 -0.0156,0 -1.1875,3.46875 2.40625,0 -1.20312,-3.46875 z m 10.60335,5.82813 q -0.0156,0.0781 -0.062
 5,0.125 -0.0469,0.0469 -0.125,0.0781 -0.0625,0.0156 -0.1875,0.0156 -0.10938,0.0156 -0.26563,0.0156 -0.17187,0 -0.28125,-0.0156 -0.10937,0 -0.1875,-0.0156 -0.0781,-0.0312 -0.125,-0.0781 -0.0312,-0.0469 -0.0469,-0.125 l -1.45313,-5.25 -0.0156,0 -1.34375,5.25 q -0.0156,0.0781 -0.0625,0.125 -0.0312,0.0469 -0.10938,0.0781 -0.0625,0.0156 -0.17187,0.0156 -0.10938,0.0156 -0.28125,0.0156 -0.17188,0 -0.29688,-0.0156 -0.10937,0 -0.1875,-0.0156 -0.0781,-0.0312 -0.125,-0.0781 -0.0312,-0.0469 -0.0469,-0.125 l -1.84375,-6.42188 q -0.0312,-0.125 -0.0312,-0.1875 0,-0.0781 0.0469,-0.10937 0.0469,-0.0469 0.14063,-0.0469 0.10937,-0.0156 0.28125,-0.0156 0.17187,0 0.26562,0.0156 0.0937,0 0.14063,0.0312 0.0469,0.0156 0.0625,0.0625 0.0312,0.0312 0.0469,0.0781 l 1.5625,5.82812 0,0 1.48438,-5.8125 q 0.0156,-0.0625 0.0312,-0.0937 0.0312,-0.0469 0.0781,-0.0625 0.0625,-0.0312 0.15625,-0.0312 0.10938,-0.0156 0.26563,-0.0156 0.14062,0 0.23437,0.0156 0.0937,0 0.14063,0.0312 0.0625,0.0156 0.0781,0.0625 0.0312,0.031
 2 0.0469,0.0937 l 1.59375,5.8125 0.0156,0 1.53125,-5.8125 q 0.0156,-0.0625 0.0312,-0.0937 0.0156,-0.0469 0.0625,-0.0625 0.0469,-0.0312 0.14063,-0.0312 0.0937,-0.0156 0.25,-0.0156 0.15625,0 0.25,0.0156 0.10937,0.0156 0.14062,0.0625 0.0469,0.0312 0.0469,0.10938 0,0.0625 -0.0312,0.17187 l -1.84375,6.42188 z m 9.49594,0.8125 q 0,0.125 -0.0156,0.20312 0,0.0937 -0.0312,0.14063 -0.0312,0.0469 -0.0625,0.0625 -0.0312,0.0156 -0.0625,0.0156 -0.10937,0 -0.34375,-0.0937 -0.23437,-0.0937 -0.54687,-0.26562 -0.3125,-0.15625 -0.67188,-0.40625 -0.35937,-0.23438 -0.6875,-0.5625 -0.26562,0.15625 -0.67187,0.28125 -0.39063,0.125 -0.92188,0.125 -0.79687,0 -1.375,-0.23438 -0.5625,-0.23437 -0.9375,-0.67187 -0.375,-0.45313 -0.5625,-1.10938 -0.17187,-0.67187 -0.17187,-1.53125 0,-0.82812 0.1875,-1.5 0.20312,-0.67187 0.59375,-1.14062 0.40625,-0.46875 1,-0.71875 0.60937,-0.25 1.40625,-0.25 0.73437,0 1.29687,0.23437 0.57813,0.21875 0.95313,0.67188 0.39062,0.4375 0.57812,1.09375 0.20313,0.64062 0.20313,1.48437 0,0
 .4375 -0.0625,0.84375 -0.0469,0.39063 -0.15625,0.75 -0.10938,0.34375 -0.28125,0.65625 -0.15625,0.29688 -0.39063,0.53125 0.39063,0.32813 0.6875,0.51563 0.29688,0.17187 0.48438,0.26562 0.20312,0.0937 0.3125,0.125 0.10937,0.0469 0.15625,0.0937 0.0625,0.0469 0.0781,0.14063 0.0156,0.0937 0.0156,0.25 z m -1.8125,-4.10938 q 0,-0.59375 -0.10938,-1.09375 -0.10937,-0.5 -0.35937,-0.875 -0.23438,-0.375 -0.64063,-0.57812 -0.40625,-0.21875 -1.01562,-0.21875 -0.59375,0 -1.01563,0.23437 -0.40625,0.21875 -0.65625,0.59375 -0.25,0.375 -0.35937,0.875 -0.10938,0.5 -0.10938,1.0625 0,0.60938 0.0937,1.125 0.10938,0.51563 0.34375,0.89063 0.25,0.375 0.65625,0.57812 0.40625,0.20313 1.01563,0.20313 0.60937,0 1.01562,-0.21875 0.40625,-0.23438 0.65625,-0.60938 0.26563,-0.39062 0.375,-0.89062 0.10938,-0.51563 0.10938,-1.07813 z"
-       id="path55"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:nonzero" />
-    <path
-       d="m 88.26409,243.30534 q 0,0.0469 -0.01563,0.0781 -0.01563,0.0312 -0.07813,0.0625 -0.04687,0.0156 -0.15625,0.0156 -0.09375,0.0156 -0.25,0.0156 -0.140625,0 -0.234375,-0.0156 -0.07813,0 -0.140625,-0.0156 -0.04687,-0.0312 -0.07813,-0.0781 -0.03125,-0.0469 -0.04687,-0.10938 l -0.640625,-1.64062 q -0.109375,-0.28125 -0.234375,-0.51563 -0.125,-0.23437 -0.296875,-0.39062 -0.15625,-0.17188 -0.390625,-0.26563 -0.21875,-0.0937 -0.53125,-0.0937 l -0.625,0 0,2.95313 q 0,0.0469 -0.03125,0.0781 -0.01563,0.0312 -0.0625,0.0469 -0.04687,0.0156 -0.140625,0.0312 -0.09375,0.0156 -0.21875,0.0156 -0.140625,0 -0.234375,-0.0156 -0.07813,-0.0156 -0.140625,-0.0312 -0.04687,-0.0156 -0.07813,-0.0469 -0.01563,-0.0312 -0.01563,-0.0781 l 0,-6.4375 q 0,-0.20313 0.109375,-0.28125 0.109375,-0.0937 0.234375,-0.0937 l 1.484375,0 q 0.265625,0 0.4375,0.0156 0.171875,0.0156 0.3125,0.0312 0.40625,0.0625 0.703125,0.21875 0.3125,0.15625 0.515625,0.39063 0.21875,0.21875 0.3125,0.51562 0.109375,0.29688 0.109375,0.6562
 5 0,0.35938 -0.09375,0.64063 -0.09375,0.26562 -0.265625,0.48437 -0.171875,0.20313 -0.421875,0.35938 -0.25,0.15625 -0.5625,0.26562 0.171875,0.0781 0.3125,0.20313 0.140625,0.10937 0.265625,0.26562 0.125,0.15625 0.21875,0.375 0.109375,0.20313 0.21875,0.46875 l 0.625,1.53125 q 0.07813,0.1875 0.09375,0.26563 0.03125,0.0781 0.03125,0.125 z m -1.390625,-4.875 q 0,-0.42188 -0.1875,-0.70313 -0.1875,-0.28125 -0.609375,-0.40625 -0.140625,-0.0312 -0.3125,-0.0469 -0.15625,-0.0156 -0.4375,-0.0156 l -0.78125,0 0,2.34375 0.90625,0 q 0.359375,0 0.625,-0.0781 0.265625,-0.0937 0.4375,-0.25 0.1875,-0.17188 0.265625,-0.375 0.09375,-0.21875 0.09375,-0.46875 z m 6.567261,2.25 q 0,0.21875 -0.109375,0.3125 -0.109375,0.0781 -0.234375,0.0781 l -3.171875,0 q 0,0.40625 0.07813,0.73438 0.07813,0.3125 0.265625,0.54687 0.1875,0.23438 0.484375,0.35938 0.3125,0.10937 0.75,0.10937 0.34375,0 0.609375,-0.0469 0.265625,-0.0625 0.453125,-0.125 0.203125,-0.0781 0.328125,-0.125 0.125,-0.0625 0.1875,-0.0625 0.04687,0 0.0625
 ,0.0156 0.03125,0.0156 0.04687,0.0625 0.03125,0.0312 0.03125,0.10938 0.01563,0.0625 0.01563,0.15625 0,0.0781 -0.01563,0.125 0,0.0469 -0.01563,0.0937 0,0.0312 -0.03125,0.0625 -0.01563,0.0312 -0.04687,0.0625 -0.01563,0.0312 -0.171875,0.10937 -0.140625,0.0625 -0.375,0.125 -0.21875,0.0625 -0.53125,0.10938 -0.296875,0.0625 -0.640625,0.0625 -0.59375,0 -1.046875,-0.17188 -0.453125,-0.17187 -0.765625,-0.5 -0.296875,-0.32812 -0.453125,-0.8125 -0.15625,-0.5 -0.15625,-1.15625 0,-0.625 0.15625,-1.10937 0.171875,-0.5 0.46875,-0.84375 0.296875,-0.35938 0.71875,-0.53125 0.4375,-0.1875 0.96875,-0.1875 0.578125,0 0.96875,0.1875 0.40625,0.17187 0.65625,0.48437 0.265625,0.29688 0.390625,0.71875 0.125,0.42188 0.125,0.89063 l 0,0.15625 z m -0.890625,-0.26563 q 0.01563,-0.6875 -0.3125,-1.07812 -0.328125,-0.40625 -0.96875,-0.40625 -0.328125,0 -0.578125,0.125 -0.25,0.125 -0.421875,0.32812 -0.15625,0.20313 -0.25,0.48438 -0.09375,0.26562 -0.09375,0.54687 l 2.625,0 z m 5.098984,1.57813 q 0,0.375 -0.140625,0.6
 7187 -0.140625,0.28125 -0.390625,0.48438 -0.25,0.1875 -0.609375,0.29687 -0.34375,0.10938 -0.765625,0.10938 -0.25,0 -0.484375,-0.0469 -0.234375,-0.0469 -0.421875,-0.10937 -0.1875,-0.0625 -0.3125,-0.125 -0.125,-0.0625 -0.1875,-0.10938 -0.0625,-0.0625 -0.09375,-0.15625 -0.01563,-0.0937 -0.01563,-0.26562 0,-0.10938 0,-0.17188 0.01563,-0.0781 0.03125,-0.10937 0.01563,-0.0469 0.04687,-0.0625 0.03125,-0.0156 0.0625,-0.0156 0.0625,0 0.171875,0.0781 0.125,0.0625 0.296875,0.15625 0.171875,0.0781 0.390625,0.15625 0.234375,0.0625 0.546875,0.0625 0.21875,0 0.390625,-0.0469 0.1875,-0.0469 0.328125,-0.14062 0.140625,-0.0937 0.203125,-0.23438 0.07813,-0.15625 0.07813,-0.34375 0,-0.20312 -0.109375,-0.34375 -0.109375,-0.14062 -0.28125,-0.25 -0.171875,-0.10937 -0.390625,-0.1875 -0.203125,-0.0937 -0.4375,-0.17187 -0.21875,-0.0937 -0.4375,-0.20313 -0.21875,-0.125 -0.390625,-0.28125 -0.171875,-0.17187 -0.28125,-0.40625 -0.109375,-0.23437 -0.109375,-0.5625 0,-0.28125 0.109375,-0.53125 0.125,-0.26562 0.343
 75,-0.45312 0.21875,-0.20313 0.546875,-0.3125 0.328125,-0.125 0.765625,-0.125 0.203125,0 0.390625,0.0312 0.1875,0.0312 0.34375,0.0781 0.15625,0.0469 0.265625,0.10938 0.109375,0.0469 0.171875,0.0937 0.0625,0.0469 0.07813,0.0781 0.01563,0.0312 0.01563,0.0781 0.01563,0.0312 0.01563,0.0937 0.01563,0.0469 0.01563,0.14062 0,0.0937 -0.01563,0.15625 0,0.0625 -0.01563,0.10938 -0.01563,0.0469 -0.04687,0.0625 -0.03125,0.0156 -0.0625,0.0156 -0.04687,0 -0.140625,-0.0469 -0.09375,-0.0625 -0.234375,-0.125 -0.140625,-0.0781 -0.34375,-0.125 -0.1875,-0.0625 -0.453125,-0.0625 -0.21875,0 -0.390625,0.0469 -0.171875,0.0469 -0.28125,0.14063 -0.109375,0.0937 -0.171875,0.23437 -0.04687,0.125 -0.04687,0.26563 0,0.21875 0.09375,0.35937 0.109375,0.14063 0.28125,0.25 0.171875,0.10938 0.390625,0.20313 0.234375,0.0781 0.453125,0.17187 0.234375,0.0781 0.453125,0.20313 0.21875,0.10937 0.390625,0.26562 0.171875,0.15625 0.28125,0.39063 0.109375,0.21875 0.109375,0.53125 z m 5.620925,-1.15625 q 0,0.59375 -0.15625,1.093
 75 -0.15625,0.5 -0.46875,0.85937 -0.29687,0.35938 -0.76562,0.5625 -0.46875,0.20313 -1.07813,0.20313 -0.59375,0 -1.046874,-0.17188 -0.4375,-0.1875 -0.734375,-0.51562 -0.28125,-0.34375 -0.4375,-0.82813 -0.140625,-0.48437 -0.140625,-1.10937 0,-0.57813 0.15625,-1.07813 0.15625,-0.5 0.453125,-0.85937 0.3125,-0.35938 0.765625,-0.54688 0.468754,-0.20312 1.093754,-0.20312 0.59375,0 1.03125,0.17187 0.45312,0.17188 0.73437,0.51563 0.29688,0.34375 0.4375,0.82812 0.15625,0.46875 0.15625,1.07813 z m -0.92187,0.0625 q 0,-0.39063 -0.0781,-0.73438 -0.0781,-0.35937 -0.25,-0.60937 -0.15625,-0.26563 -0.45312,-0.42188 -0.28125,-0.15625 -0.70313,-0.15625 -0.39062,0 -0.67187,0.14063 -0.281254,0.14062 -0.468754,0.40625 -0.171875,0.25 -0.265625,0.59375 -0.09375,0.34375 -0.09375,0.76562 0,0.39063 0.07813,0.75 0.07813,0.34375 0.234375,0.60938 0.171875,0.25 0.453124,0.40625 0.29687,0.15625 0.71875,0.15625 0.39062,0 0.67187,-0.14063 0.28125,-0.14062 0.46875,-0.39062 0.1875,-0.25 0.26563,-0.59375 0.0937,-0.3593
 8 0.0937,-0.78125 z m 6.19763,2.40625 q 0,0.0469 -0.0312,0.0781 -0.0156,0.0312 -0.0625,0.0625 -0.0469,0.0156 -0.125,0.0156 -0.0781,0.0156 -0.1875,0.0156 -0.125,0 -0.20313,-0.0156 -0.0781,0 -0.125,-0.0156 -0.0469,-0.0312 -0.0625,-0.0625 -0.0156,-0.0312 -0.0156,-0.0781 l 0,-0.64063 q -0.40625,0.46875 -0.8125,0.6875 -0.40625,0.20313 -0.8125,0.20313 -0.48438,0 -0.82813,-0.15625 -0.32812,-0.17188 -0.53125,-0.45313 -0.20312,-0.28125 -0.29687,-0.64062 -0.0781,-0.375 -0.0781,-0.89063 l 0,-2.9375 q 0,-0.0469 0.0156,-0.0781 0.0156,-0.0312 0.0625,-0.0469 0.0625,-0.0312 0.14062,-0.0312 0.0937,-0.0156 0.21875,-0.0156 0.14063,0 0.21875,0.0156 0.0937,0 0.14063,0.0312 0.0469,0.0156 0.0625,0.0469 0.0312,0.0312 0.0312,0.0781 l 0,2.8125 q 0,0.42188 0.0625,0.6875 0.0625,0.25 0.1875,0.4375 0.125,0.17188 0.3125,0.28125 0.20312,0.0937 0.45312,0.0937 0.34375,0 0.67188,-0.23437 0.32812,-0.25 0.70312,-0.70313 l 0,-3.375 q 0,-0.0469 0.0156,-0.0781 0.0156,-0.0312 0.0625,-0.0469 0.0625,-0.0312 0.14062,-0.0312 0
 .0937,-0.0156 0.21875,-0.0156 0.125,0 0.20313,0.0156 0.0937,0 0.14062,0.0312 0.0625,0.0156 0.0781,0.0469 0.0312,0.0312 0.0312,0.0781 l 0,4.82813 z m 4.27057,-4.51563 q 0,0.125 0,0.20313 0,0.0781 -0.0156,0.125 -0.0156,0.0469 -0.0469,0.0781 -0.0156,0.0156 -0.0625,0.0156 -0.0469,0 -0.10938,-0.0156 -0.0625,-0.0312 -0.14062,-0.0469 -0.0781,-0.0312 -0.17188,-0.0469 -0.0937,-0.0312 -0.20312,-0.0312 -0.14063,0 -0.26563,0.0625 -0.125,0.0469 -0.28125,0.17188 -0.14062,0.125 -0.29687,0.32812 -0.15625,0.20313 -0.34375,0.5 l 0,3.17188 q 0,0.0469 -0.0156,0.0781 -0.0156,0.0312 -0.0781,0.0625 -0.0469,0.0156 -0.14063,0.0156 -0.0781,0.0156 -0.20312,0.0156 -0.125,0 -0.21875,-0.0156 -0.0781,0 -0.14063,-0.0156 -0.0469,-0.0312 -0.0625,-0.0625 -0.0156,-0.0312 -0.0156,-0.0781 l 0,-4.82813 q 0,-0.0469 0.0156,-0.0781 0.0156,-0.0312 0.0625,-0.0469 0.0469,-0.0312 0.10938,-0.0312 0.0781,-0.0156 0.20312,-0.0156 0.125,0 0.20313,0.0156 0.0781,0 0.10937,0.0312 0.0469,0.0156 0.0625,0.0469 0.0312,0.0312 0.0312,0.0781 
 l 0,0.70313 q 0.20313,-0.29688 0.375,-0.46875 0.17188,-0.1875 0.32813,-0.28125 0.15625,-0.10938 0.29687,-0.14063 0.15625,-0.0469 0.3125,-0.0469 0.0781,0 0.15625,0.0156 0.0937,0 0.1875,0.0156 0.10938,0.0156 0.1875,0.0469 0.0781,0.0312 0.10938,0.0625 0.0312,0.0156 0.0312,0.0469 0.0156,0.0156 0.0156,0.0625 0.0156,0.0312 0.0156,0.10937 0,0.0625 0,0.1875 z m 4.37137,3.78125 q 0,0.0937 -0.0156,0.17188 0,0.0625 -0.0156,0.10937 0,0.0469 -0.0312,0.0781 -0.0156,0.0312 -0.0781,0.10937 -0.0625,0.0625 -0.23437,0.15625 -0.15625,0.0937 -0.35938,0.17188 -0.20312,0.0781 -0.4375,0.125 -0.23437,0.0625 -0.48437,0.0625 -0.53125,0 -0.9375,-0.17188 -0.39063,-0.17187 -0.67188,-0.5 -0.26562,-0.34375 -0.40625,-0.8125 -0.14062,-0.48437 -0.14062,-1.125 0,-0.70312 0.17187,-1.21875 0.17188,-0.51562 0.46875,-0.84375 0.3125,-0.32812 0.71875,-0.48437 0.42188,-0.15625 0.89063,-0.15625 0.23437,0 0.45312,0.0469 0.21875,0.0312 0.39063,0.10938 0.1875,0.0625 0.32812,0.15625 0.15625,0.0937 0.21875,0.15625 0.0625,0.0625 0.
 0781,0.10937 0.0312,0.0312 0.0469,0.0937 0.0156,0.0469 0.0156,0.10938 0.0156,0.0625 0.0156,0.15625 0,0.20312 -0.0625,0.29687 -0.0469,0.0781 -0.10937,0.0781 -0.0781,0 -0.1875,-0.0781 -0.10938,-0.0937 -0.26563,-0.20312 -0.15625,-0.10938 -0.39062,-0.1875 -0.21875,-0.0937 -0.53125,-0.0937 -0.64063,0 -0.98438,0.5 -0.34375,0.48437 -0.34375,1.40625 0,0.46875 0.0781,0.82812 0.0937,0.34375 0.26562,0.59375 0.17188,0.23438 0.42188,0.34375 0.25,0.10938 0.57812,0.10938 0.3125,0 0.53125,-0.0937 0.23438,-0.0937 0.40625,-0.20313 0.17188,-0.125 0.28125,-0.21875 0.125,-0.0937 0.1875,-0.0937 0.0312,0 0.0625,0.0312 0.0312,0.0156 0.0469,0.0625 0.0156,0.0469 0.0156,0.125 0.0156,0.0781 0.0156,0.1875 z m 5.16226,-1.89062 q 0,0.21875 -0.10938,0.3125 -0.10937,0.0781 -0.23437,0.0781 l -3.17188,0 q 0,0.40625 0.0781,0.73438 0.0781,0.3125 0.26562,0.54687 0.1875,0.23438 0.48438,0.35938 0.3125,0.10937 0.75,0.10937 0.34375,0 0.60937,-0.0469 0.26563,-0.0625 0.45313,-0.125 0.20312,-0.0781 0.32812,-0.125 0.125,-0.0625
  0.1875,-0.0625 0.0469,0 0.0625,0.0156 0.0312,0.0156 0.0469,0.0625 0.0312,0.0312 0.0312,0.10938 0.0156,0.0625 0.0156,0.15625 0,0.0781 -0.0156,0.125 0,0.0469 -0.0156,0.0937 0,0.0312 -0.0312,0.0625 -0.0156,0.0312 -0.0469,0.0625 -0.0156,0.0312 -0.17188,0.10937 -0.14062,0.0625 -0.375,0.125 -0.21875,0.0625 -0.53125,0.10938 -0.29687,0.0625 -0.64062,0.0625 -0.59375,0 -1.04688,-0.17188 -0.45312,-0.17187 -0.76562,-0.5 -0.29688,-0.32812 -0.45313,-0.8125 -0.15625,-0.5 -0.15625,-1.15625 0,-0.625 0.15625,-1.10937 0.17188,-0.5 0.46875,-0.84375 0.29688,-0.35938 0.71875,-0.53125 0.4375,-0.1875 0.96875,-0.1875 0.57813,0 0.96875,0.1875 0.40625,0.17187 0.65625,0.48437 0.26563,0.29688 0.39063,0.71875 0.125,0.42188 0.125,0.89063 l 0,0.15625 z m -0.89063,-0.26563 q 0.0156,-0.6875 -0.3125,-1.07812 -0.32812,-0.40625 -0.96875,-0.40625 -0.32812,0 -0.57812,0.125 -0.25,0.125 -0.42188,0.32812 -0.15625,0.20313 -0.25,0.48438 -0.0937,0.26562 -0.0937,0.54687 l 2.625,0 z m 12.1331,2.89063 q 0,0.0469 -0.0312,0.0781 -
 0.0156,0.0312 -0.0781,0.0469 -0.0469,0.0156 -0.125,0.0312 -0.0781,0.0156 -0.21875,0.0156 -0.125,0 -0.21875,-0.0156 -0.0781,-0.0156 -0.14062,-0.0312 -0.0469,-0.0156 -0.0781,-0.0469 -0.0156,-0.0312 -0.0156,-0.0781 l 0,-6.04688 0,0 -2.48438,6.07813 q -0.0156,0.0312 -0.0469,0.0625 -0.0312,0.0312 -0.0937,0.0469 -0.0469,0.0156 -0.125,0.0156 -0.0781,0.0156 -0.1875,0.0156 -0.10938,0 -0.1875,-0.0156 -0.0781,0 -0.14063,-0.0156 -0.0469,-0.0312 -0.0781,-0.0469 -0.0312,-0.0312 -0.0469,-0.0625 l -2.35938,-6.07813 0,0 0,6.04688 q 0,0.0469 -0.0312,0.0781 -0.0156,0.0312 -0.0781,0.0469 -0.0469,0.0156 -0.14062,0.0312 -0.0781,0.0156 -0.21875,0.0156 -0.125,0 -0.21875,-0.0156 -0.0781,-0.0156 -0.125,-0.0312 -0.0469,-0.0156 -0.0781,-0.0469 -0.0156,-0.0312 -0.0156,-0.0781 l 0,-6.39063 q 0,-0.21875 0.10937,-0.3125 0.125,-0.10937 0.28125,-0.10937 l 0.54688,0 q 0.17187,0 0.29687,0.0312 0.14063,0.0312 0.23438,0.10937 0.0937,0.0625 0.15625,0.17188 0.0625,0.10937 0.125,0.25 l 2.00001,5.03125 0.0312,0 2.09375,-5.0
 1563 q 0.0625,-0.15625 0.125,-0.26562 0.0781,-0.10938 0.15625,-0.17188 0.0937,-0.0781 0.1875,-0.10937 0.10937,-0.0312 0.23437,-0.0312 l 0.59375,0 q 0.0781,0 0.14063,0.0312 0.0781,0.0156 0.125,0.0781 0.0625,0.0469 0.0937,0.125 0.0312,0.0781 0.0312,0.1875 l 0,6.39063 z m 5.09526,0.0156 q 0,0.0625 -0.0469,0.0937 -0.0469,0.0312 -0.125,0.0469 -0.0625,0.0156 -0.21875,0.0156 -0.14062,0 -0.21875,-0.0156 -0.0781,-0.0156 -0.125,-0.0469 -0.0312,-0.0312 -0.0312,-0.0937 l 0,-0.48437 q -0.3125,0.32812 -0.70312,0.53125 -0.39063,0.1875 -0.82813,0.1875 -0.39062,0 -0.70312,-0.10938 -0.29688,-0.0937 -0.51563,-0.28125 -0.21875,-0.1875 -0.34375,-0.45312 -0.10937,-0.28125 -0.10937,-0.64063 0,-0.40625 0.15625,-0.70312 0.17187,-0.29688 0.48437,-0.5 0.3125,-0.20313 0.76563,-0.29688 0.45312,-0.0937 1.01562,-0.0937 l 0.65625,0 0,-0.39062 q 0,-0.26563 -0.0625,-0.48438 -0.0469,-0.21875 -0.1875,-0.35937 -0.125,-0.14063 -0.34375,-0.20313 -0.20312,-0.0781 -0.51562,-0.0781 -0.3125,0 -0.57813,0.0781 -0.26562,0.0781 
 -0.46875,0.17188 -0.1875,0.0937 -0.32812,0.17187 -0.125,0.0781 -0.1875,0.0781 -0.0469,0 -0.0781,-0.0156 -0.0312,-0.0312 -0.0625,-0.0781 -0.0156,-0.0469 -0.0312,-0.10938 0,-0.0625 0,-0.14062 0,-0.14063 0.0156,-0.21875 0.0156,-0.0781 0.0781,-0.14063 0.0781,-0.0781 0.25,-0.17187 0.1875,-0.0937 0.42188,-0.17188 0.23437,-0.0781 0.5,-0.125 0.28125,-0.0469 0.5625,-0.0469 0.51562,0 0.875,0.125 0.375,0.10937 0.59375,0.34375 0.23437,0.21875 0.32812,0.5625 0.10938,0.32812 0.10938,0.78125 l 0,3.26562 z m -0.89063,-2.21875 -0.75,0 q -0.375,0 -0.64062,0.0625 -0.26563,0.0625 -0.45313,0.1875 -0.17187,0.125 -0.25,0.29688 -0.0781,0.15625 -0.0781,0.39062 0,0.375 0.23437,0.59375 0.23438,0.21875 0.67188,0.21875 0.34375,0 0.64062,-0.17187 0.29688,-0.1875 0.625,-0.54688 l 0,-1.03125 z m 6.51064,2.20313 q 0,0.0469 -0.0312,0.0781 -0.0156,0.0312 -0.0625,0.0625 -0.0469,0.0156 -0.14063,0.0156 -0.0781,0.0156 -0.20312,0.0156 -0.14063,0 -0.23438,-0.0156 -0.0781,0 -0.125,-0.0156 -0.0469,-0.0312 -0.0781,-0.0625 -0.
 0156,-0.0312 -0.0156,-0.0781 l 0,-2.82813 q 0,-0.40625 -0.0625,-0.65625 -0.0625,-0.26562 -0.1875,-0.4375 -0.125,-0.1875 -0.32812,-0.28125 -0.1875,-0.0937 -0.4375,-0.0937 -0.34375,0 -0.67188,0.23438 -0.32812,0.23437 -0.70312,0.6875 l 0,3.375 q 0,0.0469 -0.0156,0.0781 -0.0156,0.0312 -0.0781,0.0625 -0.0469,0.0156 -0.14063,0.0156 -0.0781,0.0156 -0.20312,0.0156 -0.125,0 -0.21875,-0.0156 -0.0781,0 -0.14063,-0.0156 -0.0469,-0.0312 -0.0625,-0.0625 -0.0156,-0.0312 -0.0156,-0.0781 l 0,-4.82813 q 0,-0.0469 0.0156,-0.0781 0.0156,-0.0312 0.0625,-0.0469 0.0469,-0.0312 0.10938,-0.0312 0.0781,-0.0156 0.20312,-0.0156 0.125,0 0.20313,0.0156 0.0781,0 0.10937,0.0312 0.0469,0.0156 0.0625,0.0469 0.0312,0.0312 0.0312,0.0781 l 0,0.64063 q 0.40625,-0.45313 0.8125,-0.65625 0.40625,-0.21875 0.8125,-0.21875 0.48438,0 0.8125,0.17187 0.34375,0.15625 0.54688,0.4375 0.20312,0.26563 0.28125,0.64063 0.0937,0.35937 0.0937,0.875 l 0,2.9375 z m 5.08307,0.0156 q 0,0.0625 -0.0469,0.0937 -0.0469,0.0312 -0.125,0.0469 -0.06
 25,0.0156 -0.21875,0.0156 -0.14062,0 -0.21875,-0.0156 -0.0781,-0.0156 -0.125,-0.0469 -0.0312,-0.0312 -0.0312,-0.0937 l 0,-0.48437 q -0.3125,0.32812 -0.70312,0.53125 -0.39063,0.1875 -0.82813,0.1875 -0.39062,0 -0.70312,-0.10938 -0.29688,-0.0937 -0.51563,-0.28125 -0.21875,-0.1875 -0.34375,-0.45312 -0.10937,-0.28125 -0.10937,-0.64063 0,-0.40625 0.15625,-0.70312 0.17187,-0.29688 0.48437,-0.5 0.3125,-0.20313 0.76563,-0.29688 0.45312,-0.0937 1.01562,-0.0937 l 0.65625,0 0,-0.39062 q 0,-0.26563 -0.0625,-0.48438 -0.0469,-0.21875 -0.1875,-0.35937 -0.125,-0.14063 -0.34375,-0.20313 -0.20312,-0.0781 -0.51562,-0.0781 -0.3125,0 -0.57813,0.0781 -0.26562,0.0781 -0.46875,0.17188 -0.1875,0.0937 -0.32812,0.17187 -0.125,0.0781 -0.1875,0.0781 -0.0469,0 -0.0781,-0.0156 -0.0312,-0.0312 -0.0625,-0.0781 -0.0156,-0.0469 -0.0312,-0.10938 0,-0.0625 0,-0.14062 0,-0.14063 0.0156,-0.21875 0.0156,-0.0781 0.0781,-0.14063 0.0781,-0.0781 0.25,-0.17187 0.1875,-0.0937 0.42188,-0.17188 0.23437,-0.0781 0.5,-0.125 0.28125,-
 0.0469 0.5625,-0.0469 0.51562,0 0.875,0.125 0.375,0.10937 0.59375,0.34375 0.23437,0.21875 0.32812,0.5625 0.10938,0.32812 0.10938,0.78125 l 0,3.26562 z m -0.89063,-2.21875 -0.75,0 q -0.375,0 -0.64062,0.0625 -0.26563,0.0625 -0.45313,0.1875 -0.17187,0.125 -0.25,0.29688 -0.0781,0.15625 -0.0781,0.39062 0,0.375 0.23437,0.59375 0.23438,0.21875 0.67188,0.21875 0.34375,0 0.64062,-0.17187 0.29688,-0.1875 0.625,-0.54688 l 0,-1.03125 z m 6.38564,-2.40625 q 0,0.1875 -0.0469,0.28125 -0.0469,0.0781 -0.14062,0.0781 l -0.6875,0 q 0.1875,0.1875 0.26562,0.42187 0.0781,0.23438 0.0781,0.48438 0,0.42187 -0.14063,0.75 -0.125,0.3125 -0.375,0.54687 -0.25,0.21875 -0.59375,0.34375 -0.34375,0.10938 -0.76562,0.10938 -0.29688,0 -0.5625,-0.0781 -0.26563,-0.0781 -0.40625,-0.20312 -0.10938,0.10937 -0.17188,0.23437 -0.0625,0.10938 -0.0625,0.28125 0,0.1875 0.17188,0.3125 0.1875,0.125 0.46875,0.125 l 1.26562,0.0625 q 0.35938,0 0.65625,0.0937 0.3125,0.0937 0.53125,0.26563 0.21875,0.15625 0.34375,0.39062 0.125,0.23438 0
 .125,0.5625 0,0.32813 -0.14062,0.625 -0.14063,0.29688 -0.4375,0.53125 -0.28125,0.23438 -0.73438,0.35938 -0.4375,0.125 -1.04687,0.125 -0.57813,0 -1,-0.0937 -0.40625,-0.0937 -0.67188,-0.26563 -0.26562,-0.17187 -0.39062,-0.42187 -0.10938,-0.23438 -0.10938,-0.51563 0,-0.17187 0.0469,-0.34375 0.0469,-0.15625 0.125,-0.3125 0.0937,-0.15625 0.21875,-0.28125 0.14062,-0.14062 0.3125,-0.28125 -0.26563,-0.125 -0.39063,-0.32812 -0.125,-0.20313 -0.125,-0.45313 0,-0.32812 0.125,-0.57812 0.14063,-0.26563 0.34375,-0.46875 -0.17187,-0.1875 -0.26562,-0.4375 -0.0937,-0.25 -0.0937,-0.60938 0,-0.40625 0.14062,-0.73437 0.14063,-0.32813 0.375,-0.54688 0.25,-0.23437 0.59375,-0.35937 0.35938,-0.125 0.76563,-0.125 0.21875,0 0.40625,0.0312 0.1875,0.0156 0.35937,0.0625 l 1.45313,0 q 0.0937,0 0.14062,0.0937 0.0469,0.0781 0.0469,0.26562 z m -1.39063,1.28125 q 0,-0.5 -0.26562,-0.76562 -0.26563,-0.28125 -0.76563,-0.28125 -0.26562,0 -0.45312,0.0937 -0.1875,0.0781 -0.3125,0.23437 -0.125,0.14063 -0.1875,0.34375 -0.062
 5,0.1875 -0.0625,0.40625 0,0.46875 0.26562,0.75 0.26563,0.26563 0.76563,0.26563 0.26562,0 0.45312,-0.0781 0.1875,-0.0781 0.3125,-0.21875 0.14063,-0.15625 0.1875,-0.34375 0.0625,-0.20312 0.0625,-0.40625 z m 0.45313,3.82813 q 0,-0.3125 -0.26563,-0.48438 -0.25,-0.17187 -0.6875,-0.1875 l -1.25,-0.0312 q -0.17187,0.125 -0.28125,0.25 -0.10937,0.125 -0.17187,0.23438 -0.0625,0.10937 -0.0937,0.21875 -0.0156,0.10937 -0.0156,0.21875 0,0.34375 0.34375,0.51562 0.35938,0.1875 1,0.1875 0.40625,0 0.67188,-0.0781 0.26562,-0.0781 0.4375,-0.20313 0.17187,-0.125 0.23437,-0.29687 0.0781,-0.17188 0.0781,-0.34375 z m 6.04718,-3.125 q 0,0.21875 -0.10938,0.3125 -0.10937,0.0781 -0.23437,0.0781 l -3.17188,0 q 0,0.40625 0.0781,0.73438 0.0781,0.3125 0.26562,0.54687 0.1875,0.23438 0.48438,0.35938 0.3125,0.10937 0.75,0.10937 0.34375,0 0.60937,-0.0469 0.26563,-0.0625 0.45313,-0.125 0.20312,-0.0781 0.32812,-0.125 0.125,-0.0625 0.1875,-0.0625 0.0469,0 0.0625,0.0156 0.0312,0.0156 0.0469,0.0625 0.0312,0.0312 0.0312,0.
 10938 0.0156,0.0625 0.0156,0.15625 0,0.0781 -0.0156,0.125 0,0.0469 -0.0156,0.0937 0,0.0312 -0.0312,0.0625 -0.0156,0.0312 -0.0469,0.0625 -0.0156,0.0312 -0.17188,0.10937 -0.14062,0.0625 -0.375,0.125 -0.21875,0.0625 -0.53125,0.10938 -0.29687,0.0625 -0.64062,0.0625 -0.59375,0 -1.04688,-0.17188 -0.45312,-0.17187 -0.76562,-0.5 -0.29688,-0.32812 -0.45313,-0.8125 -0.15625,-0.5 -0.15625,-1.15625 0,-0.625 0.15625,-1.10937 0.17188,-0.5 0.46875,-0.84375 0.29688,-0.35938 0.71875,-0.53125 0.4375,-0.1875 0.96875,-0.1875 0.57813,0 0.96875,0.1875 0.40625,0.17187 0.65625,0.48437 0.26563,0.29688 0.39063,0.71875 0.125,0.42188 0.125,0.89063 l 0,0.15625 z m -0.89063,-0.26563 q 0.0156,-0.6875 -0.3125,-1.07812 -0.32812,-0.40625 -0.96875,-0.40625 -0.32812,0 -0.57812,0.125 -0.25,0.125 -0.42188,0.32812 -0.15625,0.20313 -0.25,0.48438 -0.0937,0.26562 -0.0937,0.54687 l 2.625,0 z m 4.88024,-1.625 q 0,0.125 0,0.20313 0,0.0781 -0.0156,0.125 -0.0156,0.0469 -0.0469,0.0781 -0.0156,0.0156 -0.0625,0.0156 -0.0469,0 -0.10
 938,-0.0156 -0.0625,-0.0312 -0.14062,-0.0469 -0.0781,-0.0312 -0.17188,-0.0469 -0.0937,-0.0312 -0.20312,-0.0312 -0.14063,0 -0.26563,0.0625 -0.125,0.0469 -0.28125,0.17188 -0.14062,0.125 -0.29687,0.32812 -0.15625,0.20313 -0.34375,0.5 l 0,3.17188 q 0,0.0469 -0.0156,0.0781 -0.0156,0.0312 -0.0781,0.0625 -0.0469,0.0156 -0.14063,0.0156 -0.0781,0.0156 -0.20312,0.0156 -0.125,0 -0.21875,-0.0156 -0.0781,0 -0.14063,-0.0156 -0.0469,-0.0312 -0.0625,-0.0625 -0.0156,-0.0312 -0.0156,-0.0781 l 0,-4.82813 q 0,-0.0469 0.0156,-0.0781 0.0156,-0.0312 0.0625,-0.0469 0.0469,-0.0312 0.10938,-0.0312 0.0781,-0.0156 0.20312,-0.0156 0.125,0 0.20313,0.0156 0.0781,0 0.10937,0.0312 0.0469,0.0156 0.0625,0.0469 0.0312,0.0312 0.0312,0.0781 l 0,0.70313 q 0.20313,-0.29688 0.375,-0.46875 0.17188,-0.1875 0.32813,-0.28125 0.15625,-0.10938 0.296

<TRUNCATED>


[17/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/hawq-reference.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/hawq-reference.html.md.erb b/markdown/reference/hawq-reference.html.md.erb
new file mode 100644
index 0000000..f5abd2a
--- /dev/null
+++ b/markdown/reference/hawq-reference.html.md.erb
@@ -0,0 +1,43 @@
+---
+title: HAWQ Reference
+---
+
+This section provides a complete reference to HAWQ SQL commands, management utilities, configuration parameters, environment variables, and database objects.
+
+-   **[Server Configuration Parameter Reference](../reference/HAWQSiteConfig.html)**
+
+    This section describes all server configuration parameters (GUCs) that are available in HAWQ.
+
+-   **[HDFS Configuration Reference](../reference/HDFSConfigurationParameterReference.html)**
+
+    This reference page describes HDFS configuration values that are configured for HAWQ either within `hdfs-site.xml`, `core-site.xml`, or `hdfs-client.xml`.
+
+-   **[Environment Variables](../reference/HAWQEnvironmentVariables.html)**
+
+    This topic contains a reference of the environment variables that you set for HAWQ.
+
+-   **[Character Set Support Reference](../reference/CharacterSetSupportReference.html)**
+
+    This topic provides a reference of the character sets supported in HAWQ.
+
+-   **[Data Types](../reference/HAWQDataTypes.html)**
+
+    This topic provides a reference of the data types supported in HAWQ.
+
+-   **[SQL Commands](../reference/SQLCommandReference.html)**
+
+    This section contains a description and the syntax of the SQL commands supported by HAWQ.
+
+-   **[System Catalog Reference](../reference/catalog/catalog_ref.html)**
+
+    This reference describes the HAWQ system catalog tables and views.
+
+-   **[The hawq\_toolkit Administrative Schema](../reference/toolkit/hawq_toolkit.html)**
+
+    This section provides a reference on the `hawq_toolkit` administrative schema.
+
+-   **[HAWQ Management Tools Reference](../reference/cli/management_tools.html)**
+
+    Reference information for command-line utilities available in HAWQ.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/ABORT.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/ABORT.html.md.erb b/markdown/reference/sql/ABORT.html.md.erb
new file mode 100644
index 0000000..ab053d8
--- /dev/null
+++ b/markdown/reference/sql/ABORT.html.md.erb
@@ -0,0 +1,37 @@
+---
+title: ABORT
+---
+
+Aborts the current transaction.
+
+## <a id="synop"></a>Synopsis
+
+```pre
+ABORT [ WORK | TRANSACTION ]
+```
+
+## <a id="abort__section3"></a>Description
+
+`ABORT` rolls back the current transaction and causes all the updates made by the transaction to be discarded. This command is identical in behavior to the standard SQL command `ROLLBACK`, and is present only for historical reasons.
+
+## <a id="abort__section4"></a>Parameters
+
+<dt>WORK  
+TRANSACTION  </dt>
+<dd>Optional key words. They have no effect.</dd>
+
+## <a id="abort__section5"></a>Notes
+
+Use `COMMIT` to successfully terminate a transaction.
+
+Issuing `ABORT` when not inside a transaction does no harm, but it will provoke a warning message.
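+
+For example, a minimal sketch of discarding a transaction's pending work (the table and row used here are purely illustrative):
+
+```pre
+BEGIN;
+INSERT INTO orders VALUES (42, 'pending');
+-- Discard the pending insert instead of committing it:
+ABORT;
+```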
+
+## <a id="compat"></a>Compatibility
+
+This command is a HAWQ extension present for historical reasons. `ROLLBACK` is the equivalent standard SQL command.
+
+## <a id="see"></a>See Also
+
+[BEGIN](BEGIN.html), [COMMIT](COMMIT.html), [ROLLBACK](ROLLBACK.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/ALTER-AGGREGATE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/ALTER-AGGREGATE.html.md.erb b/markdown/reference/sql/ALTER-AGGREGATE.html.md.erb
new file mode 100644
index 0000000..b1131ef
--- /dev/null
+++ b/markdown/reference/sql/ALTER-AGGREGATE.html.md.erb
@@ -0,0 +1,68 @@
+---
+title: ALTER AGGREGATE
+---
+
+Changes the definition of an aggregate function.
+
+## <a id="synop"></a>Synopsis
+
+```pre
+ALTER AGGREGATE <name> ( <type> [ , ... ] ) RENAME TO <new_name>
+
+ALTER AGGREGATE <name> ( <type> [ , ... ] ) OWNER TO <new_owner>
+
+ALTER AGGREGATE <name> ( <type> [ , ... ] ) SET SCHEMA <new_schema>
+```
+
+## <a id="desc"></a>Description
+
+`ALTER AGGREGATE` changes the definition of an aggregate function.
+
+You must own the aggregate function to use `ALTER AGGREGATE`. To change the schema of an aggregate function, you must also have `CREATE` privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have `CREATE` privilege on the aggregate function's schema. (These restrictions enforce that altering the owner does not do anything you could not do by dropping and recreating the aggregate function. However, a superuser can alter ownership of any aggregate function anyway.)
+
+## <a id="alteraggregate__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing aggregate function.</dd>
+
+<dt> \<type\>   </dt>
+<dd>An input data type on which the aggregate function operates. To reference a zero-argument aggregate function, write \* in place of the list of input data types.</dd>
+
+<dt> \<new\_name\>   </dt>
+<dd>The new name of the aggregate function.</dd>
+
+<dt> \<new\_owner\>   </dt>
+<dd>The new owner of the aggregate function.</dd>
+
+<dt> \<new\_schema\>   </dt>
+<dd>The new schema for the aggregate function.</dd>
+
+## <a id="alteraggregate__section5"></a>Examples
+
+To rename the aggregate function `myavg` for type `integer` to `my_average`:
+
+```pre
+ALTER AGGREGATE myavg(integer) RENAME TO my_average;
+```
+
+To change the owner of the aggregate function `myavg` for type `integer` to `joe`:
+
+```pre
+ALTER AGGREGATE myavg(integer) OWNER TO joe;
+```
+
+To move the aggregate function `myavg` for type `integer` into schema `myschema`:
+
+```pre
+ALTER AGGREGATE myavg(integer) SET SCHEMA myschema;
+```
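+
+A zero-argument aggregate is referenced by writing `*` in place of the type list, as noted under \<type\> above. For example, a sketch that assumes a hypothetical zero-argument aggregate named `mycount`:
+
+```pre
+ALTER AGGREGATE mycount(*) RENAME TO my_count;
+```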
+
+## <a id="compat"></a>Compatibility
+
+There is no `ALTER AGGREGATE` statement in the SQL standard.
+
+## <a id="see"></a>See Also
+
+[CREATE AGGREGATE](CREATE-AGGREGATE.html), [DROP AGGREGATE](DROP-AGGREGATE.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/ALTER-DATABASE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/ALTER-DATABASE.html.md.erb b/markdown/reference/sql/ALTER-DATABASE.html.md.erb
new file mode 100644
index 0000000..782daf5
--- /dev/null
+++ b/markdown/reference/sql/ALTER-DATABASE.html.md.erb
@@ -0,0 +1,52 @@
+---
+title: ALTER DATABASE
+---
+
+Changes the attributes of a database.
+
+## <a id="alterrole__section2"></a>Synopsis
+
+```pre
+ALTER DATABASE <name> SET <parameter> { TO | = } { <value> | DEFAULT } 
+
+ALTER DATABASE <name> RESET <parameter>
+```
+
+## <a id="desc"></a>Description
+
+`ALTER DATABASE` changes the attributes of a HAWQ database.
+
+`SET` and `RESET` \<parameter\> change the session default for a configuration parameter in a HAWQ database. Whenever a new session is subsequently started in that database, the specified value becomes the session default value. The database-specific default overrides whatever setting is present in the server configuration file (`hawq-site.xml`). Only the database owner or a superuser can change the session defaults for a database. Certain parameters cannot be set this way, or can only be set by a superuser.
+
+## <a id="alterrole__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>The name of the database whose attributes are to be altered.
+
+**Note:** HAWQ reserves the database "hcatalog" for system use. You cannot connect to or alter the system "hcatalog" database.</dd>
+
+<dt> \<parameter\>   </dt>
+<dd>Set this database's session default for the specified configuration parameter to the given value. If value is `DEFAULT` or if `RESET` is used, the database-specific setting is removed, so the system-wide default setting will be inherited in new sessions. Use `RESET ALL` to clear all database-specific settings. See [About Server Configuration Parameters](../guc/guc_config.html#topic1) for information about user-settable configuration parameters.</dd>
+
+## <a id="notes"></a>Notes
+
+It is also possible to set a configuration parameter session default for a specific role (user) rather than to a database. Role-specific settings override database-specific ones if there is a conflict. See [ALTER ROLE](ALTER-ROLE.html).
+
+## <a id="examples"></a>Examples
+
+To set the default schema search path for the `mydatabase` database:
+
+```pre
+ALTER DATABASE mydatabase SET search_path TO myschema, public, pg_catalog;
+```
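+
+To remove that database-specific setting again, so that new sessions inherit the system-wide default (a sketch using the same parameter as above):
+
+```pre
+ALTER DATABASE mydatabase RESET search_path;
+```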
+
+## <a id="compat"></a>Compatibility
+
+The `ALTER DATABASE` statement is a HAWQ extension.
+
+## <a id="see"></a>See Also
+
+[CREATE DATABASE](CREATE-DATABASE.html#topic1), [DROP DATABASE](DROP-DATABASE.html#topic1), [SET](SET.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/ALTER-FUNCTION.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/ALTER-FUNCTION.html.md.erb b/markdown/reference/sql/ALTER-FUNCTION.html.md.erb
new file mode 100644
index 0000000..f21a808
--- /dev/null
+++ b/markdown/reference/sql/ALTER-FUNCTION.html.md.erb
@@ -0,0 +1,108 @@
+---
+title: ALTER FUNCTION
+---
+
+Changes the definition of a function.
+
+## <a id="alterfunction__section2"></a>Synopsis
+
+``` sql
+ALTER FUNCTION <name> ( [ [<argmode>] [<argname>] <argtype> [, ...] ] )
+   <action> [, ... ] [RESTRICT]
+
+ALTER FUNCTION <name> ( [ [<argmode>] [<argname>] <argtype> [, ...] ] )
+   RENAME TO <new_name>
+
+ALTER FUNCTION <name> ( [ [<argmode>] [<argname>] <argtype> [, ...] ] )
+   OWNER TO <new_owner>
+
+ALTER FUNCTION <name> ( [ [<argmode>] [<argname>] <argtype> [, ...] ] )
+   SET SCHEMA <new_schema>
+
+```
+
+where \<action\> is one of:
+
+```pre
+{ CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT }
+{ IMMUTABLE | STABLE | VOLATILE }
+{ [EXTERNAL] SECURITY INVOKER | [EXTERNAL] SECURITY DEFINER }
+```
+
+## <a id="desc"></a>Description
+
+`ALTER FUNCTION` changes the definition of a function.
+
+You must own the function to use `ALTER FUNCTION`. To change a function's schema, you must also have `CREATE` privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have `CREATE` privilege on the function's schema. (These restrictions enforce that altering the owner does not do anything you could not do by dropping and recreating the function. However, a superuser can alter ownership of any function anyway.)
+
+## <a id="alterfunction__section4"></a>Parameters
+
+<dt> \<name\>  </dt>
+<dd>The name (optionally schema-qualified) of an existing function.</dd>
+
+<dt>\<argmode\>  </dt>
+<dd>The mode of an argument: either `IN`, `OUT`, or `INOUT`. If omitted, the default is `IN`. Note that `ALTER FUNCTION` does not actually pay any attention to `OUT` arguments, since only the input arguments are needed to determine the function's identity. So it is sufficient to list the `IN` and `INOUT` arguments.</dd>
+
+<dt> \<argname\>  </dt>
+<dd>The name of an argument. Note that `ALTER FUNCTION` does not actually pay any attention to argument names, since only the argument data types are needed to determine the function's identity.</dd>
+
+<dt> \<argtype\>  </dt>
+<dd>The data type(s) of the function's arguments (optionally schema-qualified), if any.</dd>
+
+<dt> \<new\_name\>  </dt>
+<dd>The new name of the function.</dd>
+
+<dt> \<new\_owner\>  </dt>
+<dd>The new owner of the function. Note that if the function is marked `SECURITY DEFINER`, it will subsequently execute as the new owner.</dd>
+
+<dt> \<new\_schema\>  </dt>
+<dd>The new schema for the function.</dd>
+
+<dt>CALLED ON NULL INPUT  
+RETURNS NULL ON NULL INPUT  
+STRICT  </dt>
+<dd>`CALLED ON NULL INPUT` changes the function so that it will be invoked when some or all of its arguments are null. `RETURNS NULL ON NULL INPUT` or `STRICT` changes the function so that it is not invoked if any of its arguments are null; instead, a null result is assumed automatically. See `CREATE FUNCTION` for more information.</dd>
+
+<dt>IMMUTABLE  
+STABLE  
+VOLATILE  </dt>
+<dd>Change the volatility of the function to the specified setting. See `CREATE FUNCTION` for details.</dd>
+
+<dt>\[ EXTERNAL \] SECURITY INVOKER  
+\[ EXTERNAL \] SECURITY DEFINER  </dt>
+<dd>Change whether the function is a security definer or not. The key word `EXTERNAL` is ignored for SQL conformance. See `CREATE FUNCTION` for more information about this capability.</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Ignored for conformance with the SQL standard.</dd>
+
+## <a id="notes"></a>Notes
+
+HAWQ has limitations on the use of functions defined as `STABLE` or `VOLATILE`. See [CREATE FUNCTION](CREATE-FUNCTION.html) for more information.
+
+## <a id="alterfunction__section6"></a>Examples
+
+To rename the function `sqrt` for type `integer` to `square_root`:
+
+``` pre
+ALTER FUNCTION sqrt(integer) RENAME TO square_root;
+```
+
+To change the owner of the function `sqrt` for type `integer` to `joe`:
+
+``` pre
+ALTER FUNCTION sqrt(integer) OWNER TO joe;
+```
+
+To change the schema of the function `sqrt` for type `integer` to `math`:
+
+``` pre
+ALTER FUNCTION sqrt(integer) SET SCHEMA math;
+```
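+
+The \<action\> clauses follow the same pattern. For example (reusing the `sqrt(integer)` function from the examples above), to change the volatility of the function or to make it a security definer:
+
+``` pre
+ALTER FUNCTION sqrt(integer) IMMUTABLE;
+
+ALTER FUNCTION sqrt(integer) SECURITY DEFINER;
+```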
+
+## <a id="compat"></a>Compatibility
+
+This statement is partially compatible with the `ALTER FUNCTION` statement in the SQL standard. The standard allows more properties of a function to be modified, but does not provide the ability to rename a function, make a function a security definer, or change the owner, schema, or volatility of a function. The standard also requires the `RESTRICT` key word, which is optional in HAWQ.
+
+## <a id="see"></a>See Also
+
+[CREATE FUNCTION](CREATE-FUNCTION.html), [DROP FUNCTION](DROP-FUNCTION.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/ALTER-OPERATOR-CLASS.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/ALTER-OPERATOR-CLASS.html.md.erb b/markdown/reference/sql/ALTER-OPERATOR-CLASS.html.md.erb
new file mode 100644
index 0000000..1d2878e
--- /dev/null
+++ b/markdown/reference/sql/ALTER-OPERATOR-CLASS.html.md.erb
@@ -0,0 +1,43 @@
+---
+title: ALTER OPERATOR CLASS
+---
+
+Changes the definition of an operator class.
+
+## <a id="synop"></a>Synopsis
+
+``` sql
+ALTER OPERATOR CLASS <name> USING <index_method> RENAME TO <newname>
+
+ALTER OPERATOR CLASS <name> USING <index_method> OWNER TO <newowner>
+```
+
+## <a id="desc"></a>Description
+
+`ALTER OPERATOR CLASS` changes the definition of an operator class.
+
+You must own the operator class to use `ALTER OPERATOR CLASS`. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have `CREATE` privilege on the operator class's schema. (These restrictions enforce that altering the owner does not do anything you could not do by dropping and recreating the operator class. However, a superuser can alter ownership of any operator class anyway.)
+
+## <a id="alteroperatorclass__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing operator class.</dd>
+
+<dt> \<index\_method\>   </dt>
+<dd>The name of the index method this operator class is for.</dd>
+
+<dt> \<newname\>   </dt>
+<dd>The new name of the operator class.</dd>
+
+<dt> \<newowner\>   </dt>
+<dd>The new owner of the operator class.</dd>
+
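+
+## <a id="example"></a>Example
+
+A sketch using hypothetical names, assuming an existing operator class `int4_custom_ops` for the `btree` index method:
+
+```pre
+-- hypothetical operator class and role names
+ALTER OPERATOR CLASS int4_custom_ops USING btree RENAME TO int4_fast_ops;
+
+ALTER OPERATOR CLASS int4_fast_ops USING btree OWNER TO joe;
+```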
+## <a id="compat"></a>Compatibility
+
+There is no `ALTER OPERATOR CLASS` statement in the SQL standard.
+
+## <a id="see"></a>See Also
+
+[CREATE OPERATOR](CREATE-OPERATOR.html), [DROP OPERATOR CLASS](DROP-OPERATOR-CLASS.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/ALTER-OPERATOR.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/ALTER-OPERATOR.html.md.erb b/markdown/reference/sql/ALTER-OPERATOR.html.md.erb
new file mode 100644
index 0000000..a63d838
--- /dev/null
+++ b/markdown/reference/sql/ALTER-OPERATOR.html.md.erb
@@ -0,0 +1,50 @@
+---
+title: ALTER OPERATOR
+---
+
+Changes the definition of an operator.
+
+## <a id="synop"></a>Synopsis
+
+```pre
+ALTER OPERATOR <name> ( {<lefttype> | NONE} , {<righttype> | NONE} ) 
+   OWNER TO <newowner>        
+```
+
+## <a id="desc"></a>Description
+
+`ALTER OPERATOR` changes the definition of an operator. The only currently available functionality is to change the owner of the operator.
+
+You must own the operator to use `ALTER OPERATOR`. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have `CREATE` privilege on the operator's schema. (These restrictions enforce that altering the owner does not do anything you could not do by dropping and recreating the operator. However, a superuser can alter ownership of any operator anyway.)
+
+## <a id="alteroperator__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing operator.</dd>
+
+<dt> \<lefttype\>   </dt>
+<dd>The data type of the operator's left operand; write `NONE` if the operator has no left operand.</dd>
+
+<dt> \<righttype\>   </dt>
+<dd>The data type of the operator's right operand; write `NONE` if the operator has no right operand.</dd>
+
+<dt> \<newowner\>   </dt>
+<dd>The new owner of the operator.</dd>
+
+## <a id="example"></a>Example
+
+Change the owner of a custom operator `a @@ b` for type `text`:
+
+```pre
+ALTER OPERATOR @@ (text, text) OWNER TO joe;
+```
+
+## <a id="compat"></a>Compatibility
+
+There is no `ALTER OPERATOR` statement in the SQL standard.
+
+## <a id="see"></a>See Also
+
+[CREATE OPERATOR](CREATE-OPERATOR.html), [DROP OPERATOR](DROP-OPERATOR.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/ALTER-RESOURCE-QUEUE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/ALTER-RESOURCE-QUEUE.html.md.erb b/markdown/reference/sql/ALTER-RESOURCE-QUEUE.html.md.erb
new file mode 100644
index 0000000..ec051e8
--- /dev/null
+++ b/markdown/reference/sql/ALTER-RESOURCE-QUEUE.html.md.erb
@@ -0,0 +1,132 @@
+---
+title: ALTER RESOURCE QUEUE
+---
+
+Modify an existing resource queue.
+
+## <a id="topic1__section2"></a>Synopsis
+
+```pre
+ALTER RESOURCE QUEUE <name> WITH (<queue_attribute>=<value> [, ... ])
+```
+
+where \<queue\_attribute\> is:
+
+```pre
+   [MEMORY_LIMIT_CLUSTER=<percentage>]
+   [CORE_LIMIT_CLUSTER=<percentage>]
+   [ACTIVE_STATEMENTS=<integer>]
+   [ALLOCATION_POLICY='even']
+   [VSEG_RESOURCE_QUOTA='mem:<memory_units>']
+   [RESOURCE_OVERCOMMIT_FACTOR=<double>]
+   [NVSEG_UPPER_LIMIT=<integer>]
+   [NVSEG_LOWER_LIMIT=<integer>]
+   [NVSEG_UPPER_LIMIT_PERSEG=<double>]
+   [NVSEG_LOWER_LIMIT_PERSEG=<double>]
+```
+```pre
+   <memory_units> ::= {128mb|256mb|512mb|1024mb|2048mb|4096mb|
+                       8192mb|16384mb|1gb|2gb|4gb|8gb|16gb}
+   <percentage> ::= <integer>%
+```
+
+## <a id="topic1__section3"></a>Description
+
+Changes attributes for an existing resource queue in HAWQ. You cannot change the parent of an existing resource queue, and you cannot change a resource queue while it is active. Only a superuser can modify a resource queue.
+
+Resource queues with an `ACTIVE_STATEMENTS` threshold set a maximum limit on the number of parallel active query statements that can be executed by roles assigned to the leaf queue. It controls the number of active queries that are allowed to run at the same time. The value for `ACTIVE_STATEMENTS` should be an integer greater than 0. If not specified, the default value is 20.
+
+When modifying the resource queue, use MEMORY\_LIMIT\_CLUSTER and CORE\_LIMIT\_CLUSTER to tune the allowed resource usage of the resource queue. MEMORY\_LIMIT\_CLUSTER and CORE\_LIMIT\_CLUSTER must be equal for the same resource queue. In addition, the sum of the percentages of MEMORY\_LIMIT\_CLUSTER (and CORE\_LIMIT\_CLUSTER) for resource queues that share the same parent cannot exceed 100%.
+
+To modify the role associated with the resource queue, use the [ALTER ROLE](ALTER-ROLE.html) or [CREATE ROLE](CREATE-ROLE.html) command. You can only assign roles to the leaf-level resource queues (resource queues that do not have any children.)
+
+The default memory allotment can be overridden on a per-query basis by using the `hawq_rm_stmt_vseg_memory` and `hawq_rm_stmt_nvseg` configuration parameters. See [Configuring Resource Quotas for Query Statements](../../resourcemgmt/ConfigureResourceManagement.html#topic_g2p_zdq_15).
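+
+For example, a session-level sketch of overriding the queue defaults for a single statement (the values shown are illustrative only):
+
+```pre
+SET hawq_rm_stmt_vseg_memory = '256mb';
+SET hawq_rm_stmt_nvseg = 6;
+-- run the statement, then return to queue-based allocation:
+RESET hawq_rm_stmt_nvseg;
+RESET hawq_rm_stmt_vseg_memory;
+```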
+
+To see the status of a resource queue, see [Checking Existing Resource Queues](../../resourcemgmt/ResourceQueues.html#topic_lqy_gls_zt).
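+
+A quick way to inspect current queue attributes, assuming the `pg_resqueue` system catalog described in that topic, is to query it directly:
+
+```pre
+SELECT * FROM pg_resqueue;
+```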
+
+See also [Best Practices for Using Resource Queues](../../bestpractices/managing_resources_bestpractices.html#topic_hvd_pls_wv).
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>Required. The name of the resource queue you wish to modify.</dd>
+
+<!-- -->
+
+<dt>MEMORY\_LIMIT\_CLUSTER=\<percentage\> </dt>
+<dd>Required. Defines how much memory a resource queue can consume from its parent resource queue and consequently dispatch to the execution of parallel statements. The valid values are 1% to 100%. The value of MEMORY\_LIMIT\_CLUSTER must be identical to the value of CORE\_LIMIT\_CLUSTER. The sum of values for MEMORY\_LIMIT\_CLUSTER of this queue plus other queues that share the same parent cannot exceed 100%. The HAWQ resource manager periodically validates this restriction.
+
+**Note:** If you want to increase the percentage, you may need to decrease the percentage of any resource queue(s) that share the same parent resource queue first. The total cannot exceed 100%.</dd>
+
+<dt>CORE\_LIMIT\_CLUSTER=\<percentage\> </dt>
+<dd>Required. The percentage of consumable CPU (virtual core) resources that the resource queue can take from its parent resource queue. The valid values are 1% to 100%. The value of CORE\_LIMIT\_CLUSTER must be identical to the value of MEMORY\_LIMIT\_CLUSTER. The sum of values for CORE\_LIMIT\_CLUSTER of this queue and queues that share the same parent cannot exceed 100%.
+
+**Note:** If you want to increase the percentage, you may need to decrease the percentage of any resource queue(s) that share the same parent resource queue first. The total cannot exceed 100%.</dd>
+
+<dt>ACTIVE\_STATEMENTS=\<integer\> </dt>
+<dd>Optional. Defines the limit of the number of parallel active statements in one leaf queue. The maximum number of connections cannot exceed this limit. If this limit is reached, the HAWQ resource manager queues more query allocation requests. Note that a single session can have several concurrent statement executions that occupy multiple connection resources. The value for `ACTIVE_STATEMENTS` should be an integer greater than 0. The default value is 20.</dd>
+
+<dt>ALLOCATION\_POLICY=\<string\> </dt>
+<dd>Optional. Defines the resource allocation policy for parallel statement execution. The default value is `even`.
+
+**Note:** This release only supports an `even` allocation policy. Even if you do not specify this attribute, the resource queue still applies an `even` allocation policy. Future releases will support alternative allocation policies.
+
+Setting the allocation policy to `even` means resources are always evenly dispatched based on current concurrency. When multiple query resource allocation requests are queued, the resource queue tries to evenly dispatch resources to queued requests until one of the following conditions is encountered:
+
+-   There are no more allocated resources in this queue to dispatch, or
+-   The ACTIVE\_STATEMENTS limit has been reached
+
+For each query resource allocation request, the HAWQ resource manager determines the minimum and maximum size of a virtual segment based on multiple factors including query cost, user configuration, table properties, and so on. For example, a hash distributed table requires a fixed size of virtual segments. With an even allocation policy, the HAWQ resource manager uses the minimum virtual segment size requirement and evenly dispatches resources to each query resource allocation request in the resource queue.</dd>
+
+<dt>VSEG\_RESOURCE\_QUOTA='mem:{128mb | 256mb | 512mb | 1024mb | 2048mb | 4096mb | 8192mb | 16384mb | 1gb | 2gb | 4gb | 8gb | 16gb}'</dt>
+<dd>Optional. This quota defines how resources are split across multiple virtual segments. For example, when the HAWQ resource manager determines that 256GB memory and 128 vcores should be allocated to the current resource queue, there are multiple ways to divide the resources across virtual segments: for example, a) 2GB/1 vcore \* 128 virtual segments or b) 1GB/0.5 vcore \* 256 virtual segments. Therefore, you can use this attribute to make the HAWQ resource manager calculate the number of virtual segments based on how the memory is divided. For example, if `VSEG_RESOURCE_QUOTA='mem:512mb'`, then the resource queue will use 512MB/0.25 vcore \* 512 virtual segments. The default value is `'mem:256mb'`.
+
+**Note:** To avoid resource fragmentation, make sure that the segment resource capacity configured for HAWQ (in HAWQ standalone mode: `hawq_rm_memory_limit_perseg`; in YARN mode: `yarn.nodemanager.resource.memory-mb`) is a multiple of the resource quotas for all virtual segments, and that the CPU-to-memory ratio is a multiple of the amount configured for `yarn.scheduler.minimum-allocation-mb`.</dd>
+
+<dt>RESOURCE\_OVERCOMMIT\_FACTOR=\<double\> </dt>
+<dd>Optional. This factor defines how much a resource can be overcommitted. The default value is `2.0`. For example, if RESOURCE\_OVERCOMMIT\_FACTOR is set to 3.0 and MEMORY\_LIMIT\_CLUSTER is set to 30%, then the maximum possible resource allocation in this queue is 90% (30% x 3.0). If the resulting maximum is bigger than 100%, then 100% is adopted. The minimum value that this attribute can be set to is `1.0`.</dd>
+
+<dt>NVSEG\_UPPER\_LIMIT=\<integer\> / NVSEG\_UPPER\_LIMIT\_PERSEG=\<double\>  </dt>
+<dd>Optional. These limits restrict the range of number of virtual segments allocated in this resource queue for executing one query statement. NVSEG\_UPPER\_LIMIT defines an upper limit of virtual segments for one statement execution regardless of actual cluster size, while NVSEG\_UPPER\_LIMIT\_PERSEG defines the same limit by using the average number of virtual segments in one physical segment. Therefore, the limit defined by NVSEG\_UPPER\_LIMIT\_PERSEG varies dynamically according to the changing size of the HAWQ cluster.
+
+For example, if you set `NVSEG_UPPER_LIMIT=10`, all query resource requests are strictly allocated no more than 10 virtual segments. If you set NVSEG\_UPPER\_LIMIT\_PERSEG=2 and there are currently 5 available HAWQ segments in the cluster, query resource requests are allocated at most 10 virtual segments.
+
+NVSEG\_UPPER\_LIMIT cannot be set to a lower value than NVSEG\_LOWER\_LIMIT if both limits are enabled. In addition, the upper limit cannot be set to a value larger than the values set in the global configuration parameters `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit`.
+
+By default, both limits are set to **-1**, which means the limits are disabled. `NVSEG_UPPER_LIMIT` has higher priority than `NVSEG_UPPER_LIMIT_PERSEG`. If both limits are set, then `NVSEG_UPPER_LIMIT_PERSEG` is ignored. If you have enabled resource quotas for the query statement, then these limits are ignored.
+
+**Note:** If the actual lower limit of the number of virtual segments becomes greater than the upper limit, then the lower limit is automatically reduced to be equal to the upper limit. This situation is possible when a user sets both `NVSEG_UPPER_LIMIT` and `NVSEG_LOWER_LIMIT_PERSEG`. After expanding the cluster, the dynamic lower limit may become greater than the value set for the fixed upper limit.</dd>
+
+<dt>NVSEG\_LOWER\_LIMIT=\<integer\> / NVSEG\_LOWER\_LIMIT\_PERSEG=\<double\>   </dt>
+<dd>Optional. These limits specify the minimum number of virtual segments allocated for one statement execution in order to guarantee query performance. NVSEG\_LOWER\_LIMIT defines the lower limit of virtual segments for one statement execution regardless of the actual cluster size, while NVSEG\_LOWER\_LIMIT\_PERSEG defines the same limit by the average number of virtual segments in one segment. Therefore, the limit defined by NVSEG\_LOWER\_LIMIT\_PERSEG varies dynamically along with the size of the HAWQ cluster.
+
+NVSEG\_UPPER\_LIMIT\_PERSEG cannot be less than NVSEG\_LOWER\_LIMIT\_PERSEG if both limits are enabled.
+
+For example, if you set NVSEG\_LOWER\_LIMIT=10, and one statement execution potentially needs no fewer than 10 virtual segments, then this request has at least 10 virtual segments allocated. Similarly, if you set NVSEG\_LOWER\_LIMIT\_PERSEG=2 and there are currently 5 available HAWQ segments in the cluster, a statement that can use 10 or more virtual segments is allocated no fewer than 10 virtual segments. If a statement execution can use at most 4 virtual segments, the resource manager allocates at most 4 virtual segments instead of 10, because the lower limit is not applied to a request that cannot use that many virtual segments.
+
+By default, both limits are set to **-1**, which means the limits are disabled. `NVSEG_LOWER_LIMIT` has higher priority than `NVSEG_LOWER_LIMIT_PERSEG`. If both limits are set, then `NVSEG_LOWER_LIMIT_PERSEG` is ignored. If you have enabled resource quotas for the query statement, then these limits are ignored.
+
+**Note:** If the actual lower limit of the number of virtual segments becomes greater than the upper limit, then the lower limit is automatically reduced to be equal to the upper limit. This situation is possible when user sets both `NVSEG_UPPER_LIMIT `and `NVSEG_LOWER_LIMIT_PERSEG`. After expanding the cluster, the dynamic lower limit may become greater than the value set for the fixed upper limit. </dd>
+
+## <a id="topic1__section6"></a>Examples
+
+Change the memory and core limit of a resource queue:
+
+```pre
+ALTER RESOURCE QUEUE test_queue_1 WITH (MEMORY_LIMIT_CLUSTER=40%,
+CORE_LIMIT_CLUSTER=40%);
+```
+
+Change the active statements maximum for the resource queue:
+
+```pre
+ALTER RESOURCE QUEUE test_queue_1 WITH (ACTIVE_STATEMENTS=50);
+```
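+
+Change the virtual segment resource quota and cap the number of virtual segments per physical segment (a sketch; the values are illustrative only):
+
+```pre
+ALTER RESOURCE QUEUE test_queue_1 WITH (VSEG_RESOURCE_QUOTA='mem:512mb',
+NVSEG_UPPER_LIMIT_PERSEG=4);
+```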
+
+## <a id="topic1__section7"></a>Compatibility
+
+`ALTER RESOURCE QUEUE` is a HAWQ extension. There is no provision for resource queues or workload management in the SQL standard.
+
+## <a id="topic1__section8"></a>See Also
+
+[ALTER ROLE](ALTER-ROLE.html), [CREATE RESOURCE QUEUE](CREATE-RESOURCE-QUEUE.html), [CREATE ROLE](CREATE-ROLE.html), [DROP RESOURCE QUEUE](DROP-RESOURCE-QUEUE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/ALTER-ROLE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/ALTER-ROLE.html.md.erb b/markdown/reference/sql/ALTER-ROLE.html.md.erb
new file mode 100644
index 0000000..ccc2c28
--- /dev/null
+++ b/markdown/reference/sql/ALTER-ROLE.html.md.erb
@@ -0,0 +1,178 @@
+---
+title: ALTER ROLE
+---
+
+Changes a database role (user or group).
+
+## <a id="alterrole__section2"></a>Synopsis
+
+```pre
+ALTER ROLE <name> RENAME TO <newname>
+
+ALTER ROLE <name> RESOURCE QUEUE {<queue_name> | NONE}
+
+ALTER ROLE <name> [ [WITH] <option> [ ... ] ]
+```
+
+where \<option\> can be:
+
+```pre
+      SUPERUSER | NOSUPERUSER
+    | CREATEDB | NOCREATEDB
+    | CREATEROLE | NOCREATEROLE
+    | CREATEEXTTABLE | NOCREATEEXTTABLE
+    [ ( <attribute>='<value>'[, ...] ) ]
+           where attribute and value are:
+           type='readable'|'writable'
+           protocol='gpfdist'|'http'
+    
+    | INHERIT | NOINHERIT
+    | LOGIN | NOLOGIN
+    | CONNECTION LIMIT <connlimit>
+    | [ENCRYPTED | UNENCRYPTED] PASSWORD '<password>'
+    | VALID UNTIL '<timestamp>'
+    | [ DENY <deny_point> ]
+    | [ DENY BETWEEN <deny_point> AND <deny_point>]
+    | [ DROP DENY FOR <deny_point> ]
+```
+
+## <a id="desc"></a>Description
+
+`ALTER ROLE` changes the attributes of a HAWQ role. There are several variants of this command:
+
+-   **RENAME** - Changes the name of the role. Database superusers can rename any role. Roles having `CREATEROLE` privilege can rename non-superuser roles. The current session user cannot be renamed (connect as a different user to rename a role). Because MD5-encrypted passwords use the role name as cryptographic salt, renaming a role clears its password if the password is MD5-encrypted.
+-   **RESOURCE QUEUE** - Assigns the role to a workload management resource queue. The role would then be subject to the limits assigned to the resource queue when issuing queries. Specify `NONE` to assign the role to the default resource queue. A role can only belong to one resource queue. For a role without `LOGIN` privilege, resource queues have no effect. See [CREATE RESOURCE QUEUE](CREATE-RESOURCE-QUEUE.html#topic1) for more information.
+-   **WITH** \<option\> - Changes many of the role attributes that can be specified in [CREATE ROLE](CREATE-ROLE.html). Attributes not mentioned in the command retain their previous settings. Database superusers can change any of these settings for any role. Roles having `CREATEROLE` privilege can change any of these settings, but only for non-superuser roles. Ordinary roles can only change their own password.
+
+**Note:** `SET` and `RESET` commands are currently not supported in connection with `ALTER ROLE` and will result in an error. See [SET](SET.html) and [About Server Configuration Parameters](../guc/guc_config.html#topic1) for information about user-settable configuration parameters.
+
+## <a id="alterrole__section4"></a>Parameters
+
+<dt> \<name\>  </dt>
+<dd>The name of the role whose attributes are to be altered.</dd>
+
+<dt> \<newname\>  </dt>
+<dd>The new name of the role.</dd>
+
+<dt> \<queue\_name\>  </dt>
+<dd>The name of the resource queue to which the user-level role is to be assigned. Only roles with `LOGIN` privilege can be assigned to a resource queue. To unassign a role from a resource queue and put it in the default resource queue, specify `NONE`. A role can only belong to one resource queue.</dd>
+
+<dt>SUPERUSER | NOSUPERUSER  
+CREATEDB | NOCREATEDB  
+CREATEROLE | NOCREATEROLE  
+CREATEEXTTABLE | NOCREATEEXTTABLE \[(\<attribute\>='\<value\>')\]  </dt>
+<dd>If `CREATEEXTTABLE` is specified, the role being defined is allowed to create external tables. The default `type` is `readable` and the default `protocol` is `gpfdist` if not specified. `NOCREATEEXTTABLE` (the default) denies the role the ability to create external tables. Using the `file` protocol when creating external tables is not supported because HAWQ cannot guarantee scheduling executors on a specific host. Likewise, you cannot use the `EXECUTE` clause with `ON ALL` and `ON HOST` for the same reason. Use the `ON MASTER/<number>/SEGMENT <segment_id>` clause to specify which segment instances are to execute the command.</dd>
+
+<dt>INHERIT | NOINHERIT  
+LOGIN | NOLOGIN  
+CONNECTION LIMIT \<connlimit\>  
+PASSWORD '\<password\>'  
+ENCRYPTED | UNENCRYPTED  
+VALID UNTIL '\<timestamp\>'  </dt>
+<dd>These clauses alter role attributes originally set by [CREATE ROLE](CREATE-ROLE.html).</dd>
+
+<dt>DENY \<deny\_point\>  
+DENY BETWEEN \<deny\_point\> AND \<deny\_point\>   </dt>
+<dd>The `DENY` and `DENY BETWEEN` keywords set time-based constraints that are enforced at login. `DENY` sets a day or a day and time to deny access. `DENY BETWEEN` sets an interval during which access is denied. Both use the parameter \<deny\_point\>, which has the following format:
+
+```pre
+DAY <day> [ TIME '<time>' ]
+```
+
+The two parts of the \<deny_point\> parameter use the following formats:
+
+For \<day\>:
+
+``` pre
+{'Sunday' | 'Monday' | 'Tuesday' |'Wednesday' | 'Thursday' | 'Friday' |
+'Saturday' | 0-6 }
+```
+
+For \<time\>:
+
+``` pre
+{ 00-23 : 00-59 | 01-12 : 00-59 { AM | PM }}
+```
+
+The `DENY BETWEEN` clause uses two \<deny\_point\> parameters.
+
+```pre
+DENY BETWEEN <deny_point> AND <deny_point>
+
+```
+</dd>
+
+<dt>DROP DENY FOR \<deny\_point\>  </dt>
+<dd>The `DROP DENY FOR` clause removes a time-based constraint from the role. It uses the \<deny\_point\> parameter described above.</dd>
+
+## <a id="notes"></a>Notes
+
+Use `GRANT` and `REVOKE` for adding and removing role memberships.
+
+Caution must be exercised when specifying an unencrypted password with this command. The password will be transmitted to the server in clear text, and it might also be logged in the client's command history or the server log. The `psql` command-line client contains a meta-command `\password` that can be used to safely change a role's password.
+
+It is also possible to tie a session default to a specific database rather than to a role. Role-specific settings override database-specific ones if there is a conflict.
+
+## <a id="examples"></a>Examples
+
+Change the password for a role:
+
+```pre
+ALTER ROLE daria WITH PASSWORD 'passwd123';
+```
+
+Change a password expiration date:
+
+```pre
+ALTER ROLE scott VALID UNTIL 'May 4 12:00:00 2015 +1';
+```
+
+Make a password valid forever:
+
+```pre
+ALTER ROLE luke VALID UNTIL 'infinity';
+```
+
+Give a role the ability to create other roles and new databases:
+
+```pre
+ALTER ROLE joelle CREATEROLE CREATEDB;
+```
+
+Give a role a non-default setting of the `maintenance_work_mem` parameter:
+
+```pre
+ALTER ROLE admin SET maintenance_work_mem = 100000;
+```
+
+Assign a role to a resource queue:
+
+```pre
+ALTER ROLE sammy RESOURCE QUEUE poweruser;
+```
+
+Give a role permission to create writable external tables:
+
+```pre
+ALTER ROLE load CREATEEXTTABLE (type='writable');
+```
+
+Alter a role so it does not allow login access on Sundays:
+
+```pre
+ALTER ROLE user3 DENY DAY 'Sunday';
+```
+
+Alter a role to remove the constraint that does not allow login access on Sundays:
+
+```pre
+ALTER ROLE user3 DROP DENY FOR DAY 'Sunday';
+```
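+
+Deny login access over a weekend window (a sketch of the `DENY BETWEEN` form described above):
+
+```pre
+ALTER ROLE user3 DENY BETWEEN DAY 'Saturday' TIME '00:00' AND DAY 'Sunday' TIME '23:59';
+```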
+
+## <a id="compat"></a>Compatibility
+
+The `ALTER ROLE` statement is a HAWQ extension.
+
+## <a id="see"></a>See Also
+
+[CREATE ROLE](CREATE-ROLE.html), [DROP ROLE](DROP-ROLE.html), [SET](SET.html), [CREATE RESOURCE QUEUE](CREATE-RESOURCE-QUEUE.html), [GRANT](GRANT.html), [REVOKE](REVOKE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/ALTER-TABLE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/ALTER-TABLE.html.md.erb b/markdown/reference/sql/ALTER-TABLE.html.md.erb
new file mode 100644
index 0000000..4303f0c
--- /dev/null
+++ b/markdown/reference/sql/ALTER-TABLE.html.md.erb
@@ -0,0 +1,422 @@
+---
+title: ALTER TABLE
+---
+
+Changes the definition of a table.
+
+## <a id="altertable__section2"></a>Synopsis
+
+```pre
+ALTER TABLE [ONLY] <name> RENAME [COLUMN] <column> TO <new_column>
+
+ALTER TABLE <name> RENAME TO <new_name>
+
+ALTER TABLE <name> SET SCHEMA <new_schema>
+
+ALTER TABLE [ONLY] <name> SET 
+     DISTRIBUTED BY (<column>, [ ... ] ) 
+   | DISTRIBUTED RANDOMLY 
+   | WITH (REORGANIZE=true|false)
+ 
+ALTER TABLE [ONLY] <name>
+            <action> [, ... ]
+
+ALTER TABLE <name>
+   [ ALTER PARTITION { <partition_name> | FOR (RANK(<number>)) 
+   | FOR (<value>) } <partition_action> [...] ] 
+   <partition_action>        
+```
+
+where \<action\> is one of:
+
+```pre
+   ADD [COLUMN] <column_name> <type>
+      [ ENCODING ( <storage_directive> [,...] ) ]
+      [<column_constraint> [ ... ]]
+  DROP [COLUMN] <column> [RESTRICT | CASCADE]
+  ALTER [COLUMN] <column> TYPE <type> [USING <expression>]
+  ALTER [COLUMN] <column> SET DEFAULT <expression>
+  ALTER [COLUMN] <column> DROP DEFAULT
+  ALTER [COLUMN] <column> { SET | DROP } NOT NULL
+  ALTER [COLUMN] <column> SET STATISTICS <integer>
+  ADD <table_constraint>
+  DROP CONSTRAINT <constraint_name> [RESTRICT | CASCADE]
+  SET WITHOUT OIDS
+  INHERIT <parent_table>
+  NO INHERIT <parent_table>
+  OWNER TO <new_owner>
+         
+```
+
+where \<partition\_action\> is one of:
+
+```pre
+  ALTER DEFAULT PARTITION
+  DROP DEFAULT PARTITION [IF EXISTS]
+  DROP PARTITION [IF EXISTS] { <partition_name> | 
+    FOR (RANK(<number>)) | FOR (<value>) } [CASCADE]
+  TRUNCATE DEFAULT PARTITION
+  TRUNCATE PARTITION { <partition_name> | FOR (RANK(<number>)) | 
+    FOR (<value>) }
+  RENAME DEFAULT PARTITION TO <new_partition_name>
+  RENAME PARTITION { <partition_name> | FOR (RANK(<number>)) | 
+        FOR (<value>) } TO <new_partition_name>
+  ADD DEFAULT PARTITION <name> [ ( <subpartition_spec> ) ]
+  ADD PARTITION <name>
+            <partition_element>
+      [ ( <subpartition_spec> ) ]
+  EXCHANGE DEFAULT PARTITION WITH TABLE <table_name>
+        [ WITH | WITHOUT VALIDATION ]
+  EXCHANGE PARTITION { <partition_name> | FOR (RANK(<number>)) | 
+        FOR (<value>) } WITH TABLE <table_name>
+        [ WITH | WITHOUT VALIDATION ]
+  SET SUBPARTITION TEMPLATE (<subpartition_spec>)
+  SPLIT DEFAULT PARTITION
+     { AT (<list_value>)
+     | START([<datatype>] <range_value>) [INCLUSIVE | EXCLUSIVE] 
+        END([<datatype>] <range_value>) [INCLUSIVE | EXCLUSIVE] }
+    [ INTO ( PARTITION <new_partition_name>, 
+             PARTITION <default_partition_name> ) ]
+  SPLIT PARTITION { <partition_name> | FOR (RANK(<number>)) | 
+    FOR (<value>) } AT (<value>) 
+    [ INTO (PARTITION <partition_name>, PARTITION <partition_name>)]
+```
+
+where \<partition\_element\> is:
+
+```pre
+    VALUES (<list_value> [,...] )
+  | START ([<datatype>] '<start_value>') [INCLUSIVE | EXCLUSIVE]
+    [ END ([<datatype>] '<end_value>') [INCLUSIVE | EXCLUSIVE] ]
+  | END ([<datatype>] '<end_value>') [INCLUSIVE | EXCLUSIVE]
+[ WITH ( <partition_storage_parameter>=<value> [, ... ] ) ]
+[ TABLESPACE <tablespace> ]
+```
+
+where \<subpartition\_spec\> is:
+
+```pre
+            <subpartition_element> [, ...]
+```
+
+and \<subpartition\_element\> is:
+
+```pre
+  DEFAULT SUBPARTITION <subpartition_name>
+  | [SUBPARTITION <subpartition_name>] VALUES (<list_value> [,...] )
+  | [SUBPARTITION <subpartition_name>] 
+     START ([<datatype>] '<start_value>') [INCLUSIVE | EXCLUSIVE]
+     [ END ([<datatype>] '<end_value>') [INCLUSIVE | EXCLUSIVE] ]
+     [ EVERY ( [<number> | <datatype>] '<interval_value>') ]
+  | [SUBPARTITION <subpartition_name>] 
+     END ([<datatype>] '<end_value>') [INCLUSIVE | EXCLUSIVE]
+    [ EVERY ( [<number> | <datatype>] '<interval_value>') ]
+[ WITH ( <partition_storage_parameter>=<value> [, ... ] ) ]
+[ TABLESPACE <tablespace> ]
+```
+
+where \<storage\_parameter\> is:
+
+```pre
+   APPENDONLY={TRUE}
+   BLOCKSIZE={8192-2097152}
+   ORIENTATION={ROW | PARQUET}
+   COMPRESSTYPE={ZLIB|SNAPPY|GZIP|NONE}
+   COMPRESSLEVEL={0-9}
+   FILLFACTOR={10-100}
+   OIDS[=TRUE|FALSE]
+```
+
+where \<storage\_directive\> is:
+
+```pre
+   COMPRESSTYPE={ZLIB|SNAPPY|GZIP|NONE} 
+ | COMPRESSLEVEL={0-9} 
+ | BLOCKSIZE={8192-2097152}
+```
+
+where \<column\_reference\_storage\_directive\> is:
+
+```pre
+   COLUMN <column_name> ENCODING ( <storage_directive> [, ... ] ), ... 
+ | DEFAULT COLUMN ENCODING ( <storage_directive> [, ... ] )
+```
+
+**Note:**
+When using multi-level partition designs, the following operations are not supported with ALTER TABLE:
+
+-   ADD DEFAULT PARTITION
+-   ADD PARTITION
+-   DROP DEFAULT PARTITION
+-   DROP PARTITION
+-   SPLIT PARTITION
+-   All operations that involve modifying subpartitions.
+
+## <a id="limitations"></a>Limitations
+
+HAWQ does not support using `ALTER TABLE` to `ADD` or `DROP` a column in an existing Parquet table.
+
+## <a id="altertable__section4"></a>Parameters
+
+
+<dt>ONLY  </dt>
+<dd>Only perform the operation on the table name specified. If the `ONLY` keyword is not used, the operation will be performed on the named table and any child table partitions associated with that table.</dd>
+
+<dt>\<name\>  </dt>
+<dd>The name (possibly schema-qualified) of an existing table to alter. If `ONLY` is specified, only that table is altered. If `ONLY` is not specified, the table and all its descendant tables (if any) are updated.
+
+*Note:* Constraints can only be added to an entire table, not to a partition. Because of that restriction, the \<name\> parameter can only contain a table name, not a partition name.</dd>
+
+<dt> \<column\>   </dt>
+<dd>Name of a new or existing column. Note that HAWQ distribution key columns must be treated with special care. Altering or dropping these columns can change the distribution policy for the table.</dd>
+
+<dt> \<new\_column\>   </dt>
+<dd>New name for an existing column.</dd>
+
+<dt> \<new\_name\>   </dt>
+<dd>New name for the table.</dd>
+
+<dt> \<type\>   </dt>
+<dd>Data type of the new column, or new data type for an existing column. If changing the data type of a HAWQ distribution key column, you are only allowed to change it to a compatible type (for example, `text` to `varchar` is OK, but `text` to `int` is not).</dd>
+
+<dt> \<table\_constraint\>   </dt>
+<dd>New table constraint for the table. Note that foreign key constraints are currently not supported in HAWQ. Also a table is only allowed one unique constraint and the uniqueness must be within the HAWQ distribution key.</dd>
+
+<dt> \<constraint\_name\>   </dt>
+<dd>Name of an existing constraint to drop.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the dropped column or constraint (for example, views referencing the column).</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the column or constraint if there are any dependent objects. This is the default behavior.</dd>
+
+<dt>ALL  </dt>
+<dd>Disable or enable all triggers belonging to the table including constraint related triggers. This requires superuser privilege.</dd>
+
+<dt>USER  </dt>
+<dd>Disable or enable all user-created triggers belonging to the table.</dd>
+
+<dt>DISTRIBUTED RANDOMLY | DISTRIBUTED BY (\<column\>)  </dt>
+<dd>Specifies the distribution policy for a table. The default is RANDOM distribution. Changing a distribution policy will cause the table data to be physically redistributed on disk, which can be resource intensive. If you declare the same distribution policy or change from random to hash distribution, data will not be redistributed unless you declare `SET WITH (REORGANIZE=true)`.</dd>
+
+<dt>REORGANIZE=true|false  </dt>
+<dd>Use `REORGANIZE=true` when the distribution policy has not changed or when you have changed from a random to a hash distribution, and you want to redistribute the data anyway.</dd>
+
+<dt> \<parent\_table\>   </dt>
+<dd>A parent table to associate or de-associate with this table.</dd>
+
+<dt> \<new\_owner\>   </dt>
+<dd>The role name of the new owner of the table.</dd>
+
+<dt> \<new\_tablespace\>   </dt>
+<dd>The name of the tablespace to which the table will be moved.</dd>
+
+<dt> \<new\_schema\>   </dt>
+<dd>The name of the schema to which the table will be moved.</dd>
+
+<dt> \<parent\_table\_name\>   </dt>
+<dd>When altering a partitioned table, the name of the top-level parent table.</dd>
+
+<dt>ALTER \[DEFAULT\] PARTITION  </dt>
+<dd>If altering a partition deeper than the first level of partitions, the `ALTER PARTITION` clause is used to specify which subpartition in the hierarchy you want to alter.</dd>
+
+<dt>DROP \[DEFAULT\] PARTITION  </dt>
+<dd>**Note:** Cannot be used with multi-level partitions.
+
+Drops the specified partition. If the partition has subpartitions, the subpartitions are automatically dropped as well.</dd>
+
+<dt>TRUNCATE \[DEFAULT\] PARTITION  </dt>
+<dd>Truncates the specified partition. If the partition has subpartitions, the subpartitions are automatically truncated as well.</dd>
+
+<dt>RENAME \[DEFAULT\] PARTITION  </dt>
+<dd>Changes the partition name of a partition (not the relation name). Partitioned tables are created using the naming convention: \<*parentname*\>\_\<*level*\>\_prt\_\<*partition\_name*\>.</dd>
+
+<dt>ADD DEFAULT PARTITION  </dt>
+<dd>**Note:** Cannot be used with multi-level partitions.
+
+Adds a default partition to an existing partition design. When data does not match to an existing partition, it is inserted into the default partition. Partition designs that do not have a default partition will reject incoming rows that do not match to an existing partition. Default partitions must be given a name.</dd>
+
+<dt>ADD PARTITION  </dt>
+<dd>**Note:** Cannot be used with multi-level partitions.
+
+\<partition\_element\> - Using the existing partition type of the table (range or list), defines the boundaries of new partition you are adding.
+
+\<name\> - A name for this new partition.
+
+**VALUES** - For list partitions, defines the value(s) that the partition will contain.
+
+**START** - For range partitions, defines the starting range value for the partition. By default, start values are `INCLUSIVE`. For example, if you declared a start date of `'2008-01-01'`, then the partition would contain all dates greater than or equal to `'2008-01-01'`. Typically the data type of the `START` expression is the same type as the partition key column. If that is not the case, then you must explicitly cast to the intended data type.
+
+**END** - For range partitions, defines the ending range value for the partition. By default, end values are `EXCLUSIVE`. For example, if you declared an end date of `'2008-02-01'`, then the partition would contain all dates less than but not equal to `'2008-02-01'`. Typically the data type of the `END` expression is the same type as the partition key column. If that is not the case, then you must explicitly cast to the intended data type.
+
+**WITH** - Sets the table storage options for a partition. For example, you may want older partitions to be append-only tables and newer partitions to be regular heap tables. See `CREATE TABLE` for a description of the storage options.
+
+**TABLESPACE** - The name of the tablespace in which the partition is to be created.
+
+\<subpartition\_spec\> - Only allowed on partition designs that were created without a subpartition template. Declares a subpartition specification for the new partition you are adding. If the partitioned table was originally defined using a subpartition template, then the template will be used to generate the subpartitions automatically.</dd>
+
+<dt>EXCHANGE \[DEFAULT\] PARTITION  </dt>
+<dd>Exchanges another table into the partition hierarchy in place of an existing partition. In a multi-level partition design, you can only exchange the lowest level partitions (those that contain data).
+
+**WITH TABLE** \<table\_name\> - The name of the table you are swapping in to the partition design.
+
+**WITH** | **WITHOUT VALIDATION** - Validates that the data in the table matches the `CHECK` constraint of the partition you are exchanging. The default is to validate the data against the `CHECK` constraint.</dd>
+
+<dt>SET SUBPARTITION TEMPLATE  </dt>
+<dd>Modifies the subpartition template for an existing partition. After a new subpartition template is set, all new partitions added will have the new subpartition design (existing partitions are not modified).</dd>
+
+<dt>SPLIT DEFAULT PARTITION  </dt>
+<dd>**Note:** Cannot be used with multi-level partitions.
+
+Splits a default partition. In a multi-level partition design, you can only split the lowest level default partitions (those that contain data). Splitting a default partition creates a new partition containing the values specified and leaves the default partition containing any values that do not match to an existing partition.
+
+**AT** - For list partitioned tables, specifies a single list value that should be used as the criteria for the split.
+
+**START** - For range partitioned tables, specifies a starting value for the new partition.
+
+**END** - For range partitioned tables, specifies an ending value for the new partition.
+
+**INTO** - Allows you to specify a name for the new partition. When using the `INTO` clause to split a default partition, the second partition name specified should always be that of the existing default partition. If you do not know the name of the default partition, you can look it up using the `pg_partitions` view.</dd>
+
+<dt>SPLIT PARTITION  </dt>
+<dd>**Note:** Cannot be used with multi-level partitions.
+
+Splits an existing partition into two partitions. In a multi-level partition design, you can only split the lowest level partitions (those that contain data).
+
+**AT** - Specifies a single value that should be used as the criteria for the split. The partition will be divided into two new partitions with the split value specified being the starting range for the *latter* partition.
+
+**INTO** - Allows you to specify names for the two new partitions created by the split.</dd>
+
+<dt> \<partition\_name\>   </dt>
+<dd>The given name of a partition.</dd>
+
+<dt>FOR (RANK(\<number\>))  </dt>
+<dd>For range partitions, the rank of the partition in the range.</dd>
+
+<dt>FOR ('\<value\>')  </dt>
+<dd>Specifies a partition by declaring a value that falls within the partition boundary specification. If the value declared with `FOR` matches to both a partition and one of its subpartitions (for example, if the value is a date and the table is partitioned by month and then by day), then `FOR` will operate on the first level where a match is found (for example, the monthly partition). If your intent is to operate on a subpartition, you must declare so as follows:
+
+``` pre
+ALTER TABLE name ALTER PARTITION FOR ('2008-10-01') DROP PARTITION FOR ('2008-10-01');
+```
+</dd>
+
+## <a id="notes"></a>Notes
+
+Take special care when altering or dropping columns that are part of the HAWQ distribution key as this can change the distribution policy for the table. HAWQ does not currently support foreign key constraints.
+
+**Note:** The table name specified in the `ALTER TABLE` command cannot be the name of a partition within a table.
+
+Adding a `CHECK` or `NOT NULL` constraint requires scanning the table to verify that existing rows meet the constraint.
+
+When a column is added with `ADD COLUMN`, all existing rows in the table are initialized with the column's default value (`NULL` if no `DEFAULT` clause is specified). Adding a column with a non-null default or changing the type of an existing column will require the entire table to be rewritten. This may take a significant amount of time for a large table, and it will temporarily require double the disk space.
+
+You can specify multiple changes in a single `ALTER TABLE` command, which will be done in a single pass over the table.
+
+The `DROP COLUMN` form does not physically remove the column, but simply makes it invisible to SQL operations. Subsequent insert operations in the table will store a null value for the column. Thus, dropping a column is quick but it will not immediately reduce the on-disk size of your table, as the space occupied by the dropped column is not reclaimed.
+
+The fact that `ALTER TYPE` requires rewriting the whole table is sometimes an advantage, because the rewriting process eliminates any dead space in the table. For example, to reclaim the space occupied by a dropped column immediately, the fastest way is: `ALTER TABLE <table> ALTER COLUMN <anycol> TYPE <sametype>;` where \<anycol\> is any remaining table column and \<sametype\> is the same type that column already has. This results in no semantically-visible change in the table, but the command forces rewriting, which gets rid of no-longer-useful data.
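+
+For example, using the `distributors` table from the examples below (where `city` is declared as `varchar(30)`), re-declaring the column with its existing type forces the rewrite:
+
+``` pre
+ALTER TABLE distributors ALTER COLUMN city TYPE varchar(30);
+```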
+
+If a table is partitioned or has any descendant tables, it is not permitted to add, rename, or change the type of a column in the parent table without doing the same to the descendants. This ensures that the descendants always have columns matching the parent.
+
+A recursive `DROP COLUMN` operation will remove a descendant table's column only if the descendant does not inherit that column from any other parents and never had an independent definition of the column. A nonrecursive `DROP COLUMN` (`ALTER TABLE ONLY ... DROP COLUMN`) never removes any descendant columns, but instead marks them as independently defined rather than inherited.
+
+The `OWNER` action never recurses to descendant tables; that is, it always acts as though `ONLY` were specified. Adding a constraint can recurse only for `CHECK` constraints.
+
+Changing any part of a system catalog table is not permitted.
+
+## <a id="examples"></a>Examples
+
+Add a column to a table:
+
+``` pre
+ALTER TABLE distributors ADD COLUMN address varchar(30);
+```
+
+Rename an existing column:
+
+``` pre
+ALTER TABLE distributors RENAME COLUMN address TO city;
+```
+
+Rename an existing table:
+
+``` pre
+ALTER TABLE distributors RENAME TO suppliers;
+```
+
+Add a not-null constraint to a column:
+
+``` pre
+ALTER TABLE distributors ALTER COLUMN street SET NOT NULL;
+```
+
+Add a check constraint to a table:
+
+``` pre
+ALTER TABLE distributors ADD CONSTRAINT zipchk CHECK (char_length(zipcode) = 5);
+```
+
+Move a table to a different schema:
+
+``` pre
+ALTER TABLE myschema.distributors SET SCHEMA yourschema;
+```
+
+Add a new partition to a partitioned table:
+
+``` pre
+ALTER TABLE sales ADD PARTITION
+        START (date '2009-02-01') INCLUSIVE 
+        END (date '2009-03-01') EXCLUSIVE; 
+```
+
+Add a default partition to an existing partition design:
+
+``` pre
+ALTER TABLE sales ADD DEFAULT PARTITION other;
+```
+
+Rename a partition:
+
+``` pre
+ALTER TABLE sales RENAME PARTITION FOR ('2008-01-01') TO jan08;
+```
+
+Drop the first (oldest) partition in a range sequence:
+
+``` pre
+ALTER TABLE sales DROP PARTITION FOR (RANK(1));
+```
+
+Exchange a table into your partition design:
+
+``` pre
+ALTER TABLE sales EXCHANGE PARTITION FOR ('2008-01-01') WITH TABLE jan08;
+```
+
+Split the default partition (where the existing default partition's name is `other`) to add a new monthly partition for January 2009:
+
+``` pre
+ALTER TABLE sales SPLIT DEFAULT PARTITION
+    START ('2009-01-01') INCLUSIVE
+    END ('2009-02-01') EXCLUSIVE
+    INTO (PARTITION jan09, PARTITION other);
+```
+
+Split a monthly partition into two with the first partition containing dates January 1-15 and the second partition containing dates January 16-31:
+
+``` pre
+ALTER TABLE sales SPLIT PARTITION FOR ('2008-01-01')
+    AT ('2008-01-16')
+    INTO (PARTITION jan081to15, PARTITION jan0816to31);
+```
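+
+Change the distribution policy of a table, or force a redistribution without changing the policy (a sketch; `dist_id` is a hypothetical column of the example table):
+
+``` pre
+-- dist_id is a hypothetical column of the distributors example table
+ALTER TABLE distributors SET DISTRIBUTED BY (dist_id);
+
+ALTER TABLE distributors SET WITH (REORGANIZE=true);
+```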
+
+## <a id="compat"></a>Compatibility
+
+The `ADD`, `DROP`, and `SET DEFAULT` forms conform with the SQL standard. The other forms are HAWQ extensions of the SQL standard. Also, the ability to specify more than one manipulation in a single `ALTER TABLE` command is an extension. `ALTER TABLE DROP COLUMN` can be used to drop the only column of a table, leaving a zero-column table. This is an extension of SQL, which disallows zero-column tables.
+
+## <a id="altertable__section8"></a>See Also
+
+[CREATE TABLE](CREATE-TABLE.html), [DROP TABLE](DROP-TABLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/ALTER-TABLESPACE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/ALTER-TABLESPACE.html.md.erb b/markdown/reference/sql/ALTER-TABLESPACE.html.md.erb
new file mode 100644
index 0000000..e539177
--- /dev/null
+++ b/markdown/reference/sql/ALTER-TABLESPACE.html.md.erb
@@ -0,0 +1,55 @@
+---
+title: ALTER TABLESPACE
+---
+
+Changes the definition of a tablespace.
+
+## <a id="synopsis"></a>Synopsis
+
+``` pre
+ALTER TABLESPACE <name> RENAME TO <newname>
+
+ALTER TABLESPACE <name> OWNER TO <newowner>
+         
+```
+
+## <a id="desc"></a>Description
+
+`ALTER TABLESPACE` changes the definition of a tablespace.
+
+You must own the tablespace to use `ALTER TABLESPACE`. To alter the owner, you must also be a direct or indirect member of the new owning role. (Note that superusers have these privileges automatically.)
+
+## <a id="altertablespace__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>The name of an existing tablespace.</dd>
+
+<dt> \<newname\>   </dt>
+<dd>The new name of the tablespace. The new name cannot begin with *pg\_* (reserved for system tablespaces).</dd>
+
+<dt> \<newowner\>   </dt>
+<dd>The new owner of the tablespace.</dd>
+
+## <a id="altertablespace__section5"></a>Examples
+
+Rename tablespace `index_space` to `fast_raid`:
+
+``` pre
+ALTER TABLESPACE index_space RENAME TO fast_raid;
+```
+
+Change the owner of tablespace `index_space`:
+
+``` pre
+ALTER TABLESPACE index_space OWNER TO mary;
+```
+
+## <a id="altertablespace__section6"></a>Compatibility
+
+There is no `ALTER TABLESPACE` statement in the SQL standard.
+
+## <a id="see"></a>�See Also
+
+[CREATE TABLESPACE](CREATE-TABLESPACE.html), [DROP TABLESPACE](DROP-TABLESPACE.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/ALTER-TYPE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/ALTER-TYPE.html.md.erb b/markdown/reference/sql/ALTER-TYPE.html.md.erb
new file mode 100644
index 0000000..da50e80
--- /dev/null
+++ b/markdown/reference/sql/ALTER-TYPE.html.md.erb
@@ -0,0 +1,54 @@
+---
+title: ALTER TYPE
+---
+
+Changes the definition of a data type.
+
+## <a id="synopsis"></a>Synopsis
+
+``` pre
+ALTER TYPE <name>
+   OWNER TO <new_owner> | SET SCHEMA <new_schema>
+         
+```
+
+## <a id="desc"></a>Description
+
+`ALTER TYPE` changes the definition of an existing type. You can change the owner and the schema of a type.
+
+You must own the type to use `ALTER TYPE`. To change the schema of a type, you must also have `CREATE` privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have `CREATE` privilege on the type's schema. (These restrictions enforce that altering the owner does not do anything that could be done by dropping and recreating the type. However, a superuser can alter ownership of any type.)
+
+## <a id="altertype__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing type to alter.</dd>
+
+<dt> \<new\_owner\>   </dt>
+<dd>The user name of the new owner of the type.</dd>
+
+<dt> \<new\_schema\>   </dt>
+<dd>The new schema for the type.</dd>
+
+## <a id="altertype__section5"></a>Examples
+
+To change the owner of the user-defined type `email` to `joe`:
+
+``` pre
+ALTER TYPE email OWNER TO joe;
+```
+
+To change the schema of the user-defined type `email` to `customers`:
+
+``` pre
+ALTER TYPE email SET SCHEMA customers;
+```
+
+## <a id="altertype__section6"></a>Compatibility
+
+There is no `ALTER TYPE` statement in the SQL standard.
+
+## <a id="see"></a>See Also
+
+[CREATE TYPE](CREATE-TYPE.html), [DROP TYPE](DROP-TYPE.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/ALTER-USER.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/ALTER-USER.html.md.erb b/markdown/reference/sql/ALTER-USER.html.md.erb
new file mode 100644
index 0000000..f53e788
--- /dev/null
+++ b/markdown/reference/sql/ALTER-USER.html.md.erb
@@ -0,0 +1,44 @@
+---
+title: ALTER USER
+---
+
+Changes the definition of a database role (user).
+
+## <a id="alteruser__section2"></a>Synopsis
+
+``` pre
+ALTER USER <name> RENAME TO <newname>
+
+ALTER USER <name> SET <config_parameter> {TO | =} {<value> | DEFAULT}
+
+ALTER USER <name> RESET <config_parameter>
+
+ALTER USER <name> [ [WITH] <option> [ ... ] ]
+```
+
+where \<option\> can be:
+
+``` pre
+      SUPERUSER | NOSUPERUSER
+    | CREATEDB | NOCREATEDB
+    | CREATEROLE | NOCREATEROLE
+    | CREATEUSER | NOCREATEUSER
+    | INHERIT | NOINHERIT
+    | LOGIN | NOLOGIN
+    | [ ENCRYPTED | UNENCRYPTED ] PASSWORD '<password>'
+    | VALID UNTIL '<timestamp>'
+```
+
+## <a id="alteruser__section3"></a>Description
+
+`ALTER USER` is a deprecated command but is still accepted for historical reasons. It is an alias for `ALTER ROLE`. See `ALTER ROLE` for more information.
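+
+For example, the following two statements are equivalent forms of the same operation (the role names are hypothetical):
+
+``` pre
+ALTER USER jsmith RENAME TO jdoe;
+
+ALTER ROLE jsmith RENAME TO jdoe;
+```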
+
+## <a id="alteruser__section4"></a>Compatibility
+
+The `ALTER USER` statement is a HAWQ extension. The SQL standard leaves the definition of users to the implementation.
+
+## <a id="see"></a>See Also
+
+[ALTER ROLE](ALTER-ROLE.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/ANALYZE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/ANALYZE.html.md.erb b/markdown/reference/sql/ANALYZE.html.md.erb
new file mode 100644
index 0000000..983696a
--- /dev/null
+++ b/markdown/reference/sql/ANALYZE.html.md.erb
@@ -0,0 +1,75 @@
+---
+title: ANALYZE
+---
+
+Collects statistics about a database.
+
+## <a id="synopsis"></a>Synopsis
+
+``` pre
+ANALYZE [VERBOSE] [ROOTPARTITION [ALL]] [<table> [ (<column> [, ...] ) ]]
+```
+
+## <a id="desc"></a>Description
+
+`ANALYZE` collects statistics about the contents of tables in the database, and stores the results in the system table `pg_statistic`. Subsequently, the query planner uses these statistics to help determine the most efficient execution plans for queries.
+
+With no parameter, `ANALYZE` examines every table in the current database. With a parameter, `ANALYZE` examines only that table. It is further possible to give a list of column names, in which case only the statistics for those columns are collected.
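+
+For example (the table and column names are hypothetical):
+
+``` pre
+-- analyze every table in the current database
+ANALYZE;
+
+-- analyze a single table
+ANALYZE sales;
+
+-- analyze only selected columns of that table
+ANALYZE sales (region, amount);
+```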
+
+## <a id="params"></a>Parameters
+
+<dt>VERBOSE  </dt>
+<dd>Enables display of progress messages. When specified, `ANALYZE` emits progress messages to indicate which table is currently being processed. Various statistics about the tables are printed as well.</dd>
+
+<dt>ROOTPARTITION  </dt>
+<dd>For partitioned tables, `ANALYZE` on the parent (the root in multi-level partitioning) table without this option will collect statistics on each individual leaf partition as well as the global partition table, both of which are needed for query planning. In scenarios when all the individual child partitions have up-to-date statistics (for example, after loading and analyzing a daily partition), the `ROOTPARTITION` option can be used to collect only the global stats on the partition table. This could save the time of re-analyzing each individual leaf partition.
+
+If you use `ROOTPARTITION` on a non-root or non-partitioned table, `ANALYZE` skips the option and issues a warning.
+
+**Note:** Use `ROOTPARTITION ALL` to analyze all root partition tables in the database.</dd>
+
+<dt> \<table\>   </dt>
+<dd>The name (possibly schema-qualified) of a specific table to analyze. Defaults to all tables in the current database.</dd>
+
+<dt> \<column\>   </dt>
+<dd>The name of a specific column to analyze. Defaults to all columns.</dd>
+
+## <a id="notes"></a>Notes
+
+It is a good idea to run `ANALYZE` periodically, or just after making major changes in the contents of a table. Accurate statistics will help the query planner to choose the most appropriate query plan, and thereby improve the speed of query processing. A common strategy is to run `VACUUM` and `ANALYZE` once a day during a low-usage time of day.
+
+`ANALYZE` requires only a read lock on the target table, so it can run in parallel with other activity on the table.
+
+`ANALYZE` skips tables if the user is not the table owner or database owner.
+
+The statistics collected by `ANALYZE` usually include a list of some of the most common values in each column and a histogram showing the approximate data distribution in each column. One or both of these may be omitted if `ANALYZE` deems them uninteresting (for example, in a unique-key column, there are no common values) or if the column data type does not support the appropriate operators.
+
+For large tables, `ANALYZE` takes a random sample of the table contents, rather than examining every row. This allows even very large tables to be analyzed in a small amount of time. Note, however, that the statistics are only approximate, and will change slightly each time `ANALYZE` is run, even if the actual table contents did not change. This may result in small changes in the planner's estimated costs shown by `EXPLAIN`. In rare situations, this non-determinism will cause the query optimizer to choose a different query plan between runs of `ANALYZE`. To avoid this, raise the amount of statistics collected by `ANALYZE` by adjusting the `default_statistics_target` configuration parameter, or on a column-by-column basis by setting the per-column statistics target with `ALTER TABLE ... ALTER COLUMN ... SET STATISTICS` (see `ALTER TABLE`). The target value sets the maximum number of entries in the most-common-value list and the maximum number of bins in the histogram. The default target value is 10, but this can be adjusted up or down to trade off accuracy of planner estimates against the time taken for `ANALYZE` and the amount of space occupied in `pg_statistic`. In particular, setting the statistics target to zero disables collection of statistics for that column. It may be useful to do that for columns that are never used as part of the `WHERE`, `GROUP BY`, or `ORDER BY` clauses of queries, since the planner will have no use for statistics on such columns.
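+
+As a brief sketch of both approaches, the table and column names and the target value below are hypothetical:
+
+``` pre
+-- Raise the per-column statistics target for one column.
+ALTER TABLE sales ALTER COLUMN region SET STATISTICS 100;
+
+-- Raise the default target for the current session.
+SET default_statistics_target = 100;
+```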
+
+The largest statistics target among the columns being analyzed determines the number of table rows sampled to prepare the statistics. Increasing the target causes a proportional increase in the time and space needed to do `ANALYZE`.
+
+The `pxf_enable_stat_collection` server configuration parameter determines if `ANALYZE` calculates statistics for PXF readable tables. When `pxf_enable_stat_collection` is true, the default setting, `ANALYZE` estimates the number of tuples in the table from the total size of the table, the size of the first fragment, and the number of tuples in the first fragment. Then it builds a sample table and calculates statistics for the PXF table by running statistics queries on the sample table, the same as it does with native tables. A sample table is always created to calculate PXF table statistics, even when the table has a small number of rows.
+
+The `pxf_stat_max_fragments` configuration parameter, default 100, sets the maximum number of fragments that are sampled to build the sample table. Setting `pxf_stat_max_fragments` to a higher value provides a more uniform sample, but decreases `ANALYZE` performance. Setting it to a lower value increases performance, but the statistics are calculated on a less uniform sample.
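+
+A sketch of reducing the number of sampled fragments before analyzing a hypothetical PXF table, assuming the parameter can be set at the session level (otherwise, set it in the server configuration); the value is illustrative only:
+
+``` pre
+SET pxf_stat_max_fragments = 50;
+ANALYZE my_pxf_table;
+```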
+
+When `pxf_enable_stat_collection` is false, `ANALYZE` outputs a message to warn that it is skipping the PXF table because `pxf_enable_stat_collection` is turned off.
+
+In some situations, the remote statistics retrieval for a PXF table can fail. For example, if a PXF Java component is down, the remote statistics retrieval does not occur and the database transaction does not succeed. In these cases, the statistics remain at the default external table values.
+
+## <a id="examples"></a>Examples
+
+Collect statistics for the table `mytable`:
+
+``` pre
+ANALYZE mytable;
+```
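+
+Two further sketches, assuming a hypothetical partitioned table named `sales` with columns `region` and `amount`. Collect statistics for only those two columns:
+
+``` pre
+ANALYZE sales (region, amount);
+```
+
+Collect only the global, root-level statistics for the partitioned table:
+
+``` pre
+ANALYZE ROOTPARTITION sales;
+```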
+
+## <a id="compat"></a>Compatibility
+
+There is no ANALYZE statement in the SQL standard.
+
+## <a id="see"></a>See Also
+
+[ALTER TABLE](ALTER-TABLE.html), [EXPLAIN](EXPLAIN.html), [VACUUM](VACUUM.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/BEGIN.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/BEGIN.html.md.erb b/markdown/reference/sql/BEGIN.html.md.erb
new file mode 100644
index 0000000..265e66e
--- /dev/null
+++ b/markdown/reference/sql/BEGIN.html.md.erb
@@ -0,0 +1,58 @@
+---
+title: BEGIN
+---
+
+Starts a transaction block.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+BEGIN [WORK | TRANSACTION] [SERIALIZABLE | REPEATABLE READ | READ COMMITTED | READ UNCOMMITTED]
+      [READ WRITE | READ ONLY]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`BEGIN` initiates a transaction block, that is, all statements after a `BEGIN` command will be executed in a single transaction until an explicit `COMMIT` or `ROLLBACK` is given. By default (without `BEGIN`), HAWQ executes transactions in autocommit mode, that is, each statement is executed in its own transaction and a commit is implicitly performed at the end of the statement (if execution was successful, otherwise a rollback is done).
+
+Statements are executed more quickly in a transaction block, because transaction start/commit requires significant CPU and disk activity. Execution of multiple statements inside a transaction is also useful to ensure consistency when making several related changes: other sessions will be unable to see the intermediate states wherein not all the related updates have been done.
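+
+As a brief sketch, the following transaction block inserts two rows into a hypothetical `sales` table; neither row is visible to other sessions until the `COMMIT`:
+
+``` pre
+BEGIN;
+INSERT INTO sales VALUES (1, '2016-01-02', 100.00);
+INSERT INTO sales VALUES (2, '2016-01-03', 250.00);
+COMMIT;
+```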
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>WORK  
+TRANSACTION  </dt>
+<dd>Optional key words. They have no effect.</dd>
+
+<dt>SERIALIZABLE  
+REPEATABLE READ  
+READ COMMITTED  
+READ UNCOMMITTED  </dt>
+<dd>The SQL standard defines four transaction isolation levels: `READ COMMITTED`, `READ UNCOMMITTED`, `SERIALIZABLE`, and `REPEATABLE READ`. The default behavior is that a statement can only see rows committed before it began (`READ COMMITTED`). In HAWQ, `READ UNCOMMITTED` is treated the same as `READ COMMITTED`. `SERIALIZABLE` is supported the same as `REPEATABLE READ` wherein all statements of the current transaction can only see rows committed before the first statement was executed in the transaction. `SERIALIZABLE` is the strictest transaction isolation. This level emulates serial transaction execution, as if transactions had been executed one after another, serially, rather than concurrently. Applications using this level must be prepared to retry transactions due to serialization failures.</dd>
+
+<dt>READ WRITE  
+READ ONLY  </dt>
+<dd>Determines whether the transaction is read/write or read-only. Read/write is the default. When a transaction is read-only, the following SQL commands are disallowed: `INSERT` and `COPY FROM` if the table they would write to is not a temporary table; all `CREATE`, `ALTER`, and `DROP` commands; `GRANT`, `REVOKE`, `TRUNCATE`; and `EXPLAIN ANALYZE` and `EXECUTE` if the command they would execute is among those listed.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Use [COMMIT](COMMIT.html) or [ROLLBACK](ROLLBACK.html) to terminate a transaction block.
+
+Issuing `BEGIN` when already inside a transaction block will provoke a warning message. The state of the transaction is not affected. To nest transactions within a transaction block, use savepoints (see [SAVEPOINT](SAVEPOINT.html)).
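+
+A minimal sketch of nesting with a savepoint, assuming a hypothetical `sales` table; the second insert is rolled back while the first is kept:
+
+``` pre
+BEGIN;
+INSERT INTO sales VALUES (3, '2016-01-04', 75.00);
+SAVEPOINT before_correction;
+INSERT INTO sales VALUES (4, '2016-01-05', 80.00);
+ROLLBACK TO SAVEPOINT before_correction;
+COMMIT;
+```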
+
+## <a id="topic1__section6"></a>Examples
+
+To begin a transaction block:
+
+``` pre
+BEGIN;
+```
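+
+To begin a transaction with an explicit isolation level and a read-only access mode (a brief sketch of the synopsis options):
+
+``` pre
+BEGIN TRANSACTION SERIALIZABLE READ ONLY;
+```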
+
+## <a id="topic1__section7"></a>Compatibility
+
+`BEGIN` is a HAWQ language extension. It is equivalent to the SQL-standard command `START TRANSACTION`.
+
+Incidentally, the `BEGIN` key word is used for a different purpose in embedded SQL. You are advised to be careful about the transaction semantics when porting database applications.
+
+## <a id="topic1__section8"></a>See Also
+
+[COMMIT](COMMIT.html), [ROLLBACK](ROLLBACK.html), [SAVEPOINT](SAVEPOINT.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CHECKPOINT.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CHECKPOINT.html.md.erb b/markdown/reference/sql/CHECKPOINT.html.md.erb
new file mode 100644
index 0000000..d699013
--- /dev/null
+++ b/markdown/reference/sql/CHECKPOINT.html.md.erb
@@ -0,0 +1,23 @@
+---
+title: CHECKPOINT
+---
+
+Forces a transaction log checkpoint.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CHECKPOINT
+```
+
+## <a id="topic1__section3"></a>Description
+
+Write-Ahead Logging (WAL) puts a checkpoint in the transaction log every so often. The automatic checkpoint interval is set per HAWQ segment instance by the server configuration parameters `checkpoint_segments` and `checkpoint_timeout`. The `CHECKPOINT` command forces an immediate checkpoint when the command is issued, without waiting for a scheduled checkpoint.
+
+A checkpoint is a point in the transaction log sequence at which all data files have been updated to reflect the information in the log. All data files will be flushed to disk.
+
+Only superusers may call `CHECKPOINT`. The command is not intended for use during normal operation.
+
+## <a id="topic1__section4"></a>Compatibility
+
+The `CHECKPOINT` command is a HAWQ language extension.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CLOSE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CLOSE.html.md.erb b/markdown/reference/sql/CLOSE.html.md.erb
new file mode 100644
index 0000000..ae9c958
--- /dev/null
+++ b/markdown/reference/sql/CLOSE.html.md.erb
@@ -0,0 +1,45 @@
+---
+title: CLOSE
+---
+
+Closes a cursor.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CLOSE <cursor_name>
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CLOSE` frees the resources associated with an open cursor. After the cursor is closed, no subsequent operations are allowed on it. A cursor should be closed when it is no longer needed.
+
+Every non-holdable open cursor is implicitly closed when a transaction is terminated by `COMMIT` or `ROLLBACK`. A holdable cursor is implicitly closed if the transaction that created it aborts via `ROLLBACK`. If the creating transaction successfully commits, the holdable cursor remains open until an explicit `CLOSE` is executed, or the client disconnects.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<cursor\_name\>   </dt>
+<dd>The name of an open cursor to close.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+HAWQ does not have an explicit `OPEN` cursor statement. A cursor is considered open when it is declared. Use the `DECLARE` statement to declare (and open) a cursor.
+
+You can see all available cursors by querying the `pg_cursors` system view.
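+
+A brief sketch that declares a cursor, inspects it in `pg_cursors`, and closes it; the table name `sales` is hypothetical, and the cursor is declared inside a transaction block because it is not holdable:
+
+``` pre
+BEGIN;
+DECLARE salescur CURSOR FOR SELECT * FROM sales;
+SELECT name, statement FROM pg_cursors;
+CLOSE salescur;
+COMMIT;
+```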
+
+## <a id="topic1__section6"></a>Examples
+
+Close the cursor `portala`:
+
+``` pre
+CLOSE portala;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+`CLOSE` is fully conforming with the SQL standard.
+
+## <a id="topic1__section8"></a>See Also
+
+[DECLARE](DECLARE.html), [FETCH](FETCH.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/COMMIT.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/COMMIT.html.md.erb b/markdown/reference/sql/COMMIT.html.md.erb
new file mode 100644
index 0000000..dd91969
--- /dev/null
+++ b/markdown/reference/sql/COMMIT.html.md.erb
@@ -0,0 +1,43 @@
+---
+title: COMMIT
+---
+
+Commits the current transaction.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+COMMIT [WORK | TRANSACTION]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`COMMIT` commits the current transaction. All changes made by the transaction become visible to others and are guaranteed to be durable if a crash occurs.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>WORK  
+TRANSACTION  </dt>
+<dd>Optional key words. They have no effect.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Use [ROLLBACK](ROLLBACK.html) to abort a transaction.
+
+Issuing `COMMIT` when not inside a transaction does no harm, but it will provoke a warning message.
+
+## <a id="topic1__section6"></a>Examples
+
+To commit the current transaction and make all changes permanent:
+
+``` pre
+COMMIT;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+The SQL standard only specifies the two forms `COMMIT` and `COMMIT WORK`. Otherwise, this command is fully conforming.
+
+## <a id="topic1__section8"></a>See Also
+
+[BEGIN](BEGIN.html), [END](END.html), [ROLLBACK](ROLLBACK.html)


[43/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/ddl/ddl-partition.html.md.erb
----------------------------------------------------------------------
diff --git a/ddl/ddl-partition.html.md.erb b/ddl/ddl-partition.html.md.erb
deleted file mode 100644
index f790161..0000000
--- a/ddl/ddl-partition.html.md.erb
+++ /dev/null
@@ -1,483 +0,0 @@
----
-title: Partitioning Large Tables
----
-
-Table partitioning enables supporting very large tables, such as fact tables, by logically dividing them into smaller, more manageable pieces. Partitioned tables can improve query performance by allowing the HAWQ query optimizer to scan only the data needed to satisfy a given query instead of scanning all the contents of a large table.
-
-Partitioning does not change the physical distribution of table data across the segments. Table distribution is physical: HAWQ physically divides partitioned tables and non-partitioned tables across segments to enable parallel query processing. Table *partitioning* is logical: HAWQ logically divides big tables to improve query performance and facilitate data warehouse maintenance tasks, such as rolling old data out of the data warehouse.
-
-HAWQ supports:
-
--   *range partitioning*: division of data based on a numerical range, such as date or price.
--   *list partitioning*: division of data based on a list of values, such as sales territory or product line.
--   A combination of both types.
-<a id="im207241"></a>
-
-![](../mdimages/partitions.jpg "Example Multi-level Partition Design")
-
-## <a id="topic64"></a>Table Partitioning in HAWQ 
-
-HAWQ divides tables into parts \(also known as partitions\) to enable massively parallel processing. Tables are partitioned during `CREATE TABLE` using the `PARTITION BY` \(and optionally the `SUBPARTITION BY`\) clause. Partitioning creates a top-level \(or parent\) table with one or more levels of sub-tables \(or child tables\). Internally, HAWQ creates an inheritance relationship between the top-level table and its underlying partitions, similar to the functionality of the `INHERITS` clause of PostgreSQL.
-
-HAWQ uses the partition criteria defined during table creation to create each partition with a distinct `CHECK` constraint, which limits the data that table can contain. The query optimizer uses `CHECK` constraints to determine which table partitions to scan to satisfy a given query predicate.
-
-The HAWQ system catalog stores partition hierarchy information so that rows inserted into the top-level parent table propagate correctly to the child table partitions. To change the partition design or table structure, alter the parent table using `ALTER TABLE` with the `PARTITION` clause.
-
-To insert data into a partitioned table, you specify the root partitioned table, the table created with the `CREATE TABLE` command. You also can specify a leaf child table of the partitioned table in an `INSERT` command. An error is returned if the data is not valid for the specified leaf child table. Specifying a child table that is not a leaf child table in the `INSERT` command is not supported.
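-
-As a brief sketch, using the date-range `sales` table created in the first example of this topic, the following insert is routed through the root table to the daily partition whose `CHECK` constraint the row satisfies:
-
-``` sql
-INSERT INTO sales VALUES (1, '2008-06-11', 100.50);
-```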
-
-## <a id="topic65"></a>Deciding on a Table Partitioning Strategy 
-
-Not all tables are good candidates for partitioning. If the answer is *yes* to all or most of the following questions, table partitioning is a viable database design strategy for improving query performance. If the answer is *no* to most of the following questions, table partitioning is not the right solution for that table. Test your design strategy to ensure that query performance improves as expected.
-
--   **Is the table large enough?** Large fact tables are good candidates for table partitioning. If you have millions or billions of records in a table, you may see performance benefits from logically breaking that data up into smaller chunks. For smaller tables with only a few thousand rows or less, the administrative overhead of maintaining the partitions will outweigh any performance benefits you might see.
--   **Are you experiencing unsatisfactory performance?** As with any performance tuning initiative, a table should be partitioned only if queries against that table are producing slower response times than desired.
--   **Do your query predicates have identifiable access patterns?** Examine the `WHERE` clauses of your query workload and look for table columns that are consistently used to access data. For example, if most of your queries tend to look up records by date, then a monthly or weekly date-partitioning design might be beneficial. Or if you tend to access records by region, consider a list-partitioning design to divide the table by region.
--   **Does your data warehouse maintain a window of historical data?** Another consideration for partition design is your organization's business requirements for maintaining historical data. For example, your data warehouse may require that you keep data for the past twelve months. If the data is partitioned by month, you can easily drop the oldest monthly partition from the warehouse and load current data into the most recent monthly partition.
--   **Can the data be divided into somewhat equal parts based on some defining criteria?** Choose partitioning criteria that will divide your data as evenly as possible. If the partitions contain a relatively equal number of records, query performance improves based on the number of partitions created. For example, by dividing a large table into 10 partitions, a query will execute 10 times faster than it would against the unpartitioned table, provided that the partitions are designed to support the query's criteria.
-
-Do not create more partitions than are needed. Creating too many partitions can slow down management and maintenance jobs, such as vacuuming, recovering segments, expanding the cluster, checking disk usage, and others.
-
-Partitioning does not improve query performance unless the query optimizer can eliminate partitions based on the query predicates. Queries that scan every partition run slower than if the table were not partitioned, so avoid partitioning if few of your queries achieve partition elimination. Check the explain plan for queries to make sure that partitions are eliminated. See [Query Profiling](../query/query-profiling.html) for more about partition elimination.
-
-Be very careful with multi-level partitioning because the number of partition files can grow very quickly. For example, if a table is partitioned by both day and city, and there are 1,000 days of data and 1,000 cities, the total number of partitions is one million. Column-oriented tables store each column in a physical table, so if this table has 100 columns, the system would be required to manage 100 million files for the table.
-
-Before settling on a multi-level partitioning strategy, consider a single level partition with bitmap indexes. Indexes slow down data loads, so consider performance testing with your data and schema to decide on the best strategy.
-
-## <a id="topic66"></a>Creating Partitioned Tables 
-
-You partition tables when you create them with `CREATE TABLE`. This topic provides examples of SQL syntax for creating a table with various partition designs.
-
-To partition a table:
-
-1.  Decide on the partition design: date range, numeric range, or list of values.
-2.  Choose the column\(s\) on which to partition the table.
-3.  Decide how many levels of partitions you want. For example, you can create a date range partition table by month and then subpartition the monthly partitions by sales region.
-
--   [Defining Date Range Table Partitions](#topic67)
--   [Defining Numeric Range Table Partitions](#topic68)
--   [Defining List Table Partitions](#topic69)
--   [Defining Multi-level Partitions](#topic70)
--   [Partitioning an Existing Table](#topic71)
-
-### <a id="topic67"></a>Defining Date Range Table Partitions 
-
-A date range partitioned table uses a single `date` or `timestamp` column as the partition key column. You can use the same partition key column to create subpartitions if necessary, for example, to partition by month and then subpartition by day. Consider partitioning by the most granular level. For example, for a table partitioned by date, you can partition by day and have 365 daily partitions, rather than partition by year then subpartition by month then subpartition by day. A multi-level design can reduce query planning time, but a flat partition design runs faster.
-
-You can have HAWQ automatically generate partitions by giving a `START` value, an `END` value, and an `EVERY` clause that defines the partition increment value. By default, `START` values are always inclusive and `END` values are always exclusive. For example:
-
-``` sql
-CREATE TABLE sales (id int, date date, amt decimal(10,2))
-DISTRIBUTED BY (id)
-PARTITION BY RANGE (date)
-( START (date '2008-01-01') INCLUSIVE
-   END (date '2009-01-01') EXCLUSIVE
-   EVERY (INTERVAL '1 day') );
-```
-
-You can also declare and name each partition individually. For example:
-
-``` sql
-CREATE TABLE sales (id int, date date, amt decimal(10,2))
-DISTRIBUTED BY (id)
-PARTITION BY RANGE (date)
-( PARTITION Jan08 START (date '2008-01-01') INCLUSIVE ,
-  PARTITION Feb08 START (date '2008-02-01') INCLUSIVE ,
-  PARTITION Mar08 START (date '2008-03-01') INCLUSIVE ,
-  PARTITION Apr08 START (date '2008-04-01') INCLUSIVE ,
-  PARTITION May08 START (date '2008-05-01') INCLUSIVE ,
-  PARTITION Jun08 START (date '2008-06-01') INCLUSIVE ,
-  PARTITION Jul08 START (date '2008-07-01') INCLUSIVE ,
-  PARTITION Aug08 START (date '2008-08-01') INCLUSIVE ,
-  PARTITION Sep08 START (date '2008-09-01') INCLUSIVE ,
-  PARTITION Oct08 START (date '2008-10-01') INCLUSIVE ,
-  PARTITION Nov08 START (date '2008-11-01') INCLUSIVE ,
-  PARTITION Dec08 START (date '2008-12-01') INCLUSIVE
-                  END (date '2009-01-01') EXCLUSIVE );
-```
-
-You do not have to declare an `END` value for each partition, only the last one. In this example, `Jan08` ends where `Feb08` starts.
-
-### <a id="topic68"></a>Defining Numeric Range Table Partitions 
-
-A numeric range partitioned table uses a single numeric data type column as the partition key column. For example:
-
-``` sql
-CREATE TABLE rank (id int, rank int, year int, gender
-char(1), count int)
-DISTRIBUTED BY (id)
-PARTITION BY RANGE (year)
-( START (2001) END (2008) EVERY (1),
-  DEFAULT PARTITION extra );
-```
-
-For more information about default partitions, see [Adding a Default Partition](#topic80).
-
-### <a id="topic69"></a>Defining List Table Partitions 
-
-A list partitioned table can use any data type column that allows equality comparisons as its partition key column. A list partition can also have a multi-column \(composite\) partition key, whereas a range partition only allows a single column as the partition key. For list partitions, you must declare a partition specification for every partition \(list value\) you want to create. For example:
-
-``` sql
-CREATE TABLE rank (id int, rank int, year int, gender
-char(1), count int )
-DISTRIBUTED BY (id)
-PARTITION BY LIST (gender)
-( PARTITION girls VALUES ('F'),
-  PARTITION boys VALUES ('M'),
-  DEFAULT PARTITION other );
-```
-
-**Note:** The HAWQ legacy optimizer allows list partitions with multi-column \(composite\) partition keys. A range partition only allows a single column as the partition key. GPORCA does not support composite keys.
-
-For more information about default partitions, see [Adding a Default Partition](#topic80).
-
-### <a id="topic70"></a>Defining Multi-level Partitions 
-
-You can create a multi-level partition design with subpartitions of partitions. Using a *subpartition template* ensures that every partition has the same subpartition design, including partitions that you add later. For example, the following SQL creates the two-level partition design shown in [Figure 1](#im207241):
-
-``` sql
-CREATE TABLE sales (trans_id int, date date, amount
-decimal(9,2), region text)
-DISTRIBUTED BY (trans_id)
-PARTITION BY RANGE (date)
-SUBPARTITION BY LIST (region)
-SUBPARTITION TEMPLATE
-( SUBPARTITION usa VALUES ('usa'),
-  SUBPARTITION asia VALUES ('asia'),
-  SUBPARTITION europe VALUES ('europe'),
-  DEFAULT SUBPARTITION other_regions)
-  (START (date '2011-01-01') INCLUSIVE
-   END (date '2012-01-01') EXCLUSIVE
-   EVERY (INTERVAL '1 month'),
-   DEFAULT PARTITION outlying_dates );
-```
-
-The following example shows a three-level partition design where the `sales` table is partitioned by `year`, then `month`, then `region`. The `SUBPARTITION TEMPLATE` clauses ensure that each yearly partition has the same subpartition structure. The example declares a `DEFAULT` partition at each level of the hierarchy.
-
-``` sql
-CREATE TABLE p3_sales (id int, year int, month int, day int,
-region text)
-DISTRIBUTED BY (id)
-PARTITION BY RANGE (year)
-    SUBPARTITION BY RANGE (month)
-      SUBPARTITION TEMPLATE (
-        START (1) END (13) EVERY (1),
-        DEFAULT SUBPARTITION other_months )
-           SUBPARTITION BY LIST (region)
-             SUBPARTITION TEMPLATE (
-               SUBPARTITION usa VALUES ('usa'),
-               SUBPARTITION europe VALUES ('europe'),
-               SUBPARTITION asia VALUES ('asia'),
-               DEFAULT SUBPARTITION other_regions )
-( START (2002) END (2012) EVERY (1),
-  DEFAULT PARTITION outlying_years );
-```
-
-**CAUTION**:
-
-When you create multi-level partitions on ranges, it is easy to create a large number of subpartitions, some containing little or no data. This can add many entries to the system tables, which increases the time and memory required to optimize and execute queries. Increase the range interval or choose a different partitioning strategy to reduce the number of subpartitions created.
-
-### <a id="topic71"></a>Partitioning an Existing Table 
-
-Tables can be partitioned only at creation. If you have a table that you want to partition, you must create a partitioned table, load the data from the original table into the new table, drop the original table, and rename the partitioned table with the original table's name. You must also re-grant any table permissions. For example:
-
-``` sql
-CREATE TABLE sales2 (LIKE sales)
-PARTITION BY RANGE (date)
-( START (date '2008-01-01') INCLUSIVE
-   END (date '2009-01-01') EXCLUSIVE
-   EVERY (INTERVAL '1 month') );
-INSERT INTO sales2 SELECT * FROM sales;
-DROP TABLE sales;
-ALTER TABLE sales2 RENAME TO sales;
-GRANT ALL PRIVILEGES ON sales TO admin;
-GRANT SELECT ON sales TO guest;
-```
-
-## <a id="topic73"></a>Loading Partitioned Tables 
-
-After you create the partitioned table structure, top-level parent tables are empty. Data is routed to the bottom-level child table partitions. In a multi-level partition design, only the subpartitions at the bottom of the hierarchy can contain data.
-
-Rows that cannot be mapped to a child table partition are rejected and the load fails. To avoid unmapped rows being rejected at load time, define your partition hierarchy with a `DEFAULT` partition. Any rows that do not match a partition's `CHECK` constraints load into the `DEFAULT` partition. See [Adding a Default Partition](#topic80).
-
-At runtime, the query optimizer scans the entire table inheritance hierarchy and uses the `CHECK` table constraints to determine which of the child table partitions to scan to satisfy the query's conditions. The `DEFAULT` partition \(if your hierarchy has one\) is always scanned. `DEFAULT` partitions that contain data slow down the overall scan time.
-
-When you use `COPY` or `INSERT` to load data into a parent table, the data is automatically rerouted to the correct partition, just like a regular table.
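-
-For example, a brief sketch of loading through the top-level `sales` table; the data file path is hypothetical:
-
-``` sql
-COPY sales FROM '/data/staging/sales_2008_06.dat';
-```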
-
-Best practice for loading data into partitioned tables is to create an intermediate staging table, load it, and then exchange it into your partition design. See [Exchanging a Partition](#topic83).
-
-## <a id="topic74"></a>Verifying Your Partition Strategy 
-
-When a table is partitioned based on the query predicate, you can use `EXPLAIN` to verify that the query optimizer scans only the relevant data to examine the query plan.
-
-For example, suppose a *sales* table is date-range partitioned by month and subpartitioned by region as shown in [Figure 1](#im207241). For the following query:
-
-``` sql
-EXPLAIN SELECT * FROM sales WHERE date='01-07-12' AND
-region='usa';
-```
-
-The query plan for this query should show a table scan of only the following tables:
-
--   the default partition returning 0-1 rows \(if your partition design has one\)
--   the January 2012 partition \(*sales\_1\_prt\_1*\) returning 0-1 rows
--   the USA region subpartition \(*sales\_1\_2\_prt\_usa*\) returning *some number* of rows.
-
-The following example shows the relevant portion of the query plan.
-
-``` pre
-->  Seq Scan on sales_1_prt_1 sales (cost=0.00..0.00 rows=0 width=0)
-      Filter: "date" = '01-07-12'::date AND region = 'usa'::text
-->  Seq Scan on sales_1_2_prt_usa sales (cost=0.00..9.87 rows=20 width=40)
-```
-
-Ensure that the query optimizer does not scan unnecessary partitions or subpartitions \(for example, scans of months or regions not specified in the query predicate\), and that scans of the top-level tables return 0-1 rows.
-
-### <a id="topic75"></a>Troubleshooting Selective Partition Scanning 
-
-The following limitations can result in a query plan that shows a non-selective scan of your partition hierarchy.
-
--   The query optimizer can selectively scan partitioned tables only when the query contains a direct and simple restriction of the table using immutable operators such as:
-
-    =, <, <=, \>, \>=, and <\>
-
--   Selective scanning recognizes `STABLE` and `IMMUTABLE` functions, but does not recognize `VOLATILE` functions within a query. For example, `WHERE` clauses such as `date > CURRENT_DATE` cause the query optimizer to selectively scan partitioned tables, but `time > TIMEOFDAY` does not.
-
-## <a id="topic76"></a>Viewing Your Partition Design 
-
-You can look up information about your partition design using the *pg\_partitions* view. For example, to see the partition design of the *sales* table:
-
-``` sql
-SELECT partitionboundary, partitiontablename, partitionname,
-partitionlevel, partitionrank
-FROM pg_partitions
-WHERE tablename='sales';
-```
-
-The following table and views show information about partitioned tables.
-
--   *pg\_partition* - Tracks partitioned tables and their inheritance level relationships.
--   *pg\_partition\_templates* - Shows the subpartitions created using a subpartition template.
--   *pg\_partition\_columns* - Shows the partition key columns used in a partition design.
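-
-For example, a sketch of querying two of these views for the *sales* table, assuming they expose a `tablename` column as *pg\_partitions* does:
-
-``` sql
-SELECT * FROM pg_partition_templates WHERE tablename='sales';
-SELECT * FROM pg_partition_columns WHERE tablename='sales';
-```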
-
-## <a id="topic77"></a>Maintaining Partitioned Tables 
-
-To maintain a partitioned table, use the `ALTER TABLE` command against the top-level parent table. The most common scenario is to drop old partitions and add new ones to maintain a rolling window of data in a range partition design. If you have a default partition in your partition design, you add a partition by *splitting* the default partition.
-
--   [Adding a Partition](#topic78)
--   [Renaming a Partition](#topic79)
--   [Adding a Default Partition](#topic80)
--   [Dropping a Partition](#topic81)
--   [Truncating a Partition](#topic82)
--   [Exchanging a Partition](#topic83)
--   [Splitting a Partition](#topic84)
--   [Modifying a Subpartition Template](#topic85)
-
-**Note:** When using multi-level partition designs, the following operations are not supported with ALTER TABLE:
-
--   ADD DEFAULT PARTITION
--   ADD PARTITION
--   DROP DEFAULT PARTITION
--   DROP PARTITION
--   SPLIT PARTITION
--   All operations that involve modifying subpartitions.
-
-**Important:** When defining and altering partition designs, use the given partition name, not the table object name. Although you can query and load any table \(including partitioned tables\) directly using SQL commands, you can only modify the structure of a partitioned table using the `ALTER TABLE...PARTITION` clauses.
-
-Partitions are not required to have names. If a partition does not have a name, use one of the following expressions to specify a part: `PARTITION FOR (value)` or `PARTITION FOR(RANK(number))`.
-
-### <a id="topic78"></a>Adding a Partition 
-
-You can add a partition to a partition design with the `ALTER TABLE` command. If the original partition design included subpartitions defined by a *subpartition template*, the newly added partition is subpartitioned according to that template. For example:
-
-``` sql
-ALTER TABLE sales ADD PARTITION
-    START (date '2009-02-01') INCLUSIVE
-    END (date '2009-03-01') EXCLUSIVE;
-```
-
-If you did not use a subpartition template when you created the table, you define subpartitions when adding a partition:
-
-``` sql
-ALTER TABLE sales ADD PARTITION
-    START (date '2009-02-01') INCLUSIVE
-    END (date '2009-03-01') EXCLUSIVE
-     ( SUBPARTITION usa VALUES ('usa'),
-       SUBPARTITION asia VALUES ('asia'),
-       SUBPARTITION europe VALUES ('europe') );
-```
-
-When you add a subpartition to an existing partition, you can specify the partition to alter. For example:
-
-``` sql
-ALTER TABLE sales ALTER PARTITION FOR (RANK(12))
-      ADD PARTITION africa VALUES ('africa');
-```
-
-**Note:** You cannot add a partition to a partition design that has a default partition. You must split the default partition to add a partition. See [Splitting a Partition](#topic84).
-
-### <a id="topic79"></a>Renaming a Partition 
-
-Partitioned tables use the following naming convention. Partitioned subtable names are subject to uniqueness requirements and length limitations.
-
-<pre><code><i>&lt;parentname&gt;</i>_<i>&lt;level&gt;</i>_prt_<i>&lt;partition_name&gt;</i></code></pre>
-
-For example:
-
-```
-sales_1_prt_jan08
-```
-
-For auto-generated range partitions, where a number is assigned when no name is given:
-
-```
-sales_1_prt_1
-```
-
-To rename a partitioned child table, rename the top-level parent table. The *&lt;parentname&gt;* changes in the table names of all associated child table partitions. For example, the following command:
-
-``` sql
-ALTER TABLE sales RENAME TO globalsales;
-```
-
-Changes the associated table names:
-
-```
-globalsales_1_prt_1
-```
-
-You can change the name of a partition to make it easier to identify. For example:
-
-``` sql
-ALTER TABLE sales RENAME PARTITION FOR ('2008-01-01') TO jan08;
-```
-
-Changes the associated table name as follows:
-
-```
-sales_1_prt_jan08
-```
-
-When altering partitioned tables with the `ALTER TABLE` command, always refer to the tables by their partition name \(*jan08*\) and not their full table name \(*sales\_1\_prt\_jan08*\).
-
-**Note:** The table name cannot be a partition name in an `ALTER TABLE` statement. For example, `ALTER TABLE sales...` is correct, `ALTER TABLE sales_1_prt_jan08...` is not allowed.
-
-### <a id="topic80"></a>Adding a Default Partition 
-
-You can add a default partition to a partition design with the `ALTER TABLE` command.
-
-``` sql
-ALTER TABLE sales ADD DEFAULT PARTITION other;
-```
-
-If incoming data does not match a partition's `CHECK` constraint and there is no default partition, the data is rejected. Default partitions ensure that incoming data that does not match a partition is inserted into the default partition.
-
-### <a id="topic81"></a>Dropping a Partition 
-
-You can drop a partition from your partition design using the `ALTER TABLE` command. When you drop a partition that has subpartitions, the subpartitions \(and all data in them\) are automatically dropped as well. For range partitions, it is common to drop the older partitions from the range as old data is rolled out of the data warehouse. For example:
-
-``` sql
-ALTER TABLE sales DROP PARTITION FOR (RANK(1));
-```
-
-### <a id="topic_enm_vrk_kv"></a>Sorting AORO Partitioned Tables 
-
-HDFS read access for large numbers of append-only, row-oriented \(AORO\) tables with large numbers of partitions can be tuned by using the `optimizer_parts_to_force_sort_on_insert` parameter to control how HDFS opens files. This parameter controls the way the optimizer sorts tuples during INSERT operations, to maximize HDFS performance.
-
-The user-tunable parameter `optimizer_parts_to_force_sort_on_insert` can force the GPORCA query optimizer to generate a plan that sorts tuples during insertion into an append-only, row-oriented \(AORO\) partitioned table. Sorting the insert tuples reduces the number of partition switches, thus improving overall INSERT performance. For a given AORO table, if its number of leaf partitions is greater than or equal to the number specified in `optimizer_parts_to_force_sort_on_insert`, the plan generated by GPORCA sorts inserts by their partition IDs before performing the INSERT operation. Otherwise, the inserts are not sorted. The default value for `optimizer_parts_to_force_sort_on_insert` is 160.
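-
-As a sketch, assuming the parameter can be set at the session level, the following lowers the threshold so that tables with 100 or more leaf partitions get a sorting insert plan; the value is illustrative only:
-
-``` sql
-SET optimizer_parts_to_force_sort_on_insert = 100;
-```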
-
-### <a id="topic82"></a>Truncating a Partition 
-
-You can truncate a partition using the `ALTER TABLE` command. When you truncate a partition that has subpartitions, the subpartitions are automatically truncated as well.
-
-``` sql
-ALTER TABLE sales TRUNCATE PARTITION FOR (RANK(1));
-```
-
-### <a id="topic83"></a>Exchanging a Partition 
-
-You can exchange a partition using the `ALTER TABLE` command. Exchanging a partition swaps one table in place of an existing partition. You can exchange partitions only at the lowest level of your partition hierarchy \(only partitions that contain data can be exchanged\).
-
-Partition exchange can be useful for data loading. For example, load a staging table and swap the loaded table into your partition design. You can use partition exchange to change the storage type of older partitions to append-only tables. For example:
-
-``` sql
-CREATE TABLE jan12 (LIKE sales) WITH (appendonly=true);
-INSERT INTO jan12 SELECT * FROM sales_1_prt_1 ;
-ALTER TABLE sales EXCHANGE PARTITION FOR (DATE '2012-01-01')
-WITH TABLE jan12;
-```
-
-**Note:** This example refers to the single-level definition of the table `sales`, before partitions were added and altered in the previous examples.
-
-### <a id="topic84"></a>Splitting a Partition 
-
-Splitting a partition divides a partition into two partitions. You can split a partition using the `ALTER TABLE` command. You can split partitions only at the lowest level of your partition hierarchy: only partitions that contain data can be split. The split value you specify goes into the *latter* partition.
-
-For example, to split a monthly partition into two with the first partition containing dates January 1-15 and the second partition containing dates January 16-31:
-
-``` sql
-ALTER TABLE sales SPLIT PARTITION FOR ('2008-01-01')
-AT ('2008-01-16')
-INTO (PARTITION jan081to15, PARTITION jan0816to31);
-```
-
-If your partition design has a default partition, you must split the default partition to add a partition.
-
-When using the `INTO` clause, specify the current default partition as the second partition name. For example, to split a default range partition to add a new monthly partition for January 2009:
-
-``` sql
-ALTER TABLE sales SPLIT DEFAULT PARTITION
-START ('2009-01-01') INCLUSIVE
-END ('2009-02-01') EXCLUSIVE
-INTO (PARTITION jan09, default partition);
-```
-
-### <a id="topic85"></a>Modifying a Subpartition Template 
-
-Use `ALTER TABLE` SET SUBPARTITION TEMPLATE to modify the subpartition template of a partitioned table. Partitions added after you set a new subpartition template have the new partition design. Existing partitions are not modified.
-
-The following example alters the subpartition template of this partitioned table:
-
-``` sql
-CREATE TABLE sales (trans_id int, date date, amount decimal(9,2), region text)
-  DISTRIBUTED BY (trans_id)
-  PARTITION BY RANGE (date)
-  SUBPARTITION BY LIST (region)
-  SUBPARTITION TEMPLATE
-    ( SUBPARTITION usa VALUES ('usa'),
-      SUBPARTITION asia VALUES ('asia'),
-      SUBPARTITION europe VALUES ('europe'),
-      DEFAULT SUBPARTITION other_regions )
-  ( START (date '2014-01-01') INCLUSIVE
-    END (date '2014-04-01') EXCLUSIVE
-    EVERY (INTERVAL '1 month') );
-```
-
-This `ALTER TABLE` command modifies the subpartition template:
-
-``` sql
-ALTER TABLE sales SET SUBPARTITION TEMPLATE
-( SUBPARTITION usa VALUES ('usa'),
-  SUBPARTITION asia VALUES ('asia'),
-  SUBPARTITION europe VALUES ('europe'),
-  SUBPARTITION africa VALUES ('africa'),
-  DEFAULT SUBPARTITION regions );
-```
-
-When you add a date-range partition of the table sales, it includes the new regional list subpartition for Africa. For example, the following command creates the subpartitions `usa`, `asia`, `europe`, `africa`, and a default partition named `other`:
-
-``` sql
-ALTER TABLE sales ADD PARTITION "4"
-  START ('2014-04-01') INCLUSIVE
-  END ('2014-05-01') EXCLUSIVE ;
-```
-
-To view the tables created for the partitioned table `sales`, you can use the command `\dt sales*` from the psql command line.
-
-To remove a subpartition template, use `SET SUBPARTITION TEMPLATE` with empty parentheses. For example, to clear the sales table subpartition template:
-
-``` sql
-ALTER TABLE sales SET SUBPARTITION TEMPLATE ();
-```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/ddl/ddl-schema.html.md.erb
----------------------------------------------------------------------
diff --git a/ddl/ddl-schema.html.md.erb b/ddl/ddl-schema.html.md.erb
deleted file mode 100644
index 7c361ba..0000000
--- a/ddl/ddl-schema.html.md.erb
+++ /dev/null
@@ -1,88 +0,0 @@
----
-title: Creating and Managing Schemas
----
-
-Schemas logically organize objects and data in a database. Schemas allow you to have more than one object \(such as tables\) with the same name in the database without conflict if the objects are in different schemas.
-
-## <a id="topic18"></a>The Default "Public" Schema 
-
-Every database has a default schema named *public*. If you do not create any schemas, objects are created in the *public* schema. All database roles \(users\) have `CREATE` and `USAGE` privileges in the *public* schema. When you create a schema, you grant privileges to your users to allow access to the schema.
-
-## <a id="topic19"></a>Creating a Schema 
-
-Use the `CREATE SCHEMA` command to create a new schema. For example:
-
-``` sql
-=> CREATE SCHEMA myschema;
-```
-
-To create or access objects in a schema, write a qualified name consisting of the schema name and table name separated by a period. For example:
-
-```
-myschema.table
-```
-
-See [Schema Search Paths](#topic20) for information about accessing a schema.
-
-You can create a schema owned by someone else, for example, to restrict the activities of your users to well-defined namespaces. The syntax is:
-
-``` sql
-=> CREATE SCHEMA schemaname AUTHORIZATION username;
-```
-
-## <a id="topic20"></a>Schema Search Paths 
-
-To specify an object's location in a database, use the schema-qualified name. For example:
-
-``` sql
-=> SELECT * FROM myschema.mytable;
-```
-
-You can set the `search_path` configuration parameter to specify the order in which to search the available schemas for objects. The schema listed first in the search path becomes the *default* schema. If a schema is not specified, objects are created in the default schema.
-
-### <a id="topic21"></a>Setting the Schema Search Path 
-
-The `search_path` configuration parameter sets the schema search order. The `ALTER DATABASE` command sets the search path. For example:
-
-``` sql
-=> ALTER DATABASE mydatabase SET search_path TO myschema,
-public, pg_catalog;
-```
-
-### <a id="topic22"></a>Viewing the Current Schema 
-
-Use the `current_schema()` function to view the current schema. For example:
-
-``` sql
-=> SELECT current_schema();
-```
-
-Use the `SHOW` command to view the current search path. For example:
-
-``` sql
-=> SHOW search_path;
-```
-
-## <a id="topic23"></a>Dropping a Schema 
-
-Use the `DROP SCHEMA` command to drop \(delete\) a schema. For example:
-
-``` sql
-=> DROP SCHEMA myschema;
-```
-
-By default, the schema must be empty before you can drop it. To drop a schema and all of its objects \(tables, data, functions, and so on\) use:
-
-``` sql
-=> DROP SCHEMA myschema CASCADE;
-```
-
-## <a id="topic24"></a>System Schemas 
-
-The following system-level schemas exist in every database:
-
--   `pg_catalog` contains the system catalog tables, built-in data types, functions, and operators. It is always part of the schema search path, even if it is not explicitly named in the search path.
--   `information_schema` consists of a standardized set of views that contain information about the objects in the database. These views get system information from the system catalog tables in a standardized way.
--   `pg_toast` stores large objects such as records that exceed the page size. This schema is used internally by the HAWQ system.
--   `pg_bitmapindex` stores bitmap index objects such as lists of values. This schema is used internally by the HAWQ system.
--   `hawq_toolkit` is an administrative schema that contains external tables, views, and functions that you can access with SQL commands. All database users can access `hawq_toolkit` to view and query the system log files and other system metrics.
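-
-For example, a brief sketch that lists the schemas in the current database by querying the standardized `information_schema` views:
-
-``` sql
-=> SELECT schema_name FROM information_schema.schemata;
-```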

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/ddl/ddl-storage.html.md.erb
----------------------------------------------------------------------
diff --git a/ddl/ddl-storage.html.md.erb b/ddl/ddl-storage.html.md.erb
deleted file mode 100644
index 264e552..0000000
--- a/ddl/ddl-storage.html.md.erb
+++ /dev/null
@@ -1,71 +0,0 @@
----
-title: Table Storage Model and Distribution Policy
----
-
-HAWQ supports several storage models and a mix of storage models. When you create a table, you choose how to store its data. This topic explains the options for table storage and how to choose the best storage model for your workload.
-
-**Note:** To simplify the creation of database tables, you can specify the default values for some table storage options with the HAWQ server configuration parameter `gp_default_storage_options`.
-
-## <a id="topic39"></a>Row-Oriented Storage 
-
-HAWQ provides storage orientation models of either row-oriented or Parquet tables. Evaluate performance using your own data and query workloads to determine the best alternatives.
-
--   Row-oriented storage: good for OLTP types of workloads with many iterative transactions and many columns of a single row needed all at once, so retrieving is efficient.
-
-    **Note:** Column-oriented storage is no longer available. Parquet storage should be used instead.
-
-Row-oriented storage provides the best options for the following situations:
-
--   **Frequent INSERTs.** Where rows are frequently inserted into the table
--   **Number of columns requested in queries.** Where you typically request all or the majority of columns in the `SELECT` list or `WHERE` clause of your queries, choose a row-oriented model. 
--   **Number of columns in the table.** Row-oriented storage is most efficient when many columns are required at the same time, or when the row-size of a table is relatively small. 
-
-## <a id="topic55"></a>Altering a Table 
-
-The `ALTER TABLE` command changes the definition of a table. Use `ALTER TABLE` to change table attributes such as column definitions, distribution policy, storage model, and partition structure \(see also [Maintaining Partitioned Tables](ddl-partition.html)\). For example, to add a not-null constraint to a table column:
-
-``` sql
-=> ALTER TABLE address ALTER COLUMN street SET NOT NULL;
-```
-
-### <a id="topic56"></a>Altering Table Distribution 
-
-`ALTER TABLE` provides options to change a table's distribution policy. When the table distribution options change, the table data is redistributed on disk, which can be resource intensive. You can also redistribute table data using the existing distribution policy.
-
-### <a id="topic57"></a>Changing the Distribution Policy 
-
-For partitioned tables, changes to the distribution policy apply recursively to the child partitions. This operation preserves the ownership and all other attributes of the table. For example, the following command redistributes the table sales across all segments using the customer\_id column as the distribution key:
-
-``` sql
-ALTER TABLE sales SET DISTRIBUTED BY (customer_id);
-```
-
-When you change the hash distribution of a table, table data is automatically redistributed. Changing the distribution policy to a random distribution does not cause the data to be redistributed. For example:
-
-``` sql
-ALTER TABLE sales SET DISTRIBUTED RANDOMLY;
-```
-
-### <a id="topic58"></a>Redistributing Table Data 
-
-To redistribute table data for tables with a random distribution policy \(or when the hash distribution policy has not changed\) use `REORGANIZE=TRUE`. Reorganizing data may be necessary to correct a data skew problem, or when segment resources are added to the system. For example, the following command redistributes table data across all segments using the current distribution policy, including random distribution.
-
-``` sql
-ALTER TABLE sales SET WITH (REORGANIZE=TRUE);
-```
-
-## <a id="topic62"></a>Dropping a Table 
-
-The `DROP TABLE` command removes tables from the database. For example:
-
-``` sql
-DROP TABLE mytable;
-```
-
-`DROP TABLE` always removes any indexes, rules, triggers, and constraints that exist for the target table. Specify `CASCADE` to drop a table that is referenced by a view. `CASCADE` removes dependent views.
-
-To empty a table of rows without removing the table definition, use `TRUNCATE`. For example:
-
-``` sql
-TRUNCATE mytable;
-```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/ddl/ddl-table.html.md.erb
----------------------------------------------------------------------
diff --git a/ddl/ddl-table.html.md.erb b/ddl/ddl-table.html.md.erb
deleted file mode 100644
index bc4f0c4..0000000
--- a/ddl/ddl-table.html.md.erb
+++ /dev/null
@@ -1,149 +0,0 @@
----
-title: Creating and Managing Tables
----
-
-HAWQ Tables are similar to tables in any relational database, except that table rows are distributed across the different segments in the system. When you create a table, you specify the table's distribution policy.
-
-## <a id="topic26"></a>Creating a Table 
-
-The `CREATE TABLE` command creates a table and defines its structure. When you create a table, you define:
-
--   The columns of the table and their associated data types. See [Choosing Column Data Types](#topic27).
--   Any table constraints to limit the data that a column or table can contain. See [Setting Table Constraints](#topic28).
--   The distribution policy of the table, which determines how HAWQ divides the table data across the segments. See [Choosing the Table Distribution Policy](#topic34).
--   The way the table is stored on disk.
--   The table partitioning strategy for large tables, which specifies how the data should be divided. See [Partitioning Large Tables](ddl-partition.html).
-
-### <a id="topic27"></a>Choosing Column Data Types 
-
-The data type of a column determines the types of data values the column can contain. Choose the data type that uses the least possible space but can still accommodate your data and that best constrains the data. For example, use character data types for strings, date or timestamp data types for dates, and numeric data types for numbers.
-
-There are no performance differences among the character data types `CHAR`, `VARCHAR`, and `TEXT` apart from the increased storage size when you use the blank-padded type. In most situations, use `TEXT` or `VARCHAR` rather than `CHAR`.
-
-Use the smallest numeric data type that will accommodate your numeric data and allow for future expansion. For example, using `BIGINT` for data that fits in `INT` or `SMALLINT` wastes storage space. If you expect that your data values will expand over time, consider that changing from a smaller datatype to a larger datatype after loading large amounts of data is costly. For example, if your current data values fit in a `SMALLINT` but it is likely that the values will expand, `INT` is the better long-term choice.
-
-Use the same data types for columns that you plan to use in cross-table joins. When the data types are different, the database must convert one of them so that the data values can be compared correctly, which adds unnecessary overhead.
-
-HAWQ supports the parquet columnar storage format, which can increase performance on large queries. Use parquet tables for HAWQ internal tables.
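-
-For example, a brief sketch of creating a parquet table; the table definition is hypothetical, and the `appendonly` and `orientation` storage options are assumed to be set in the `WITH` clause as for other HAWQ storage options:
-
-``` sql
-=> CREATE TABLE sales_fact
-     ( id integer,
-       date date,
-       amt numeric )
-   WITH (appendonly=true, orientation=parquet)
-   DISTRIBUTED BY (id);
-```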
-
-### <a id="topic28"></a>Setting Table Constraints 
-
-You can define constraints to restrict the data in your tables. HAWQ support for constraints is the same as PostgreSQL with some limitations, including:
-
--   `CHECK` constraints can refer only to the table on which they are defined.
--   `FOREIGN KEY` constraints are allowed, but not enforced.
--   Constraints that you define on partitioned tables apply to the partitioned table as a whole. You cannot define constraints on the individual parts of the table.
-
-#### <a id="topic29"></a>Check Constraints 
-
-Check constraints allow you to specify that the value in a certain column must satisfy a Boolean \(truth-value\) expression. For example, to require positive product prices:
-
-``` sql
-=> CREATE TABLE products
-     ( product_no integer,
-       name text,
-       price numeric CHECK (price > 0) );
-```
-
-#### <a id="topic30"></a>Not-Null Constraints 
-
-Not-null constraints specify that a column must not assume the null value. A not-null constraint is always written as a column constraint. For example:
-
-``` sql
-=> CREATE TABLE products
-     ( product_no integer NOT NULL,
-       name text NOT NULL,
-       price numeric );
-```
-
-#### <a id="topic33"></a>Foreign Keys 
-
-Foreign keys are not supported. You can declare them, but referential integrity is not enforced.
-
-Foreign key constraints specify that the values in a column or a group of columns must match the values appearing in some row of another table to maintain referential integrity between two related tables. Referential integrity checks cannot be enforced between the distributed table segments of a HAWQ database.
-
-### <a id="topic34"></a>Choosing the Table Distribution Policy 
-
-All HAWQ tables are distributed. The default distribution policy is `DISTRIBUTED RANDOMLY` \(round-robin distribution\). However, when you create or alter a table, you can optionally specify `DISTRIBUTED BY` to distribute data according to a hash-based policy. In this case, the `bucketnum` attribute sets the number of hash buckets used by a hash-distributed table. Columns of geometric or user-defined data types are not eligible as HAWQ distribution key columns.
-
-Randomly distributed tables have benefits over hash distributed tables. For example, after expansion, HAWQ's elasticity feature lets it automatically use more resources without needing to redistribute the data. For extremely large tables, redistribution is very expensive. Also, data locality for randomly distributed tables is better, especially after the underlying HDFS redistributes its data during rebalancing or because of DataNode failures. This is quite common when the cluster is large.
-
-However, hash-distributed tables can be faster than randomly distributed tables. For example, hash-distributed tables show performance benefits for TPC-H queries. Choose the distribution policy that best suits your application scenario. When you `CREATE TABLE`, you can also specify the `bucketnum` option. The `bucketnum` determines the number of hash buckets used in creating a hash-distributed table or for PXF external table intermediate processing. The number of buckets also affects how many virtual segments are created when processing this data. The bucket number of a gpfdist external table is the number of gpfdist locations, and the bucket number of a command external table is set by `ON #num`. PXF external tables use the `default_hash_table_bucket_number` parameter to control virtual segments.
-
-HAWQ's elastic execution runtime is based on virtual segments, which are allocated on demand, based on the cost of the query. Each node uses one physical segment and a number of dynamically allocated virtual segments distributed to different hosts, thus simplifying performance tuning. Large queries use large numbers of virtual segments, while smaller queries use fewer virtual segments. Tables do not need to be redistributed when nodes are added or removed.
-
-In general, the more virtual segments are used, the faster the query will be executed. You can tune the parameters for `default_hash_table_bucket_number` and `hawq_rm_nvseg_perquery_limit` to adjust performance by controlling the number of virtual segments used for a query. However, be aware that if the value of `default_hash_table_bucket_number` is changed, data must be redistributed, which can be costly. Therefore, it is better to set the `default_hash_table_bucket_number` up front, if you expect to need a larger number of virtual segments. However, you might need to adjust the value in `default_hash_table_bucket_number` after cluster expansion, but should take care not to exceed the number of virtual segments per query set in `hawq_rm_nvseg_perquery_limit`. Refer to the recommended guidelines for setting the value of `default_hash_table_bucket_number`, later in this section.
-
-For random or gpfdist external tables, as well as user-defined functions, the value set in the `hawq_rm_nvseg_perquery_perseg_limit` parameter limits the number of virtual segments used per segment for one query, to optimize query resources. Resetting this parameter is not recommended.
-
-Consider the following points when deciding on a table distribution policy.
-
--   **Even Data Distribution** – For the best possible performance, all segments should contain equal portions of data. If the data is unbalanced or skewed, the segments with more data must work harder to perform their portion of the query processing.
--   **Local and Distributed Operations** – Local operations are faster than distributed operations. Query processing is fastest if the work associated with join, sort, or aggregation operations is done locally, at the segment level. Work done at the system level requires distributing tuples across the segments, which is less efficient. When tables share a common distribution key, the work of joining or sorting on their shared distribution key columns is done locally. With a random distribution policy, local join operations are not an option.
--   **Even Query Processing** – For best performance, all segments should handle an equal share of the query workload. Query workload can be skewed if a table's data distribution policy and the query predicates are not well matched. For example, suppose that a sales transactions table is distributed based on a column that contains corporate names \(the distribution key\), and the hashing algorithm distributes the data based on those values. If a predicate in a query references a single value from the distribution key, query processing runs on only one segment. This works if your query predicates usually select data on criteria other than corporation name. For queries that use corporation name in their predicates, it's possible that only one segment instance will handle the query workload. A distribution key chosen to avoid this problem is shown in the example that follows this list.
-
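-For example, the following hypothetical table distributes sales transactions on a high-cardinality transaction ID rather than on the corporation name, so that the rows, and the query work on those rows, spread evenly across segments:
-
-``` sql
-=> CREATE TABLE sales_transactions (txn_id bigint, corporation text, amount numeric)
-     WITH (bucketnum=16) DISTRIBUTED BY (txn_id);
-```
-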
-HAWQ utilizes dynamic parallelism, which can affect the performance of a query execution significantly. Performance depends on the following factors:
-
--   The size of a randomly distributed table.
--   The `bucketnum` of a hash distributed table.
--   Data locality.
--   The values of `default_hash_table_bucket_number`, and `hawq_rm_nvseg_perquery_limit` \(including defaults and user-defined values\).
-
-For any specific query, the first three factors are fixed values, while the configuration parameters in the last item can be used to tune performance of the query execution. In querying a random table, the query resource load is related to the data size of the table, usually one virtual segment for one HDFS block. As a result, querying a large table could use a large number of resources.
-
-The `bucketnum` for a hash table specifies the number of hash buckets to be used in creating virtual segments. A hash-distributed table is created with `default_hash_table_bucket_number` buckets. The default bucket value can be changed at the session level, or overridden in the `CREATE TABLE` DDL by using the `bucketnum` storage parameter.
-
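-For example, the following hypothetical statements change the default bucket number for the current session, then create a hash-distributed table that picks up that session default:
-
-``` sql
-=> SET default_hash_table_bucket_number = 12;
-=> CREATE TABLE t1 (id int, amount numeric) DISTRIBUTED BY (id);
-```
-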
-In an Ambari-managed HAWQ cluster, the default bucket number \(`default_hash_table_bucket_number`\) is derived from the number of segment nodes. In command-line-managed HAWQ environments, you can use the `--bucket_number` option of `hawq init` to explicitly set `default_hash_table_bucket_number` during cluster initialization.
-
-**Note:** For best performance with large tables, the number of buckets should not exceed the value of the `default_hash_table_bucket_number` parameter. Small tables can use a single segment node by specifying `WITH (bucketnum=1)`. For larger tables, set `bucketnum` to a multiple of the number of segment nodes for the best load balancing across segment nodes. The elastic runtime attempts to find the optimal number of buckets for the number of nodes being processed. Larger tables need more virtual segments, and hence use larger numbers of buckets.
-
-The following statement creates a table "sales" with 8 buckets, which would be similar to a hash-distributed table on 8 segments.
-
-``` sql
-=> CREATE TABLE sales(id int, profit float)  WITH (bucketnum=8) DISTRIBUTED BY (id);
-```
-
-There are four ways to create a new table from an origin table; each method and its syntax are listed below.
-
-<table>
-  <tr>
-    <th>Method</th>
-    <th>Syntax</th>
-  </tr>
-  <tr><td>INHERITS</td><td><pre><code>CREATE TABLE new_table INHERITS (origintable) [WITH(bucketnum=x)] <br/>[DISTRIBUTED BY col]</code></pre></td></tr>
-  <tr><td>LIKE</td><td><pre><code>CREATE TABLE new_table (LIKE origintable) [WITH(bucketnum=x)] <br/>[DISTRIBUTED BY col]</code></pre></td></tr>
-  <tr><td>AS</td><td><pre><code>CREATE TABLE new_table [WITH(bucketnum=x)] AS SUBQUERY [DISTRIBUTED BY col]</code></pre></td></tr>
-  <tr><td>SELECT INTO</td><td><pre><code>CREATE TABLE origintable [WITH(bucketnum=x)] [DISTRIBUTED BY col]; SELECT * <br/>INTO new_table FROM origintable;</code></pre></td></tr>
-</table>
-
-The optional `INHERITS` clause specifies a list of tables from which the new table automatically inherits all columns. Hash tables inherit their bucket number from the origin table if it is not otherwise specified. If `WITH` specifies `bucketnum` when creating a hash-distributed table, it is copied. If distribution is specified by column, the table inherits it. Otherwise, the table uses the default distribution from `default_hash_table_bucket_number`.
-
-The `LIKE` clause specifies a table from which the new table automatically copies all column names, data types, not-null constraints, and distribution policy. If a `bucketnum` is specified, it will be copied. Otherwise, the table will use default distribution.
-
-For hash tables, the `SELECT INTO` form always uses random distribution.
-
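-For example, the following statement uses `LIKE` to copy the column definitions of the `sales` table shown above into a hypothetical `sales_copy` table, while explicitly setting its own bucket number and distribution key:
-
-``` sql
-=> CREATE TABLE sales_copy (LIKE sales) WITH (bucketnum=8) DISTRIBUTED BY (id);
-```
-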
-#### <a id="topic_kjg_tqm_gv"></a>Declaring Distribution Keys 
-
-`CREATE TABLE`'s optional clause `DISTRIBUTED BY` specifies the distribution policy for a table. The default is a random distribution policy. You can also choose to distribute data as a hash-based policy, where the `bucketnum` attribute sets the number of hash buckets used by a hash-distributed table. HASH distributed tables are created with the number of hash buckets specified by the `default_hash_table_bucket_number` parameter.
-
-Policies for different application scenarios can be specified to optimize performance. The number of virtual segments used for query execution can be tuned using the `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit` parameters, in connection with the `default_hash_table_bucket_number` parameter, which sets the default `bucketnum`. For more information, see the guidelines for Virtual Segments in the next section and in [Query Performance](../query/query-performance.html#topic38).
-
-#### <a id="topic_wff_mqm_gv"></a>Performance Tuning 
-
-Adjusting the values of the configuration parameters `default_hash_table_bucket_number` and `hawq_rm_nvseg_perquery_limit` can tune performance by controlling the number of virtual segments being used. In most circumstances, HAWQ's elastic runtime will dynamically allocate virtual segments to optimize performance, so further tuning should not be needed.
-
-Hash tables are created using the value specified in `default_hash_table_bucket_number`. Queries for hash tables use a fixed number of buckets, regardless of the amount of data present. Explicitly setting `default_hash_table_bucket_number` can be useful in managing resources. If you desire a larger or smaller number of hash buckets, set this value before you create tables. Resources are dynamically allocated to a multiple of the number of nodes. If you use `hawq init --bucket_number` to set the value of `default_hash_table_bucket_number` during cluster initialization or expansion, the value should not exceed the value of `hawq_rm_nvseg_perquery_limit`. This server parameter defines the maximum number of virtual segments that can be used for a query \(default = 512, with a maximum of 65535\). Modifying the value to greater than 1000 segments is not recommended.
-
-The following per-node guidelines apply to values for `default_hash_table_bucket_number`.
-
-|Number of Nodes|default\_hash\_table\_bucket\_number value|
-|---------------|------------------------------------------|
-|<= 85|6 \* \#nodes|
-|\> 85 and <= 102|5 \* \#nodes|
-|\> 102 and <= 128|4 \* \#nodes|
-|\> 128 and <= 170|3 \* \#nodes|
-|\> 170 and <= 256|2 \* \#nodes|
-|\> 256 and <= 512|1 \* \#nodes|
-|\> 512|512|
-
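-For example, a hypothetical 100-node cluster falls into the "\> 85 and <= 102" row of the table, so these guidelines suggest a value of 5 \* 100 = 500, which stays below the default `hawq_rm_nvseg_perquery_limit` of 512.
-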
-Reducing the value of `hawq_rm_nvseg_perquery_perseg_limit` can improve concurrency, and increasing it may increase the degree of parallelism. However, for some queries, increasing the degree of parallelism will not improve performance if the query has reached the limits set by the hardware. Therefore, increasing the value of `hawq_rm_nvseg_perquery_perseg_limit` above the default value is not recommended. Also, changing the value of `default_hash_table_bucket_number` after initializing a cluster means the hash table data must be redistributed. If you are expanding a cluster, you might wish to change this value, but be aware that retuning could adversely affect performance.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/ddl/ddl-tablespace.html.md.erb
----------------------------------------------------------------------
diff --git a/ddl/ddl-tablespace.html.md.erb b/ddl/ddl-tablespace.html.md.erb
deleted file mode 100644
index 8720665..0000000
--- a/ddl/ddl-tablespace.html.md.erb
+++ /dev/null
@@ -1,154 +0,0 @@
----
-title: Creating and Managing Tablespaces
----
-
-Tablespaces allow database administrators to have multiple file systems per machine and decide how to best use physical storage to store database objects. They are named locations within a filespace in which you can create objects. Tablespaces allow you to assign different storage for frequently and infrequently used database objects or to control the I/O performance on certain database objects. For example, place frequently-used tables on file systems that use high performance solid-state drives \(SSD\), and place other tables on standard hard drives.
-
-A tablespace requires a file system location to store its database files. In HAWQ, the master and each segment require a distinct storage location. The collection of file system locations for all components in a HAWQ system is a *filespace*. Filespaces can be used by one or more tablespaces.
-
-## <a id="topic10"></a>Creating a Filespace 
-
-A filespace sets aside storage for your HAWQ system. A filespace is a symbolic storage identifier that maps onto a set of locations in your HAWQ hosts' file systems. To create a filespace, prepare the logical file systems on all of your HAWQ hosts, then use the `hawq filespace` utility to define the filespace. You must be a database superuser to create a filespace.
-
-**Note:** HAWQ is not directly aware of the file system boundaries on your underlying systems. It stores files in the directories that you tell it to use. You cannot control the location on disk of individual files within a logical file system.
-
-### <a id="im178954"></a>To create a filespace using hawq filespace 
-
-1.  Log in to the HAWQ master as the `gpadmin` user.
-
-    ``` shell
-    $ su - gpadmin
-    ```
-
-2.  Create a filespace configuration file:
-
-    ``` shell
-    $ hawq filespace -o hawqfilespace_config
-    ```
-
-3.  At the prompts, enter a name for the filespace, the number of replicas, and the DFS location for the filespace. For example:
-
-    ``` shell
-    $ hawq filespace -o hawqfilespace_config
-    ```
-    ``` pre
-    Enter a name for this filespace
-    > testfs
-    Enter replica num for filespace. If 0, default replica num is used (default=3)
-    > 
-
-    Please specify the DFS location for the filespace (for example: localhost:9000/fs)
-    location> localhost:8020/fs        
-    20160409:16:53:25:028082 hawqfilespace:gpadmin:gpadmin-[INFO]:-[created]
-    20160409:16:53:25:028082 hawqfilespace:gpadmin:gpadmin-[INFO]:-
-    To add this filespace to the database please run the command:
-       hawqfilespace --config /Users/gpadmin/curwork/git/hawq/hawqfilespace_config
-    ```
-       
-    ``` shell
-    $ cat /Users/gpadmin/curwork/git/hawq/hawqfilespace_config
-    ```
-    ``` pre
-    filespace:testfs
-    fsreplica:3
-    dfs_url::localhost:8020/fs
-    ```
-    ``` shell
-    $ hawq filespace --config /Users/gpadmin/curwork/git/hawq/hawqfilespace_config
-    ```
-    ``` pre
-    Reading Configuration file: '/Users/gpadmin/curwork/git/hawq/hawqfilespace_config'
-
-    CREATE FILESPACE testfs ON hdfs 
-    ('localhost:8020/fs/testfs') WITH (NUMREPLICA = 3);
-    20160409:16:57:56:028104 hawqfilespace:gpadmin:gpadmin-[INFO]:-Connecting to database
-    20160409:16:57:56:028104 hawqfilespace:gpadmin:gpadmin-[INFO]:-Filespace "testfs" successfully created
-
-    ```
-
-
-4.  `hawq filespace` writes the filespace definition to a configuration file. Examine the file to verify that the hawq filespace configuration is correct. The following is a sample configuration file for a filespace named `fastdisk`:
-
-    ```
-    filespace:fastdisk
-    fsreplica:3
-    dfs_url::localhost:8020/fs
-    ```
-
-5.  Run hawq filespace again to create the filespace based on the configuration file:
-
-    ``` shell
-    $ hawq filespace -c hawqfilespace_config
-    ```
-
-
-## <a id="topic13"></a>Creating a Tablespace 
-
-After you create a filespace, use the `CREATE TABLESPACE` command to define a tablespace that uses that filespace. For example:
-
-``` sql
-=# CREATE TABLESPACE fastspace FILESPACE fastdisk;
-```
-
-Database superusers define tablespaces and grant access to database users with the `GRANT CREATE` command. For example:
-
-``` sql
-=# GRANT CREATE ON TABLESPACE fastspace TO admin;
-```
-
-## <a id="topic14"></a>Using a Tablespace to Store Database Objects 
-
-Users with the `CREATE` privilege on a tablespace can create database objects in that tablespace, such as tables, indexes, and databases. The command is:
-
-``` sql
-CREATE TABLE tablename(options) TABLESPACE spacename
-```
-
-For example, the following command creates a table in the tablespace *space1*:
-
-``` sql
-CREATE TABLE foo(i int) TABLESPACE space1;
-```
-
-You can also use the `default_tablespace` parameter to specify the default tablespace for `CREATE TABLE` and `CREATE INDEX` commands that do not specify a tablespace:
-
-``` sql
-SET default_tablespace = space1;
-CREATE TABLE foo(i int);
-```
-
-The tablespace associated with a database stores that database's system catalogs and any temporary files created by server processes using that database. It is also the default tablespace selected for tables and indexes created within the database, if no `TABLESPACE` is specified when the objects are created. If you do not specify a tablespace when you create a database, the database uses the same tablespace used by its template database.
-
-You can use a tablespace from any database if you have appropriate privileges.
-
-## <a id="topic15"></a>Viewing Existing Tablespaces and Filespaces 
-
-Every HAWQ system has the following default tablespaces.
-
--   `pg_global` for shared system catalogs.
--   `pg_default`, the default tablespace. Used by the *template1* and *template0* databases.
-
-These tablespaces use the system default filespace, `pg_system`, the data directory location created at system initialization.
-
-To see filespace information, look in the *pg\_filespace* and *pg\_filespace\_entry* catalog tables. You can join these tables with *pg\_tablespace* to see the full definition of a tablespace. For example:
-
-``` sql
-=# SELECT spcname AS tblspc, fsname AS filespc,
-          fsedbid AS seg_dbid, fselocation AS datadir
-   FROM   pg_tablespace pgts, pg_filespace pgfs,
-          pg_filespace_entry pgfse
-   WHERE  pgts.spcfsoid=pgfse.fsefsoid
-          AND pgfse.fsefsoid=pgfs.oid
-   ORDER BY tblspc, seg_dbid;
-```
-
-## <a id="topic16"></a>Dropping Tablespaces and Filespaces 
-
-To drop a tablespace, you must be the tablespace owner or a superuser. You cannot drop a tablespace until all objects in all databases using the tablespace are removed.
-
-Only a superuser can drop a filespace. A filespace cannot be dropped until all tablespaces using that filespace are removed.
-
-The `DROP TABLESPACE` command removes an empty tablespace.
-
-The `DROP FILESPACE` command removes an empty filespace.
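-
-For example, to remove the tablespace and filespace created earlier in this topic \(assuming no remaining objects use them\):
-
-``` sql
-=# DROP TABLESPACE fastspace;
-=# DROP FILESPACE fastdisk;
-```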

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/ddl/ddl-view.html.md.erb
----------------------------------------------------------------------
diff --git a/ddl/ddl-view.html.md.erb b/ddl/ddl-view.html.md.erb
deleted file mode 100644
index 35da41e..0000000
--- a/ddl/ddl-view.html.md.erb
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: Creating and Managing Views
----
-
-Views enable you to save frequently used or complex queries, then access them in a `SELECT` statement as if they were a table. A view is not physically materialized on disk: the query runs as a subquery when you access the view.
-
-If a subquery is associated with a single query, consider using the `WITH` clause of the `SELECT` command instead of creating a seldom-used view.
-
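-For example, rather than defining a rarely used view, the same result can be produced inline with a `WITH` clause, using the `films` table from the example below:
-
-``` sql
-WITH comedies AS (SELECT * FROM films WHERE kind = 'comedy')
-SELECT * FROM comedies;
-```
-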
-## <a id="topic101"></a>Creating Views 
-
-The `CREATE VIEW` command defines a view of a query. For example:
-
-``` sql
-CREATE VIEW comedies AS SELECT * FROM films WHERE kind = 'comedy';
-```
-
-Views ignore `ORDER BY` and `SORT` operations stored in the view.
-
-## <a id="topic102"></a>Dropping Views 
-
-The `DROP VIEW` command removes a view. For example:
-
-``` sql
-DROP VIEW topten;
-```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/ddl/ddl.html.md.erb
----------------------------------------------------------------------
diff --git a/ddl/ddl.html.md.erb b/ddl/ddl.html.md.erb
deleted file mode 100644
index 7873fe7..0000000
--- a/ddl/ddl.html.md.erb
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: Defining Database Objects
----
-
-This section covers data definition language \(DDL\) in HAWQ and how to create and manage database objects.
-
-Creating objects in a HAWQ database includes making up-front choices about data distribution, storage options, data loading, and other HAWQ features that will affect the ongoing performance of your database system. Understanding the options that are available and how the database will be used will help you make the right decisions.
-
-Most of the advanced HAWQ features are enabled with extensions to the SQL `CREATE` DDL statements.
-
-This section contains the topics:
-
-*  <a class="subnav" href="./ddl-database.html">Creating and Managing Databases</a>
-*  <a class="subnav" href="./ddl-tablespace.html">Creating and Managing Tablespaces</a>
-*  <a class="subnav" href="./ddl-schema.html">Creating and Managing Schemas</a>
-*  <a class="subnav" href="./ddl-table.html">Creating and Managing Tables</a>
-*  <a class="subnav" href="./ddl-storage.html">Table Storage Model and Distribution Policy</a>
-*  <a class="subnav" href="./ddl-partition.html">Partitioning Large Tables</a>
-*  <a class="subnav" href="./ddl-view.html">Creating and Managing Views</a>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/images/02-pipeline.png
----------------------------------------------------------------------
diff --git a/images/02-pipeline.png b/images/02-pipeline.png
deleted file mode 100644
index 26fec1b..0000000
Binary files a/images/02-pipeline.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/images/03-gpload-files.jpg
----------------------------------------------------------------------
diff --git a/images/03-gpload-files.jpg b/images/03-gpload-files.jpg
deleted file mode 100644
index d50435f..0000000
Binary files a/images/03-gpload-files.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/images/basic_query_flow.png
----------------------------------------------------------------------
diff --git a/images/basic_query_flow.png b/images/basic_query_flow.png
deleted file mode 100644
index 59172a2..0000000
Binary files a/images/basic_query_flow.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/images/ext-tables-xml.png
----------------------------------------------------------------------
diff --git a/images/ext-tables-xml.png b/images/ext-tables-xml.png
deleted file mode 100644
index f208828..0000000
Binary files a/images/ext-tables-xml.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/images/ext_tables.jpg
----------------------------------------------------------------------
diff --git a/images/ext_tables.jpg b/images/ext_tables.jpg
deleted file mode 100644
index d5a0940..0000000
Binary files a/images/ext_tables.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/images/ext_tables_multinic.jpg
----------------------------------------------------------------------
diff --git a/images/ext_tables_multinic.jpg b/images/ext_tables_multinic.jpg
deleted file mode 100644
index fcf09c4..0000000
Binary files a/images/ext_tables_multinic.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/images/gangs.jpg
----------------------------------------------------------------------
diff --git a/images/gangs.jpg b/images/gangs.jpg
deleted file mode 100644
index 0d14585..0000000
Binary files a/images/gangs.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/images/gporca.png
----------------------------------------------------------------------
diff --git a/images/gporca.png b/images/gporca.png
deleted file mode 100644
index 2909443..0000000
Binary files a/images/gporca.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/images/hawq_hcatalog.png
----------------------------------------------------------------------
diff --git a/images/hawq_hcatalog.png b/images/hawq_hcatalog.png
deleted file mode 100644
index 35b74c3..0000000
Binary files a/images/hawq_hcatalog.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/images/slice_plan.jpg
----------------------------------------------------------------------
diff --git a/images/slice_plan.jpg b/images/slice_plan.jpg
deleted file mode 100644
index ad8da83..0000000
Binary files a/images/slice_plan.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/install/aws-config.html.md.erb
----------------------------------------------------------------------
diff --git a/install/aws-config.html.md.erb b/install/aws-config.html.md.erb
deleted file mode 100644
index 21cadf5..0000000
--- a/install/aws-config.html.md.erb
+++ /dev/null
@@ -1,123 +0,0 @@
----
-title: Amazon EC2 Configuration
----
-
-Amazon Elastic Compute Cloud (EC2) is a service provided by Amazon Web Services (AWS).  You can install and configure HAWQ on virtual servers provided by Amazon EC2. The following information describes some considerations when deploying a HAWQ cluster in an Amazon EC2 environment.
-
-## <a id="topic_wqv_yfx_y5"></a>About Amazon EC2 
-
-Amazon EC2 can be used to launch as many virtual servers as you need, configure security and networking, and manage storage. An EC2 *instance* is a virtual server in the AWS cloud virtual computing environment.
-
-EC2 instances are managed by AWS. AWS isolates your EC2 instances from other users in a virtual private cloud (VPC) and lets you control access to the instances. You can configure instance features such as operating system, network connectivity (network ports and protocols, IP addresses), access to the Internet, and size and type of disk storage. 
-
-For information about Amazon EC2, see the [EC2 User Guide](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html).
-
-## <a id="topic_nhk_df4_2v"></a>Create and Launch HAWQ Instances
-
-Use the *Amazon EC2 Console* to launch instances and configure, start, stop, and terminate (delete) virtual servers. When you launch a HAWQ instance, you select and configure key attributes via the EC2 Console.
-
-
-### <a id="topic_amitype"></a>Choose AMI Type
-
-An Amazon Machine Image (AMI) is a template that contains a software configuration including the operating system, application server, and applications that best suit your purpose. When configuring a HAWQ virtual instance, we recommend you use a *hardware virtualized* AMI running 64-bit Red Hat Enterprise Linux version 6.4 or 6.5 or 64-bit CentOS 6.4 or 6.5.  Obtain the licenses and instances directly from the OS provider.
-
-### <a id="topic_selcfgstorage"></a>Consider Storage
-EC2 instances can be launched as either Elastic Block Store (EBS)-backed or instance store-backed.  
-
-Instance store-backed storage is generally better performing than EBS and recommended for HAWQ's large data workloads. SSD (solid state) instance store is preferred over magnetic drives.
-
-**Note:** EC2 *instance store* provides temporary block-level storage. This storage is located on disks that are physically attached to the host computer. While instance store provides high performance, powering off the instance causes data loss. Soft reboots preserve instance store data.
-     
-Virtual devices for instance store volumes on HAWQ EC2 instance store-backed instances are named `ephemeralN` (where *N* varies based on instance type). CentOS instance store block devices are named `/dev/xvdletter` (where *letter* is a lower-case letter of the alphabet).
-
-### <a id="topic_cfgplacegrp"></a>Configure Placement Group 
-
-A placement group is a logical grouping of instances within a single availability zone that together participate in a low-latency, 10 Gbps network.  Your HAWQ master and segment cluster instances should support enhanced networking and reside in a single placement group (and subnet) for optimal network performance.  
-
-If your Ambari node is not a DataNode, locating the Ambari node instance in a subnet separate from the HAWQ master/segment placement group enables you to manage multiple HAWQ clusters from the single Ambari instance.
-
-Amazon recommends that you use the same instance type for all instances in the placement group and that you launch all instances within the placement group at the same time.
-
-Membership in a placement group has some implications for your HAWQ cluster. Specifically, growing the cluster beyond the capacity of the placement group may require shutting down all HAWQ instances in the current placement group and restarting them in a new placement group. Instance store volumes are lost in this scenario.
-
-### <a id="topic_selinsttype"></a>Select EC2 Instance Type
-
-An EC2 instance type is a specific combination of CPU, memory, default storage, and networking capacity.  
-
-Several instance store-backed EC2 instance types have shown acceptable performance for HAWQ nodes in development and production environments: 
-
-| Instance Type  | Env | vCPUs | Memory (GB) | Disk Capacity (GB) | Storage Type |
-|-------|-----|------|--------|----------|--------|
-| cc2.8xlarge  | Dev | 32 | 60.5 | 4 x 840 | HDD |
-| d2.2xlarge  | Dev | 8 | 60 | 6 x 2000 | HDD |
-| d2.4xlarge  | Dev/QA | 16 | 122 | 12 x 2000 | HDD |
-| i2.8xlarge  | Prod | 32 | 244 | 8 x 800 | SSD |
-| hs1.8xlarge  | Prod | 16 | 117 | 24 x 2000 | HDD |
-| d2.8xlarge  | Prod | 36 | 244 | 24 x 2000 | HDD |
- 
-For optimal network performance, the chosen HAWQ instance type should support EC2 enhanced networking. Enhanced networking results in higher performance, lower latency, and lower jitter. Refer to [Enhanced Networking on Linux Instances](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html) for detailed information on enabling enhanced networking in your instances.
-
-All instance types identified in the table above support enhanced networking.
-
-### <a id="topic_cfgnetw"></a>Configure Networking 
-
-Your HAWQ cluster instances should be in a single VPC and on the same subnet. Instances are always assigned a VPC internal IP address. This internal IP address should be used for HAWQ communication between hosts. You can also use the internal IP address to access an instance from another instance within the HAWQ VPC.
-
-You may choose to locate your Ambari node on a separate subnet in the VPC. Both a public IP address for the instance and an Internet gateway configured for the EC2 VPC are required to access the Ambari instance from an external source and for the instance to access the Internet. 
-
-Ensure your Ambari and HAWQ master instances are each assigned a public IP address for external and internet access. We recommend you also assign an Elastic IP Address to the HAWQ master instance.
-
-
-### <a id="topic_cfgsecgrp"></a>Configure Security Groups
-
-A security group is a set of rules that control network traffic to and from your HAWQ instance.  One or more rules may be associated with a security group, and one or more security groups may be associated with an instance.
-
-To configure HAWQ communication between nodes in the HAWQ cluster, include and open the following ports in the appropriate security group for the HAWQ master and segment nodes:
-
-| Port  | Application |
-|-------|-------------------------------------|
-| 22    | ssh - secure connect to other hosts |
-
-To allow access to/from a source external to the Ambari management node, include and open the following ports in an appropriate security group for your Ambari node:
-
-| Port  | Application |
-|-------|-------------------------------------|
-| 22    | ssh - secure connect to other hosts |
-| 8080  | Ambari - HAWQ admin/config web console |  
-
-
-### <a id="topic_cfgkeypair"></a>Generate Key Pair
-AWS uses public-key cryptography to secure the login information for your instance. You use the EC2 console to generate and name a key pair when you launch your instance.  
-
-A key pair for an EC2 instance consists of a *public key* that AWS stores, and a *private key file* that you maintain. Together, they allow you to connect to your instance securely. The private key file name typically has a `.pem` suffix.
-
-This example logs in to an EC2 instance from an external location with the private key file `my-test.pem` as user `user1`. In this example, the instance is configured with the public IP address `192.0.2.0` and the private key file resides in the current directory.
-
-```shell
-$ ssh -i my-test.pem user1@192.0.2.0
-```
-
-## <a id="topic_mj4_524_2v"></a>Additional HAWQ Considerations
-
-After launching your HAWQ instance, you will connect to and configure the instance. The  *Instances* page of the EC2 Console lists the running instances and their associated network access information.
-
-Before installing HAWQ, set up the EC2 instances as you would local host server machines. Configure the host operating system, configure host network information (for example, update the `/etc/hosts` file), set operating system parameters, and install operating system packages. For information about how to prepare your operating system environment for HAWQ, see [Apache HAWQ System Requirements](../requirements/system-requirements.html) and [Select HAWQ Host Machines](../install/select-hosts.html).
-
-### <a id="topic_pwdlessssh_cc"></a>Passwordless SSH Configuration
-
-HAWQ hosts will be configured during the installation process to use passwordless SSH for intra-cluster communications. Temporary password-based authentication must be enabled on each HAWQ host in preparation for this configuration. Password authentication is typically disabled by default in cloud images. Update the cloud configuration in `/etc/cloud/cloud.cfg` to enable password authentication in your AMI(s). Set `ssh_pwauth: True` in this file. If desired, disable password authentication after HAWQ installation by setting the property back to `False`.
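-
-For example, the relevant setting in `/etc/cloud/cloud.cfg`, shown here as a minimal excerpt, is:
-
-```pre
-ssh_pwauth: True
-```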
-  
-## <a id="topic_hgz_zwy_bv"></a>References
-
-Links to related Amazon Web Services and EC2 features and information.
-
-- [Amazon Web Services](https://aws.amazon.com)
-- [Amazon Machine Image \(AMI\)](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html)
-- [EC2 Instance Store](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html)
-- [Elastic Block Store](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html)
-- [EC2 Key Pairs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html)
-- [Elastic IP Address](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html)
-- [Enhanced Networking on Linux Instances](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html)
-- [Internet Gateways](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html)
-- [Subnet Public IP Addressing](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-ip-addressing.html#subnet-public-ip)
-- [Virtual Private Cloud](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/install/select-hosts.html.md.erb
----------------------------------------------------------------------
diff --git a/install/select-hosts.html.md.erb b/install/select-hosts.html.md.erb
deleted file mode 100644
index ecbe0b5..0000000
--- a/install/select-hosts.html.md.erb
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: Select HAWQ Host Machines
----
-
-Before you begin to install HAWQ, follow these steps to select and prepare the host machines.
-
-Complete this procedure for all HAWQ deployments:
-
-1.  **Choose the host machines that will host a HAWQ segment.** Keep in mind these restrictions and requirements:
-    -   Each host must meet the system requirements for the version of HAWQ you are installing.
-    -   Each HAWQ segment must be co-located on a host that runs an HDFS DataNode.
-    -   The HAWQ master segment and standby master segment must be hosted on separate machines.
-2.  **Choose the host machines that will run PXF.** Keep in mind these restrictions and requirements:
-    -   PXF must be installed on the HDFS NameNode *and* on all HDFS DataNodes.
-    -   If you have configured Hadoop with high availability, PXF must also be installed on all HDFS nodes including all NameNode services.
-    -   If you want to use PXF with HBase or Hive, you must first install the HBase client \(hbase-client\) and/or Hive client \(hive-client\) on each machine where you intend to install PXF. See the [HDP installation documentation](https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/index.html) for more information.
-3.  **Verify that required ports on all machines are unused.** By default, a HAWQ master or standby master service configuration uses port 5432. Hosts that run other PostgreSQL instances cannot be used to run a default HAWQ master or standby service configuration because the default PostgreSQL port \(5432\) conflicts with the default HAWQ port. You must either change the default port configuration of the running PostgreSQL instance or change the HAWQ master port setting during the HAWQ service installation to avoid port conflicts.
-    
-    **Note:** The Ambari server node uses PostgreSQL as the default metadata database. The Hive Metastore uses MySQL as the default metadata database.
\ No newline at end of file


[23/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_tablespace.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_tablespace.html.md.erb b/markdown/reference/catalog/pg_tablespace.html.md.erb
new file mode 100644
index 0000000..493c6b0
--- /dev/null
+++ b/markdown/reference/catalog/pg_tablespace.html.md.erb
@@ -0,0 +1,22 @@
+---
+title: pg_tablespace
+---
+
+The `pg_tablespace` system catalog table stores information about the available tablespaces. Tables can be placed in particular tablespaces to aid administration of disk layout. Unlike most system catalogs, `pg_tablespace` is shared across all databases of a HAWQ system: there is only one copy of `pg_tablespace` per system, not one per database.
+
+<a id="topic1__hx156260"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_tablespace</span>
+
+| column            | type        | references        | description                                                                                                                 |
+|-------------------|-------------|-------------------|-----------------------------------------------------------------------------------------------------------------------------|
+| `spcname`         | name        | �                 | Tablespace name.                                                                                                            |
+| `spcowner`        | oid         | pg\_authid.oid    | Owner of the tablespace, usually the user who created it.                                                                   |
+| `spclocation`     | text\[\]    | �                 | Deprecated.                                                                                                                 |
+| `spcacl`          | aclitem\[\] | �                 | Tablespace access privileges.                                                                                               |
+| `spcprilocations` | text\[\]    | �                 | Deprecated.                                                                                                                 |
+| `spcmrilocations` | text\[\]    | �                 | Deprecated.                                                                                                                 |
+| `spcfsoid`        | oid         | pg\_filespace.oid | The object id of the filespace used by this tablespace. A filespace defines directory locations on the master and segments. |
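+
+For example, a query such as the following lists each tablespace name together with the OID of the filespace it uses:
+
+``` sql
+SELECT spcname, spcfsoid FROM pg_tablespace;
+```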
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_trigger.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_trigger.html.md.erb b/markdown/reference/catalog/pg_trigger.html.md.erb
new file mode 100644
index 0000000..3074e46
--- /dev/null
+++ b/markdown/reference/catalog/pg_trigger.html.md.erb
@@ -0,0 +1,114 @@
+---
+title: pg_trigger
+---
+
+The `pg_trigger` system catalog table stores triggers on tables.
+
+**Note:** HAWQ does not support triggers.
+
+<a id="topic1__hy183441"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_trigger</span>
+
+<table>
+<colgroup>
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>column</th>
+<th>type</th>
+<th>references</th>
+<th>description</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><code class="ph codeph">tgrelid</code></td>
+<td>oid</td>
+<td><em>pg_class.oid</em>
+<p>Note that HAWQ does not enforce referential integrity.</p></td>
+<td>The table this trigger is on.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">tgname</code></td>
+<td>name</td>
+<td>�</td>
+<td>Trigger name (must be unique among triggers of same table).</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">tgfoid</code></td>
+<td>oid</td>
+<td><em>pg_proc.oid</em>
+<p>Note that HAWQ does not enforce referential integrity.</p></td>
+<td>The function to be called.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">tgtype</code></td>
+<td>smallint</td>
+<td>�</td>
+<td>Bit mask identifying trigger conditions.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">tgenabled</code></td>
+<td>boolean</td>
+<td>�</td>
+<td>True if trigger is enabled.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">tgisconstraint</code></td>
+<td>boolean</td>
+<td>�</td>
+<td>True if trigger implements a referential integrity constraint.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">tgconstrname</code></td>
+<td>name</td>
+<td>�</td>
+<td>Referential integrity constraint name.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">tgconstrrelid</code></td>
+<td>oid</td>
+<td><em>pg_class.oid</em>
+<p>Note that HAWQ does not enforce referential integrity.</p></td>
+<td>The table referenced by a referential integrity constraint.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">tgdeferrable</code></td>
+<td>boolean</td>
+<td>�</td>
+<td>True if deferrable.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">tginitdeferred</code></td>
+<td>boolean</td>
+<td>�</td>
+<td>True if initially deferred.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">tgnargs</code></td>
+<td>smallint</td>
+<td>�</td>
+<td>Number of argument strings passed to trigger function.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">tgattr</code></td>
+<td>int2vector</td>
+<td>�</td>
+<td>Currently not used.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">tgargs</code></td>
+<td>bytea</td>
+<td>�</td>
+<td>Argument strings to pass to trigger, each NULL-terminated.</td>
+</tr>
+</tbody>
+</table>
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_type.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_type.html.md.erb b/markdown/reference/catalog/pg_type.html.md.erb
new file mode 100644
index 0000000..e2ea28a
--- /dev/null
+++ b/markdown/reference/catalog/pg_type.html.md.erb
@@ -0,0 +1,176 @@
+---
+title: pg_type
+---
+
+The `pg_type` system catalog table stores information about data types. Base types (scalar types) are created with `CREATE TYPE`, and domains with `CREATE DOMAIN`. A composite type is automatically created for each table in the database, to represent the row structure of the table. It is also possible to create composite types with `CREATE TYPE AS`.
+
+<a id="topic1__hz156260"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_type</span>
+
+<table>
+<colgroup>
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>column</th>
+<th>type</th>
+<th>references</th>
+<th>description</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><code class="ph codeph">typname</code></td>
+<td>name</td>
+<td>�</td>
+<td>Data type name.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">typnamespace</code></td>
+<td>oid</td>
+<td>pg_namespace.oid</td>
+<td>The OID of the namespace that contains this type.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">typowner</code></td>
+<td>oid</td>
+<td>pg_authid.oid</td>
+<td>Owner of the type.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">typlen</code></td>
+<td>smallint</td>
+<td>�</td>
+<td>For a fixed-size type, <code class="ph codeph">typlen</code> is the number of bytes in the internal representation of the type. But for a variable-length type, <code class="ph codeph">typlen</code> is negative. <code class="ph codeph">-1</code> indicates a 'varlena' type (one that has a length word), <code class="ph codeph">-2</code> indicates a null-terminated C string.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">typbyval</code></td>
+<td>boolean</td>
+<td>�</td>
+<td>Determines whether internal routines pass a value of this type by value or by reference. <code class="ph codeph">typbyval </code>had better be false if <code class="ph codeph">typlen</code> is not 1, 2, or 4 (or 8 on machines where Datum is 8 bytes). Variable-length types are always passed by reference. Note that <code class="ph codeph">typbyval</code> can be false even if the length would allow pass-by-value; this is currently true for type <code class="ph codeph">float4</code>, for example.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">typtype</code></td>
+<td>char</td>
+<td>�</td>
+<td><code class="ph codeph">b</code> for a base type, <code class="ph codeph">c</code> for a composite type, <code class="ph codeph">d</code> for a domain, or <code class="ph codeph">p</code> for a pseudo-type.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">typisdefined</code></td>
+<td>boolean</td>
+<td>�</td>
+<td>True if the type is defined, false if this is a placeholder entry for a not-yet-defined type. When false, nothing except the type name, namespace, and OID can be relied on.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">typdelim</code></td>
+<td>char</td>
+<td>�</td>
+<td>Character that separates two values of this type when parsing array input. Note that the delimiter is associated with the array element data type, not the array data type.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">typrelid</code></td>
+<td>oid</td>
+<td>pg_class.oid</td>
+<td>If this is a composite type, then this column points to the <code class="ph codeph">pg_class</code> entry that defines the corresponding table. (For a free-standing composite type, the <code class="ph codeph">pg_class</code> entry does not really represent a table, but it is needed anyway for the type's <code class="ph codeph">pg_attribute</code> entries to link to.) Zero for non-composite types.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">typelem</code></td>
+<td>oid</td>
+<td>pg_type.oid</td>
+<td>If not <code class="ph codeph">0</code> then it identifies another row in pg_type. The current type can then be subscripted like an array yielding values of type <code class="ph codeph">typelem</code>. A true array type is variable length (<code class="ph codeph">typlen</code> = <code class="ph codeph">-1</code>), but some fixed-length (<code class="ph codeph">typlen</code> &gt; <code class="ph codeph">0</code>) types also have nonzero <code class="ph codeph">typelem</code>, for example <code class="ph codeph">name</code> and <code class="ph codeph">point</code>. If a fixed-length type has a <code class="ph codeph">typelem</code> then its internal representation must be some number of values of the <code class="ph codeph">typelem</code> data type with no other data. Variable-length array types have a header defined by the array subroutines.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">typinput</code></td>
+<td>regproc</td>
+<td>pg_proc.oid</td>
+<td>Input conversion function (text format).</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">typoutput</code></td>
+<td>regproc</td>
+<td>pg_proc.oid</td>
+<td>Output conversion function (text format).</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">typreceive</code></td>
+<td>regproc</td>
+<td>pg_proc.oid</td>
+<td>Input conversion function (binary format), or 0 if none.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">typsend</code></td>
+<td>regproc</td>
+<td>pg_proc.oid</td>
+<td>Output conversion function (binary format), or 0 if none.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">typanalyze</code></td>
+<td>regproc</td>
+<td>pg_proc.oid</td>
+<td>Custom <code class="ph codeph">ANALYZE</code> function, or 0 to use the standard function.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">typalign</code></td>
+<td>char</td>
+<td>�</td>
+<td>The alignment required when storing a value of this type. It applies to storage on disk as well as most representations of the value inside HAWQ. When multiple values are stored consecutively, such as in the representation of a complete row on disk, padding is inserted before a datum of this type so that it begins on the specified boundary. The alignment reference is the beginning of the first datum in the sequence. Possible values are:
+<p><code class="ph codeph">c</code> = char alignment (no alignment needed).</p>
+<p><code class="ph codeph">s</code> = short alignment (2 bytes on most machines).</p>
+<p><code class="ph codeph">i</code> = int alignment (4 bytes on most machines).</p>
+<p><code class="ph codeph">d</code> = double alignment (8 bytes on many machines, but not all).</p></td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">typstorage</code></td>
+<td>char</td>
+<td>�</td>
+<td>For varlena types (those with <code class="ph codeph">typlen</code> = -1) tells if the type is prepared for toasting and what the default strategy for attributes of this type should be. Possible values are:
+<p><code class="ph codeph">p</code>: Value must always be stored plain.</p>
+<p><code class="ph codeph">e</code>: Value can be stored in a secondary relation (if relation has one, see <code class="ph codeph">pg_class.reltoastrelid</code>).</p>
+<p><code class="ph codeph">m</code>: Value can be stored compressed inline.</p>
+<p><code class="ph codeph">x</code>: Value can be stored compressed inline or stored in secondary storage.</p>
+<p>Note that <code class="ph codeph">m</code> columns can also be moved out to secondary storage, but only as a last resort (<code class="ph codeph">e</code> and <code class="ph codeph">x</code> columns are moved first).</p></td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">typnotnull</code></td>
+<td>boolean</td>
+<td>�</td>
+<td>Represents a not-null constraint on a type. Used for domains only.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">typbasetype</code></td>
+<td>oid</td>
+<td>pg_type.oid</td>
+<td>Identifies the type that a domain is based on. Zero if this type is not a domain.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">typtypmod</code></td>
+<td>integer</td>
+<td>�</td>
+<td>Domains use typtypmod to record the typmod to be applied to their base type (-1 if base type does not use a typmod). -1 if this type is not a domain.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">typndims</code></td>
+<td>integer</td>
+<td>�</td>
+<td>The number of array dimensions for a domain that is an array (if <code class="ph codeph">typbasetype</code> is an array type; the domain's <code class="ph codeph">typelem</code> will match the base type's <code class="ph codeph">typelem</code>). Zero for types other than array domains.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">typdefaultbin</code></td>
+<td>text</td>
+<td>�</td>
+<td>If not null, it is the <code class="ph codeph">nodeToString()</code> representation of a default expression for the type. This is only used for domains.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">typdefault</code></td>
+<td>text</td>
+<td>�</td>
+<td>Null if the type has no associated default value. If not null, typdefault must contain a human-readable version of the default expression represented by typdefaultbin. If typdefaultbin is null and typdefault is not, then typdefault is the external representation of the type's default value, which may be fed to the type's input converter to produce a constant.</td>
+</tr>
+</tbody>
+</table>
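+
+For example, a query such as the following shows the storage length, pass-by-value flag, and type category that `pg_type` records for the built-in `int4` type:
+
+``` sql
+SELECT typname, typlen, typbyval, typtype FROM pg_type WHERE typname = 'int4';
+```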
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_type_encoding.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_type_encoding.html.md.erb b/markdown/reference/catalog/pg_type_encoding.html.md.erb
new file mode 100644
index 0000000..b38ff10
--- /dev/null
+++ b/markdown/reference/catalog/pg_type_encoding.html.md.erb
@@ -0,0 +1,15 @@
+---
+title: pg_type_encoding
+---
+
+The `pg_type_encoding` system catalog table contains the column storage type information.
+
+<a id="topic1__ia177831"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_type\_encoding</span>
+
+| column       | type       | modifiers | storage  | description                                                                      |
+|--------------|------------|----------|----------|----------------------------------------------------------------------------------|
+| `typeid`     | oid        | not null | plain    | Foreign key to [pg\_attribute](pg_attribute.html#topic1) |
+| `typoptions` | text \[ \] | �        | extended | The actual options                                                               |
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_window.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_window.html.md.erb b/markdown/reference/catalog/pg_window.html.md.erb
new file mode 100644
index 0000000..afe4c0d
--- /dev/null
+++ b/markdown/reference/catalog/pg_window.html.md.erb
@@ -0,0 +1,97 @@
+---
+title: pg_window
+---
+
+The `pg_window` table stores information about window functions. Window functions are often used to compose complex OLAP (online analytical processing) queries. Window functions are applied to partitioned result sets within the scope of a single query expression. A window partition is a subset of rows returned by a query, as defined in a special `OVER()` clause. Typical window functions are `rank`, `dense_rank`, and `row_number`. Each entry in `pg_window` is an extension of an entry in [pg\_proc](pg_proc.html#topic1). The [pg\_proc](pg_proc.html#topic1) entry carries the window function's name, input and output data types, and other information that is similar to ordinary functions.
+
+<a id="topic1__ic143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_window</span>
+
+<table>
+<colgroup>
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>column</th>
+<th>type</th>
+<th>references</th>
+<th>description</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><code class="ph codeph">winfnoid</code></td>
+<td>regproc</td>
+<td>pg_proc.oid</td>
+<td>The OID in <code class="ph codeph">pg_proc</code> of the window function.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">winrequireorder</code></td>
+<td>boolean</td>
+<td>�</td>
+<td>The window function requires its window specification to have an <code class="ph codeph">ORDER BY</code> clause.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">winallowframe</code></td>
+<td>boolean</td>
+<td>�</td>
+<td>The window function permits its window specification to have a <code class="ph codeph">ROWS</code> or <code class="ph codeph">RANGE</code> framing clause.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">winpeercount</code></td>
+<td>boolean</td>
+<td>&nbsp;</td>
+<td>The peer group row count is required to compute this window function, so the Window node implementation must 'look ahead' as necessary to make this available in its internal state.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">wincount</code></td>
+<td>boolean</td>
+<td>&nbsp;</td>
+<td>The partition row count is required to compute this window function.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">winfunc</code></td>
+<td>regproc</td>
+<td>pg_proc.oid</td>
+<td>The OID in <code class="ph codeph">pg_proc</code> of a function to compute the value of an immediate-type window function.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">winprefunc</code></td>
+<td>regproc</td>
+<td>pg_proc.oid</td>
+<td>The OID in <code class="ph codeph">pg_proc</code> of a preliminary window function to compute the partial value of a deferred-type window function.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">winpretype</code></td>
+<td>oid</td>
+<td>pg_type.oid</td>
+<td>The OID in <code class="ph codeph">pg_type</code> of the preliminary window function's result type.</td>
+</tr>
+<tr class="odd">
+<td>winfinfunc</td>
+<td>regproc</td>
+<td>pg_proc.oid</td>
+<td>The OID in <code class="ph codeph">pg_proc</code> of a function to compute the final value of a deferred-type window function from the partition row count and the result of <code class="ph codeph">winprefunc</code>.</td>
+</tr>
+<tr class="even">
+<td>winkind</td>
+<td>char</td>
+<td>&nbsp;</td>
+<td>A character indicating membership of the window function in a class of related functions:
+<p><code class="ph codeph">w</code> - ordinary window functions</p>
+<p><code class="ph codeph">n</code> - NTILE functions</p>
+<p><code class="ph codeph">f</code> - FIRST_VALUE functions</p>
+<p><code class="ph codeph">l</code> - LAST_VALUE functions</p>
+<p><code class="ph codeph">g</code> - LAG functions</p>
+<p><code class="ph codeph">d</code> - LEAD functions</p></td>
+</tr>
+</tbody>
+</table>
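+
+To list the window functions defined in a database along with a few of their `pg_window` attributes, you can join `pg_window` to `pg_proc` on `winfnoid`; a minimal example (the database name `postgres` is illustrative):
+
+``` shell
+$ psql -d postgres -c "SELECT p.proname, w.winkind, w.winrequireorder, w.winallowframe FROM pg_window w JOIN pg_proc p ON w.winfnoid = p.oid ORDER BY p.proname;"
+```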
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/analyzedb.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/analyzedb.html.md.erb b/markdown/reference/cli/admin_utilities/analyzedb.html.md.erb
new file mode 100644
index 0000000..0384c34
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/analyzedb.html.md.erb
@@ -0,0 +1,160 @@
+---
+title: analyzedb
+---
+
+A utility that performs `ANALYZE` operations on tables incrementally and concurrently.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+analyzedb -d <dbname> -s <schema>
+   [ --full ]    
+   [ -l | --list ]
+   [ -p <parallel-level> ]
+   [ -v | --verbose ]
+   [ -a ]
+   
+analyzedb -d <dbname> -t <schema>.<table> 
+   [ -i col1[, col2, ...] | -x col1[, col2, ...] ]
+   [ --full ]
+   [ -l | --list ]
+   [ -p <parallel-level> ]
+   [ -v | --verbose ]
+   [ -a ]
+     
+analyzedb -d <dbname> -f <config-file> | --file <config-file>
+   [ --full ]
+   [ -l | --list ]
+   [ -p <parallel-level> ]
+   [ -v | --verbose ]  
+   [ -a ]
+
+analyzedb -d <dbname> --clean_last | --clean_all 
+
+analyzedb --version
+
+analyzedb  -? | -h | --help 
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `analyzedb` utility updates statistics on table data for the specified tables in a HAWQ database incrementally and concurrently.
+
+While performing `ANALYZE` operations, `analyzedb` creates a snapshot of the table metadata and stores it on disk on the master host. An `ANALYZE` operation is performed only if the table has been modified. If a table or partition has not been modified since the last time it was analyzed, `analyzedb` automatically skips the table or partition because it already contains up-to-date statistics.
+
+For a partitioned table, `analyzedb` analyzes only those partitions that have no statistics or that have stale statistics. `analyzedb` also refreshes the statistics on the root partition.
+
+By default, `analyzedb` creates a maximum of 5 concurrent sessions to analyze tables in parallel. For each session, `analyzedb` issues an `ANALYZE` command to the database and specifies different table names. The `-p` option controls the maximum number of concurrent sessions.
+
+## <a id="topic1__section4"></a>Notes
+
+The utility determines if a table has been modified by comparing catalog metadata of tables with the snapshot of metadata taken during a previous `analyzedb` operation. The snapshots of table metadata are stored as state files in the directory `db_analyze` in the HAWQ master data directory. You can specify the `--clean_last` or `--clean_all` option to remove state files generated by `analyzedb`.
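+
+For example, to remove the state files from the most recent `analyzedb` run against a database (the database name `mytest` is illustrative):
+
+``` shell
+$ analyzedb -d mytest --clean_last
+```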
+
+If you do not specify a table, set of tables, or schema, the `analyzedb` utility collects the statistics as needed on all system catalog tables and user-defined tables in the database.
+
+External tables are not affected by `analyzedb`.
+
+Table names that contain spaces are not supported.
+
+
+## <a id="topic1__section5"></a>Arguments
+
+<dt>-d \<dbname\>  </dt>
+<dd>Specifies the name of the database that contains the tables to be analyzed. If this option is not specified, the database name is read from the environment variable `PGDATABASE`. If `PGDATABASE` is not set, the user name specified for the connection is used.</dd>
+
+<dt>-s \<schema\> </dt>
+<dd>Specify a schema to analyze. All tables in the schema will be analyzed. Only a single schema name can be specified on the command line.
+
+Only one of the following options can be used to specify the tables to be analyzed: `-f` (or `--file`), `-t`, or `-s`.</dd>
+
+<dt>-t \<schema\>.\<table\>  </dt>
+<dd>Collect statistics only on \<schema\>.\<table\>. The table name must be qualified with a schema name. Only a single table name can be specified on the command line. You can specify the `-f` option to specify multiple tables in a file or the `-s` option to specify all the tables in a schema.
+
+Only one of these options can be used to specify the tables to be analyzed: `-f` (or `--file`), `-t`, or `-s`.</dd>
+
+<dt>-f, -\\\-file \<config-file\>  </dt>
+<dd>A text file that contains a list of tables to be analyzed. A path relative to the current directory can be specified.
+
+The file lists one table per line. Table names must be qualified with a schema name. Optionally, a list of columns can be specified using the `-i` or `-x` option. No other options are allowed in the file; other options, such as `--full`, must be specified on the command line.
+
+Only one of the following options can be used to specify the tables to be analyzed: `-f` (or `--file`), `-t`, or `-s`.
+
+When performing `ANALYZE` operations on multiple tables, `analyzedb` creates concurrent sessions to analyze tables in parallel. The `-p` option controls the maximum number of concurrent sessions.
+
+In the following example, the first line performs an `ANALYZE` operation on the table `public.nation`; the second line performs an `ANALYZE` operation only on the columns `l_shipdate` and `l_receiptdate` in the table `public.lineitem`.
+
+``` pre
+public.nation
+public.lineitem -i l_shipdate, l_receiptdate 
+```
+</dd>
+
+
+## <a id="topic1__section5"></a>Options
+
+
+<dt>-x \<col1\>, \<col2\>, ...  </dt>
+<dd>Optional. Must be specified with the `-t` option. For the table specified with the `-t` option, exclude statistics collection for the specified columns. Statistics are collected only on the columns that are not listed.
+
+Only `-i`, or `-x` can be specified. Both options cannot be specified.</dd>
+
+<dt>-i \<col1\>, \<col2\>, ...  </dt>
+<dd>Optional. Must be specified with the `-t` option. For the table specified with the `-t` option, collect statistics only for the specified columns.
+
+Only `-i`, or `-x` can be specified. Both options cannot be specified.</dd>
+
+<dt>-\\\-full  </dt>
+<dd>Perform an `ANALYZE` operation on all the specified tables. The operation is performed even if the statistics are up to date.</dd>
+
+<dt>-l, -\\\-list  </dt>
+<dd>Lists the tables that would have been analyzed with the specified options. The `ANALYZE` operations are not performed.</dd>
+
+<dt>-p \<parallel-level\>  </dt>
+<dd>The number of tables that are analyzed in parallel. The value for \<parallel-level\> can be an integer between 1 and 10, inclusive. The default value is 5.</dd>
+
+<dt>-a </dt>
+<dd>Quiet mode. Do not prompt for user confirmation.</dd>
+
+<dt> -v, -\\\-verbose  </dt>
+<dd>If specified, sets the logging level to verbose. Additional log information is written to the log file and the command line during command execution.</dd>
+
+<dt>-\\\-clean\_last  </dt>
+<dd>Remove the state files generated by the last `analyzedb` operation. All other options except `-d` are ignored.</dd>
+
+<dt>-\\\-clean\_all  </dt>
+<dd>Remove all the state files generated by `analyzedb`. All other options except `-d` are ignored.</dd>
+
+<dt>-h, -?, -\\\-help   </dt>
+<dd>Displays the online help.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility.</dd>
+
+
+## <a id="topic1__section6"></a>Examples
+
+An example that collects statistics only on a set of table columns. In the database `mytest`, collect statistics on the columns `shipdate` and `receiptdate` in the table `public.orders`:
+
+``` shell
+$ analyzedb -d mytest -t public.orders -i shipdate, receiptdate
+```
+
+An example that collects statistics on a table and excludes a set of columns. In the database `mytest`, collect statistics on the table `public.foo`, but do not collect statistics on the columns `bar` and `test2`:
+
+``` shell
+$ analyzedb -d mytest -t public.foo -x bar, test2
+```
+
+An example that specifies a file that contains a list of tables. This command collects statistics on the tables listed in the file `analyze-tables` in the database named `mytest`:
+
+``` shell
+$ analyzedb -d mytest -f analyze-tables
+```
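+
+To preview which tables would be analyzed without performing any `ANALYZE` operations, add the `-l` (list) option; for example, for all tables in the `public` schema of `mytest`:
+
+``` shell
+$ analyzedb -d mytest -s public -l
+```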
+
+If you do not specify a table, set of tables, or schema, the `analyzedb` utility collects the statistics as needed on all catalog tables and user-defined tables in the specified database. This command refreshes table statistics on the system catalog tables and user-defined tables in the database `mytest`.
+
+``` shell
+$ analyzedb -d mytest
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/gpfdist.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/gpfdist.html.md.erb b/markdown/reference/cli/admin_utilities/gpfdist.html.md.erb
new file mode 100644
index 0000000..1683ddb
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/gpfdist.html.md.erb
@@ -0,0 +1,157 @@
+---
+title: gpfdist
+---
+
+Serves data files to or writes data files out from HAWQ segments.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+gpfdist [-d <directory>] [-p <http_port>] [-l <log_file>] [-t <timeout>] 
+   [-S] [-w <time>] [-v | -V] [-s] [-m <max_length>] [--ssl <certificate_path>]
+
+gpfdist -? | --help 
+
+gpfdist --version
+```
+
+## <a id="topic1__section3"></a>Description
+
+`gpfdist` is the HAWQ parallel file distribution program. It is used by readable external tables and `hawq load` to serve external table files to all HAWQ segments in parallel. It is used by writable external tables to accept output streams from HAWQ segments in parallel and write them out to a file.
+
+In order for `gpfdist` to be used by an external table, the `LOCATION` clause of the external table definition must specify the external table data using the `gpfdist://` protocol (see the HAWQ command `CREATE EXTERNAL TABLE`).
+
+**Note:** If the `--ssl` option is specified to enable SSL security, create the external table with the `gpfdists://` protocol.
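+
+For example, a readable external table that loads pipe-delimited text files served by a `gpfdist` instance might be defined as follows (the table, column, host, and file names are illustrative):
+
+``` shell
+$ psql -d mytest -c "CREATE EXTERNAL TABLE ext_expenses (name text, amount float8)
+    LOCATION ('gpfdist://etlhost:8081/*.txt')
+    FORMAT 'TEXT' (DELIMITER '|');"
+```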
+
+The benefit of using `gpfdist` is that you are guaranteed maximum parallelism while reading from or writing to external tables, thereby offering the best performance as well as easier administration of external tables.
+
+For readable external tables, `gpfdist` parses and serves data files evenly to all the segment instances in the HAWQ system when users `SELECT` from the external table. For writable external tables, `gpfdist` accepts parallel output streams from the segments when users `INSERT` into the external table, and writes to an output file.
+
+For readable external tables, if load files are compressed using `gzip` or `bzip2` (have a `.gz` or `.bz2` file extension), `gpfdist` uncompresses the files automatically before loading provided that `gunzip` or `bunzip2` is in your path.
+
+**Note:** Currently, readable external tables do not support compression on Windows platforms, and writable external tables do not support compression on any platforms.
+
+To run `gpfdist` on your ETL machines, refer to [Client-Based HAWQ Load Tools](../../../datamgmt/load/client-loadtools.html) for more information.
+
+**Note:** When using IPv6, always enclose the numeric IP address in brackets.
+
+You can also run `gpfdist` as a Windows Service. See [Running gpfdist as a Windows Service](#topic1__section5) for more details.
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-d \<directory\>  </dt>
+<dd>The directory from which `gpfdist` will serve files for readable external tables or create output files for writable external tables. If not specified, defaults to the current directory.</dd>
+
+<dt>-l \<log\_file\>  </dt>
+<dd>The fully qualified path and log file name where standard output messages are to be logged.</dd>
+
+<dt>-p \<http\_port\>  </dt>
+<dd>The HTTP port on which `gpfdist` will serve files. Defaults to 8080.</dd>
+
+<dt>-t \<timeout\>  </dt>
+<dd>Sets the time allowed for HAWQ to establish a connection to a `gpfdist` process. Default is 5 seconds. Allowed values are 2 to 600 seconds. May need to be increased on systems with a lot of network traffic.</dd>
+
+<dt>-m \<max\_length\>  </dt>
+<dd>Sets the maximum allowed data row length in bytes. Default is 32768. Should be used when user data includes very wide rows (or when `line too long` error message occurs). Should not be used otherwise as it increases resource allocation. Valid range is 32K to 256MB. (The upper limit is 1MB on Windows systems.)</dd>
+
+<dt>-s  </dt>
+<dd>Enables simplified logging. When this option is specified, only messages with `WARN` level and higher are written to the `gpfdist` log file. `INFO` level messages are not written to the log file. If this option is not specified, all `gpfdist` messages are written to the log file.
+
+You can specify this option to reduce the information written to the log file.</dd>
+
+<dt>-S (use O\_SYNC)  </dt>
+<dd>Opens the file for synchronous I/O with the `O_SYNC` flag. Any writes to the resulting file descriptor block `gpfdist` until the data is physically written to the underlying hardware.</dd>
+
+<dt>-w \<time\>  </dt>
+<dd>Sets the number of seconds that HAWQ delays before closing a target file, such as a named pipe. The default value is 0 (no delay). The maximum value is 600 seconds (10 minutes).
+
+For a HAWQ cluster with multiple segments, there might be a delay between segments when writing data from different segments to the file. You can specify a time to wait before HAWQ closes the file to ensure all the data is written to the file.</dd>
+
+<dt>-\\\-ssl \<certificate\_path\>  </dt>
+<dd>Adds SSL encryption to data transferred with `gpfdist`. After executing `gpfdist` with the `--ssl <certificate_path>` option, the only way to load data from this file server is with the `gpfdists://` protocol.
+
+The location specified in \<certificate\_path\> must contain the following files:
+
+-   The server certificate file, `server.crt`
+-   The server private key file, `server.key`
+-   The trusted certificate authorities, `root.crt`
+
+The root directory (`/`) cannot be specified as \<certificate\_path\>.</dd>
+
+<dt>-v (verbose)  </dt>
+<dd>Verbose mode shows progress and status messages.</dd>
+
+<dt>-V (very verbose)  </dt>
+<dd>Verbose mode shows all output messages generated by this utility.</dd>
+
+<dt>-? (help)  </dt>
+<dd>Displays the online help.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility.</dd>
+
+## <a id="topic1__section5"></a>Running gpfdist as a Windows Service
+
+HAWQ Loaders allow `gpfdist` to run as a Windows Service.
+
+Follow the instructions below to download, register and activate `gpfdist` as a service:
+
+1.  Update your HAWQ Loaders for Windows package to the latest version. See [HAWQ Loader Tools for Windows](../../../datamgmt/load/client-loadtools.html#installloadrunwin) for install and configuration information.
+    
+2.  Register `gpfdist` as a Windows service:
+    1.  Open a Windows command window
+    2.  Run the following command:
+
+        ``` pre
+        sc create gpfdist binpath= "<loader_install_dir>\bin\gpfdist.exe -p 8081 -d \"<external_load_files_path>\" -l \"<log_file_path>\""
+        ```
+
+        You can create multiple instances of `gpfdist` by running the same command again, with a unique name and port number for each instance:
+
+        ``` pre
+        sc create gpfdistN binpath= "<loader_install_dir>\bin\gpfdist.exe -p 8082 -d \"<external_load_files_path>\" -l \"<log_file_path>\""
+        ```
+
+3.  Activate the `gpfdist` service:
+    1.  Open the Windows Control Panel and select **Administrative Tools &gt; Services**.
+    2.  Highlight then right-click on the `gpfdist` service in the list of services.
+    3.  Select **Properties** from the right-click menu; the Service Properties window opens.
+
+        Note that you can also stop this service from the Service Properties window.
+
+    4.  Optional: Change the **Startup Type** to **Automatic** (after a system restart, this service will be running), then under **Service** status, click **Start**.
+    5.  Click **OK**.
+
+Repeat the above steps for each instance of `gpfdist` that you created.
+
+## <a id="topic1__section6"></a>Examples
+
+To serve files from a specified directory using port 8081 (and start `gpfdist` in the background):
+
+``` shell
+$ gpfdist -d /var/load_files -p 8081 &
+```
+
+To start `gpfdist` in the background and redirect output and errors to a log file:
+
+``` shell
+$ gpfdist -d /var/load_files -p 8081 -l /home/gpadmin/log &
+```
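+
+To serve files over SSL, start `gpfdist` with the `--ssl` option pointing at a directory that contains the required certificate files (the certificate path here is illustrative); external tables reading from this instance must use the `gpfdists://` protocol:
+
+``` shell
+$ gpfdist -d /var/load_files -p 8081 --ssl /home/gpadmin/certs &
+```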
+
+To stop `gpfdist` when it is running in the background:
+
+First, find its process id:
+
+``` shell
+$ ps ax | grep gpfdist
+```
+
+Then kill the process, for example:
+
+``` shell
+$ kill 3456
+```
+
+## <a id="topic1__section7"></a>See Also
+
+[hawq load](hawqload.html#topic1), [CREATE EXTERNAL TABLE](../../sql/CREATE-EXTERNAL-TABLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/gplogfilter.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/gplogfilter.html.md.erb b/markdown/reference/cli/admin_utilities/gplogfilter.html.md.erb
new file mode 100644
index 0000000..44e73c5
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/gplogfilter.html.md.erb
@@ -0,0 +1,180 @@
+---
+title: gplogfilter
+---
+
+Searches through HAWQ log files for specified entries.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+gplogfilter [<timestamp_options>] [<pattern_matching_options>] 
+     [<output_options>] [<input_options>]  
+
+gplogfilter --help 
+
+gplogfilter --version
+```
+where:
+
+``` pre
+<timestamp_options> =
+     [-b <datetime> | --begin <datetime>]
+     [-e <datetime> | --end <datetime>]
+     [-d <time> | --duration <time>]
+
+<pattern_matching_options> =
+     [-c i[gnore] | r[espect] | --case i[gnore] | r[espect]]
+     [-C '<string>'  | --columns '<string>']
+     [-f '<string>' | --find '<string>']
+     [-F '<string> | --nofind '<string>']
+     [-m <regex> | --match <regex>]
+     [-M <regex>] | --nomatch <regex>]
+     [-t | --trouble]
+     
+<output_options> =
+     [-n <integer> |  --tail <integer>]
+     [-s <offset> [<limit>] | --slice <offset> [<limit>]]
+     [-o <output_file> | --out <output_file>]   
+     [-z <0..9> | --zip <0..9>]
+     [-a | --append]
+     
+<input_options> =
+     [<input_file> [-u | --unzip]]       
+```
+
+
+## <a id="topic1__section3"></a>Description
+
+The `gplogfilter` utility can be used to search through a HAWQ log file for entries matching the specified criteria. To read from standard input, use a dash (`-`) as the input file name. Input files may be compressed using `gzip`. In an input file, a log entry is identified by its timestamp in `YYYY-MM-DD [hh:mm[:ss]]` format.
+
+You can also use `gplogfilter` to search through all segment log files at once by running it through the [hawq ssh](hawqssh.html#topic1) utility. For example, to display the last three lines of each segment log file:
+
+``` shell
+$ hawq ssh -f seg_hostfile_hawqssh
+=> source /usr/local/hawq/greenplum_path.sh
+=> gplogfilter -n 3 /data/hawq-install-path/segmentdd/pg_log/hawq*.csv
+```
+
+By default, the output of `gplogfilter` is sent to standard output. Use the `-o` option to send the output to a file or a directory. If you supply an output file name ending in `.gz`, the output file will be compressed by default using maximum compression. If the output destination is a directory, the output file is given the same name as the input file.
+
+## <a id="topic1__section4"></a>Options
+
+
+**\<input_options\>**
+
+<dt>\<input\_file\></dt>
+<dd>The name of the input log file(s) to search through. To read from standard input, use a dash (`-`) as the input file name.</dd>
+
+<dt>-u, -\\\-unzip  </dt>
+<dd>Uncompress the input file using `gunzip`. If the input file name ends in `.gz`, it will be uncompressed by default.</dd>
+
+**\<output_options\>**
+
+<dt>-n, -\\\-tail \<integer\>  </dt>
+<dd>Limits the output to the last \<integer\> of qualifying log entries found.</dd>
+
+<dt>-s,  -\\\-slice \<offset\> \[\<limit\>\] </dt>
+<dd>From the list of qualifying log entries, returns the \<limit\> number of entries starting at the \<offset\> entry number, where an \<offset\> of zero (`0`) denotes the first entry in the result set and an \<offset\> of any number greater than zero counts back from the end of the result set.</dd>
+
+<dt>-o, -\\\-out \<output\_file\> </dt>
+<dd>Writes the output to the specified file or directory location instead of `STDOUT`.</dd>
+
+<dt>-z, -\\\-zip \<0..9\>  </dt>
+<dd>Compresses the output file to the specified compression level using `gzip`, where `0` is no compression and `9` is maximum compression. If you supply an output file name ending in `.gz`, the output file will be compressed by default using maximum compression.</dd>
+
+<dt>-a, -\\\-append  </dt>
+<dd>If the output file already exists, appends to the file instead of overwriting it.</dd>
+
+
+**\<pattern\_matching\_options\>**
+
+<dt>-c, -\\\-case i\[gnore\] | r\[espect\]  </dt>
+<dd>Matching of alphabetic characters is case-sensitive by default. Specify `--case=ignore` to make matching case-insensitive.</dd>
+
+<dt>-C, -\\\-columns '\<string\>'  </dt>
+<dd>Selects specific columns from the log file. Specify the desired columns as a comma-delimited string of column numbers beginning with 1, where the second column from left is 2, the third is 3, and so on.</dd>
+
+<dt>-f, -\\\-find '\<string\>'  </dt>
+<dd>Finds the log entries containing the specified string.</dd>
+
+<dt>-F, -\\\-nofind '\<string\>'  </dt>
+<dd>Rejects the log entries containing the specified string.</dd>
+
+<dt>-m, -\\\-match \<regex\>  </dt>
+<dd>Finds log entries that match the specified Python regular expression. See [http://docs.python.org/library/re.html](http://docs.python.org/library/re.html) for Python regular expression syntax.</dd>
+
+<dt>-M, -\\\-nomatch \<regex\> </dt>
+<dd>Rejects log entries that match the specified Python regular expression. See [http://docs.python.org/library/re.html](http://docs.python.org/library/re.html) for Python regular expression syntax.</dd>
+
+<dt>-t, -\\\-trouble  </dt>
+<dd>Finds only the log entries that have `ERROR:`, `FATAL:`, or `PANIC:` in the first line.</dd>
+
+**\<timestamp_options\>**
+
+<dt>-b, -\\\-begin \<datetime\>  </dt>
+<dd>Specifies a starting date and time to begin searching for log entries in the format of `YYYY-MM-DD [hh:mm[:ss]]`.
+
+If a time is specified, the date and time must be enclosed in either single or double quotes. This example encloses the date and time in single quotes:
+
+``` shell
+$ gplogfilter -b '2016-02-13 14:23'
+```
+</dd>
+
+<dt>-e, -\\\-end \<datetime\>  </dt>
+<dd>Specifies an ending date and time to stop searching for log entries in the format of `YYYY-MM-DD [hh:mm[:ss]]`.
+
+If a time is specified, the date and time must be enclosed in either single or double quotes. This example encloses the date and time in single quotes:
+
+``` shell
+$ gplogfilter -e '2016-02-13 14:23' 
+```
+</dd>
+
+<dt>-d, -\\\-duration \<time\>  </dt>
+<dd>Specifies a time duration to search for log entries in the format of `[hh][:mm[:ss]]`. If used without either the `-b` or `-e` option, will use the current time as a basis.</dd>
+
+**Other Options**
+
+<dt>-\\\-help  </dt>
+<dd>Displays the online help.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility.</dd>
+
+## <a id="topic1__section9"></a>Examples
+
+Display the last three error messages in the identified log file:
+
+``` shell
+$ gplogfilter -t -n 3 "/data/hawq/master/pg_log/hawq-2016-09-01_134934.csv"
+```
+
+Display the last five error messages in a date-specified log file:
+
+``` shell
+$ gplogfilter -t -n 5 "/data/hawq-file-path/hawq-yyyy-mm-dd*.csv"
+```
+
+Display all log messages in the date-specified log file timestamped in the last 10 minutes:
+
+``` shell
+$ gplogfilter -d :10 "/data/hawq-file-path/hawq-yyyy-mm-dd*.csv"
+```
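+
+Display all log messages in a date-specified log file that fall within a specific time window (the dates, times, and file path are illustrative):
+
+``` shell
+$ gplogfilter -b '2016-02-13 14:00' -e '2016-02-13 14:30' "/data/hawq-file-path/hawq-2016-02-13*.csv"
+```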
+
+Display log messages in the identified log file containing the string `|con6 cmd11|`:
+
+``` shell
+$ gplogfilter -f '|con6 cmd11|' "/data/hawq/master/pg_log/hawq-2016-09-01_134934.csv"
+```
+
+Using [hawq ssh](hawqssh.html#topic1), run `gplogfilter` on the segment hosts and search for log messages in the segment log files containing the string `con6` and save output to a file.
+
+``` shell
+$ hawq ssh -f /data/hawq-2.x/segmentdd/pg_hba.conf -e 'source /usr/local/hawq/greenplum_path.sh ; 
+gplogfilter -f con6 /data/hawq-2.x/pg_log/hawq*.csv' > seglog.out
+```
+
+## <a id="topic1__section10"></a>See Also
+
+[hawq ssh](hawqssh.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqactivate.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqactivate.html.md.erb b/markdown/reference/cli/admin_utilities/hawqactivate.html.md.erb
new file mode 100644
index 0000000..7afd6c7
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqactivate.html.md.erb
@@ -0,0 +1,87 @@
+---
+title: hawq activate
+---
+
+Activates a standby master host and makes it the active master for the HAWQ system.
+
+**Note:** If HAWQ was installed using Ambari, do not use `hawq activate` to activate a standby master host. The system catalogs could become unsynchronized if you mix Ambari and command-line functions. For Ambari-managed HAWQ clusters, always use the Ambari administration interface to activate a standby master. For more information, see [Managing HAWQ Using Ambari](../../../admin/ambari-admin.html#topic1).
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq activate standby 
+     [-M (smart|fast|immediate) | --mode (smart|fast|immediate)] 
+     [-t <time> | --timeout <time>] 
+     [-l <logfile_directory> | --logdir <logfile_directory>]
+     [(-v | --verbose) | (-q | --quiet)] 
+     [--ignore-bad-hosts]
+
+hawq activate [-h | --help]
+
+```
+
+## <a id="topic1__section3"></a>Description
+
+If the primary master fails, the log replication process is shut down, and the standby master can be activated in its place. The `hawq activate standby` utility activates a backup standby master host and brings it into operation as the active master instance for a HAWQ system. The activated standby master effectively becomes the HAWQ master, accepting client connections on the master port.
+
+When you initialize a standby master, the default is to use the same port as the active master. For information about the master port for the standby master, see [hawq init](hawqinit.html#topic1).
+
+You must run this utility from the master host you are activating, not from the failed master host you are disabling. Running this utility assumes that a standby master host is already configured for the system.
+
+The utility will perform the following steps:
+
+-   Stops the synchronization process (`walreceiver`) on the standby master
+-   Updates the system catalog tables of the standby master using the logs
+-   Activates the standby master to be the new active master for the system
+-   Restarts the HAWQ system with the new master host
+
+In order to use `hawq activate standby` to activate a new primary master host, the master host that was previously serving as the primary master cannot be running. The utility checks for a `postmaster.pid` file in the data directory of the disabled master host, and if it finds it there, it will assume the old master host is still active. In some cases, you may need to remove the `postmaster.pid` file from the disabled master host data directory before running `hawq activate standby` (for example, if the disabled master host process was terminated unexpectedly).
+
+After activating a standby master, run `ANALYZE` to update the database query statistics. For example:
+
+``` shell
+$ psql <dbname> -c 'ANALYZE;'
+```
+
+After you activate the standby master as the primary master, the HAWQ system no longer has a standby master configured. You might want to specify another host to be the new standby with the [hawq init](hawqinit.html#topic1) utility.
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-M, -\\\-mode (smart | fast | immediate) </dt>
+<dd>Stop with one of the specified modes.
+
+Smart shutdown is the default. Shutdown fails with a warning message if active connections are found.
+
+Fast shutdown interrupts and rolls back any transactions currently in progress.
+
+Immediate shutdown aborts transactions in progress and kills all `postgres` processes without allowing the database server to complete transaction processing or clean up any temporary or in-process work files. Because of this, immediate shutdown is not recommended. In some instances, it can cause database corruption that requires manual recovery.</dd>
+
+<dt>-t, -\\\-timeout \<timeout\_seconds\>  </dt>
+<dd>Seconds to wait before discontinuing the operation. If not specified, the default timeout is 60 seconds.</dd>
+
+<dt>-l, -\\\-logdir \<logfile\_directory\> </dt>
+<dd>Specifies the log directory for logs of the management tools. The default is `~/hawq/Adminlogs/`.</dd>
+
+<dt>-v, -\\\-verbose  </dt>
+<dd>Displays detailed status, progress and error messages output by the utility.</dd>
+
+<dt>-q, -\\\-quiet  </dt>
+<dd>Run in quiet mode. Command output is not displayed on the screen, but is still written to the log file.</dd>
+
+<dt>-\\\-ignore-bad-hosts  </dt>
+<dd>Overrides copying configuration files to a host on which SSH validation fails. If SSH connectivity to a skipped host is later restored, make sure the configuration files are re-synchronized once the host is reachable.</dd>
+
+<dt>-h, -\\\-help (help)  </dt>
+<dd>Displays the online help.</dd>
+
+## <a id="topic1__section5"></a>Example
+
+Activate the standby master host and make it the active master instance for a HAWQ system (run from the standby master host you are activating):
+
+``` shell
+$ hawq activate standby
+```
+
+## <a id="topic1__section6"></a>See Also
+
+[hawq init](hawqinit.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqcheck.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqcheck.html.md.erb b/markdown/reference/cli/admin_utilities/hawqcheck.html.md.erb
new file mode 100644
index 0000000..23a496d
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqcheck.html.md.erb
@@ -0,0 +1,126 @@
+---
+title: hawq check
+---
+
+Verifies and validates HAWQ platform settings.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq check -f <hostfile_hawq_check> | (-h <hostname> | --host <hostname>)
+    [--hadoop <hadoop_home> | --hadoop-home <hadoop_home>]
+    [--config <config_file>] 
+    [--stdout | --zipout]
+    [--kerberos] 
+    [--hdfs-ha] 
+    [--yarn] 
+    [--yarn-ha]
+         
+hawq check --zipin <hawq_check_zipfile>
+
+hawq check --version
+
+hawq check -?
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq check` utility determines the platform on which you are running HAWQ and validates various platform-specific configuration settings, as well as HAWQ and HDFS-specific configuration settings. In order to perform HAWQ configuration checks, make sure HAWQ has already been started and that `hawq config` works. For HDFS checks, you should either set the `$HADOOP_HOME` environment variable or provide the full path to the Hadoop installation location using the `--hadoop` option.
+
+The `hawq check` utility can use a host file or a file previously created with the `--zipout` option to validate platform settings. If `GPCHECK_ERROR` displays, one or more validation checks failed. You can also use `hawq check` to gather and view platform settings on hosts without running validation checks. When running checks, `hawq check` compares your actual configuration setting with an expected value listed in a config file (`$GPHOME/etc/hawq_check.cnf` by default). You must modify your configuration values for "mount.points" and "diskusage.monitor.mounts" to reflect the actual mount points you want to check, as a comma-separated list. Otherwise, the utility only checks the root directory, which may not be helpful.
+
+An example is shown below:
+
+``` pre
+[linux.mount] 
+mount.points = /,/data1,/data2 
+
+[linux.diskusage] 
+diskusage.monitor.mounts = /,/data1,/data2
+```
+## <a id="args"></a>Arguments
+
+<dt>-f \<hostfile\_hawq\_check\>  </dt>
+<dd>The name of a file that contains a list of hosts that `hawq check` uses to validate platform-specific settings. This file should contain a single host name for all hosts in your HAWQ system (master, standby master, and segments).</dd>
+
+<dt>-h, -\\\-host \<hostname\>  </dt>
+<dd>Specifies a single host on which platform-specific settings will be validated.</dd>
+
+<dt>-\\\-zipin \<hawq\_check\_zipfile\>  </dt>
+<dd>Use this option to decompress and check a .zip file created with the `--zipout` option. If you specify the `--zipin` option, `hawq check` performs validation tasks against the specified file.</dd>
+
+
+## <a id="topic1__section4"></a>Options
+
+
+<dt>-\\\-config \<config\_file\>   </dt>
+<dd>The name of a configuration file to use instead of the default file `$GPHOME/etc/hawq_check.cnf`.</dd>
+
+<dt>-\\\-hadoop, -\\\-hadoop-home \<hadoop\_home\>  </dt>
+<dd>Use this option to specify the full path to your hadoop installation location so that `hawq check` can validate HDFS settings. This option is not needed if the `$HADOOP_HOME` environment variable is set.</dd>
+
+<dt>-\\\-stdout  </dt>
+<dd>Send collected host information from `hawq check` to standard output. No checks or validations are performed.</dd>
+
+<dt>-\\\-zipout  </dt>
+<dd>Save all collected data to a .zip file in the current working directory. `hawq check` automatically creates the .zip file and names it `hawq_check_timestamp.tar.gz`. No checks or validations are performed.</dd>
+
+<dt>-\\\-kerberos  </dt>
+<dd>Use this option to check HDFS and YARN when running Kerberos mode. This allows `hawq check` to validate HAWQ/HDFS/YARN settings with Kerberos enabled.</dd>
+
+<dt>-\\\-hdfs-ha  </dt>
+<dd>Use this option to indicate that HDFS-HA mode is enabled, allowing `hawq check` to validate HDFS settings with HA mode enabled.</dd>
+
+<dt>-\\\-yarn  </dt>
+<dd>If HAWQ is using YARN, enables yarn mode, allowing `hawq check` to validate the basic YARN settings.</dd>
+
+<dt>-\\\-yarn-ha  </dt>
+<dd>Use this option to indicate HAWQ is using YARN with High Availability mode enabled, to allow `hawq check` to validate HAWQ-YARN settings with YARN-HA enabled.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility.</dd>
+
+<dt>-? (help)  </dt>
+<dd>Displays the online help.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Verify and validate the HAWQ platform settings by specifying a host file and the full Hadoop installation path:
+
+``` shell
+$ hawq check -f hostfile_hawq_check --hadoop /usr/hdp/<version>/hadoop
+```
+
+Verify and validate the HAWQ platform settings with HDFS HA enabled, YARN HA enabled and Kerberos enabled:
+
+``` shell
+$ hawq check -f hostfile_hawq_check --hadoop /usr/local/hadoop-<version> --hdfs-ha --yarn-ha --kerberos
+```
+
+Verify and validate the HAWQ platform settings with HDFS HA enabled, and Kerberos enabled:
+
+``` shell
+$ hawq check -f hostfile_hawq_check --hadoop /usr/hdp/<version>/hadoop --hdfs-ha --kerberos
+```
+
+Save HAWQ platform settings to a zip file when the `$HADOOP_HOME` environment variable is set:
+
+``` shell
+$ hawq check -f hostfile_hawq_check --zipout  
+```
+
+Verify and validate the HAWQ platform settings using a zip file created with the `--zipout` option:
+
+``` shell
+$ hawq check --zipin hawq_check_timestamp.tar.gz
+```
+
+View collected HAWQ platform settings:
+
+``` shell
+$ hawq check -f hostfile_hawq_check --hadoop /usr/local/hadoop-<version> --stdout
+```
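+
+Verify and validate the platform settings on a single host instead of a host file (the host name `sdw3` is illustrative):
+
+``` shell
+$ hawq check -h sdw3 --hadoop /usr/hdp/<version>/hadoop
+```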
+
+## <a id="topic1__section6"></a>See Also
+
+[hawq checkperf](hawqcheckperf.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqcheckperf.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqcheckperf.html.md.erb b/markdown/reference/cli/admin_utilities/hawqcheckperf.html.md.erb
new file mode 100644
index 0000000..f5c7e2c
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqcheckperf.html.md.erb
@@ -0,0 +1,137 @@
+---
+title: hawq checkperf
+---
+
+Verifies the baseline hardware performance of the specified hosts.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq checkperf -d <test_directory> [-d <test_directory> ...] 
+    (-f <hostfile_checkperf> | -h <hostname> [-h <hostname> ...]) 
+    [-r ds] 
+    [-B <block_size>] 
+    [-S <file_size>]
+    [-D]
+    [-v|-V]
+
+hawq checkperf -d <temp_directory>
+    (-f <hostfile_checknet> | -h <hostname> [-h <hostname> ...]) 
+    [-r n|N|M [--duration <time>] [--netperf]] 
+    [-D]
+    [-v|-V]
+
+hawq checkperf --version
+
+hawq checkperf -?
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq checkperf` utility starts a session on the specified hosts and runs the following performance tests:
+
+-   **Disk I/O Test (dd test)** – To test the sequential throughput performance of a logical disk or file system, the utility uses the **dd** command, which is a standard UNIX utility. It times how long it takes to write and read a large file to and from disk and calculates your disk I/O performance in megabytes (MB) per second. By default, the file size that is used for the test is calculated at two times the total random access memory (RAM) on the host. This ensures that the test is truly testing disk I/O and not using the memory cache.
+-   **Memory Bandwidth Test (stream)** – To test memory bandwidth, the utility uses the STREAM benchmark program to measure sustainable memory bandwidth (in MB/s). This tests that your system is not limited in performance by the memory bandwidth of the system in relation to the computational performance of the CPU. In applications where the data set is large (as in HAWQ), low memory bandwidth is a major performance issue. If memory bandwidth is significantly lower than the theoretical bandwidth of the CPU, then it can cause the CPU to spend significant amounts of time waiting for data to arrive from system memory.
+-   **Network Performance Test (gpnetbench\*)** – To test network performance (and thereby the performance of the HAWQ interconnect), the utility runs a network benchmark program that transfers a 5 second stream of data from the current host to each remote host included in the test. The data is transferred in parallel to each remote host and the minimum, maximum, average and median network transfer rates are reported in megabytes (MB) per second. If the summary transfer rate is slower than expected (less than 100 MB/s), you can run the network test serially using the `-r n` option to obtain per-host results. To run a full-matrix bandwidth test, you can specify `-r M` which will cause every host to send and receive data from every other host specified. This test is best used to validate if the switch fabric can tolerate a full-matrix workload.
+
+To specify the hosts to test, use the `-f` option to specify a file containing a list of host names, or use the `-h` option to name single host names on the command-line. If running the network performance test, all entries in the host file must be for network interfaces within the same subnet. If your segment hosts have multiple network interfaces configured on different subnets, run the network test once for each subnet.
+
+You must also specify at least one test directory (with `-d`). The user who runs `hawq checkperf` must have write access to the specified test directories on all remote hosts. For the disk I/O test, the test directories should correspond to your segment data directories. For the memory bandwidth and network tests, a temporary directory is required for the test program files.
+
+Before using `hawq checkperf`, you must have a trusted host setup between the hosts involved in the performance test. You can use the utility `hawq ssh-exkeys` to update the known host files and exchange public keys between hosts if you have not done so already. Note that `hawq checkperf` calls `hawq ssh` and `hawq scp`, so these HAWQ utilities must also be in your `$PATH`.
+
+## <a id="args"></a>Arguments
+
+<dt>-d \<test\_directory\> </dt>
+<dd>For the disk I/O test, specifies the file system directory locations to test. You must have write access to the test directory on all hosts involved in the performance test. You can use the `-d` option multiple times to specify multiple test directories (for example, to test disk I/O of your data directories).</dd>
+
+<dt>-d \<temp\_directory\>  </dt>
+<dd>For the network and stream tests, specifies a single directory where the test program files will be copied for the duration of the test. You must have write access to this directory on all hosts involved in the test.</dd>
+
+<dt>-f \<hostfile\_checkperf\>  </dt>
+<dd>For the disk I/O and stream tests, specifies the name of a file that contains one host name per host that will participate in the performance test. The host name is required, and you can optionally specify an alternate user name and/or SSH port number per host. The syntax of the host file is one host per line as follows:
+
+``` pre
+[username@]hostname[:ssh_port]
+```
+</dd>
+
+<dt>-f \<hostfile\_checknet\>  </dt>
+<dd>For the network performance test, all entries in the host file must be for host addresses within the same subnet. If your segment hosts have multiple network interfaces configured on different subnets, run the network test once for each subnet. For example (a host file containing segment host address names for interconnect subnet 1):
+
+``` pre
+sdw1-1
+sdw2-1
+sdw3-1
+```
+</dd>
+
+<dt>-h \<hostname\>  </dt>
+<dd>Specifies a single host name (or host address) that will participate in the performance test. You can use the `-h` option multiple times to specify multiple host names.</dd>
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-r ds{n|N|M}  </dt>
+<dd>Specifies which performance tests to run. The default is `dsn`:
+
+-   Disk I/O test (`d`)
+-   Stream test (`s`)
+-   Network performance test in sequential (`n`), parallel (`N`), or full-matrix (`M`) mode. The optional `--duration` option specifies how long (in seconds) to run the network test. To use the parallel (`N`) mode, you must run the test on an *even* number of hosts.
+
+    If you would rather use `netperf` ([http://www.netperf.org](http://www.netperf.org)) instead of the HAWQ network test, you can download it and install it into `$GPHOME/bin/lib` on all HAWQ hosts (master and segments). You would then specify the optional `--netperf` option to use the `netperf` binary instead of the default `gpnetbench*` utilities.</dd>
+
+<dt>-B \<block\_size\>  </dt>
+<dd>Specifies the block size (in KB or MB) to use for disk I/O test. The default is 32KB, which is the same as the HAWQ page size. The maximum block size is 1 MB.</dd>
+
+<dt>-S \<file\_size\>  </dt>
+<dd>Specifies the total file size to be used for the disk I/O test for all directories specified with `-d`. \<file\_size\> should equal two times total RAM on the host. If not specified, the default is calculated at two times the total RAM on the host where `hawq checkperf` is executed. This ensures that the test is truly testing disk I/O and not using the memory cache. You can specify sizing in KB, MB, or GB.</dd>
+
+<dt>-D (display per-host results)  </dt>
+<dd>Reports performance results for each host for the disk I/O tests. The default is to report results for just the hosts with the minimum and maximum performance, as well as the total and average performance of all hosts.</dd>
+
+<dt>-\\\-duration \<time\>  </dt>
+<dd>Specifies the duration of the network test in seconds (s), minutes (m), hours (h), or days (d). The default is 15 seconds.</dd>
+
+<dt>-\\\-netperf  </dt>
+<dd>Specifies that the `netperf` binary should be used to perform the network test instead of the HAWQ network test. To use this option, you must download `netperf` from [http://www.netperf.org](http://www.netperf.org) and install it into `$GPHOME/bin/lib` on all HAWQ hosts (master and segments).</dd>
+
+<dt>-v (verbose) | -V (very verbose)  </dt>
+<dd>Verbose mode shows progress and status messages of the performance tests as they are run. Very verbose mode shows all output messages generated by this utility.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility.</dd>
+
+<dt>-? (help)  </dt>
+<dd>Displays the online help.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Run the disk I/O and memory bandwidth tests on all the hosts in the file *hostfile\_checkperf* using the test directories */data1* and */data2*:
+
+``` shell
+$ hawq checkperf -f hostfile_checkperf -d /data1 -d /data2 -r ds
+```
+
+Run only the disk I/O test on the hosts named *sdw1* and *sdw2* using the test directory */data1*. Show individual host results and run in verbose mode:
+
+``` shell
+$ hawq checkperf -h sdw1 -h sdw2 -d /data1 -r d -D -v
+```
+
+Run the parallel network test using the test directory */tmp*, where *hostfile\_checknet\_ic\** specifies all network interface host address names within the same interconnect subnet:
+
+``` shell
+$ hawq checkperf -f hostfile_checknet_ic1 -r N -d /tmp
+$ hawq checkperf -f hostfile_checknet_ic2 -r N -d /tmp
+```
+
+Run the same test as above, but use `netperf` instead of the HAWQ network test (note that `netperf` must be installed in `$GPHOME/bin/lib` on all HAWQ hosts):
+
+``` shell
+$ hawq checkperf -f hostfile_checknet_ic1 -r N --netperf -d /tmp
+$ hawq checkperf -f hostfile_checknet_ic2 -r N --netperf -d /tmp
+```
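+
+Run the full-matrix network test, in which every host sends data to and receives data from every other host listed, and extend the test duration to 30 seconds (the host file and directory are illustrative):
+
+``` shell
+$ hawq checkperf -f hostfile_checknet_ic1 -r M --duration 30s -d /tmp
+```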
+
+## <a id="topic1__section6"></a>See Also
+
+[hawq ssh](hawqssh.html#topic1), [hawq scp](hawqscp.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqconfig.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqconfig.html.md.erb b/markdown/reference/cli/admin_utilities/hawqconfig.html.md.erb
new file mode 100644
index 0000000..9f5e840
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqconfig.html.md.erb
@@ -0,0 +1,134 @@
+---
+title: hawq config
+---
+
+Sets server configuration parameters on all nodes (master and segments) for HAWQ systems that are managed using command-line utilities.
+
+**Note:** If you install and manage HAWQ using Ambari, do not use `hawq config` to configure HAWQ properties. Ambari will overwrite any changes that were made by `hawq config` when it restarts the cluster. For Ambari-managed HAWQ clusters, always use the Ambari administration interface to set or change HAWQ configuration properties.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq config -c <hawq_property> | --change <hawq_property> 
+    -v <hawq_property_value> | --value <hawq_property_value> 
+    [--skipvalidation] [--ignore-bad-hosts]
+
+hawq config -r <hawq_property> | --remove <hawq_property> 
+    [--skipvalidation] [--ignore-bad-hosts]  
+  
+hawq config -l | --list 
+    [--ignore-bad-hosts] 
+    
+hawq config -s <hawq_property> | --show <hawq_property> 
+    [--ignore-bad-hosts] 
+    
+hawq config --help
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq config` utility allows you to set, unset, or view configuration properties from the `hawq-site.xml` files of all instances in your HAWQ system.
+
+**Note:** The `hawq config` utility makes configuration properties identical and consistent across all nodes, including the master and segments. Using the utility will override any unique configurations that were defined manually in `hawq-site.xml`.
+
+`hawq config` can only be used to manage specific properties. For example, you cannot use it to set properties such as `port`, which is required to be distinct for every segment instance. Use the `-l` (list) option to see a complete list of configuration properties supported by `hawq config`.
+
+When `hawq config` sets a configuration property in a `hawq-site.xml` file, the new property setting always displays at the bottom of the file. When you use `hawq config` to remove a configuration property setting, `hawq config` comments out the property in all `hawq-site.xml` files, thereby restoring the system default setting. For example, if you use `hawq config` to remove (comment out) a property and later add it back (set a new value), there will be two instances of the property: one that is commented out, and one that is enabled and inserted at the bottom of the `hawq-site.xml` file.
+
+After setting a property, you must restart your HAWQ system or reload the `hawq-site.xml` file for the change to take effect. Whether you require a restart or a reload depends on the property being set. To reload the configuration files, use `hawq stop cluster -u`. To restart the system, use `hawq restart`.
+
+To show the currently set values for a property across the system, use the `-s` option.
+
+`hawq config` uses the following environment variables to connect to the HAWQ master instance and obtain system configuration information:
+
+-   `PGHOST`
+-   `PGPORT`
+-   `PGUSER`
+-   `PGPASSWORD`
+-   `PGDATABASE`
+
+## <a id="topic1__section4"></a>Options
+
+<dt>
+-c, -\\\-change \<hawq\_property\>
+</dt> 
+<dd>Changes a HAWQ property setting by adding the new setting to the bottom of the `hawq-site.xml` files.</dd>
+
+<dt>
+-v, -\\\-value \<hawq\_property\_value\>  
+</dt>
+<dd>
+Sets the value of the HAWQ property in the `hawq-site.xml` files. Use this option together with `-c`.
+</dd>
+
+<dt>
+-r, -\\\-remove \<hawq\_property\> 
+</dt>
+<dd>
+Removes a HAWQ property setting by commenting out the entry in the `hawq-site.xml` files.
+</dd>
+
+<dt>
+-s, -\\\-show \<hawq\_property\> 
+</dt>
+<dd>
+Shows the value for a HAWQ property name used on all instances (master and segments) in the HAWQ system. If there is a discrepancy in a parameter value between segment instances, the `hawq config` utility displays an error message. Note that the `hawq config` utility reads property values directly from the database, and not the `hawq-site.xml` file. If you are using `hawq config` to set properties across all segments, then running `hawq config -s` to verify the changes, you might still see the previous (old) values. You must reload the configuration files (`hawq stop cluster -u`) or restart the system (`hawq restart`) for changes to take effect.
+</dd>
+
+<dt>
+-l, -\\\-list
+</dt>
+<dd>
+Lists all HAWQ property settings supported by the `hawq config` utility.
+</dd>
+
+<dt>
+-\\\-skipvalidation 
+</dt>
+<dd>
+Overrides the system validation checks of `hawq config` and allows you to operate on any server property, including hidden parameters and restricted parameters that cannot be changed by `hawq config`. Do not modify hidden or restricted parameters unless you are aware of all potential consequences. 
+</dd>
+
+<dt>
+-\\\-ignore-bad-hosts 
+</dt>
+<dd>
+Overrides copying configuration files to a host on which SSH validation fails. If SSH connectivity to a skipped host is later restored, make sure the configuration files are re-synchronized once the host is reachable.
+</dd>
+
+<dt>
+-h, -\\\-help  
+</dt>
+<dd>
+Displays the online help.
+</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Set the `max_connections` setting to 100:
+
+``` shell
+$ hawq config -c max_connections -v 100
+```
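+
+Depending on the property, the new setting takes effect after a configuration reload or only after a restart. To reload the configuration files after a change:
+
+``` shell
+$ hawq stop cluster -u
+```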
+
+Comment out all instances of the `default_statistics_target` property, and restore the system default:
+
+``` shell
+$ hawq config -r default_statistics_target
+```
+
+List all properties supported by `hawq config`:
+
+``` shell
+$ hawq config -l
+```
+
+Show the values of a particular property across the system:
+
+``` shell
+$ hawq config -s max_connections
+```
+
+## <a id="topic1__section6"></a>See Also
+
+[hawq stop](hawqstop.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqextract.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqextract.html.md.erb b/markdown/reference/cli/admin_utilities/hawqextract.html.md.erb
new file mode 100644
index 0000000..b338523
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqextract.html.md.erb
@@ -0,0 +1,319 @@
+---
+title: hawq extract
+---
+
+Extracts the metadata of a specified table into a YAML file.
+
+## Synopsis
+
+``` pre
+hawq extract [<connection_options>] [-o <output_file>] <tablename>
+
+hawq extract -?
+
+hawq extract --version
+```
+where:
+
+``` pre
+<connection_options> =
+  [-h <host>] 
+  [-p <port>] 
+  [-U <username>] 
+  [-d <database>]
+  [-W]
+```
+
+## Description
+
+`hawq extract` is a utility that extracts a table's metadata into a YAML-formatted file. HAWQ's InputFormat uses this YAML-formatted file to read a HAWQ file stored on HDFS directly into the MapReduce program. The YAML configuration file can also be used to provide the metadata for registering files into HAWQ with the `hawq register` command.
+
+**Note:**
+`hawq extract` is bound by the following rules:
+
+-   You must start up HAWQ to use `hawq extract`.
+-   `hawq extract` only supports AO and Parquet tables.
+-   `hawq extract` supports partitioned tables, but does not support sub-partitions.
+
+## Arguments
+
+<dt>&lt;tablename&gt;  </dt>
+<dd>Name of the table whose metadata you want to extract. You can use the format *namespace\_name.table\_name*.</dd>
+
+## Options
+
+<dt>-o &lt;output\_file&gt;  </dt>
+<dd>The name of the file to which `hawq extract` writes the metadata. If you do not specify a name, `hawq extract` writes to `stdout`.</dd>
+
+<dt>-v (verbose mode)  </dt>
+<dd>Displays the verbose output of the extraction process.</dd>
+
+<dt>-? (help)  </dt>
+<dd>Displays the online help.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility.</dd>
+
+**&lt;connection_options&gt;**
+
+<dt>-h &lt;host&gt;  </dt>
+<dd>Specifies the host name of the machine on which the HAWQ master database server is running. If not specified, it reads from the environment variable `$PGHOST` or defaults to `localhost`.</dd>
+
+<dt>-p &lt;port&gt;  </dt>
+<dd>Specifies the TCP port on which the HAWQ master database server is listening for connections. If not specified, reads from the environment variable `$PGPORT` or defaults to 5432.</dd>
+
+<dt>-U &lt;username&gt;  </dt>
+<dd>The database role name to connect as. If not specified, reads from the environment variable `$PGUSER` or defaults to the current system user name.</dd>
+
+<dt>-d &lt;database&gt;  </dt>
+<dd>The database to connect to. If not specified, it reads from the environment variable `$PGDATABASE` or defaults to `template1`.</dd>
+
+<dt>-W (force password prompt)  </dt>
+<dd>Force a password prompt. If not specified, reads the password from the environment variable `$PGPASSWORD` or from a password file specified by `$PGPASSFILE` or in `~/.pgpass`.</dd>
+
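+For example, the connection options can be combined in a single invocation. In this sketch, the host, port, role, database, output file, and table names are placeholders:
+
+``` shell
+$ hawq extract -h mdw -p 5432 -U gpadmin -d sales_db -o /tmp/orders.yaml public.orders
+```
+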
+## Metadata File Format
+
+`hawq extract` exports the table metadata into a file using the YAML 1.1 document format. The file contains key information about the table, such as its schema, data file locations and sizes, and partition constraints.
+
+The basic structure of the metadata file is as follows:
+
+``` pre
+Version: string (1.0.0)
+DBVersion: string 
+FileFormat: string (AO/Parquet) 
+TableName: string (schemaname.tablename)
+DFS_URL: string (hdfs://127.0.0.1:9000)
+Encoding: UTF8
+AO_Schema: 
+    - name: string
+      type: string
+      Bucketnum: 6
+      Distribution_policy: DISTRIBUTED RANDOMLY 
+ 
+AO_FileLocations:
+      Blocksize: int
+      Checksum: boolean
+      CompressionType: string
+      CompressionLevel: int
+      PartitionBy: string ('PARTITION BY ...')
+      Files:
+      - path: string (/gpseg0/16385/35469/35470.1)
+        size: long
+ 
+      Partitions:
+      - Blocksize: int
+        Checksum: Boolean
+        CompressionType: string
+        CompressionLevel: int
+        Name: string
+        Constraint: string (PARTITION Jan08 START (date '2008-01-01') INCLUSIVE)
+        Files:
+        - path: string
+          size: long
+
+Parquet_Schema: 
+    - name: string
+      type: string
+      ...
+Parquet_FileLocations:
+  RowGroupSize: long
+  PageSize: long
+  CompressionType: string
+  CompressionLevel: int
+  Checksum: boolean
+  EnableDictionary: boolean
+  PartitionBy: string
+  Files:
+  - path: string
+    size: long
+  Partitions:
+  - Name: string
+    RowGroupSize: long
+    PageSize: long
+    CompressionType: string
+    CompressionLevel: int
+    Checksum: boolean
+    EnableDictionary: boolean
+    Constraint: string
+    Files:
+    - path: string
+      size: long
+```
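+
+When HAWQ itself is not running, the exported file can be consumed directly by HAWQ InputFormat in a MapReduce driver. The following is a minimal sketch rather than a complete driver: the class name and YAML path are illustrative placeholders, and only the `setInputFormatClass` call and the `HAWQInputFormat.setInput(Configuration, String)` variant come from the HAWQ InputFormat API.
+
+``` java
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.Job;
+import com.pivotal.hawq.mapreduce.HAWQInputFormat;
+
+public class ExtractYamlInputSketch {
+    public static void main(String[] args) throws Exception {
+        // Create a MapReduce job and read the table metadata from the YAML file
+        // produced by hawq extract instead of from a live HAWQ connection.
+        Job job = Job.getInstance(new Configuration());
+        job.setInputFormatClass(HAWQInputFormat.class);
+        HAWQInputFormat.setInput(job.getConfiguration(), "/home/gpadmin/metadata/rank_table.yaml");
+        // ... set the mapper, output format, and output path as in a normal driver ...
+    }
+}
+```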
+
+## Example - Extracting an AO table
+
+Extract the `rank` table's metadata into a file named `rank_table.yaml`:
+
+``` shell
+$ hawq extract -o rank_table.yaml -d postgres rank
+```
+
+**Output content in rank\_table.yaml**
+
+``` pre
+AO_FileLocations:
+    Blocksize: 32768
+    Checksum: false
+    CompressionLevel: 0
+    CompressionType: null
+    Files:
+    - path: /gpseg0/16385/35469/35692.1
+      size: 0
+    - path: /gpseg1/16385/35469/35692.1
+      size: 0
+    PartitionBy: PARTITION BY list (gender)
+    Partitions:
+    - Blocksize: 32768
+      Checksum: false
+      CompressionLevel: 0
+      CompressionType: null
+      Constraint: PARTITION girls VALUES('F') WITH (appendonly=true)
+      Files:
+      - path: /gpseg0/16385/35469/35697.1
+        size: 0
+      - path: /gpseg1/16385/35469/35697.1
+        size: 0
+      Name: girls
+    - Blocksize: 32768
+      Checksum: false
+      CompressionLevel: 0
+      CompressionType: null
+      Constraint: PARTITION boys VALUES('M') WITH (appendonly=true)
+      Files:
+      - path: /gpseg0/16385/35469/35703.1
+        size: 0
+      - path: /gpseg1/16385/35469/35703.1
+        size: 0
+      Name: boys
+    - Blocksize: 32768
+      Checksum: false
+      CompressionLevel: 0
+      CompressionType: null
+      Constraint: DEFAULT PARTITION other WITH (appendonly=true)
+      Files:
+      - path: /gpseg0/16385/35469/35709.1
+        size: 90071728
+      - path: /gpseg1/16385/35469/35709.1
+        size: 90071512
+      Name: other
+AO_Schema:
+- name: id
+  type: int4
+- name: rank
+  type: int4
+- name: year
+  type: int4
+- name: gender
+  type: bpchar
+- name: count
+  type: int4
+DFS_URL: hdfs://127.0.0.1:9000
+Distribution_policy: DISTRIBUTED RANDOMLY
+Encoding: UTF8
+FileFormat: AO
+TableName: public.rank
+Version: 1.0.0
+```
+
+## Example - Extracting a Parquet table
+
+Extract the `orders` table's metadata into a file named `orders.yaml`:
+
+``` shell
+$ hawq extract -o orders.yaml -d postgres orders
+```
+
+**Output content in orders.yaml**
+
+``` pre
+DFS_URL: hdfs://127.0.0.1:9000
+Encoding: UTF8
+FileFormat: Parquet
+TableName: public.orders
+Version: 1.0.0
+Parquet_FileLocations:
+  Checksum: false
+  CompressionLevel: 0
+  CompressionType: none
+  EnableDictionary: false
+  Files:
+  - path: /hawq-data/gpseg0/16385/16626/16657.1
+    size: 0
+  - path: /hawq-data/gpseg1/16385/16626/16657.1
+    size: 0
+  PageSize: 1048576
+  PartitionBy: PARTITION BY range (o_orderdate)
+  Partitions:
+  - Checksum: false
+    CompressionLevel: 0
+    CompressionType: none
+    Constraint: PARTITION p1_1 START ('1992-01-01'::date) END ('1994-12-31'::date)
+      EVERY ('3 years'::interval) WITH (appendonly=true, orientation=parquet, pagesize=1048576,
+      rowgroupsize=8388608, compresstype=none, compresslevel=0)
+    EnableDictionary: false
+    Files:
+    - path: /hawq-data/gpseg0/16385/16626/16662.1
+      size: 8140599
+    - path: /hawq-data/gpseg1/16385/16626/16662.1
+      size: 8099760
+    Name: orders_1_prt_p1_1
+    PageSize: 1048576
+    RowGroupSize: 8388608
+  - Checksum: false
+    CompressionLevel: 0
+    CompressionType: none
+    Constraint: PARTITION p1_11 START ('1995-01-01'::date) END ('1997-12-31'::date)
+      EVERY ('3 years'::interval) WITH (appendonly=true, orientation=parquet, pagesize=1048576,
+      rowgroupsize=8388608, compresstype=none, compresslevel=0)
+    EnableDictionary: false
+    Files:
+    - path: /hawq-data/gpseg0/16385/16626/16668.1
+      size: 8088559
+    - path: /hawq-data/gpseg1/16385/16626/16668.1
+      size: 8075056
+    Name: orders_1_prt_p1_11
+    PageSize: 1048576
+    RowGroupSize: 8388608
+  - Checksum: false
+    CompressionLevel: 0
+    CompressionType: none
+    Constraint: PARTITION p1_21 START ('1998-01-01'::date) END ('2000-12-31'::date)
+      EVERY ('3 years'::interval) WITH (appendonly=true, orientation=parquet, pagesize=1048576,
+      rowgroupsize=8388608, compresstype=none, compresslevel=0)
+    EnableDictionary: false
+    Files:
+    - path: /hawq-data/gpseg0/16385/16626/16674.1
+      size: 8065770
+    - path: /hawq-data/gpseg1/16385/16626/16674.1
+      size: 8126669
+    Name: orders_1_prt_p1_21
+    PageSize: 1048576
+    RowGroupSize: 8388608
+  RowGroupSize: 8388608
+Parquet_Schema:
+- name: o_orderkey
+  type: int8
+- name: o_custkey
+  type: int4
+- name: o_orderstatus
+  type: bpchar
+- name: o_totalprice
+  type: numeric
+- name: o_orderdate
+  type: date
+- name: o_orderpriority
+  type: bpchar
+- name: o_clerk
+  type: bpchar
+- name: o_shippriority
+  type: int4
+- name: o_comment
+  type: varchar
+Distribution_policy: DISTRIBUTED RANDOMLY
+```
+
+## See Also
+
+[hawq load](hawqload.html#topic1), [hawq register](hawqregister.html#topic1)
+
+


[45/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/HAWQInputFormatforMapReduce.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/HAWQInputFormatforMapReduce.html.md.erb b/datamgmt/HAWQInputFormatforMapReduce.html.md.erb
deleted file mode 100644
index a6fcca2..0000000
--- a/datamgmt/HAWQInputFormatforMapReduce.html.md.erb
+++ /dev/null
@@ -1,304 +0,0 @@
----
-title: HAWQ InputFormat for MapReduce
----
-
-MapReduce is a programming model developed by Google for processing and generating large data sets on an array of commodity servers. You can use the HAWQ InputFormat class to enable MapReduce jobs to access HAWQ data stored in HDFS.
-
-To use HAWQ InputFormat, you need only to provide the URL of the database to connect to, along with the table name you want to access. HAWQ InputFormat fetches only the metadata of the database and table of interest, which is much less data than the table data itself. After getting the metadata, HAWQ InputFormat determines where and how the table data is stored in HDFS. It reads and parses those HDFS files and processes the parsed table tuples directly inside a Map task.
-
-This chapter describes the document format and schema for defining HAWQ MapReduce jobs.
-
-## <a id="supporteddatatypes"></a>Supported Data Types
-
-HAWQ InputFormat supports the following data types:
-
-| SQL/HAWQ                | JDBC/JAVA                                        | setXXX        | getXXX        |
-|-------------------------|--------------------------------------------------|---------------|---------------|
-| DECIMAL/NUMERIC         | java.math.BigDecimal                             | setBigDecimal | getBigDecimal |
-| FLOAT8/DOUBLE PRECISION | double                                           | setDouble     | getDouble     |
-| INT8/BIGINT             | long                                             | setLong       | getLong       |
-| INTEGER/INT4/INT        | int                                              | setInt        | getInt        |
-| FLOAT4/REAL             | float                                            | setFloat      | getFloat      |
-| SMALLINT/INT2           | short                                            | setShort      | getShort      |
-| BOOL/BOOLEAN            | boolean                                          | setBoolean    | getBoolean    |
-| VARCHAR/CHAR/TEXT       | String                                           | setString     | getString     |
-| DATE                    | java.sql.Date                                    | setDate       | getDate       |
-| TIME/TIMETZ             | java.sql.Time                                    | setTime       | getTime       |
-| TIMESTAMP/TIMESTAMPTZ   | java.sql.Timestamp                               | setTimestamp  | getTimestamp  |
-| ARRAY                   | java.sql.Array                                   | setArray      | getArray      |
-| BIT/VARBIT              | com.pivotal.hawq.mapreduce.datatype.             | setVarbit     | getVarbit     |
-| BYTEA                   | byte\[\]                                         | setByte       | getByte       |
-| INTERVAL                | com.pivotal.hawq.mapreduce.datatype.HAWQInterval | setInterval   | getInterval   |
-| POINT                   | com.pivotal.hawq.mapreduce.datatype.HAWQPoint    | setPoint      | getPoint      |
-| LSEG                    | com.pivotal.hawq.mapreduce.datatype.HAWQLseg     | setLseg       | getLseg       |
-| BOX                     | com.pivotal.hawq.mapreduce.datatype.HAWQBox      | setBox        | getBox        |
-| CIRCLE                  | com.pivotal.hawq.mapreduce.datatype.HAWQCircle   | setCircle     | getCircle     |
-| PATH                    | com.pivotal.hawq.mapreduce.datatype.HAWQPath     | setPath       | getPath       |
-| POLYGON                 | com.pivotal.hawq.mapreduce.datatype.HAWQPolygon  | setPolygon    | getPolygon    |
-| MACADDR                 | com.pivotal.hawq.mapreduce.datatype.HAWQMacaddr  | setMacaddr    | getMacaddr    |
-| INET                    | com.pivotal.hawq.mapreduce.datatype.HAWQInet     | setInet       | getInet       |
-| CIDR                    | com.pivotal.hawq.mapreduce.datatype.HAWQCIDR     | setCIDR       | getCIDR       |
-
-## <a id="hawqinputformatexample"></a>HAWQ InputFormat Example
-
-The following example shows how you can use the `HAWQInputFormat` class to access HAWQ table data from MapReduce jobs.
-
-``` java
-package com.mycompany.app;
-import com.pivotal.hawq.mapreduce.HAWQException;
-import com.pivotal.hawq.mapreduce.HAWQInputFormat;
-import com.pivotal.hawq.mapreduce.HAWQRecord;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.conf.Configured;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.Text;
-import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.Mapper;
-import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
-import org.apache.hadoop.util.Tool;
-import org.apache.hadoop.util.ToolRunner;
-import org.apache.hadoop.io.IntWritable;
-
-import java.io.IOException;
-public class HAWQInputFormatDemoDriver extends Configured
-implements Tool {
-
-    // CREATE TABLE employees (
-    // id INTEGER NOT NULL, name VARCHAR(32) NOT NULL);
-    public static class DemoMapper extends
-        Mapper<Void, HAWQRecord, IntWritable, Text> {
-       int id = 0;
-       String name = null;
-       public void map(Void key, HAWQRecord value, Context context)
-        throws IOException, InterruptedException {
-        try {
-        id = value.getInt(1);
-        name = value.getString(2);
-        } catch (HAWQException hawqE) {
-        throw new IOException(hawqE.getMessage());
-        }
-        context.write(new IntWritable(id), new Text(name));
-       }
-    }
-    private static int printUsage() {
-       System.out.println("HAWQInputFormatDemoDriver "
-           + "<database_url> <table_name> <output_path> [username] [password]");
-       ToolRunner.printGenericCommandUsage(System.out);
-       return 2;
-    }
- 
-    public int run(String[] args) throws Exception {
-       if (args.length < 3) {
-        return printUsage();
-       }
-       Job job = Job.getInstance(getConf());
-       job.setJobName("hawq-inputformat-demo");
-       job.setJarByClass(HAWQInputFormatDemoDriver.class);
-       job.setMapperClass(DemoMapper.class);
-       job.setMapOutputValueClass(Text.class);
-       job.setOutputValueClass(Text.class);
-       String db_url = args[0];
-       String table_name = args[1];
-       String output_path = args[2];
-       String user_name = null;
-       if (args.length > 3) {
-         user_name = args[3];
-       }
-       String password = null;
-       if (args.length > 4) {
-         password = args[4];
-       }
-       job.setInputFormatClass(HAWQInputFormat.class);
-       HAWQInputFormat.setInput(job.getConfiguration(), db_url,
-       user_name, password, table_name);
-       FileOutputFormat.setOutputPath(job, new
-       Path(output_path));
-       job.setNumReduceTasks(0);
-       int res = job.waitForCompletion(true) ? 0 : 1;
-       return res;
-    }
-    
-    public static void main(String[] args) throws Exception {
-       int res = ToolRunner.run(new Configuration(),
-         new HAWQInputFormatDemoDriver(), args);
-       System.exit(res);
-    }
-}
-```
-
-**To compile and run the example:**
-
-1.  Create a work directory:
-
-    ``` shell
-    $ mkdir mrwork
-    $ cd mrwork
-    ```
- 
-2.  Copy and paste the Java code above into a `.java` file.
-
-    ``` shell
-    $ mkdir -p com/mycompany/app
-    $ cd com/mycompany/app
-    $ vi HAWQInputFormatDemoDriver.java
-    ```
-
-3.  Note the following dependencies required for compilation:
-    1.  `HAWQInputFormat` jars (located in the `$GPHOME/lib/postgresql/hawq-mr-io` directory):
-        -   `hawq-mapreduce-common.jar`
-        -   `hawq-mapreduce-ao.jar`
-        -   `hawq-mapreduce-parquet.jar`
-        -   `hawq-mapreduce-tool.jar`
-
-    2.  Required 3rd party jars (located in the `$GPHOME/lib/postgresql/hawq-mr-io/lib` directory):
-        -   `parquet-common-1.1.0.jar`
-        -   `parquet-format-1.1.0.jar`
-        -   `parquet-hadoop-1.1.0.jar`
-        -   `postgresql-n.n-n-jdbc4.jar`
-        -   `snakeyaml-n.n.jar`
-
-    3.  Hadoop MapReduce related jars (located in the install directory of your Hadoop distribution).
-
-4.  Compile the Java program.  You may choose to use a different compilation command:
-
-    ``` shell
-    javac -classpath /usr/hdp/2.4.2.0-258/hadoop-mapreduce/*:/usr/local/hawq/lib/postgresql/hawq-mr-io/*:/usr/local/hawq/lib/postgresql/hawq-mr-io/lib/*:/usr/hdp/current/hadoop-client/* HAWQInputFormatDemoDriver.java
-    ```
-   
-5.  Build the JAR file.
-
-    ``` shell
-    $ cd ../../..
-    $ jar cf my-app.jar com
-    $ cp my-app.jar /tmp
-    ```
-    
-6.  Check that you have installed HAWQ and HDFS and your HAWQ cluster is running.
-
-7.  Create sample table:
-    1.  Log in to HAWQ:
-
-        ``` shell
-        $ psql -d postgres
-        ```
-
-    2.  Create the table:
-
-        ``` sql
-        CREATE TABLE employees (
-        id INTEGER NOT NULL,
-        name TEXT NOT NULL);
-        ```
-
-        Or a Parquet table:
-
-        ``` sql
-        CREATE TABLE employees ( id INTEGER NOT NULL, name TEXT NOT NULL) WITH (APPENDONLY=true, ORIENTATION=parquet);
-        ```
-
-    3.  Insert one tuple:
-
-        ``` sql
-        INSERT INTO employees VALUES (1, 'Paul');
-        \q
-        ```
-8.  Ensure the system `pg_hba.conf` configuration file is set up to allow `gpadmin` access to the `postgres` database.
-
-8.  Use the following shell script snippet showing how to run the Mapreduce job:
-
-    ``` shell
-    #!/bin/bash
-    
-    # set up environment variables
-    HAWQMRLIB=/usr/local/hawq/lib/postgresql/hawq-mr-io
-    export HADOOP_CLASSPATH=$HAWQMRLIB/hawq-mapreduce-ao.jar:$HAWQMRLIB/hawq-mapreduce-common.jar:$HAWQMRLIB/hawq-mapreduce-tool.jar:$HAWQMRLIB/hawq-mapreduce-parquet.jar:$HAWQMRLIB/lib/postgresql-9.2-1003-jdbc4.jar:$HAWQMRLIB/lib/snakeyaml-1.12.jar:$HAWQMRLIB/lib/parquet-hadoop-1.1.0.jar:$HAWQMRLIB/lib/parquet-common-1.1.0.jar:$HAWQMRLIB/lib/parquet-format-1.0.0.jar
-    export LIBJARS=$HAWQMRLIB/hawq-mapreduce-ao.jar,$HAWQMRLIB/hawq-mapreduce-common.jar,$HAWQMRLIB/hawq-mapreduce-tool.jar,$HAWQMRLIB/lib/postgresql-9.2-1003-jdbc4.jar,$HAWQMRLIB/lib/snakeyaml-1.12.jar,$HAWQMRLIB/hawq-mapreduce-parquet.jar,$HAWQMRLIB/lib/parquet-hadoop-1.1.0.jar,$HAWQMRLIB/lib/parquet-common-1.1.0.jar,$HAWQMRLIB/lib/parquet-format-1.0.0.jar
-    
-    # usage:  hadoop jar JARFILE CLASSNAME -libjars JARS <database_url> <table_name> <output_path_on_HDFS>
-    #   - writing output to HDFS, so run as hdfs user
-    #   - if not using the default postgres port, replace 5432 with port number for your HAWQ cluster
-    HADOOP_USER_NAME=hdfs hadoop jar /tmp/my-app.jar com.mycompany.app.HAWQInputFormatDemoDriver -libjars $LIBJARS localhost:5432/postgres employees /tmp/employees
-    ```
-    
-    The MapReduce job output is written to the `/tmp/employees` directory on the HDFS file system.
-
-9.  Use the following command to check the result of the Mapreduce job:
-
-    ``` shell
-    $ sudo -u hdfs hdfs dfs -ls /tmp/employees
-    $ sudo -u hdfs hdfs dfs -cat /tmp/employees/*
-    ```
-
-    The output will appear as follows:
-
-    ``` pre
-    1 Paul
-    ```
-        
-10.  If you choose to run the program again, delete the output file and directory:
-    
-    ``` shell
-    $ sudo -u hdfs hdfs dfs -rm /tmp/employees/*
-    $ sudo -u hdfs hdfs dfs -rmdir /tmp/employees
-    ```
-
-## <a id="accessinghawqdata"></a>Accessing HAWQ Data
-
-You can access HAWQ data using the `HAWQInputFormat.setInput()` interface.  You will use a different API signature depending on whether HAWQ is running or not.
-
--   When HAWQ is running, use `HAWQInputFormat.setInput(Configuration conf, String db_url, String username, String password, String tableName)`.
--   When HAWQ is not running, first extract the table metadata to a file with the Metadata Export Tool and then use `HAWQInputFormat.setInput(Configuration conf, String pathStr)`.
-
-### <a id="hawqinputformatsetinput"></a>HAWQ is Running
-
-``` java
-  /**
-    * Initializes the map-part of the job with the appropriate input settings
-    * through connecting to Database.
-    *
-    * @param conf
-    * The map-reduce job configuration
-    * @param db_url
-    * The database URL to connect to
-    * @param username
-    * The username for setting up a connection to the database
-    * @param password
-    * The password for setting up a connection to the database
-    * @param tableName
-    * The name of the table to access
-    * @throws Exception
-    */
-public static void setInput(Configuration conf, String db_url,
-    String username, String password, String tableName)
-throws Exception;
-```
-
-### <a id="metadataexporttool"></a>HAWQ is not Running
-
-Use the metadata export tool, `hawq extract`, to export the metadata of the target table into a local YAML file:
-
-``` shell
-$ hawq extract [-h hostname] [-p port] [-U username] [-d database] [-o output_file] [-W] <tablename>
-```
-
-Using the extracted metadata, access HAWQ data through the following interface.  Pass the complete path to the `.yaml` file in the `pathStr` argument.
-
-``` java
- /**
-   * Initializes the map-part of the job with the appropriate input settings by reading a metadata file stored in the local filesystem.
-   *
-   * To generate the metadata file, run hawq extract first.
-   *
-   * @param conf
-   * The map-reduce job configuration
-   * @param pathStr
-   * The metadata file path in local filesystem. e.g.
-   * /home/gpadmin/metadata/postgres_test
-   * @throws Exception
-   */
-public static void setInput(Configuration conf, String pathStr)
-   throws Exception;
-```
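-
-For example, in the demo driver shown earlier, the connection-based `setInput` call can be replaced with this variant. The following sketch assumes the metadata file was generated with `hawq extract`; the YAML path is an illustrative placeholder.
-
-``` java
-// In the run() method of the demo driver, read metadata from a local YAML file
-// produced by hawq extract instead of connecting to a running HAWQ database.
-job.setInputFormatClass(HAWQInputFormat.class);
-HAWQInputFormat.setInput(job.getConfiguration(), "/home/gpadmin/metadata/employees.yaml");
-```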
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/Transactions.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/Transactions.html.md.erb b/datamgmt/Transactions.html.md.erb
deleted file mode 100644
index dfc9a5e..0000000
--- a/datamgmt/Transactions.html.md.erb
+++ /dev/null
@@ -1,54 +0,0 @@
----
-title: Working with Transactions
----
-
-This topic describes transaction support in HAWQ.
-
-Transactions allow you to bundle multiple SQL statements in one all-or-nothing operation.
-
-The following are the HAWQ SQL transaction commands:
-
--   `BEGIN` or `START TRANSACTION `starts a transaction block.
--   `END` or `COMMIT` commits the results of a transaction.
--   `ROLLBACK` abandons a transaction without making any changes.
--   `SAVEPOINT` marks a place in a transaction and enables partial rollback. You can roll back commands executed after a savepoint while maintaining commands executed before the savepoint.
--   `ROLLBACK TO SAVEPOINT `rolls back a transaction to a savepoint.
--   `RELEASE SAVEPOINT `destroys a savepoint within a transaction.
-
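-A minimal sketch that combines several of these commands; the table, values, and savepoint name are placeholders. The `ROLLBACK TO SAVEPOINT` undoes only the second `INSERT`; the first is kept and committed:
-
-``` sql
-BEGIN;
-INSERT INTO orders VALUES (1001, 'pending');
-SAVEPOINT after_first_insert;
-INSERT INTO orders VALUES (1002, 'pending');
-ROLLBACK TO SAVEPOINT after_first_insert;
-COMMIT;
-```
-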
-## <a id="topic8"></a>Transaction Isolation Levels
-
-HAWQ accepts the standard SQL transaction levels as follows:
-
--   *read uncommitted* and *read committed* behave like the standard *read committed*
--   *serializable* and *repeatable read* behave like the standard *serializable*
-
-The following information describes the behavior of the HAWQ transaction levels:
-
--   **read committed/read uncommitted** – Provides fast, simple, partial transaction isolation. With read committed and read uncommitted transaction isolation, `SELECT` transactions operate on a snapshot of the database taken when the query started.
-
-A `SELECT` query:
-
--   Sees data committed before the query starts.
--   Sees updates executed within the transaction.
--   Does not see uncommitted data outside the transaction.
--   Can possibly see changes that concurrent transactions made if the concurrent transaction is committed after the initial read in its own transaction.
-
-Successive `SELECT` queries in the same transaction can see different data if other concurrent transactions commit changes before the queries start.
-
-Read committed or read uncommitted transaction isolation may be inadequate for applications that perform complex queries and require a consistent view of the database.
-
--   **serializable/repeatable read** – Provides strict transaction isolation in which transactions execute as if they run one after another rather than concurrently. Applications on the serializable or repeatable read level must be designed to retry transactions in case of serialization failures.
-
-A `SELECT` query:
-
--   Sees a snapshot of the data as of the start of the transaction (not as of the start of the current query within the transaction).
--   Sees only data committed before the query starts.
--   Sees updates executed within the transaction.
--   Does not see uncommitted data outside the transaction.
--   Does not see changes that concurrent transactions made.
-
-    Successive `SELECT` commands within a single transaction always see the same data.
-
-The default transaction isolation level in HAWQ is *read committed*. To change the isolation level for a transaction, declare the isolation level when you `BEGIN` the transaction or use the `SET TRANSACTION` command after the transaction starts.
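-
-For example, a sketch of raising the isolation level for a single transaction; the table name is a placeholder:
-
-``` sql
-BEGIN;
-SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-SELECT COUNT(*) FROM orders;
-COMMIT;
-```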
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/about_statistics.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/about_statistics.html.md.erb b/datamgmt/about_statistics.html.md.erb
deleted file mode 100644
index 5e2184a..0000000
--- a/datamgmt/about_statistics.html.md.erb
+++ /dev/null
@@ -1,209 +0,0 @@
----
-title: About Database Statistics
----
-
-## <a id="overview"></a>Overview
-
-Statistics are metadata that describe the data stored in the database. The query optimizer needs up-to-date statistics to choose the best execution plan for a query. For example, if a query joins two tables and one of them must be broadcast to all segments, the optimizer can choose the smaller of the two tables to minimize network traffic.
-
-The statistics used by the optimizer are calculated and saved in the system catalog by the `ANALYZE` command. There are three ways to initiate an analyze operation:
-
--   You can run the `ANALYZE` command directly.
--   You can run the `analyzedb` management utility outside of the database, at the command line.
--   An automatic analyze operation can be triggered when DML operations are performed on tables that have no statistics or when a DML operation modifies a number of rows greater than a specified threshold.
-
-These methods are described in the following sections.
-
-Calculating statistics consumes time and resources, so HAWQ produces estimates by calculating statistics on samples of large tables. In most cases, the default settings provide the information needed to generate correct execution plans for queries. If the statistics produced do not lead to optimal query execution plans, the administrator can tune configuration parameters to produce more accurate statistics by increasing the sample size or the granularity of statistics saved in the system catalog. Producing more accurate statistics has CPU and storage costs and may not produce better plans, so it is important to view explain plans and test query performance to ensure that the additional statistics-related costs result in better query performance.
-
-## <a id="topic_oq3_qxj_3s"></a>System Statistics
-
-### <a id="tablesize"></a>Table Size
-
-The query planner seeks to minimize the disk I/O and network traffic required to execute a query, using estimates of the number of rows that must be processed and the number of disk pages the query must access. The data from which these estimates are derived are the `pg_class` system table columns `reltuples` and `relpages`, which contain the number of rows and pages at the time a `VACUUM` or `ANALYZE` command was last run. As rows are added, the numbers become less accurate. However, an accurate count of disk pages is always available from the operating system, so as long as the ratio of `reltuples` to `relpages` does not change significantly, the optimizer can produce an estimate of the number of rows that is sufficiently accurate to choose the correct query execution plan.
-
-In append-optimized tables, the number of tuples is kept up-to-date in the system catalogs, so the `reltuples` statistic is not an estimate. Non-visible tuples in the table are subtracted from the total. The `relpages` value is estimated from the append-optimized block sizes.
-
-When the `reltuples` column differs significantly from the row count returned by `SELECT COUNT(*)`, an analyze should be performed to update the statistics.
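-
-For example, a sketch that compares the stored estimates with an exact count; the table name is a placeholder:
-
-``` sql
-SELECT relname, reltuples, relpages
-FROM pg_class
-WHERE relname = 'sales';
-
-SELECT COUNT(*) FROM sales;
-```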
-
-### <a id="views"></a>The pg\_statistic System Table and pg\_stats View
-
-The `pg_statistic` system table holds the results of the last `ANALYZE` operation on each database table. There is a row for each column of every table. It has the following columns:
-
-starelid  
-The object ID of the table or index the column belongs to.
-
-staatnum  
-The number of the described column, beginning with 1.
-
-stanullfrac  
-The fraction of the column's entries that are null.
-
-stawidth  
-The average stored width, in bytes, of non-null entries.
-
-stadistinct  
-The number of distinct nonnull data values in the column.
-
-stakind*N*  
-A code number indicating the kind of statistics stored in the *N*th slot of the `pg_statistic` row.
-
-staop*N*  
-An operator used to derive the statistics stored in the *N*th slot.
-
-stanumbers*N*  
-Numerical statistics of the appropriate kind for the *N*th slot, or NULL if the slot kind does not involve numerical values.
-
-stavalues*N*  
-Column data values of the appropriate kind for the *N*th slot, or NULL if the slot kind does not store any data values.
-
-The statistics collected for a column vary for different data types, so the `pg_statistic` table stores statistics that are appropriate for the data type in four *slots*, consisting of four columns per slot. For example, the first slot, which normally contains the most common values for a column, consists of the columns `stakind1`, `staop1`, `stanumbers1`, and `stavalues1`. Also see [pg\_statistic](../reference/catalog/pg_statistic.html#topic1).
-
-The `stakindN` columns each contain a numeric code to describe the type of statistics stored in their slot. The `stakind` code numbers from 1 to 99 are reserved for core PostgreSQL data types. HAWQ uses code numbers 1, 2, and 3. A value of 0 means the slot is unused. The following table describes the kinds of statistics stored for the three codes.
-
-<a id="topic_oq3_qxj_3s__table_upf_1yc_nt"></a>
-
-<table>
-<caption><span class="tablecap">Table 1. Contents of pg_statistic &quot;slots&quot;</span></caption>
-<colgroup>
-<col width="50%" />
-<col width="50%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th>stakind Code</th>
-<th>Description</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td>1</td>
-<td><em>Most Common Values (MCV) Slot</em>
-<ul>
-<li><code class="ph codeph">staop</code> contains the object ID of the &quot;=&quot; operator, used to decide whether values are the same or not.</li>
-<li><code class="ph codeph">stavalues</code> contains an array of the <em>K</em> most common non-null values appearing in the column.</li>
-<li><code class="ph codeph">stanumbers</code> contains the frequencies (fractions of total row count) of the values in the <code class="ph codeph">stavalues</code> array.</li>
-</ul>
-The values are ordered in decreasing frequency. Since the arrays are variable-size, <em>K</em> can be chosen by the statistics collector. Values must occur more than once to be added to the <code class="ph codeph">stavalues</code> array; a unique column has no MCV slot.</td>
-</tr>
-<tr class="even">
-<td>2</td>
-<td><em>Histogram Slot</em> – describes the distribution of scalar data.
-<ul>
-<li><code class="ph codeph">staop</code> is the object ID of the &quot;&lt;&quot; operator, which describes the sort ordering.</li>
-<li><code class="ph codeph">stavalues</code> contains <em>M</em> (where <em>M</em>&gt;=2) non-null values that divide the non-null column data values into <em>M</em>-1 bins of approximately equal population. The first <code class="ph codeph">stavalues</code> item is the minimum value and the last is the maximum value.</li>
-<li><code class="ph codeph">stanumbers</code> is not used and should be null.</li>
-</ul>
-<p>If a Most Common Values slot is also provided, then the histogram describes the data distribution after removing the values listed in the MCV array. (It is a <em>compressed histogram</em> in the technical parlance). This allows a more accurate representation of the distribution of a column with some very common values. In a column with only a few distinct values, it is possible that the MCV list describes the entire data population; in this case the histogram reduces to empty and should be omitted.</p></td>
-</tr>
-<tr class="odd">
-<td>3</td>
-<td><em>Correlation Slot</em> – describes the correlation between the physical order of table tuples and the ordering of data values of this column.
-<ul>
-<li><code class="ph codeph">staop</code> is the object ID of the &quot;&lt;&quot; operator. As with the histogram, more than one entry could theoretically appear.</li>
-<li><code class="ph codeph">stavalues</code> is not used and should be NULL.</li>
-<li><code class="ph codeph">stanumbers</code> contains a single entry, the correlation coefficient between the sequence of data values and the sequence of their actual tuple positions. The coefficient ranges from +1 to -1.</li>
-</ul></td>
-</tr>
-</tbody>
-</table>
-
-The `pg_stats` view presents the contents of `pg_statistic` in a friendlier format. For more information, see [pg\_stats](../reference/catalog/pg_stats.html#topic1).
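-
-For example, a sketch of inspecting the collected statistics for one table through the view; the schema and table names are placeholders:
-
-``` sql
-SELECT attname, null_frac, n_distinct, most_common_vals, histogram_bounds
-FROM pg_stats
-WHERE schemaname = 'public' AND tablename = 'sales';
-```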
-
-Newly created tables and indexes have no statistics.
-
-### <a id="topic_oq3_qxj_3s__section_wsy_1rv_mt"></a>Sampling
-
-When calculating statistics for large tables, HAWQ creates a smaller table by sampling the base table. If the table is partitioned, samples are taken from all partitions.
-
-If a sample table is created, the number of rows in the sample is calculated to provide a maximum acceptable relative error. The amount of acceptable error is specified with the `gp_analyze_relative_error` system configuration parameter, which is set to .25 (25%) by default. This is usually sufficiently accurate to generate correct query plans. If `ANALYZE` is not producing good estimates for a table column, you can increase the sample size by setting the `gp_analyze_relative_error` configuration parameter to a lower value. Beware that setting this parameter to a low value can lead to a very large sample size and dramatically increase analyze time.
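-
-For a command-line-managed cluster, the following sketch lowers the acceptable error (and therefore increases the sample size) and then reloads the configuration:
-
-``` shell
-$ hawq config -c gp_analyze_relative_error -v 0.1
-$ hawq stop cluster -u
-```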
-
-### <a id="topic_oq3_qxj_3s__section_u5p_brv_mt"></a>Updating Statistics
-
-Running `ANALYZE` with no arguments updates statistics for all tables in the database. This could take a very long time, so it is better to analyze tables selectively after data has changed. You can also analyze a subset of the columns in a table, for example columns used in joins, `WHERE` clauses, `SORT` clauses, `GROUP BY` clauses, or `HAVING` clauses.
-
-See the SQL Command Reference for details of running the `ANALYZE` command.
-
-Refer to the Management Utility Reference for details of running the `analyzedb` command.
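-
-For example, a sketch of analyzing one table and then only selected columns; the table and column names are placeholders:
-
-``` sql
-ANALYZE sales;
-ANALYZE sales (region, total_amount);
-```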
-
-### <a id="topic_oq3_qxj_3s__section_cv2_crv_mt"></a>Analyzing Partitioned and Append-Optimized Tables
-
-When the `ANALYZE` command is run on a partitioned table, it analyzes each leaf-level subpartition, one at a time. You can run `ANALYZE` on just new or changed partition files to avoid analyzing partitions that have not changed. If a table is partitioned, you can analyze just new or changed partitions.
-
-The `analyzedb` command-line utility skips unchanged partitions automatically. It also runs concurrent sessions so it can analyze several partitions concurrently. It runs five sessions by default, but the number of sessions can be set from 1 to 10 with the `-p` command-line option. Each time `analyzedb` runs, it saves state information for append-optimized tables and partitions in the `db_analyze` directory in the master data directory. The next time it runs, `analyzedb` compares the current state of each table with the saved state and skips analyzing a table or partition if it is unchanged. Heap tables are always analyzed.
-
-If the Pivotal Query Optimizer is enabled, you also need to run `ANALYZE ROOTPARTITION` to refresh the root partition statistics. The Pivotal Query Optimizer requires statistics at the root level for partitioned tables; the legacy optimizer does not use these statistics. Enable the Pivotal Query Optimizer by setting both the `optimizer` and `optimizer_analyze_root_partition` system configuration parameters to on. The root level statistics are then updated when you run `ANALYZE` or `ANALYZE ROOTPARTITION`. The time to run `ANALYZE ROOTPARTITION` is similar to the time required to analyze a single partition. The `analyzedb` utility updates root partition statistics by default.
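-
-For example, a sketch of refreshing the root-level statistics of a partitioned table; the table name is a placeholder:
-
-``` sql
-ANALYZE ROOTPARTITION sales;
-```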
-
-## <a id="topic_gyb_qrd_2t"></a>Configuring Statistics
-
-There are several options for configuring HAWQ statistics collection.
-
-### <a id="statstarget"></a>Statistics Target
-
-The statistics target is the size of the `most_common_vals`, `most_common_freqs`, and `histogram_bounds` arrays for an individual column. By default, the target is 25. The default target can be changed by setting a server configuration parameter and the target can be set for any column using the `ALTER TABLE` command. Larger values increase the time needed to do `ANALYZE`, but may improve the quality of the legacy query optimizer (planner) estimates.
-
-Set the system default statistics target to a different value by setting the `default_statistics_target` server configuration parameter. The default value is usually sufficient, and you should only raise or lower it if your tests demonstrate that query plans improve with the new target. 
-
-You will perform different procedures to set server configuration parameters for your whole HAWQ cluster depending upon whether you manage your cluster from the command line or use Ambari. If you use Ambari to manage your HAWQ cluster, you must ensure that you update server configuration parameters via the Ambari Web UI only. If you manage your HAWQ cluster from the command line, you will use the `hawq config` command line utility to set server configuration parameters.
-
-The following examples show how to raise the default statistics target from 25 to 50.
-
-If you use Ambari to manage your HAWQ cluster:
-
-1. Set the `default_statistics_target` configuration property to `50` via the HAWQ service **Configs > Advanced > Custom hawq-site** drop down.
-2. Select **Service Actions > Restart All** to load the updated configuration.
-
-If you manage your HAWQ cluster from the command line:
-
-1.  Log in to the HAWQ master host as a HAWQ administrator and source the file `/usr/local/hawq/greenplum_path.sh`.
-
-    ``` shell
-    $ source /usr/local/hawq/greenplum_path.sh
-    ```
-
-1. Use the `hawq config` utility to set `default_statistics_target`:
-
-    ``` shell
-    $ hawq config -c default_statistics_target -v 50
-    ```
-2. Reload the HAWQ configuration:
-
-    ``` shell
-    $ hawq stop cluster -u
-    ```
-
-The statistics target for individual columns can be set with the `ALTER TABLE` command. For example, some queries can be improved by increasing the target for certain columns, especially columns that have irregular distributions. You can set the target to zero for columns that never contribute to query optimization. When the target is 0, `ANALYZE` ignores the column. For example, the following `ALTER TABLE` command sets the statistics target for the `notes` column in the `emp` table to zero:
-
-``` sql
-ALTER TABLE emp ALTER COLUMN notes SET STATISTICS 0;
-```
-
-The statistics target can be set in the range 0 to 1000, or set it to -1 to revert to using the system default statistics target.
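-
-For example, the following command reverts the `notes` column of the `emp` table to the system default statistics target:
-
-``` sql
-ALTER TABLE emp ALTER COLUMN notes SET STATISTICS -1;
-```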
-
-Setting the statistics target on a parent partition table affects the child partitions. If you set statistics to 0 on some columns on the parent table, the statistics for the same columns are set to 0 for all children partitions. However, if you later add or exchange another child partition, the new child partition will use either the default statistics target or, in the case of an exchange, the previous statistics target. Therefore, if you add or exchange child partitions, you should set the statistics targets on the new child table.
-
-### <a id="topic_gyb_qrd_2t__section_j3p_drv_mt"></a>Automatic Statistics Collection
-
-HAWQ can be set to automatically run `ANALYZE` on a table that either has no statistics or has changed significantly when certain operations are performed on the table. For partitioned tables, automatic statistics collection is only triggered when the operation is run directly on a leaf table, and then only the leaf table is analyzed.
-
-Automatic statistics collection has three modes:
-
--   `none` disables automatic statistics collection.
--   `on_no_stats` triggers an analyze operation for a table with no existing statistics when any of the commands `CREATE TABLE AS SELECT`, `INSERT`, or `COPY` are executed on the table.
--   `on_change` triggers an analyze operation when any of the commands `CREATE TABLE AS SELECT`, `INSERT`, or `COPY` are executed on the table and the number of rows affected exceeds the threshold defined by the `gp_autostats_on_change_threshold` configuration parameter.
-
-The automatic statistics collection mode is set separately for commands that occur within a procedural language function and commands that execute outside of a function:
-
--   The `gp_autostats_mode` configuration parameter controls automatic statistics collection behavior outside of functions and is set to `on_no_stats` by default.
-
-With the `on_change` mode, `ANALYZE` is triggered only if the number of rows affected exceeds the threshold defined by the `gp_autostats_on_change_threshold` configuration parameter. The default value for this parameter is a very high value, 2147483647, which effectively disables automatic statistics collection; you must set the threshold to a lower number to enable it. The `on_change` mode could trigger large, unexpected analyze operations that could disrupt the system, so it is not recommended to set it globally. It could be useful in a session, for example to automatically analyze a table following a load.
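-
-A sketch of enabling `on_change` collection for a single load session, assuming both parameters can be set at the session level; the threshold value is a placeholder:
-
-``` sql
-SET gp_autostats_mode = on_change;
-SET gp_autostats_on_change_threshold = 1000000;
-```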
-
-To disable automatic statistics collection outside of functions, set the `gp_autostats_mode` parameter to `none`. For a command-line-managed HAWQ cluster:
-
-``` shell
-$ hawq config -c gp_autostats_mode -v none
-```
-
-For an Ambari-managed cluster, set `gp_autostats_mode` via the Ambari Web UI.
-
-Set the `log_autostats` system configuration parameter to `on` if you want to log automatic statistics collection operations.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/dml.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/dml.html.md.erb b/datamgmt/dml.html.md.erb
deleted file mode 100644
index 681883a..0000000
--- a/datamgmt/dml.html.md.erb
+++ /dev/null
@@ -1,35 +0,0 @@
----
-title: Managing Data with HAWQ
----
-
-This chapter provides information about manipulating data and concurrent access in HAWQ.
-
--   **[Basic Data Operations](../datamgmt/BasicDataOperations.html)**
-
-    This topic describes basic data operations that you perform in HAWQ.
-
--   **[About Database Statistics](../datamgmt/about_statistics.html)**
-
-    An overview of statistics gathered by the `ANALYZE` command in HAWQ.
-
--   **[Concurrency Control](../datamgmt/ConcurrencyControl.html)**
-
-    This topic discusses the mechanisms used in HAWQ to provide concurrency control.
-
--   **[Working with Transactions](../datamgmt/Transactions.html)**
-
-    This topic describes transaction support in HAWQ.
-
--   **[Loading and Unloading Data](../datamgmt/load/g-loading-and-unloading-data.html)**
-
-    The topics in this section describe methods for loading and writing data into and out of HAWQ, and how to format data files.
-
--   **[Using PXF with Unmanaged Data](../pxf/HawqExtensionFrameworkPXF.html)**
-
-    HAWQ Extension Framework (PXF) is an extensible framework that allows HAWQ to query external system data.
-
--   **[HAWQ InputFormat for MapReduce](../datamgmt/HAWQInputFormatforMapReduce.html)**
-
-    MapReduce is a programming model developed by Google for processing and generating large data sets on an array of commodity servers. You can use the HAWQ InputFormat option to enable MapReduce jobs to access HAWQ data stored in HDFS.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/client-loadtools.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/client-loadtools.html.md.erb b/datamgmt/load/client-loadtools.html.md.erb
deleted file mode 100644
index fe291d0..0000000
--- a/datamgmt/load/client-loadtools.html.md.erb
+++ /dev/null
@@ -1,104 +0,0 @@
----
-title: Client-Based HAWQ Load Tools
----
-HAWQ supports data loading from Red Hat Enterprise Linux 5, 6, and 7 and Windows XP client systems. HAWQ Load Tools include both a loader program and a parallel file distribution program.
-
-This topic presents the instructions to install the HAWQ Load Tools on your client machine. It also includes the information necessary to configure HAWQ databases to accept remote client connections.
-
-## <a id="installloadrunrhel"></a>RHEL Load Tools
-
-The RHEL Load Tools are provided in a HAWQ distribution. 
-
-
-### <a id="installloadrunux"></a>Installing the RHEL Loader
-
-1. Download a HAWQ installer package or build HAWQ from source.
- 
-2. Refer to the HAWQ command line install instructions to set up your package repositories and install the HAWQ binary.
-
-3. Install the `libevent` and `libyaml` packages. These libraries are required by the HAWQ file server. You must have superuser privileges on the system.
-
-    ``` shell
-    $ sudo yum install -y libevent libyaml
-    ```
-
-### <a id="installrhelloadabout"></a>About the RHEL Loader Installation
-
-The files/directories of interest in a HAWQ RHEL Load Tools installation include:
-
-`bin/` – data loading command-line tools ([gpfdist](../../reference/cli/admin_utilities/gpfdist.html) and [hawq load](../../reference/cli/admin_utilities/hawqload.html))  
-`greenplum_path.sh` – environment set up file
-
-### <a id="installloadrhelcfgenv"></a>Configuring the RHEL Load Environment
-
-A `greenplum_path.sh` file is located in the HAWQ base install directory following installation. Source `greenplum_path.sh` before running the HAWQ RHEL Load Tools to set up your HAWQ environment:
-
-``` shell
-$ . /usr/local/hawq/greenplum_path.sh
-```
-
-Continue to [Using the HAWQ File Server (gpfdist)](g-using-the-hawq-file-server--gpfdist-.html) for specific information about using the HAWQ load tools.
-
-## <a id="installloadrunwin"></a>Windows Load Tools
-
-### <a id="installpythonwin"></a>Installing Python 2.5
-The HAWQ Load Tools for Windows requires that the 32-bit version of Python 2.5 be installed on your system. 
-
-**Note**: The 64-bit version of Python is **not** compatible with the HAWQ Load Tools for Windows.
-
-1. Download the [Python 2.5 installer for Windows](https://www.python.org/downloads/).  Make note of the directory to which it was downloaded.
-
-2. Double-click the downloaded `python-2.5.x.msi` package to launch the installer.
-3. Select **Install for all users** and click **Next**.
-4. The default Python install location is `C:\Pythonxx`. Click **Up** or **New** to choose another location. Click **Next**.
-5. Click **Next** to install the selected Python components.
-6. Click **Finish** to complete the Python installation.
-
-
-### <a id="installloadrunwin"></a>Running the Windows Installer
-
-1. Download the `greenplum-loaders-4.3.x.x-build-n-WinXP-x86_32.msi` installer package from [Pivotal Network](https://network.pivotal.io/products/pivotal-gpdb). Make note of the directory to which it was downloaded.
- 
-2. Double-click the `greenplum-loaders-4.3.x.x-build-n-WinXP-x86_32.msi` file to launch the installer.
-3. Click **Next** on the **Welcome** screen.
-4. Click **I Agree** on the **License Agreement** screen.
-5. The default install location for HAWQ Loader Tools for Windows is `C:\"Program Files (x86)"\Greenplum\greenplum-loaders-4.3.8.1-build-1`. Click **Browse** to choose another location.
-6. Click **Next**.
-7. Click **Install** to begin the installation.
-8. Click **Finish** to exit the installer.
-
-    
-### <a id="installloadabout"></a>About the Windows Loader Installation
-Your HAWQ Windows Load Tools installation includes the following files and directories:
-
-`bin/` – data loading command-line tools ([gpfdist](http://gpdb.docs.pivotal.io/4380/client_tool_guides/load/unix/gpfdist.html) and [gpload](http://gpdb.docs.pivotal.io/4380/client_tool_guides/load/unix/gpload.html))  
-`lib/` – data loading library files  
-`greenplum_loaders_path.bat` – environment set up file
-
-
-### <a id="installloadcfgenv"></a>Configuring the Windows Load Environment
-
-A `greenplum_loaders_path.bat` file is provided in your load tools base install directory following installation. This file sets the following environment variables:
-
-- `GPHOME_LOADERS` - base directory of loader installation
-- `PATH` - adds the loader and component program directories
-- `PYTHONPATH` - adds component library directories
-
-Execute `greenplum_loaders_path.bat` to set up your HAWQ environment before running the HAWQ Windows Load Tools.
- 
-
-## <a id="installloadenableclientconn"></a>Enabling Remote Client Connections
-The HAWQ master database must be configured to accept remote client connections.  Specifically, you need to identify the client hosts and database users that will be connecting to the HAWQ database.
-
-1. Ensure that the HAWQ database master `pg_hba.conf` file is correctly configured to allow connections from the desired users operating on the desired database from the desired hosts, using the authentication method you choose. For details, see [Configuring Client Access](../../clientaccess/client_auth.html#topic2).
-
-    Make sure the authentication method you choose is supported by the client tool you are using.
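-
-    For example, a sketch of a `pg_hba.conf` entry that allows md5-authenticated connections from a client subnet; the role name and network address are placeholders:
-
-    ``` pre
-    host     all     gpadmin     192.168.0.0/24     md5
-    ```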
-    
-2. If you edited the `pg_hba.conf` file, reload the server configuration. If you have any active database connections, you must include the `-M fast` option in the `hawq stop` command:
-
-    ``` shell
-    $ hawq stop cluster -u [-M fast]
-    ```
-   
-
-3. Verify and/or configure the databases and roles you are using to connect, and that the roles have the correct privileges to the database objects.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/creating-external-tables-examples.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/creating-external-tables-examples.html.md.erb b/datamgmt/load/creating-external-tables-examples.html.md.erb
deleted file mode 100644
index 8cdbff1..0000000
--- a/datamgmt/load/creating-external-tables-examples.html.md.erb
+++ /dev/null
@@ -1,117 +0,0 @@
----
-title: Creating External Tables - Examples
----
-
-The following examples show how to define external data with different protocols. Each `CREATE EXTERNAL TABLE` command can contain only one protocol.
-
-**Note:** When using IPv6, always enclose the numeric IP addresses in square brackets.
-
-Start `gpfdist` before you create external tables with the `gpfdist` protocol. The following code starts the `gpfdist` file server program in the background on port *8081* serving files from directory `/var/data/staging`. The logs are saved in `/home/gpadmin/log`.
-
-``` shell
-$ gpfdist -p 8081 -d /var/data/staging -l /home/gpadmin/log &
-```
-
-## <a id="ex1"></a>Example 1 - Single gpfdist instance on single-NIC machine
-
-Creates a readable external table, `ext_expenses`, using the `gpfdist` protocol. The files are formatted with a pipe (|) as the column delimiter.
-
-``` sql
-=# CREATE EXTERNAL TABLE ext_expenses
-        ( name text, date date, amount float4, category text, desc1 text )
-    LOCATION ('gpfdist://etlhost-1:8081/*', 'gpfdist://etlhost-1:8082/*')
-    FORMAT 'TEXT' (DELIMITER '|');
-```
-
-## <a id="ex2"></a>Example 2 - Multiple gpfdist instances
-
-Creates a readable external table, *ext\_expenses*, using the `gpfdist` protocol from all files with the *txt* extension. The column delimiter is a pipe ( | ) and NULL is a space (' ').
-
-``` sql
-=# CREATE EXTERNAL TABLE ext_expenses
-        ( name text, date date, amount float4, category text, desc1 text )
-    LOCATION ('gpfdist://etlhost-1:8081/*.txt', 'gpfdist://etlhost-2:8081/*.txt')
-    FORMAT 'TEXT' ( DELIMITER '|' NULL ' ') ;
-    
-```
-
-## <a id="ex3"></a>Example 3 - Multiple gpfdists instances
-
-Creates a readable external table, *ext\_expenses,* from all files with the *txt* extension using the `gpfdists` protocol. The column delimiter is a pipe ( | ) and NULL is a space (' '). For information about the location of security certificates, see [gpfdists Protocol](g-gpfdists-protocol.html).
-
-1.  Run `gpfdist` with the `--ssl` option.
-2.  Run the following command.
-
-    ``` sql
-    =# CREATE EXTERNAL TABLE ext_expenses
-             ( name text, date date, amount float4, category text, desc1 text )
-        LOCATION ('gpfdists://etlhost-1:8081/*.txt', 'gpfdists://etlhost-2:8082/*.txt')
-        FORMAT 'TEXT' ( DELIMITER '|' NULL ' ') ;
-        
-    ```
-
-## <a id="ex4"></a>Example 4 - Single gpfdist instance with error logging
-
-Uses the gpfdist protocol to create a readable external table, `ext_expenses`, from all files with the *txt* extension. The column delimiter is a pipe ( | ) and NULL is a space (' ').
-
-Access to the external table is single row error isolation mode. Input data formatting errors can be captured so that you can view the errors, fix the issues, and then reload the rejected data. If the error count on a segment is greater than five (the `SEGMENT REJECT LIMIT` value), the entire external table operation fails and no rows are processed.
-
-``` sql
-=# CREATE EXTERNAL TABLE ext_expenses
-         ( name text, date date, amount float4, category text, desc1 text )
-    LOCATION ('gpfdist://etlhost-1:8081/*.txt', 'gpfdist://etlhost-2:8082/*.txt')
-    FORMAT 'TEXT' ( DELIMITER '|' NULL ' ')
-    LOG ERRORS INTO expenses_errs SEGMENT REJECT LIMIT 5;
-    
-```
-
-To create the readable `ext_expenses` table from CSV-formatted text files:
-
-``` sql
-=# CREATE EXTERNAL TABLE ext_expenses
-         ( name text, date date, amount float4, category text, desc1 text )
-    LOCATION ('gpfdist://etlhost-1:8081/*.txt', 'gpfdist://etlhost-2:8082/*.txt')
-    FORMAT 'CSV' ( DELIMITER ',' )
-    LOG ERRORS INTO expenses_errs SEGMENT REJECT LIMIT 5;
-    
-```
-
-## <a id="ex5"></a>Example 5 - Readable Web External Table with Script
-
-Creates a readable web external table that executes a script once on five virtual segments:
-
-``` sql
-=# CREATE EXTERNAL WEB TABLE log_output (linenum int, message text)
-    EXECUTE '/var/load_scripts/get_log_data.sh' ON 5
-    FORMAT 'TEXT' (DELIMITER '|');
-    
-```
-
-## <a id="ex6"></a>Example 6 - Writable External Table with gpfdist
-
-Creates a writable external table, *sales\_out*, that uses `gpfdist` to write output data to the file *sales.out*. The column delimiter is a pipe ( | ) and NULL is a space (' '). The file will be created in the directory specified when you started the gpfdist file server.
-
-``` sql
-=# CREATE WRITABLE EXTERNAL TABLE sales_out (LIKE sales)
-    LOCATION ('gpfdist://etl1:8081/sales.out')
-    FORMAT 'TEXT' ( DELIMITER '|' NULL ' ')
-    DISTRIBUTED BY (txn_id);
-    
-```
-
-## <a id="ex7"></a>Example 7 - Writable External Web Table with Script
-
-Creates a writable external web table, `campaign_out`, that pipes output data received by the segments to an executable script, `to_adreport_etl.sh`:
-
-``` sql
-=# CREATE WRITABLE EXTERNAL WEB TABLE campaign_out
-        (LIKE campaign)
-        EXECUTE '/var/unload_scripts/to_adreport_etl.sh' ON 6
-        FORMAT 'TEXT' (DELIMITER '|');
-```
-
-## <a id="ex8"></a>Example 8 - Readable and Writable External Tables with XML Transformations
-
-HAWQ can read and write XML data to and from external tables with gpfdist. For information about setting up an XML transform, see [Transforming XML Data](g-transforming-xml-data.html#topic75).
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-about-gpfdist-setup-and-performance.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-about-gpfdist-setup-and-performance.html.md.erb b/datamgmt/load/g-about-gpfdist-setup-and-performance.html.md.erb
deleted file mode 100644
index 28a0bfe..0000000
--- a/datamgmt/load/g-about-gpfdist-setup-and-performance.html.md.erb
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title: About gpfdist Setup and Performance
----
-
-Consider the following scenarios for optimizing your ETL network performance.
-
--   Allow network traffic to use all ETL host Network Interface Cards (NICs) simultaneously. Run one instance of `gpfdist` on the ETL host, then declare the host name of each NIC in the `LOCATION` clause of your external table definition (see [Creating External Tables - Examples](creating-external-tables-examples.html#topic44)).
-
-<a id="topic14__du165872"></a>
-<span class="figtitleprefix">Figure: </span>External Table Using Single gpfdist Instance with Multiple NICs
-
-<img src="../../images/ext_tables_multinic.jpg" class="image" width="472" height="271" />
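-
-For example, for the single-instance scenario shown above, each NIC host name appears as a separate URI in the `LOCATION` clause so that traffic is spread across both interfaces (the NIC host names below are illustrative):
-
-``` sql
-=# CREATE EXTERNAL TABLE ext_expenses
-         ( name text, date date, amount float4, category text, desc1 text )
-    LOCATION ('gpfdist://etlhost-nic1:8081/*.txt', 'gpfdist://etlhost-nic2:8081/*.txt')
-    FORMAT 'TEXT' (DELIMITER '|');
-```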
-
--   Divide external table data equally among multiple `gpfdist` instances on the ETL host. For example, on an ETL system with two NICs, run two `gpfdist` instances (one on each NIC) to optimize data load performance and divide the external table data files evenly between the two `gpfdists`.
-
-<a id="topic14__du165882"></a>
-
-<span class="figtitleprefix">Figure: </span>External Tables Using Multiple gpfdist Instances with Multiple NICs
-
-<img src="../../images/ext_tables.jpg" class="image" width="467" height="282" />
-
-**Note:** Use pipes (|) to separate formatted text when you submit files to `gpfdist`. HAWQ encloses comma-separated text strings in single or double quotes. `gpfdist` has to remove the quotes to parse the strings. Using pipes to separate formatted text avoids the extra step and improves performance.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-character-encoding.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-character-encoding.html.md.erb b/datamgmt/load/g-character-encoding.html.md.erb
deleted file mode 100644
index 9f3756d..0000000
--- a/datamgmt/load/g-character-encoding.html.md.erb
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Character Encoding
----
-
-Character encoding systems consist of a code that pairs each character from a character set with something else, such as a sequence of numbers or octets, to facilitate data transmission and storage. HAWQ supports a variety of character sets, including single-byte character sets such as the ISO 8859 series and multiple-byte character sets such as EUC (Extended UNIX Code), UTF-8, and Mule internal code. Clients can use all supported character sets transparently, but a few are not supported for use within the server as a server-side encoding.
-
-Data files must be in a character encoding recognized by HAWQ. Data files that contain invalid or unsupported encoding sequences encounter errors when loading into HAWQ.
-
-**Note:** On data files generated on a Microsoft Windows operating system, run the `dos2unix` system command to remove any Windows-only characters before loading into HAWQ.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-command-based-web-external-tables.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-command-based-web-external-tables.html.md.erb b/datamgmt/load/g-command-based-web-external-tables.html.md.erb
deleted file mode 100644
index 7830cc3..0000000
--- a/datamgmt/load/g-command-based-web-external-tables.html.md.erb
+++ /dev/null
@@ -1,26 +0,0 @@
----
-title: Command-based Web External Tables
----
-
-The output of a shell command or script defines command-based web table data. Specify the command in the `EXECUTE` clause of `CREATE EXTERNAL WEB TABLE`. The data is current as of the time the command runs. The `EXECUTE` clause runs the shell command or script on the specified master or virtual segments. The virtual segments run the command in parallel. Scripts must be executable by the `gpadmin` user and reside in the same location on the master or the hosts of virtual segments.
-
-The command that you specify in the external table definition executes from the database and cannot access environment variables from `.bashrc` or `.profile`. Set environment variables in the `EXECUTE` clause. The following external web table, for example, runs a command on the HAWQ master host:
-
-``` sql
-CREATE EXTERNAL WEB TABLE output (output text)
-EXECUTE 'PATH=/home/gpadmin/programs; export PATH; myprogram.sh'
-    ON MASTER 
-FORMAT 'TEXT';
-```
-
-The following command defines a web table that runs a script on five virtual segments.
-
-``` sql
-CREATE EXTERNAL WEB TABLE log_output (linenum int, message text) 
-EXECUTE '/var/load_scripts/get_log_data.sh' ON 5 
-FORMAT 'TEXT' (DELIMITER '|');
-```
-
-The virtual segments are selected by the resource manager at runtime.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-configuration-file-format.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-configuration-file-format.html.md.erb b/datamgmt/load/g-configuration-file-format.html.md.erb
deleted file mode 100644
index 73f51a9..0000000
--- a/datamgmt/load/g-configuration-file-format.html.md.erb
+++ /dev/null
@@ -1,66 +0,0 @@
----
-title: Configuration File Format
----
-
-The `gpfdist` configuration file uses the YAML 1.1 document format and implements a schema for defining the transformation parameters. The configuration file must be a valid YAML document.
-
-The `gpfdist` program processes the document in order and uses indentation (spaces) to determine the document hierarchy and relationships of the sections to one another. The use of white space is significant. Do not use white space for formatting and do not use tabs.
-
-The following is the basic structure of a configuration file.
-
-``` pre
----
-VERSION:   1.0.0.1
-TRANSFORMATIONS: 
-transformation_name1:
-TYPE:      input | output
-COMMAND:   command
-CONTENT:   data | paths
-SAFE:      posix-regex
-STDERR:    server | console
-transformation_name2:
-TYPE:      input | output
-COMMAND:   command 
-...
-```
-
-VERSION  
-Required. The version of the `gpfdist` configuration file schema. The current version is 1.0.0.1.
-
-TRANSFORMATIONS  
-Required. Begins the transformation specification section. A configuration file must have at least one transformation. When `gpfdist` receives a transformation request, it looks in this section for an entry with the matching transformation name.
-
-TYPE  
-Required. Specifies the direction of transformation. Values are `input` or `output`.
-
--   `input`: `gpfdist` treats the standard output of the transformation process as a stream of records to load into HAWQ.
--   `output`: `gpfdist` treats the standard input of the transformation process as a stream of records from HAWQ to transform and write to the appropriate output.
-
-COMMAND  
-Required. Specifies the command `gpfdist` will execute to perform the transformation.
-
-For input transformations, `gpfdist` invokes the command specified in the `CONTENT` setting. The command is expected to open the underlying file(s) as appropriate and produce one line of `TEXT` for each row to load into HAWQ. The input transform determines whether the entire content should be converted to one row or to multiple rows.
-
-For output transformations, `gpfdist` invokes this command as specified in the `CONTENT` setting. The output command is expected to open and write to the underlying file(s) as appropriate. The output transformation determines the final placement of the converted output.
-
-CONTENT  
-Optional. The values are `data` and `paths`. The default value is `data`.
-
--   When `CONTENT` specifies `data`, the text `%filename%` in the `COMMAND` section is replaced by the path to the file to read or write.
--   When `CONTENT` specifies `paths`, the text `%filename%` in the `COMMAND` section is replaced by the path to the temporary file that contains the list of files to read or write.
-
-The following is an example of a `COMMAND` section showing the text `%filename%` that is replaced.
-
-``` pre
-COMMAND: /bin/bash input_transform.sh %filename%
-```
-
-SAFE  
-Optional. A POSIX regular expression that the paths must match to be passed to the transformation. Specify `SAFE` when there is a concern about injection or improper interpretation of paths passed to the command. The default is no restriction on paths.
-
-STDERR  
-Optional. The values are `server` and `console`.
-
-This setting specifies how to handle standard error output from the transformation. The default, `server`, specifies that `gpfdist` will capture the standard error output from the transformation in a temporary file and send the first 8k of that file to HAWQ as an error message. The error message will appear as a SQL error. `console` specifies that `gpfdist` does not redirect or transmit the standard error output from the transformation.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-controlling-segment-parallelism.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-controlling-segment-parallelism.html.md.erb b/datamgmt/load/g-controlling-segment-parallelism.html.md.erb
deleted file mode 100644
index 4e0096c..0000000
--- a/datamgmt/load/g-controlling-segment-parallelism.html.md.erb
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Controlling Segment Parallelism
----
-
-The `gp_external_max_segs` server configuration parameter controls the number of virtual segments that can simultaneously access a single `gpfdist` instance. The default is 64. You can set the number of segments such that some segments process external data files and some perform other database processing. Set this parameter in the `hawq-site.xml` file of your master instance.
-
-The number of segments in the `gpfdist` location list specifies the minimum number of virtual segments required to serve data to a `gpfdist` external table.
-
-The `hawq_rm_nvseg_perquery_perseg_limit` and `hawq_rm_nvseg_perquery_limit` parameters also control segment parallelism by specifying the maximum number of segments used in running queries on a `gpfdist` external table on the cluster.
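-
-As a quick sanity check, you can display the current values of these parameters from a `psql` session:
-
-``` sql
-SHOW gp_external_max_segs;
-SHOW hawq_rm_nvseg_perquery_perseg_limit;
-```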
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-create-an-error-table-and-declare-a-reject-limit.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-create-an-error-table-and-declare-a-reject-limit.html.md.erb b/datamgmt/load/g-create-an-error-table-and-declare-a-reject-limit.html.md.erb
deleted file mode 100644
index ade14ea..0000000
--- a/datamgmt/load/g-create-an-error-table-and-declare-a-reject-limit.html.md.erb
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Capture Row Formatting Errors and Declare a Reject Limit
----
-
-The following SQL fragment captures formatting errors internally in HAWQ and declares a reject limit of 10 rows.
-
-``` sql
-LOG ERRORS INTO errortable SEGMENT REJECT LIMIT 10 ROWS
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-creating-and-using-web-external-tables.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-creating-and-using-web-external-tables.html.md.erb b/datamgmt/load/g-creating-and-using-web-external-tables.html.md.erb
deleted file mode 100644
index 4ef6cab..0000000
--- a/datamgmt/load/g-creating-and-using-web-external-tables.html.md.erb
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Creating and Using Web External Tables
----
-
-`CREATE EXTERNAL WEB TABLE` creates a web table definition. Web external tables allow HAWQ to treat dynamic data sources like regular database tables. Because web table data can change as a query runs, the data is not rescannable.
-
-You can define command-based or URL-based web external tables. The definition forms are distinct: you cannot mix command-based and URL-based definitions.
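-
-As a sketch, a URL-based definition simply points the `LOCATION` clause at one or more HTTP URLs, while a command-based definition uses `EXECUTE` (the URL, table, and column names below are illustrative):
-
-``` sql
-CREATE EXTERNAL WEB TABLE ext_log_feed (logline text)
-LOCATION ('http://intranet.example.com/logs/access.log')
-FORMAT 'TEXT' (DELIMITER '|');
-```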
-
--   **[Command-based Web External Tables](../../datamgmt/load/g-command-based-web-external-tables.html)**
-
--   **[URL-based Web External Tables](../../datamgmt/load/g-url-based-web-external-tables.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-define-an-external-table-with-single-row-error-isolation.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-define-an-external-table-with-single-row-error-isolation.html.md.erb b/datamgmt/load/g-define-an-external-table-with-single-row-error-isolation.html.md.erb
deleted file mode 100644
index e0c3c17..0000000
--- a/datamgmt/load/g-define-an-external-table-with-single-row-error-isolation.html.md.erb
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: Define an External Table with Single Row Error Isolation
----
-
-The following example logs errors internally in HAWQ and sets an error threshold of 10 errors.
-
-``` sql
-=# CREATE EXTERNAL TABLE ext_expenses ( name text, date date, amount float4, category text, desc1 text )
-   LOCATION ('gpfdist://etlhost-1:8081/*', 'gpfdist://etlhost-2:8082/*')
-   FORMAT 'TEXT' (DELIMITER '|')
-   LOG ERRORS INTO errortable SEGMENT REJECT LIMIT 10 ROWS;
-```
-
-The following example creates an external table, *ext\_expenses*, sets an error threshold of 10 errors, and writes error rows to the table *err\_expenses*.
-
-``` sql
-=# CREATE EXTERNAL TABLE ext_expenses
-     ( name text, date date, amount float4, category text, desc1 text )
-   LOCATION ('gpfdist://etlhost-1:8081/*', 'gpfdist://etlhost-2:8082/*')
-   FORMAT 'TEXT' (DELIMITER '|')
-   LOG ERRORS INTO err_expenses SEGMENT REJECT LIMIT 10 ROWS;
-```
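-
-After a load, you can inspect the rejected rows and the associated error messages by querying the error table directly:
-
-``` sql
-=# SELECT * FROM err_expenses;
-```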
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-defining-a-command-based-writable-external-web-table.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-defining-a-command-based-writable-external-web-table.html.md.erb b/datamgmt/load/g-defining-a-command-based-writable-external-web-table.html.md.erb
deleted file mode 100644
index 8a24474..0000000
--- a/datamgmt/load/g-defining-a-command-based-writable-external-web-table.html.md.erb
+++ /dev/null
@@ -1,43 +0,0 @@
----
-title: Defining a Command-Based Writable External Web Table
----
-
-You can define writable external web tables to send output rows to an application or script. The application must accept an input stream, reside in the same location on all of the HAWQ segment hosts, and be executable by the `gpadmin` user. All segments in the HAWQ system run the application or script, whether or not a segment has output rows to process.
-
-Use `CREATE WRITABLE EXTERNAL WEB TABLE` to define the external table and specify the application or script to run on the segment hosts. Commands execute from within the database and cannot access environment variables (such as `$PATH`). Set environment variables in the `EXECUTE` clause of your writable external table definition. For example:
-
-``` sql
-=# CREATE WRITABLE EXTERNAL WEB TABLE output (output text) 
-    EXECUTE 'export PATH=$PATH:/home/gpadmin/programs; myprogram.sh' 
-    ON 6
-    FORMAT 'TEXT'
-    DISTRIBUTED RANDOMLY;
-```
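-
-Once the table is defined, rows are sent to the script by inserting into it; for example, assuming an illustrative source table `app_log` with a single text column:
-
-``` sql
-=# INSERT INTO output SELECT message FROM app_log;
-```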
-
-The following HAWQ variables are available for use in OS commands executed by a web or writable external table. Set these variables as environment variables in the shell that executes the command(s). They can be used to identify a set of requests made by an external table statement across the HAWQ array of hosts and segment instances.
-
-<caption><span class="tablecap">Table 1. External Table EXECUTE Variables</span></caption>
-
-<a id="topic71__du224024"></a>
-
-| Variable            | Description                                                                                                                |
-|---------------------|----------------------------------------------------------------------------------------------------------------------------|
-| $GP\_CID            | Command count of the transaction executing the external table statement.                                                   |
-| $GP\_DATABASE       | The database in which the external table definition resides.                                                               |
-| $GP\_DATE           | The date on which the external table command ran.                                                                          |
-| $GP\_MASTER\_HOST   | The host name of the HAWQ master host from which the external table statement was dispatched.                              |
-| $GP\_MASTER\_PORT   | The port number of the HAWQ master instance from which the external table statement was dispatched.                        |
-| $GP\_SEG\_DATADIR   | The location of the data directory of the segment instance executing the external table command.                           |
-| $GP\_SEG\_PG\_CONF  | The location of the `hawq-site.xml` file of the segment instance executing the external table command.                     |
-| $GP\_SEG\_PORT      | The port number of the segment instance executing the external table command.                                              |
-| $GP\_SEGMENT\_COUNT | The total number of segment instances in the HAWQ system.                                                                  |
-| $GP\_SEGMENT\_ID    | The ID number of the segment instance executing the external table command (same as `dbid` in `gp_segment_configuration`). |
-| $GP\_SESSION\_ID    | The database session identifier number associated with the external table statement.                                       |
-| $GP\_SN             | Serial number of the external table scan node in the query plan of the external table statement.                           |
-| $GP\_TIME           | The time the external table command was executed.                                                                          |
-| $GP\_USER           | The database user executing the external table statement.                                                                  |
-| $GP\_XID            | The transaction ID of the external table statement.                                                                        |
-
--   **[Disabling EXECUTE for Web or Writable External Tables](../../datamgmt/load/g-disabling-execute-for-web-or-writable-external-tables.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-defining-a-file-based-writable-external-table.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-defining-a-file-based-writable-external-table.html.md.erb b/datamgmt/load/g-defining-a-file-based-writable-external-table.html.md.erb
deleted file mode 100644
index fa1ddfa..0000000
--- a/datamgmt/load/g-defining-a-file-based-writable-external-table.html.md.erb
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: Defining a File-Based Writable External Table
----
-
-Writable external tables that output data to files use the HAWQ parallel file server program, `gpfdist`, or HAWQ Extensions Framework (PXF).
-
-Use the `CREATE WRITABLE EXTERNAL TABLE` command to define the external table and specify the location and format of the output files.
-
--   With a writable external table using the `gpfdist` protocol, the HAWQ segments send their data to `gpfdist`, which writes the data to the named file. `gpfdist` must run on a host that the HAWQ segments can access over the network. `gpfdist` points to a file location on the output host and writes data received from the HAWQ segments to the file. To divide the output data among multiple files, list multiple `gpfdist` URIs in your writable external table definition.
--   A writable external web table sends data to an application as a stream of data. For example, unload data from HAWQ and send it to an application that connects to another database or ETL tool to load the data elsewhere. Writable external web tables use the `EXECUTE` clause to specify a shell command, script, or application to run on the segment hosts and accept an input stream of data. See [Defining a Command-Based Writable External Web Table](g-defining-a-command-based-writable-external-web-table.html#topic71) for more information about using `EXECUTE` commands in a writable external table definition.
-
-You can optionally declare a distribution policy for your writable external tables. By default, writable external tables use a random distribution policy. If the source table you are exporting data from has a hash distribution policy, defining the same distribution key column(s) for the writable external table improves unload performance by eliminating the requirement to move rows over the interconnect. If you unload data from a particular table, you can use the `LIKE` clause to copy the column definitions and distribution policy from the source table.
-
--   **[Example - HAWQ file server (gpfdist)](../../datamgmt/load/g-example-hawq-file-server-gpfdist.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-determine-the-transformation-schema.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-determine-the-transformation-schema.html.md.erb b/datamgmt/load/g-determine-the-transformation-schema.html.md.erb
deleted file mode 100644
index 1a4eb9b..0000000
--- a/datamgmt/load/g-determine-the-transformation-schema.html.md.erb
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Determine the Transformation Schema
----
-
-To prepare for the transformation project:
-
-1.  Determine the goal of the project, such as indexing data, analyzing data, combining data, and so on.
-2.  Examine the XML file and note the file structure and element names.
-3.  Choose the elements to import and decide if any other limits are appropriate.
-
-For example, the following XML file, *prices.xml*, is a simple, short file that contains price records. Each price record contains two fields: an item number and a price.
-
-``` xml
-<?xml version="1.0" encoding="ISO-8859-1" ?>
-<prices>
-  <pricerecord>
-    <itemnumber>708421</itemnumber>
-    <price>19.99</price>
-  </pricerecord>
-  <pricerecord>
-    <itemnumber>708466</itemnumber>
-    <price>59.25</price>
-  </pricerecord>
-  <pricerecord>
-    <itemnumber>711121</itemnumber>
-    <price>24.99</price>
-  </pricerecord>
-</prices>
-```
-
-The goal is to import all the data into a HAWQ table with an integer `itemnumber` column and a decimal `price` column.
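-
-A matching target table might be defined as follows (a minimal sketch; the demo scripts may use different names):
-
-``` sql
-CREATE TABLE prices (itemnumber int, price decimal);
-```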
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-disabling-execute-for-web-or-writable-external-tables.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-disabling-execute-for-web-or-writable-external-tables.html.md.erb b/datamgmt/load/g-disabling-execute-for-web-or-writable-external-tables.html.md.erb
deleted file mode 100644
index f0332b5..0000000
--- a/datamgmt/load/g-disabling-execute-for-web-or-writable-external-tables.html.md.erb
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Disabling EXECUTE for Web or Writable External Tables
----
-
-There is a security risk associated with allowing external tables to execute OS commands or scripts. To disable the use of `EXECUTE` in web and writable external table definitions, set the `gp_external_enable_exec` server configuration parameter to `off` in your master `hawq-site.xml` file:
-
-``` pre
-gp_external_enable_exec = off
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-escaping-in-csv-formatted-files.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-escaping-in-csv-formatted-files.html.md.erb b/datamgmt/load/g-escaping-in-csv-formatted-files.html.md.erb
deleted file mode 100644
index d07b463..0000000
--- a/datamgmt/load/g-escaping-in-csv-formatted-files.html.md.erb
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Escaping in CSV Formatted Files
----
-
-By default, the escape character is a `"` (double quote) for CSV-formatted files. If you want to use a different escape character, use the `ESCAPE` clause of `COPY`, `CREATE EXTERNAL TABLE` or the `hawq load` control file to declare a different escape character. In cases where your selected escape character is present in your data, you can use it to escape itself.
-
-For example, suppose you have a table with three columns and you want to load the following three fields:
-
--   `Free trip to A,B`
--   `5.89`
--   `Special rate "1.79"`
-
-Your designated delimiter character is `,` (comma), and your designated escape character is `"` (double quote). The formatted row in your data file looks like this:
-
-``` pre
-"Free trip to A,B","5.89","Special rate ""1.79"""
-```
-
-The data value with a comma character that is part of the data is enclosed in double quotes. The double quotes that are part of the data are escaped with a double quote even though the field value is enclosed in double quotes.
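-
-To load rows formatted this way, the external table definition only needs to declare the `CSV` format with the default quote and escape characters; for example (the table, column, and host names are illustrative):
-
-``` sql
-CREATE EXTERNAL TABLE ext_deals (promo text, price numeric, rate_note text)
-LOCATION ('gpfdist://etlhost-1:8081/deals.csv')
-FORMAT 'CSV' (DELIMITER ',');
-```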
-
-Embedding the entire field inside a set of double quotes guarantees preservation of leading and trailing whitespace characters:
-
-`"Free trip to A,B ","5.89 ","Special rate ""1.79"" "`
-
-**Note:** In CSV mode, all characters are significant. A quoted value surrounded by white space, or any characters other than `DELIMITER`, includes those characters. This can cause errors if you import data from a system that pads CSV lines with white space to some fixed width. In this case, preprocess the CSV file to remove the trailing white space before importing the data into HAWQ.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-escaping-in-text-formatted-files.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-escaping-in-text-formatted-files.html.md.erb b/datamgmt/load/g-escaping-in-text-formatted-files.html.md.erb
deleted file mode 100644
index e24a2b7..0000000
--- a/datamgmt/load/g-escaping-in-text-formatted-files.html.md.erb
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Escaping in Text Formatted Files
----
-
-By default, the escape character is a \\ (backslash) for text-formatted files. You can declare a different escape character in the `ESCAPE` clause of `COPY`, `CREATE EXTERNAL TABLE`, or the `hawq load` control file. If your escape character appears in your data, use it to escape itself.
-
-For example, suppose you have a table with three columns and you want to load the following three fields:
-
--   `backslash = \`
--   `vertical bar = |`
--   `exclamation point = !`
-
-Your designated delimiter character is `|` (pipe character), and your designated escape character is `\` (backslash). The formatted row in your data file looks like this:
-
-``` pre
-backslash = \\ | vertical bar = \| | exclamation point = !
-```
-
-Notice how the backslash character that is part of the data is escaped with another backslash character, and the pipe character that is part of the data is escaped with a backslash character.
-
-You can use the escape character to escape octal and hexadecimal sequences. The escaped value is converted to the equivalent character when loaded into HAWQ. For example, to load the ampersand character (`&`), use the escape character to escape its equivalent hexadecimal (`\0x26`) or octal (`\046`) representation.
-
-You can disable escaping in `TEXT`-formatted files using the `ESCAPE` clause of `COPY`, `CREATE EXTERNAL TABLE` or the `hawq load` control file as follows:
-
-``` pre
-ESCAPE 'OFF'
-```
-
-This is useful for input data that contains many backslash characters, such as web log data.
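-
-For example, a readable external table for raw web log lines might turn escaping off entirely (the table, column, and host names are illustrative):
-
-``` sql
-CREATE EXTERNAL TABLE ext_weblog (logline text)
-LOCATION ('gpfdist://etlhost-1:8081/weblogs/*.txt')
-FORMAT 'TEXT' (DELIMITER '|' ESCAPE 'OFF');
-```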
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-escaping.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-escaping.html.md.erb b/datamgmt/load/g-escaping.html.md.erb
deleted file mode 100644
index 0a1e62a..0000000
--- a/datamgmt/load/g-escaping.html.md.erb
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: Escaping
----
-
-There are two reserved characters that have special meaning to HAWQ:
-
--   The designated delimiter character separates columns or fields in the data file.
--   The newline character designates a new row in the data file.
-
-If your data contains either of these characters, you must escape the character so that HAWQ treats it as data and not as a field separator or new row. By default, the escape character is a \\ (backslash) for text-formatted files and a double quote (") for CSV-formatted files.
-
--   **[Escaping in Text Formatted Files](../../datamgmt/load/g-escaping-in-text-formatted-files.html)**
-
--   **[Escaping in CSV Formatted Files](../../datamgmt/load/g-escaping-in-csv-formatted-files.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-example-1-dblp-database-publications-in-demo-directory.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-example-1-dblp-database-publications-in-demo-directory.html.md.erb b/datamgmt/load/g-example-1-dblp-database-publications-in-demo-directory.html.md.erb
deleted file mode 100644
index 4f61396..0000000
--- a/datamgmt/load/g-example-1-dblp-database-publications-in-demo-directory.html.md.erb
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Command-based Web External Tables
----
-
-The output of a shell command or script defines command-based web table data. Specify the command in the `EXECUTE` clause of `CREATE EXTERNAL WEB TABLE`. The data is current as of the time the command runs. The `EXECUTE` clause runs the shell command or script on the master and/or segment hosts, as specified. The command or script must reside on the hosts corresponding to the host(s) defined in the `EXECUTE` clause.
-
-By default, the command is run on segment hosts when active segments have output rows to process. For example, if each segment host runs four primary segment instances that have output rows to process, the command runs four times per segment host. You can optionally limit the number of segment instances that execute the web table command. All segments included in the web table definition in the `ON` clause run the command in parallel.
-
-The command that you specify in the external table definition executes from the database and cannot access environment variables from `.bashrc` or `.profile`. Set environment variables in the `EXECUTE` clause. For example:
-
-``` sql
-=# CREATE EXTERNAL WEB TABLE output (output text)
-EXECUTE 'PATH=/home/gpadmin/programs; export PATH; myprogram.sh'
-    ON MASTER
-FORMAT 'TEXT';
-```
-
-Scripts must be executable by the `gpadmin` user and reside in the same location on the master or segment hosts.
-
-The following command defines a web table that runs a script. The script runs on five virtual segments selected by the resource manager at runtime.
-
-``` sql
-=# CREATE EXTERNAL WEB TABLE log_output
-(linenum int, message text)
-EXECUTE '/var/load_scripts/get_log_data.sh' ON 5
-FORMAT 'TEXT' (DELIMITER '|');
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-example-hawq-file-server-gpfdist.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-example-hawq-file-server-gpfdist.html.md.erb b/datamgmt/load/g-example-hawq-file-server-gpfdist.html.md.erb
deleted file mode 100644
index a0bf669..0000000
--- a/datamgmt/load/g-example-hawq-file-server-gpfdist.html.md.erb
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Example - HAWQ file server (gpfdist)
----
-
-``` sql
-=# CREATE WRITABLE EXTERNAL TABLE unload_expenses
-( LIKE expenses )
-LOCATION ('gpfdist://etlhost-1:8081/expenses1.out',
-'gpfdist://etlhost-2:8081/expenses2.out')
-FORMAT 'TEXT' (DELIMITER ',');
-```
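-
-With the writable external table in place, the unload itself is a plain `INSERT` from the source table:
-
-``` sql
-=# INSERT INTO unload_expenses SELECT * FROM expenses;
-```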
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-example-irs-mef-xml-files-in-demo-directory.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-example-irs-mef-xml-files-in-demo-directory.html.md.erb b/datamgmt/load/g-example-irs-mef-xml-files-in-demo-directory.html.md.erb
deleted file mode 100644
index 6f5b9e3..0000000
--- a/datamgmt/load/g-example-irs-mef-xml-files-in-demo-directory.html.md.erb
+++ /dev/null
@@ -1,54 +0,0 @@
----
-title: Example using IRS MeF XML Files (In demo Directory)
----
-
-This example demonstrates loading a sample IRS Modernized eFile tax return using a Joost STX transformation. The data is in the form of a complex XML file.
-
-The U.S. Internal Revenue Service (IRS) made a significant commitment to XML and specifies its use in its Modernized e-File (MeF) system. In MeF, each tax return is an XML document with a deep hierarchical structure that closely reflects the particular form of the underlying tax code.
-
-XML, XML Schema and stylesheets play a role in their data representation and business workflow. The actual XML data is extracted from a ZIP file attached to a MIME "transmission file" message. For more information about MeF, see [Modernized e-File (Overview)](http://www.irs.gov/uac/Modernized-e-File-Overview) on the IRS web site.
-
-The sample XML document, *RET990EZ\_2006.xml*, is about 350KB in size with two elements:
-
--   ReturnHeader
--   ReturnData
-
-The &lt;ReturnHeader&gt; element contains general details about the tax return such as the taxpayer's name, the tax year of the return, and the preparer. The &lt;ReturnData&gt; element contains multiple sections with specific details about the tax return and associated schedules.
-
-The following is an abridged sample of the XML file.
-
-``` xml
-<?xml version="1.0" encoding="UTF-8"?> 
-<Return returnVersion="2006v2.0"
-   xmlns="http://www.irs.gov/efile" 
-   xmlns:efile="http://www.irs.gov/efile"
-   xsi:schemaLocation="http://www.irs.gov/efile"
-   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> 
-   <ReturnHeader binaryAttachmentCount="1">
-     <ReturnId>AAAAAAAAAAAAAAAAAAAA</ReturnId>
-     <Timestamp>1999-05-30T12:01:01+05:01</Timestamp>
-     <ReturnType>990EZ</ReturnType>
-     <TaxPeriodBeginDate>2005-01-01</TaxPeriodBeginDate>
-     <TaxPeriodEndDate>2005-12-31</TaxPeriodEndDate>
-     <Filer>
-       <EIN>011248772</EIN>
-       ... more data ...
-     </Filer>
-     <Preparer>
-       <Name>Percy Polar</Name>
-       ... more data ...
-     </Preparer>
-     <TaxYear>2005</TaxYear>
-   </ReturnHeader>
-   ... more data ..
-```
-
-The goal is to import all the data into a HAWQ database. First, convert the XML document into text with newlines "escaped", with two columns: `ReturnId` and a single column on the end for the entire MeF tax return. For example:
-
-``` pre
-AAAAAAAAAAAAAAAAAAAA|<Return returnVersion="2006v2.0"... 
-```
-
-Load the data into HAWQ.
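-
-A two-column target table for this converted output might look like the following (a sketch; the table and column names are illustrative):
-
-``` sql
-CREATE TABLE mef_returns (returnid text, return_xml text);
-```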
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-example-witsml-files-in-demo-directory.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-example-witsml-files-in-demo-directory.html.md.erb b/datamgmt/load/g-example-witsml-files-in-demo-directory.html.md.erb
deleted file mode 100644
index 0484523..0000000
--- a/datamgmt/load/g-example-witsml-files-in-demo-directory.html.md.erb
+++ /dev/null
@@ -1,54 +0,0 @@
----
-title: Example using WITSML™ Files (In demo Directory)
----
-
-This example demonstrates loading sample data describing an oil rig using a Joost STX transformation. The data is in the form of a complex XML file downloaded from energistics.org.
-
-The Wellsite Information Transfer Standard Markup Language (WITSML™) is an oil industry initiative to provide open, non-proprietary, standard interfaces for technology and software to share information among oil companies, service companies, drilling contractors, application vendors, and regulatory agencies. For more information about WITSML™, see [http://www.witsml.org](http://www.witsml.org).
-
-The oil rig information consists of a top level `<rigs>` element with multiple child elements such as `<documentInfo>`, `<rig>`, and so on. The following excerpt from the file shows the type of information in the `<rig>` tag.
-
-``` xml
-<?xml version="1.0" encoding="UTF-8"?>
-<?xml-stylesheet href="../stylesheets/rig.xsl" type="text/xsl" media="screen"?>
-<rigs 
- xmlns="http://www.witsml.org/schemas/131" 
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
- xsi:schemaLocation="http://www.witsml.org/schemas/131 ../obj_rig.xsd" 
- version="1.3.1.1">
- <documentInfo>
- ... misc data ...
- </documentInfo>
- <rig uidWell="W-12" uidWellbore="B-01" uid="xr31">
-     <nameWell>6507/7-A-42</nameWell>
-     <nameWellbore>A-42</nameWellbore>
-     <name>Deep Drill #5</name>
-     <owner>Deep Drilling Co.</owner>
-     <typeRig>floater</typeRig>
-     <manufacturer>Fitsui Engineering</manufacturer>
-     <yearEntService>1980</yearEntService>
-     <classRig>ABS Class A1 M CSDU AMS ACCU</classRig>
-     <approvals>DNV</approvals>
- ... more data ...
-```
-
-The goal is to import the information for this rig into HAWQ.
-
-The sample document, *rig.xml*, is about 11KB in size. The input does not contain tabs so the relevant information can be converted into records delimited with a pipe (|).
-
-`W-12|6507/7-A-42|xr31|Deep Drill #5|Deep Drilling Co.|John Doe|John.Doe@example.com|`
-
-With the columns:
-
--   `well_uid text`, -- e.g. W-12
--   `well_name text`, -- e.g. 6507/7-A-42
--   `rig_uid text`, -- e.g. xr31
--   `rig_name text`, -- e.g. Deep Drill \#5
--   `rig_owner text`, -- e.g. Deep Drilling Co.
--   `rig_contact text`, -- e.g. John Doe
--   `rig_email text`, -- e.g. John.Doe@example.com
--   `doc xml`
-
-Then, load the data into HAWQ.
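-
-A table definition matching these columns might look like this (a sketch; the demo scripts may name things differently):
-
-``` sql
-CREATE TABLE rigs (
-  well_uid text, well_name text, rig_uid text, rig_name text,
-  rig_owner text, rig_contact text, rig_email text, doc xml
-);
-```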
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/load/g-examples-read-fixed-width-data.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/g-examples-read-fixed-width-data.html.md.erb b/datamgmt/load/g-examples-read-fixed-width-data.html.md.erb
deleted file mode 100644
index 174529a..0000000
--- a/datamgmt/load/g-examples-read-fixed-width-data.html.md.erb
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Examples - Read Fixed-Width Data
----
-
-The following examples show how to read fixed-width data.
-
-## Example 1 – Loading a table with PRESERVED\_BLANKS on
-
-``` sql
-CREATE READABLE EXTERNAL TABLE students (
-  name varchar(20), address varchar(30), age int)
-LOCATION ('gpfdist://host:port/file/path/')
-FORMAT 'CUSTOM' (formatter=fixedwidth_in, name=20, address=30, age=4,
-        preserve_blanks='on',null='NULL');
-```
-
-## Example 2 – Loading data with no line delimiter
-
-``` sql
-CREATE READABLE EXTERNAL TABLE students (
-  name varchar(20), address varchar(30), age int)
-LOCATION ('gpfdist://host:port/file/path/')
-FORMAT 'CUSTOM' (formatter=fixedwidth_in, name='20', address='30', age='4', 
-        line_delim='?@');
-```
-
-## Example 3 – Create a writable external table with a \\r\\n line delimiter
-
-``` sql
-CREATE WRITABLE EXTERNAL TABLE students_out (
-  name varchar(20), address varchar(30), age int)
-LOCATION ('gpfdist://host:port/file/path/filename')     
-FORMAT 'CUSTOM' (formatter=fixedwidth_out, 
-   name=20, address=30, age=4, line_delim=E'\r\n');
-```
-
-



[10/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/svg/hawq_architecture_components.svg
----------------------------------------------------------------------
diff --git a/mdimages/svg/hawq_architecture_components.svg b/mdimages/svg/hawq_architecture_components.svg
deleted file mode 100644
index 78d421a..0000000
--- a/mdimages/svg/hawq_architecture_components.svg
+++ /dev/null
@@ -1,1083 +0,0 @@
-[SVG markup for the 960 x 720 hawq_architecture_components diagram: Inkscape metadata, arrow-marker definitions, and the path data that draws the architecture component shapes and their labels.]
-       id="path4015"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:nonzero" />
-    <path
-       d="m 192.41995,15.619595 0,0 c 0,-4.766269 3.86382,-8.630093 8.63008,-8.630093 l 129.2595,0 c 2.28885,0 4.48395,0.9092393 6.10239,2.5276961 1.61847,1.6184559 2.52771,3.8135539 2.52771,6.1023969 l 0,34.51934 c 0,4.76627 -3.86383,8.630093 -8.6301,8.630093 l -129.2595,0 c -4.76626,0 -8.63008,-3.863823 -8.63008,-8.630093 z"
-       id="path4017"
-       inkscape:connector-curvature="0"
-       style="fill:#ff9900;fill-rule:nonzero" />
-    <path
-       d="m 192.41995,15.619595 0,0 c 0,-4.766269 3.86382,-8.630093 8.63008,-8.630093 l 129.2595,0 c 2.28885,0 4.48395,0.9092393 6.10239,2.5276961 1.61847,1.6184559 2.52771,3.8135539 2.52771,6.1023969 l 0,34.51934 c 0,4.76627 -3.86383,8.630093 -8.6301,8.630093 l -129.2595,0 c -4.76626,0 -8.63008,-3.863823 -8.63008,-8.630093 z"
-       id="path4019"
-       inkscape:connector-curvature="0"
-       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 245.06015,39.799263 0,-5.765625 -5.23438,-7.828125 2.1875,0 2.67188,4.09375 q 0.75,1.15625 1.39062,2.296875 0.60938,-1.0625 1.48438,-2.40625 l 2.625,-3.984375 2.10937,0 -5.4375,7.828125 0,5.765625 -1.79687,0 z m 7.11545,0 5.23437,-13.59375 1.9375,0 5.5625,13.59375 -2.04687,0 -1.59375,-4.125 -5.6875,0 -1.48438,4.125 -1.92187,0 z m 3.92187,-5.578125 4.60938,0 -1.40625,-3.78125 q -0.65625,-1.703125 -0.96875,-2.8125 -0.26563,1.3125 -0.73438,2.59375 l -1.5,4 z m 10.05295,5.578125 0,-13.59375 6.03125,0 q 1.8125,0 2.75,0.359375 0.95313,0.359375 1.51563,1.296875 0.5625,0.921875 0.5625,2.046875 0,1.453125 -0.9375,2.453125 -0.92188,0.984375 -2.89063,1.25 0.71875,0.34375 1.09375,0.671875 0.78125,0.734375 1.48438,1.8125 l 2.375,3.703125 -2.26563,0 -1.79687,-2.828125 q -0.79688,-1.21875 -1.3125,-1.875 -0.5,-0.65625 -0.90625,-0.90625 -0.40625,-0.265625 -0.8125,-0.359375 -0.3125,-0.07813 -1.01563,-0.07813 l -2.07812,0 0,6.046875 -1.79688,0 z m 1.79688,-7.59375 3.85937,0 q 1.23438,0 1.9
 2188,-0.25 0.70312,-0.265625 1.0625,-0.828125 0.375,-0.5625 0.375,-1.21875 0,-0.96875 -0.70313,-1.578125 -0.70312,-0.625 -2.21875,-0.625 l -4.29687,0 0,4.5 z m 11.62918,7.59375 0,-13.59375 1.84375,0 7.14062,10.671875 0,-10.671875 1.71875,0 0,13.59375 -1.84375,0 -7.14062,-10.6875 0,10.6875 -1.71875,0 z"
-       id="path4021"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:nonzero" />
-    <path
-       d="m 145.27034,447.79276 0,0 c 0,-3.1398 2.54533,-5.68515 5.68515,-5.68515 l 106.51945,0 c 1.50782,0 2.95386,0.59897 4.02002,1.66516 1.06617,1.06616 1.66514,2.51221 1.66514,4.01999 l 0,22.73993 c 0,3.13983 -2.54532,5.68515 -5.68515,5.68515 l -106.51946,0 c -3.13982,0 -5.68515,-2.54532 -5.68515,-5.68515 z"
-       id="path4023"
-       inkscape:connector-curvature="0"
-       style="fill:#efefef;fill-rule:nonzero" />
-    <path
-       d="m 145.27034,447.79276 0,0 c 0,-3.1398 2.54533,-5.68515 5.68515,-5.68515 l 106.51945,0 c 1.50782,0 2.95386,0.59897 4.02002,1.66516 1.06617,1.06616 1.66514,2.51221 1.66514,4.01999 l 0,22.73993 c 0,3.13983 -2.54532,5.68515 -5.68515,5.68515 l -106.51946,0 c -3.13982,0 -5.68515,-2.54532 -5.68515,-5.68515 z"
-       id="path4025"
-       inkscape:connector-curvature="0"
-       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 167.73932,461.70773 1.6875,-0.14062 q 0.125,1.01562 0.5625,1.67187 0.4375,0.65625 1.35937,1.0625 0.9375,0.40625 2.09375,0.40625 1.03125,0 1.8125,-0.3125 0.79688,-0.3125 1.1875,-0.84375 0.39063,-0.53125 0.39063,-1.15625 0,-0.64062 -0.375,-1.10937 -0.375,-0.48438 -1.23438,-0.8125 -0.54687,-0.21875 -2.42187,-0.65625 -1.875,-0.45313 -2.625,-0.85938 -0.96875,-0.51562 -1.45313,-1.26562 -0.46875,-0.75 -0.46875,-1.6875 0,-1.03125 0.57813,-1.92188 0.59375,-0.90625 1.70312,-1.35937 1.125,-0.46875 2.5,-0.46875 1.51563,0 2.67188,0.48437 1.15625,0.48438 1.76562,1.4375 0.625,0.9375 0.67188,2.14063 l -1.71875,0.125 q -0.14063,-1.28125 -0.95313,-1.9375 -0.79687,-0.67188 -2.35937,-0.67188 -1.625,0 -2.375,0.60938 -0.75,0.59375 -0.75,1.4375 0,0.73437 0.53125,1.20312 0.51562,0.46875 2.70312,0.96875 2.20313,0.5 3.01563,0.875 1.1875,0.54688 1.75,1.39063 0.57812,0.82812 0.57812,1.92187 0,1.09375 -0.625,2.0625 -0.625,0.95313 -1.79687,1.48438 -1.15625,0.53125 -2.60938,0.53125 -1.84375,0 -3.09375
 ,-0.53125 -1.25,-0.54688 -1.96875,-1.625 -0.70312,-1.07813 -0.73437,-2.45313 z m 19.5842,1.20313 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 l -7.34375,0 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 5.5,0 q -0.10937,-1.23437 -0.625,-1.85937 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z m 8.81322,6.6875 1.60937,0.25 q 0.10938,0.75 0.57813,1.09375 0.60937,0.45313 1.6875,0.45313 1.17187,0 1.79687,-0.46875 0.625,-0.45313 0.85938,-1.28125 0.125,-0.51563 0.10937,-2.15625 -1.09375,1.29687 -2.71875,1.29687 -2.03125,0 -3.15625,-1.46875 -1.10937,-1.46875 -1.10937,-3.51562 0,-1.40625 0.51562,-2.59375 0.51563,-1
 .20313 1.48438,-1.84375 0.96875,-0.65625 2.26562,-0.65625 1.75,0 2.875,1.40625 l 0,-1.1875 1.54688,0 0,8.51562 q 0,2.3125 -0.46875,3.26563 -0.46875,0.96875 -1.48438,1.51562 -1.01562,0.5625 -2.5,0.5625 -1.76562,0 -2.85937,-0.79687 -1.07813,-0.79688 -1.03125,-2.39063 z m 1.375,-5.92187 q 0,1.95312 0.76562,2.84375 0.78125,0.89062 1.9375,0.89062 1.14063,0 1.92188,-0.89062 0.78125,-0.89063 0.78125,-2.78125 0,-1.8125 -0.8125,-2.71875 -0.79688,-0.92188 -1.92188,-0.92188 -1.10937,0 -1.89062,0.90625 -0.78125,0.89063 -0.78125,2.67188 z m 9.29759,5.10937 0,-9.85937 1.5,0 0,1.39062 q 0.45313,-0.71875 1.21875,-1.15625 0.78125,-0.45312 1.76563,-0.45312 1.09375,0 1.79687,0.45312 0.70313,0.45313 0.98438,1.28125 1.17187,-1.73437 3.04689,-1.73437 1.46875,0 2.25,0.8125 0.79687,0.8125 0.79687,2.5 l 0,6.76562 -1.67187,0 0,-6.20312 q 0,-1 -0.15625,-1.4375 -0.15625,-0.45313 -0.59375,-0.71875 -0.42188,-0.26563 -1,-0.26563 -1.03127,0 -1.71877,0.6875 -0.6875,0.6875 -0.6875,2.21875 l 0,5.71875 -1.67187,0 0,-6
 .40625 q 0,-1.10937 -0.40625,-1.65625 -0.40625,-0.5625 -1.34375,-0.5625 -0.70313,0 -1.3125,0.375 -0.59375,0.35938 -0.85938,1.07813 -0.26562,0.71875 -0.26562,2.0625 l 0,5.10937 -1.67188,0 z m 22.29082,-3.17187 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 l -7.34375,0 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 5.5,0 q -0.10937,-1.23437 -0.625,-1.85937 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z m 9.1101,5.875 0,-9.85937 1.5,0 0,1.40625 q 1.09375,-1.625 3.14062,-1.625 0.89063,0 1.64063,0.32812 0.75,0.3125 1.10937,0.84375 0.375,0.51563 0.53125,1.21875 0.0937,0.46875 0.0937,1.625 l 0,6.0625 -1.67187,0 
 0,-6 q 0,-1.01562 -0.20313,-1.51562 -0.1875,-0.51563 -0.6875,-0.8125 -0.5,-0.29688 -1.17187,-0.29688 -1.0625,0 -1.84375,0.67188 -0.76563,0.67187 -0.76563,2.57812 l 0,5.375 -1.67187,0 z m 14.03196,-1.5 0.23438,1.48438 q -0.70313,0.14062 -1.26563,0.14062 -0.90625,0 -1.40625,-0.28125 -0.5,-0.29687 -0.70312,-0.75 -0.20313,-0.46875 -0.20313,-1.98437 l 0,-5.65625 -1.23437,0 0,-1.3125 1.23437,0 0,-2.4375 1.65625,-1 0,3.4375 1.6875,0 0,1.3125 -1.6875,0 0,5.75 q 0,0.71875 0.0781,0.92187 0.0937,0.20313 0.29687,0.32813 0.20313,0.125 0.57813,0.125 0.26562,0 0.73437,-0.0781 z"
-       id="path4027"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:nonzero" />
-    <path
-       d="m 617.6798,447.79276 0,0 c 0,-3.1398 2.54529,-5.68515 5.68512,-5.68515 l 106.51947,0 c 1.50781,0 2.95386,0.59897 4.02002,1.66516 1.06616,1.06616 1.66516,2.51221 1.66516,4.01999 l 0,22.73993 c 0,3.13983 -2.54535,5.68515 -5.68518,5.68515 l -106.51947,0 c -3.13983,0 -5.68512,-2.54532 -5.68512,-5.68515 z"
-       id="path4029"
-       inkscape:connector-curvature="0"
-       style="fill:#efefef;fill-rule:nonzero" />
-    <path
-       d="m 617.6798,447.79276 0,0 c 0,-3.1398 2.54529,-5.68515 5.68512,-5.68515 l 106.51947,0 c 1.50781,0 2.95386,0.59897 4.02002,1.66516 1.06616,1.06616 1.66516,2.51221 1.66516,4.01999 l 0,22.73993 c 0,3.13983 -2.54535,5.68515 -5.68518,5.68515 l -106.51947,0 c -3.13983,0 -5.68512,-2.54532 -5.68512,-5.68515 z"
-       id="path4031"
-       inkscape:connector-curvature="0"
-       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 640.1488,461.70773 1.6875,-0.14062 q 0.125,1.01562 0.5625,1.67187 0.4375,0.65625 1.35938,1.0625 0.9375,0.40625 2.09375,0.40625 1.03125,0 1.8125,-0.3125 0.79687,-0.3125 1.1875,-0.84375 0.39062,-0.53125 0.39062,-1.15625 0,-0.64062 -0.375,-1.10937 -0.375,-0.48438 -1.23437,-0.8125 -0.54688,-0.21875 -2.42188,-0.65625 -1.875,-0.45313 -2.625,-0.85938 -0.96875,-0.51562 -1.45312,-1.26562 -0.46875,-0.75 -0.46875,-1.6875 0,-1.03125 0.57812,-1.92188 0.59375,-0.90625 1.70313,-1.35937 1.125,-0.46875 2.5,-0.46875 1.51562,0 2.67187,0.48437 1.15625,0.48438 1.76563,1.4375 0.625,0.9375 0.67187,2.14063 l -1.71875,0.125 q -0.14062,-1.28125 -0.95312,-1.9375 -0.79688,-0.67188 -2.35938,-0.67188 -1.625,0 -2.375,0.60938 -0.75,0.59375 -0.75,1.4375 0,0.73437 0.53125,1.20312 0.51563,0.46875 2.70313,0.96875 2.20312,0.5 3.01562,0.875 1.1875,0.54688 1.75,1.39063 0.57813,0.82812 0.57813,1.92187 0,1.09375 -0.625,2.0625 -0.625,0.95313 -1.79688,1.48438 -1.15625,0.53125 -2.60937,0.53125 -1.84375,0 -3.09375,
 -0.53125 -1.25,-0.54688 -1.96875,-1.625 -0.70313,-1.07813 -0.73438,-2.45313 z m 19.58417,1.20313 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 l -7.34375,0 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 5.5,0 q -0.10937,-1.23437 -0.625,-1.85937 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z m 8.81323,6.6875 1.60938,0.25 q 0.10937,0.75 0.57812,1.09375 0.60938,0.45313 1.6875,0.45313 1.17188,0 1.79688,-0.46875 0.625,-0.45313 0.85937,-1.28125 0.125,-0.51563 0.10938,-2.15625 -1.09375,1.29687 -2.71875,1.29687 -2.03125,0 -3.15625,-1.46875 -1.10938,-1.46875 -1.10938,-3.51562 0,-1.40625 0.51563,-2.59375 0.51562,-1
 .20313 1.48437,-1.84375 0.96875,-0.65625 2.26563,-0.65625 1.75,0 2.875,1.40625 l 0,-1.1875 1.54687,0 0,8.51562 q 0,2.3125 -0.46875,3.26563 -0.46875,0.96875 -1.48437,1.51562 -1.01563,0.5625 -2.5,0.5625 -1.76563,0 -2.85938,-0.79687 -1.07812,-0.79688 -1.03125,-2.39063 z m 1.375,-5.92187 q 0,1.95312 0.76563,2.84375 0.78125,0.89062 1.9375,0.89062 1.14062,0 1.92187,-0.89062 0.78125,-0.89063 0.78125,-2.78125 0,-1.8125 -0.8125,-2.71875 -0.79687,-0.92188 -1.92187,-0.92188 -1.10938,0 -1.89063,0.90625 -0.78125,0.89063 -0.78125,2.67188 z m 9.29761,5.10937 0,-9.85937 1.5,0 0,1.39062 q 0.45313,-0.71875 1.21875,-1.15625 0.78125,-0.45312 1.76563,-0.45312 1.09375,0 1.79687,0.45312 0.70313,0.45313 0.98438,1.28125 1.17187,-1.73437 3.04687,-1.73437 1.46875,0 2.25,0.8125 0.79688,0.8125 0.79688,2.5 l 0,6.76562 -1.67188,0 0,-6.20312 q 0,-1 -0.15625,-1.4375 -0.15625,-0.45313 -0.59375,-0.71875 -0.42187,-0.26563 -1,-0.26563 -1.03125,0 -1.71875,0.6875 -0.6875,0.6875 -0.6875,2.21875 l 0,5.71875 -1.67187,0 0,-6
 .40625 q 0,-1.10937 -0.40625,-1.65625 -0.40625,-0.5625 -1.34375,-0.5625 -0.70313,0 -1.3125,0.375 -0.59375,0.35938 -0.85938,1.07813 -0.26562,0.71875 -0.26562,2.0625 l 0,5.10937 -1.67188,0 z m 22.29077,-3.17187 1.71875,0.21875 q -0.40625,1.5 -1.51562,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42188,-1.32812 -1.26562,-1.32813 -1.26562,-3.73438 0,-2.48437 1.26562,-3.85937 1.28125,-1.375 3.32813,-1.375 1.98437,0 3.23437,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 l -7.34375,0 q 0.0937,1.625 0.92187,2.48438 0.82813,0.85937 2.0625,0.85937 0.90625,0 1.54688,-0.46875 0.65625,-0.48437 1.04687,-1.54687 z m -5.48437,-2.70313 5.5,0 q -0.10938,-1.23437 -0.625,-1.85937 -0.79688,-0.96875 -2.07813,-0.96875 -1.14062,0 -1.9375,0.78125 -0.78125,0.76562 -0.85937,2.04687 z m 9.1101,5.875 0,-9.85937 1.5,0 0,1.40625 q 1.09375,-1.625 3.14063,-1.625 0.89062,0 1.64062,0.32812 0.75,0.3125 1.10938,0.84375 0.375,0.51563 0.53125,1.21875 0.0937,0.46875 0.0937,1.625 l 0,6.0625 -1.67188,0 
 0,-6 q 0,-1.01562 -0.20312,-1.51562 -0.1875,-0.51563 -0.6875,-0.8125 -0.5,-0.29688 -1.17188,-0.29688 -1.0625,0 -1.84375,0.67188 -0.76562,0.67187 -0.76562,2.57812 l 0,5.375 -1.67188,0 z m 14.03199,-1.5 0.23437,1.48438 q -0.70312,0.14062 -1.26562,0.14062 -0.90625,0 -1.40625,-0.28125 -0.5,-0.29687 -0.70313,-0.75 -0.20312,-0.46875 -0.20312,-1.98437 l 0,-5.65625 -1.23438,0 0,-1.3125 1.23438,0 0,-2.4375 1.65625,-1 0,3.4375 1.6875,0 0,1.3125 -1.6875,0 0,5.75 q 0,0.71875 0.0781,0.92187 0.0937,0.20313 0.29688,0.32813 0.20312,0.125 0.57812,0.125 0.26563,0 0.73438,-0.0781 z"
-       id="path4033"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:nonzero" />
-    <path
-       d="m 389.47507,447.79276 0,0 c 0,-3.1398 2.54532,-5.68515 5.68515,-5.68515 l 106.51947,0 c 1.50778,0 2.95383,0.59897 4.01999,1.66516 1.06619,1.06616 1.66516,2.51221 1.66516,4.01999 l 0,22.73993 c 0,3.13983 -2.54535,5.68515 -5.68515,5.68515 l -106.51947,0 c -3.13983,0 -5.68515,-2.54532 -5.68515,-5.68515 z"
-       id="path4035"
-       inkscape:connector-curvature="0"
-       style="fill:#efefef;fill-rule:nonzero" />
-    <path
-       d="m 389.47507,447.79276 0,0 c 0,-3.1398 2.54532,-5.68515 5.68515,-5.68515 l 106.51947,0 c 1.50778,0 2.95383,0.59897 4.01999,1.66516 1.06619,1.06616 1.66516,2.51221 1.66516,4.01999 l 0,22.73993 c 0,3.13983 -2.54535,5.68515 -5.68515,5.68515 l -106.51947,0 c -3.13983,0 -5.68515,-2.54532 -5.68515,-5.68515 z"
-       id="path4037"
-       inkscape:connector-curvature="0"
-       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 411.94406,461.70773 1.6875,-0.14062 q 0.125,1.01562 0.5625,1.67187 0.4375,0.65625 1.35937,1.0625 0.9375,0.40625 2.09375,0.40625 1.03125,0 1.8125,-0.3125 0.79688,-0.3125 1.1875,-0.84375 0.39063,-0.53125 0.39063,-1.15625 0,-0.64062 -0.375,-1.10937 -0.375,-0.48438 -1.23438,-0.8125 -0.54687,-0.21875 -2.42187,-0.65625 -1.875,-0.45313 -2.625,-0.85938 -0.96875,-0.51562 -1.45313,-1.26562 -0.46875,-0.75 -0.46875,-1.6875 0,-1.03125 0.57813,-1.92188 0.59375,-0.90625 1.70312,-1.35937 1.125,-0.46875 2.5,-0.46875 1.51563,0 2.67188,0.48437 1.15625,0.48438 1.76562,1.4375 0.625,0.9375 0.67188,2.14063 l -1.71875,0.125 q -0.14063,-1.28125 -0.95313,-1.9375 -0.79687,-0.67188 -2.35937,-0.67188 -1.625,0 -2.375,0.60938 -0.75,0.59375 -0.75,1.4375 0,0.73437 0.53125,1.20312 0.51562,0.46875 2.70312,0.96875 2.20313,0.5 3.01563,0.875 1.1875,0.54688 1.75,1.39063 0.57812,0.82812 0.57812,1.92187 0,1.09375 -0.625,2.0625 -0.625,0.95313 -1.79687,1.48438 -1.15625,0.53125 -2.60938,0.53125 -1.84375,0 -3.09375
 ,-0.53125 -1.25,-0.54688 -1.96875,-1.625 -0.70312,-1.07813 -0.73437,-2.45313 z m 19.5842,1.20313 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 l -7.34375,0 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 5.5,0 q -0.10937,-1.23437 -0.625,-1.85937 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z m 8.8132,6.6875 1.60938,0.25 q 0.10937,0.75 0.57812,1.09375 0.60938,0.45313 1.6875,0.45313 1.17188,0 1.79688,-0.46875 0.625,-0.45313 0.85937,-1.28125 0.125,-0.51563 0.10938,-2.15625 -1.09375,1.29687 -2.71875,1.29687 -2.03125,0 -3.15625,-1.46875 -1.10938,-1.46875 -1.10938,-3.51562 0,-1.40625 0.51563,-2.59375 0.51562,-1.
 20313 1.48437,-1.84375 0.96875,-0.65625 2.26563,-0.65625 1.75,0 2.875,1.40625 l 0,-1.1875 1.54687,0 0,8.51562 q 0,2.3125 -0.46875,3.26563 -0.46875,0.96875 -1.48437,1.51562 -1.01563,0.5625 -2.5,0.5625 -1.76563,0 -2.85938,-0.79687 -1.07812,-0.79688 -1.03125,-2.39063 z m 1.375,-5.92187 q 0,1.95312 0.76563,2.84375 0.78125,0.89062 1.9375,0.89062 1.14062,0 1.92187,-0.89062 0.78125,-0.89063 0.78125,-2.78125 0,-1.8125 -0.8125,-2.71875 -0.79687,-0.92188 -1.92187,-0.92188 -1.10938,0 -1.89063,0.90625 -0.78125,0.89063 -0.78125,2.67188 z m 9.29761,5.10937 0,-9.85937 1.5,0 0,1.39062 q 0.45313,-0.71875 1.21875,-1.15625 0.78125,-0.45312 1.76563,-0.45312 1.09375,0 1.79687,0.45312 0.70313,0.45313 0.98438,1.28125 1.17187,-1.73437 3.04687,-1.73437 1.46875,0 2.25,0.8125 0.79688,0.8125 0.79688,2.5 l 0,6.76562 -1.67188,0 0,-6.20312 q 0,-1 -0.15625,-1.4375 -0.15625,-0.45313 -0.59375,-0.71875 -0.42187,-0.26563 -1,-0.26563 -1.03125,0 -1.71875,0.6875 -0.6875,0.6875 -0.6875,2.21875 l 0,5.71875 -1.67187,0 0,-6.
 40625 q 0,-1.10937 -0.40625,-1.65625 -0.40625,-0.5625 -1.34375,-0.5625 -0.70313,0 -1.3125,0.375 -0.59375,0.35938 -0.85938,1.07813 -0.26562,0.71875 -0.26562,2.0625 l 0,5.10937 -1.67188,0 z m 22.2908,-3.17187 1.71875,0.21875 q -0.40625,1.5 -1.51562,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42188,-1.32812 -1.26562,-1.32813 -1.26562,-3.73438 0,-2.48437 1.26562,-3.85937 1.28125,-1.375 3.32813,-1.375 1.98437,0 3.23437,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 l -7.34375,0 q 0.0937,1.625 0.92187,2.48438 0.82813,0.85937 2.0625,0.85937 0.90625,0 1.54688,-0.46875 0.65625,-0.48437 1.04687,-1.54687 z m -5.48437,-2.70313 5.5,0 q -0.10938,-1.23437 -0.625,-1.85937 -0.79688,-0.96875 -2.07813,-0.96875 -1.14062,0 -1.9375,0.78125 -0.78125,0.76562 -0.85937,2.04687 z m 9.11008,5.875 0,-9.85937 1.5,0 0,1.40625 q 1.09375,-1.625 3.14062,-1.625 0.89063,0 1.64063,0.32812 0.75,0.3125 1.10937,0.84375 0.375,0.51563 0.53125,1.21875 0.0937,0.46875 0.0937,1.625 l 0,6.0625 -1.67187,0 0
 ,-6 q 0,-1.01562 -0.20313,-1.51562 -0.1875,-0.51563 -0.6875,-0.8125 -0.5,-0.29688 -1.17187,-0.29688 -1.0625,0 -1.84375,0.67188 -0.76563,0.67187 -0.76563,2.57812 l 0,5.375 -1.67187,0 z m 14.03198,-1.5 0.23437,1.48438 q -0.70312,0.14062 -1.26562,0.14062 -0.90625,0 -1.40625,-0.28125 -0.5,-0.29687 -0.70313,-0.75 -0.20312,-0.46875 -0.20312,-1.98437 l 0,-5.65625 -1.23438,0 0,-1.3125 1.23438,0 0,-2.4375 1.65625,-1 0,3.4375 1.6875,0 0,1.3125 -1.6875,0 0,5.75 q 0,0.71875 0.0781,0.92187 0.0937,0.20313 0.29688,0.32813 0.20312,0.125 0.57812,0.125 0.26563,0 0.73438,-0.0781 z"
-       id="path4039"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:nonzero" />
-    <path
-       d="m 252,154.79868 0,0 c 0,-21.20758 17.19214,-38.39973 38.39972,-38.39973 l 275.98798,0 c 10.1842,0 19.95136,4.04568 27.15271,11.24702 7.20135,7.20134 11.24701,16.96846 11.24701,27.15271 l 0,153.59427 c 0,21.20758 -17.19214,38.39972 -38.39972,38.39972 l -275.98798,0 C 269.19214,346.79267 252,329.60053 252,308.39295 Z"
-       id="path4041"
-       inkscape:connector-curvature="0"
-       style="fill:#efefef;fill-rule:nonzero" />
-    <path
-       d="m 252,154.79868 0,0 c 0,-21.20758 17.19214,-38.39973 38.39972,-38.39973 l 275.98798,0 c 10.1842,0 19.95136,4.04568 27.15271,11.24702 7.20135,7.20134 11.24701,16.96846 11.24701,27.15271 l 0,153.59427 c 0,21.20758 -17.19214,38.39972 -38.39972,38.39972 l -275.98798,0 C 269.19214,346.79267 252,329.60053 252,308.39295 Z"
-       id="path4043"
-       inkscape:connector-curvature="0"
-       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 230.41995,409.83597 83.33859,-88.09451"
-       id="path4045"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
-    <path
-       d="m 230.41995,409.83597 78.6282,-83.11533"
-       id="path4047"
-       inkscape:connector-curvature="0"
-       style="fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round;stroke-dasharray:2, 6" />
-    <path
-       d="m 309.04816,326.72064 0.0882,3.17957 2.61282,-6.03476 -5.88062,2.94339 z"
-       id="path4049"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
-    <path
-       d="M 648.32544,410.00262 313.77429,321.71914"
-       id="path4051"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
-    <path
-       d="M 648.32544,410.00262 320.40158,323.46801"
-       id="path4053"
-       inkscape:connector-curvature="0"
-       style="fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round;stroke-dasharray:2, 6" />
-    <path
-       d="m 320.40158,323.46802 2.7486,-1.60083 -6.54886,0.59799 5.40112,3.75144 z"
-       id="path4055"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
-    <path
-       d="M 362.65616,115.20998 265.67978,58.76903"
-       id="path4057"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
-    <path
-       d="M 356.73224,111.76222 271.6037,62.216783"
-       id="path4059"
-       inkscape:connector-curvature="0"
-       style="fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 356.73227,111.76221 -3.07529,0.81254 6.4722,1.16451 -4.20947,-5.05231 z"
-       id="path4061"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
-    <path
-       d="m 271.6037,62.216785 3.07526,-0.812538 -6.4722,-1.164498 4.20947,5.052304 z"
-       id="path4063"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
-    <path
-       d="m 355.75198,702.83203 65.70078,0"
-       id="path4065"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
-    <path
-       d="m 355.75198,702.83203 58.84662,0"
-       id="path4067"
-       inkscape:connector-curvature="0"
-       style="fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round;stroke-dasharray:2, 6" />
-    <path
-       d="m 414.59857,702.83203 -2.24915,2.24915 6.17954,-2.24915 -6.17954,-2.24921 z"
-       id="path4069"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
-    <path
-       d="m 429.81235,685.7769 95.2756,0 0,34.11023 -95.2756,0 z"
-       id="path4071"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
-    <path
-       d="m 440.0936,710.1369 0,-11.45313 1.51562,0 0,4.70313 5.95313,0 0,-4.70313 1.51562,0 0,11.45313 -1.51562,0 0,-5.40625 -5.95313,0 0,5.40625 -1.51562,0 z m 17.00781,-2.67188 1.45313,0.17188 q -0.34375,1.28125 -1.28125,1.98437 -0.92188,0.70313 -2.35938,0.70313 -1.82812,0 -2.89062,-1.125 -1.0625,-1.125 -1.0625,-3.14063 0,-2.09375 1.07812,-3.25 1.07813,-1.15625 2.79688,-1.15625 1.65625,0 2.70312,1.14063 1.0625,1.125 1.0625,3.17187 0,0.125 0,0.375 l -6.1875,0 q 0.0781,1.375 0.76563,2.10938 0.70312,0.71875 1.73437,0.71875 0.78125,0 1.32813,-0.40625 0.54687,-0.40625 0.85937,-1.29688 z m -4.60937,-2.28125 4.625,0 q -0.0937,-1.04687 -0.53125,-1.5625 -0.67188,-0.8125 -1.73438,-0.8125 -0.96875,0 -1.64062,0.65625 -0.65625,0.64063 -0.71875,1.71875 z m 13.24218,3.92188 q -0.78125,0.67187 -1.5,0.95312 -0.71875,0.26563 -1.54687,0.26563 -1.375,0 -2.10938,-0.67188 -0.73437,-0.67187 -0.73437,-1.70312 0,-0.60938 0.28125,-1.10938 0.28125,-0.51562 0.71875,-0.8125 0.45312,-0.3125 1.01562,-0.46875 0
 .42188,-0.10937 1.25,-0.20312 1.70313,-0.20313 2.51563,-0.48438 0,-0.29687 0,-0.375 0,-0.85937 -0.39063,-1.20312 -0.54687,-0.48438 -1.60937,-0.48438 -0.98438,0 -1.46875,0.35938 -0.46875,0.34375 -0.6875,1.21875 l -1.375,-0.1875 q 0.1875,-0.875 0.60937,-1.42188 0.4375,-0.54687 1.25,-0.82812 0.8125,-0.29688 1.875,-0.29688 1.0625,0 1.71875,0.25 0.67188,0.25 0.98438,0.625 0.3125,0.375 0.4375,0.95313 0.0781,0.35937 0.0781,1.29687 l 0,1.875 q 0,1.96875 0.0781,2.48438 0.0937,0.51562 0.35937,1 l -1.46875,0 q -0.21875,-0.4375 -0.28125,-1.03125 z m -0.10937,-3.14063 q -0.76563,0.3125 -2.29688,0.53125 -0.875,0.125 -1.23437,0.28125 -0.35938,0.15625 -0.5625,0.46875 -0.1875,0.29688 -0.1875,0.65625 0,0.5625 0.42187,0.9375 0.4375,0.375 1.25,0.375 0.8125,0 1.4375,-0.34375 0.64063,-0.35937 0.9375,-0.98437 0.23438,-0.46875 0.23438,-1.40625 l 0,-0.51563 z m 3.58594,4.17188 0,-8.29688 1.26562,0 0,1.25 q 0.48438,-0.875 0.89063,-1.15625 0.40625,-0.28125 0.90625,-0.28125 0.70312,0 1.4375,0.45313 l -0.48438,
 1.29687 q -0.51562,-0.29687 -1.03125,-0.29687 -0.45312,0 -0.82812,0.28125 -0.35938,0.26562 -0.51563,0.76562 -0.23437,0.75 -0.23437,1.64063 l 0,4.34375 -1.40625,0 z m 8.40625,-1.26563 0.20312,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76562,0 -1.1875,-0.23437 -0.42187,-0.25 -0.59375,-0.64063 -0.17187,-0.40625 -0.17187,-1.67187 l 0,-4.76563 -1.03125,0 0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.60938 0.0625,0.78125 0.0781,0.17188 0.25,0.28125 0.17187,0.0937 0.48437,0.0937 0.23438,0 0.60938,-0.0625 z m 2.67968,1.26563 -1.3125,0 0,-11.45313 1.40625,0 0,4.07813 q 0.89063,-1.10938 2.28125,-1.10938 0.76563,0 1.4375,0.3125 0.6875,0.29688 1.125,0.85938 0.45313,0.5625 0.70313,1.35937 0.25,0.78125 0.25,1.67188 0,2.14062 -1.0625,3.3125 -1.04688,1.15625 -2.53125,1.15625 -1.46875,0 -2.29688,-1.23438 l 0,1.04688 z m -0.0156,-4.21875 q 0,1.5 0.40625,2.15625 0.65625,1.09375 1.79687,1.09375 0.92188,0 1.59375,-0.79688 0.67188,-0.8125 0.67188,-2.
 39062 0,-1.625 -0.65625,-2.39063 -0.64063,-0.78125 -1.54688,-0.78125 -0.92187,0 -1.59375,0.79688 -0.67187,0.79687 -0.67187,2.3125 z m 13.28906,1.54687 1.45313,0.17188 q -0.34375,1.28125 -1.28125,1.98437 -0.92188,0.70313 -2.35938,0.70313 -1.82812,0 -2.89062,-1.125 -1.0625,-1.125 -1.0625,-3.14063 0,-2.09375 1.07812,-3.25 1.07813,-1.15625 2.79688,-1.15625 1.65625,0 2.70312,1.14063 1.0625,1.125 1.0625,3.17187 0,0.125 0,0.375 l -6.1875,0 q 0.0781,1.375 0.76563,2.10938 0.70312,0.71875 1.73437,0.71875 0.78125,0 1.32813,-0.40625 0.54687,-0.40625 0.85937,-1.29688 z m -4.60937,-2.28125 4.625,0 q -0.0937,-1.04687 -0.53125,-1.5625 -0.67188,-0.8125 -1.73438,-0.8125 -0.96875,0 -1.64062,0.65625 -0.65625,0.64063 -0.71875,1.71875 z m 13.24218,3.92188 q -0.78125,0.67187 -1.5,0.95312 -0.71875,0.26563 -1.54687,0.26563 -1.375,0 -2.10938,-0.67188 -0.73437,-0.67187 -0.73437,-1.70312 0,-0.60938 0.28125,-1.10938 0.28125,-0.51562 0.71875,-0.8125 0.45312,-0.3125 1.01562,-0.46875 0.42188,-0.10937 1.25,-0.20312
  1.70313,-0.20313 2.51563,-0.48438 0,-0.29687 0,-0.375 0,-0.85937 -0.39063,-1.20312 -0.54687,-0.48438 -1.60937,-0.48438 -0.98438,0 -1.46875,0.35938 -0.46875,0.34375 -0.6875,1.21875 l -1.375,-0.1875 q 0.1875,-0.875 0.60937,-1.42188 0.4375,-0.54687 1.25,-0.82812 0.8125,-0.29688 1.875,-0.29688 1.0625,0 1.71875,0.25 0.67188,0.25 0.98438,0.625 0.3125,0.375 0.4375,0.95313 0.0781,0.35937 0.0781,1.29687 l 0,1.875 q 0,1.96875 0.0781,2.48438 0.0937,0.51562 0.35937,1 l -1.46875,0 q -0.21875,-0.4375 -0.28125,-1.03125 z m -0.10937,-3.14063 q -0.76563,0.3125 -2.29688,0.53125 -0.875,0.125 -1.23437,0.28125 -0.35938,0.15625 -0.5625,0.46875 -0.1875,0.29688 -0.1875,0.65625 0,0.5625 0.42187,0.9375 0.4375,0.375 1.25,0.375 0.8125,0 1.4375,-0.34375 0.64063,-0.35937 0.9375,-0.98437 0.23438,-0.46875 0.23438,-1.40625 l 0,-0.51563 z m 6.66406,2.90625 0.20313,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76563,0 -1.1875,-0.23437 -0.42188,-0.25 -0.59375,-0.64063 -0.17188,-0.40625 -0.17188,-1.67187 l 0,-4.76563 -1.0312
 5,0 0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.60938 0.0625,0.78125 0.0781,0.17188 0.25,0.28125 0.17188,0.0937 0.48438,0.0937 0.23437,0 0.60937,-0.0625 z"
-       id="path4073"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:nonzero" />
-    <path
-       d="m 677.8845,212.63277 0,0 c 0,-6.04482 4.90027,-10.9451 10.94507,-10.9451 l 124.62952,0 c 2.90283,0 5.68677,1.15314 7.73938,3.20575 2.05255,2.0526 3.20569,4.83653 3.20569,7.73935 l 0,43.7791 c 0,6.0448 -4.90027,10.9451 -10.94507,10.9451 l -124.62952,0 c -6.0448,0 -10.94507,-4.9003 -10.94507,-10.9451 z"
-       id="path4075"
-       inkscape:connector-curvature="0"
-       style="fill:#efefef;fill-rule:nonzero" />
-    <path
-       d="m 677.8845,212.63277 0,0 c 0,-6.04482 4.90027,-10.9451 10.94507,-10.9451 l 124.62952,0 c 2.90283,0 5.68677,1.15314 7.73938,3.20575 2.05255,2.0526 3.20569,4.83653 3.20569,7.73935 l 0,43.7791 c 0,6.0448 -4.90027,10.9451 -10.94507,10.9451 l -124.62952,0 c -6.0448,0 -10.94507,-4.9003 -10.94507,-10.9451 z"
-       id="path4077"
-       inkscape:connector-curvature="0"
-       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 732.6905,234.22356 0,-13.64063 1.53125,0 0,1.28125 q 0.53125,-0.75 1.20313,-1.125 0.6875,-0.375 1.64062,-0.375 1.26563,0 2.23438,0.65625 0.96875,0.64063 1.45312,1.82813 0.5,1.1875 0.5,2.59375 0,1.51562 -0.54687,2.73437 -0.54688,1.20313 -1.57813,1.84375 -1.03125,0.64063 -2.17187,0.64063 -0.84375,0 -1.51563,-0.34375 -0.65625,-0.35938 -1.07812,-0.89063 l 0,4.79688 -1.67188,0 z m 1.51563,-8.65625 q 0,1.90625 0.76562,2.8125 0.78125,0.90625 1.875,0.90625 1.10938,0 1.89063,-0.9375 0.79687,-0.9375 0.79687,-2.92188 0,-1.875 -0.78125,-2.8125 -0.76562,-0.9375 -1.84375,-0.9375 -1.0625,0 -1.89062,1 -0.8125,1 -0.8125,2.89063 z m 8.18823,1.9375 1.65625,-0.26563 q 0.14062,1 0.76562,1.53125 0.64063,0.51563 1.78125,0.51563 1.15625,0 1.70313,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48438,-0.89063 -0.34375,-0.21875 -1.70312,-0.5625 -1.84375,-0.46875 -2.5625,-0.79687 -0.70313,-0.34375 -1.07813,-0.9375 -0.35937,-0.60938 -0.35937,-1.32813 0,-0.65625 0.29687,-1.21875 0.3125,-0.5625
  0.82813,-0.9375 0.39062,-0.28125 1.0625,-0.48437 0.67187,-0.20313 1.4375,-0.20313 1.17187,0 2.04687,0.34375 0.875,0.32813 1.28125,0.90625 0.42188,0.5625 0.57813,1.51563 l -1.625,0.21875 q -0.10938,-0.75 -0.65625,-1.17188 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64063,0.39063 -0.48437,0.375 -0.48437,0.875 0,0.32812 0.20312,0.59375 0.20313,0.26562 0.64063,0.4375 0.25,0.0937 1.46875,0.4375 1.76562,0.46875 2.46875,0.76562 0.70312,0.29688 1.09375,0.875 0.40625,0.57813 0.40625,1.4375 0,0.82813 -0.48438,1.57813 -0.48437,0.73437 -1.40625,1.14062 -0.92187,0.39063 -2.07812,0.39063 -1.92188,0 -2.9375,-0.79688 -1,-0.79687 -1.28125,-2.35937 z m 16.28125,6.71875 0,-4.82813 q -0.39063,0.54688 -1.09375,0.90625 -0.6875,0.35938 -1.48438,0.35938 -1.75,0 -3.01562,-1.39063 -1.26563,-1.40625 -1.26563,-3.84375 0,-1.48437 0.51563,-2.65625 0.51562,-1.1875 1.48437,-1.79687 0.98438,-0.60938 2.15625,-0.60938 1.82813,0 2.875,1.54688 l 0,-1.32813 1.5,0 0,13.64063 -1.67187,0 z m -5.14063,-8.73438 q 0,1.90625 
 0.79688,2.85938 0.79687,0.9375 1.90625,0.9375 1.0625,0 1.82812,-0.89063 0.78125,-0.90625 0.78125,-2.76562 0,-1.95313 -0.8125,-2.95313 -0.8125,-1 -1.90625,-1 -1.09375,0 -1.84375,0.9375 -0.75,0.92188 -0.75,2.875 z m 9.20386,4.95313 0,-13.59375 1.67187,0 0,13.59375 -1.67187,0 z m 2.92609,0.23437 3.9375,-14.0625 1.34375,0 -3.9375,14.0625 -1.34375,0 z"
-       id="path4079"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:nonzero" />
-    <path
-       d="m 712.4629,240.78606 0,-1.9375 1.65625,0 0,1.9375 -1.65625,0 z m -2.125,15.48439 0.3125,-1.42189 q 0.5,0.125 0.79687,0.125 0.51563,0 0.76563,-0.34375 0.25,-0.32813 0.25,-1.6875 l 0,-10.35938 1.65625,0 0,10.39063 q 0,1.82812 -0.46875,2.54687 -0.59375,0.92189 -2,0.92189 -0.67188,0 -1.3125,-0.17187 z m 12.66046,-3.82814 0,-1.25 q -0.9375,1.46875 -2.75,1.46875 -1.17187,0 -2.17187,-0.64063 -0.98438,-0.65625 -1.53125,-1.8125 -0.53125,-1.17187 -0.53125,-2.6875 0,-1.46875 0.48437,-2.67187 0.5,-1.20313 1.46875,-1.84375 0.98438,-0.64063 2.20313,-0.64063 0.89062,0 1.57812,0.375 0.70313,0.375 1.14063,0.98438 l 0,-4.875 1.65625,0 0,13.59375 -1.54688,0 z m -5.28125,-4.92188 q 0,1.89063 0.79688,2.82813 0.8125,0.9375 1.89062,0.9375 1.09375,0 1.85938,-0.89063 0.76562,-0.89062 0.76562,-2.73437 0,-2.01563 -0.78125,-2.95313 -0.78125,-0.95312 -1.92187,-0.95312 -1.10938,0 -1.85938,0.90625 -0.75,0.90625 -0.75,2.85937 z m 10.81317,4.92188 -1.54687,0 0,-13.59375 1.65625,0 0,4.84375 q 1.0625,-1.328
 13 2.70312,-1.32813 0.90625,0 1.71875,0.375 0.8125,0.35938 1.32813,1.03125 0.53125,0.65625 0.82812,1.59375 0.29688,0.9375 0.29688,2 0,2.53125 -1.25,3.92188 -1.25,1.375 -3,1.375 -1.75,0 -2.73438,-1.45313 l 0,1.23438 z m -0.0156,-5 q 0,1.76562 0.46875,2.5625 0.79687,1.28125 2.14062,1.28125 1.09375,0 1.89063,-0.9375 0.79687,-0.95313 0.79687,-2.84375 0,-1.92188 -0.76562,-2.84375 -0.76563,-0.92188 -1.84375,-0.92188 -1.09375,0 -1.89063,0.95313 -0.79687,0.95312 -0.79687,2.75 z m 15.28198,1.39062 1.64062,0.21875 q -0.26562,1.6875 -1.375,2.65625 -1.10937,0.95313 -2.73437,0.95313 -2.01563,0 -3.25,-1.3125 -1.21875,-1.32813 -1.21875,-3.79688 0,-1.59375 0.51562,-2.78125 0.53125,-1.20312 1.60938,-1.79687 1.09375,-0.60938 2.35937,-0.60938 1.60938,0 2.625,0.8125 1.01563,0.8125 1.3125,2.3125 l -1.625,0.25 q -0.23437,-1 -0.82812,-1.5 -0.59375,-0.5 -1.42188,-0.5 -1.26562,0 -2.0625,0.90625 -0.78125,0.90625 -0.78125,2.85938 0,1.98437 0.76563,2.89062 0.76562,0.89063 1.98437,0.89063 0.98438,0 1.64063,-0.5
 9375 0.65625,-0.60938 0.84375,-1.85938 z m 1.64062,3.84375 3.9375,-14.0625 1.34375,0 -3.9375,14.0625 -1.34375,0 z m 5.80829,-5.15625 q 0,-2.73437 1.53125,-4.0625 1.26563,-1.09375 3.09375,-1.09375 2.03125,0 3.3125,1.34375 1.29688,1.32813 1.29688,3.67188 0,1.90625 -0.57813,3 -0.5625,1.07812 -1.65625,1.6875 -1.07812,0.59375 -2.375,0.59375 -2.0625,0 -3.34375,-1.32813 -1.28125,-1.32812 -1.28125,-3.8125 z m 1.71875,0 q 0,1.89063 0.82813,2.82813 0.82812,0.9375 2.07812,0.9375 1.25,0 2.0625,-0.9375 0.82813,-0.95313 0.82813,-2.89063 0,-1.82812 -0.82813,-2.76562 -0.82812,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07812,0.9375 -0.82813,0.9375 -0.82813,2.82812 z m 15.67261,4.92188 0,-1.25 q -0.9375,1.46875 -2.75,1.46875 -1.17188,0 -2.17188,-0.64063 -0.98437,-0.65625 -1.53125,-1.8125 -0.53125,-1.17187 -0.53125,-2.6875 0,-1.46875 0.48438,-2.67187 0.5,-1.20313 1.46875,-1.84375 0.98437,-0.64063 2.20312,-0.64063 0.89063,0 1.57813,0.375 0.70312,0.375 1.14062,0.98438 l 0,-4.875 1.65625,0 0,13.59375 -1.54687,0 
 z m -5.28125,-4.92188 q 0,1.89063 0.79687,2.82813 0.8125,0.9375 1.89063,0.9375 1.09375,0 1.85937,-0.89063 0.76563,-0.89062 0.76563,-2.73437 0,-2.01563 -0.78125,-2.95313 -0.78125,-0.95312 -1.92188,-0.95312 -1.10937,0 -1.85937,0.90625 -0.75,0.90625 -0.75,2.85937 z m 10.81323,4.92188 -1.54687,0 0,-13.59375 1.65625,0 0,4.84375 q 1.0625,-1.32813 2.70312,-1.32813 0.90625,0 1.71875,0.375 0.8125,0.35938 1.32813,1.03125 0.53125,0.65625 0.82812,1.59375 0.29688,0.9375 0.29688,2 0,2.53125 -1.25,3.92188 -1.25,1.375 -3,1.375 -1.75,0 -2.73438,-1.45313 l 0,1.23438 z m -0.0156,-5 q 0,1.76562 0.46875,2.5625 0.79687,1.28125 2.14062,1.28125 1.09375,0 1.89063,-0.9375 0.79687,-0.95313 0.79687,-2.84375 0,-1.92188 -0.76562,-2.84375 -0.76563,-0.92188 -1.84375,-0.92188 -1.09375,0 -1.89063,0.95313 -0.79687,0.95312 -0.79687,2.75 z m 15.28192,1.39062 1.64062,0.21875 q -0.26562,1.6875 -1.375,2.65625 -1.10937,0.95313 -2.73437,0.95313 -2.01563,0 -3.25,-1.3125 -1.21875,-1.32813 -1.21875,-3.79688 0,-1.59375 0.51562,
 -2.78125 0.53125,-1.20312 1.60938,-1.79687 1.09375,-0.60938 2.35937,-0.60938 1.60938,0 2.625,0.8125 1.01563,0.8125 1.3125,2.3125 l -1.625,0.25 q -0.23437,-1 -0.82812,-1.5 -0.59375,-0.5 -1.42188,-0.5 -1.26562,0 -2.0625,0.90625 -0.78125,0.90625 -0.78125,2.85938 0,1.98437 0.76563,2.89062 0.76562,0.89063 1.98437,0.89063 0.98438,0 1.64063,-0.59375 0.65625,-0.60938 0.84375,-1.85938 z"
-       id="path4081"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:nonzero" />
-    <path
-       d="m 678.6772,232.54068 -73.88977,-0.94487"
-       id="path4083"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
-    <path
-       d="m 666.67816,232.38724 -49.89172,-0.638"
-       id="path4085"
-       inkscape:connector-curvature="0"
-       style="fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 666.6359,235.69043 9.11768,-3.18713 -9.03321,-3.41925 z"
-       id="path4087"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
-    <path
-       d="m 616.8287,228.44604 -9.11774,3.18713 9.03327,3.41925 z"
-       id="path4089"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
-    <path
-       d="M 628.6247,58.769028 543.58533,116.40682"
-       id="path4091"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
-    <path
-       d="M 618.6913,65.50165 553.51869,109.6742"
-       id="path4093"
-       inkscape:connector-curvature="0"
-       style="fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 620.54474,68.23619 5.65967,-7.826756 -9.36652,2.357666 z"
-       id="path4095"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
-    <path
-       d="m 551.6653,106.93966 -5.65973,7.82676 9.36652,-2.35767 z"
-       id="path4097"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
-    <path
-       d="m 441.77298,321.8084 2.45666,68"
-       id="path4099"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
-    <path
-       d="m 442.2062,333.80057 1.59021,44.01566"
-       id="path4101"
-       inkscape:connector-curvature="0"
-       style="fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 445.50754,333.6813 -3.629,-8.95102 -2.97363,9.18955 z"
-       id="path4103"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
-    <path
-       d="m 440.4951,377.9355 3.629,8.95102 2.97363,-9.18955 z"
-       id="path4105"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
-    <path
-       d="m 441.77298,321.8084 206.55118,88.18896"
-       id="path4107"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
-    <path
-       d="M 452.80914,326.52042 637.28796,405.2854"
-       id="path4109"
-       inkscape:connector-curvature="0"
-       style="fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 454.1063,323.48227 -9.64435,-0.52579 7.05002,6.60205 z"
-       id="path4111"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
-    <path
-       d="m 635.99084,408.32352 9.64435,0.52579 -7.05005,-6.60205 z"
-       id="path4113"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
-    <path
-       d="m 230.41995,409.83597 211.3386,-88.03149"
-       id="path4115"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
-    <path
-       d="M 241.49736,405.22174 430.68109,326.41867"
-       id="path4117"
-       inkscape:connector-curvature="0"
-       style="fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 240.22711,402.17227 -7.10815,6.53943 9.64863,-0.44046 z"
-       id="path4119"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
-    <path
-       d="m 431.95135,329.46817 7.10815,-6.53946 -9.64862,0.44049 z"
-       id="path4121"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:evenodd;stroke:#000000;stroke-width:2;stroke-linecap:butt" />
-    <path
-       d="m 124.80052,127.79265 138.3622,0 0,51.77953 -138.3622,0 z"
-       id="path4123"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
-    <path
-       d="m 513.7402,162.28366 0,0 c 0,-5.23305 4.24219,-9.47527 9.47522,-9.47527 l 66.08887,0 c 2.513,0 4.9231,0.9983 6.70001,2.77524 1.77698,1.77696 2.77527,4.18703 2.77527,6.70003 l 0,37.89987 c 0,5.23305 -4.24225,9.47527 -9.47528,9.47527 l -66.08887,0 c -5.23303,0 -9.47522,-4.24222 -9.47522,-9.47527 z"
-       id="path4127"
-       inkscape:connector-curvature="0"
-       style="fill:#efefef;fill-rule:nonzero" />
-    <path
-       d="m 513.7402,162.28366 0,0 c 0,-5.23305 4.24219,-9.47527 9.47522,-9.47527 l 66.08887,0 c 2.513,0 4.9231,0.9983 6.70001,2.77524 1.77698,1.77696 2.77527,4.18703 2.77527,6.70003 l 0,37.89987 c 0,5.23305 -4.24225,9.47527 -9.47528,9.47527 l -66.08887,0 c -5.23303,0 -9.47522,-4.24222 -9.47522,-9.47527 z"
-       id="path4129"
-       inkscape:connector-curvature="0"
-       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 535.7806,177.0336 0,-9.3125 3.51563,0 q 0.92187,0 1.40625,0.0937 0.6875,0.10937 1.15625,0.4375 0.46875,0.3125 0.75,0.89062 0.28125,0.57813 0.28125,1.28125 0,1.1875 -0.76563,2.01563 -0.75,0.8125 -2.71875,0.8125 l -2.39062,0 0,3.78125 -1.23438,0 z m 1.23438,-4.875 2.40625,0 q 1.1875,0 1.6875,-0.4375 0.51562,-0.45313 0.51562,-1.26563 0,-0.57812 -0.29687,-0.98437 -0.29688,-0.42188 -0.78125,-0.5625 -0.3125,-0.0781 -1.15625,-0.0781 l -2.375,0 0,3.32813 z m 11.90539,4.04687 q -0.625,0.53125 -1.21875,0.76563 -0.57812,0.21875 -1.25,0.21875 -1.125,0 -1.71875,-0.54688 -0.59375,-0.54687 -0.59375,-1.39062 0,-0.48438 0.21875,-0.89063 0.23438,-0.42187 0.59375,-0.67187 0.375,-0.25 0.82813,-0.375 0.32812,-0.0781 1.01562,-0.17188 1.375,-0.15625 2.03125,-0.39062 0.0156,-0.23438 0.0156,-0.29688 0,-0.70312 -0.32813,-0.98437 -0.4375,-0.39063 -1.29687,-0.39063 -0.8125,0 -1.20313,0.28125 -0.375,0.28125 -0.5625,1 l -1.10937,-0.14062 q 0.14062,-0.71875 0.48437,-1.15625 0.35938,-0.45313 1.01563,-0
 .6875 0.67187,-0.23438 1.53125,-0.23438 0.875,0 1.40625,0.20313 0.54687,0.20312 0.79687,0.51562 0.25,0.29688 0.35938,0.76563 0.0469,0.29687 0.0469,1.0625 l 0,1.51562 q 0,1.59375 0.0781,2.01563 0.0781,0.42187 0.28125,0.8125 l -1.1875,0 q -0.17188,-0.35938 -0.23438,-0.82813 z m -0.0937,-2.5625 q -0.625,0.26563 -1.85937,0.4375 -0.70313,0.10938 -1,0.23438 -0.29688,0.125 -0.45313,0.375 -0.15625,0.23437 -0.15625,0.53125 0,0.45312 0.34375,0.76562 0.34375,0.29688 1.01563,0.29688 0.65625,0 1.17187,-0.28125 0.51563,-0.29688 0.76563,-0.79688 0.17187,-0.375 0.17187,-1.14062 l 0,-0.42188 z m 3.09998,3.39063 0,-6.73438 1.03125,0 0,1.01563 q 0.39062,-0.71875 0.71875,-0.9375 0.34375,-0.23438 0.73437,-0.23438 0.57813,0 1.17188,0.35938 l -0.39063,1.0625 q -0.42187,-0.25 -0.82812,-0.25 -0.375,0 -0.6875,0.23437 -0.29688,0.21875 -0.42188,0.625 -0.1875,0.60938 -0.1875,1.32813 l 0,3.53125 -1.14062,0 z m 4.00085,-2.01563 1.125,-0.17187 q 0.0937,0.67187 0.53125,1.04687 0.4375,0.35938 1.21875,0.35938 0.78125
 ,0 1.15625,-0.3125 0.39063,-0.32813 0.39063,-0.76563 0,-0.39062 -0.34375,-0.60937 -0.23438,-0.15625 -1.17188,-0.39063 -1.25,-0.3125 -1.73437,-0.54687 -0.48438,-0.23438 -0.73438,-0.64063 -0.25,-0.40625 -0.25,-0.90625 0,-0.45312 0.20313,-0.82812 0.20312,-0.39063 0.5625,-0.64063 0.26562,-0.20312 0.71875,-0.32812 0.46875,-0.14063 1,-0.14063 0.78125,0 1.375,0.23438 0.60937,0.21875 0.89062,0.60937 0.29688,0.39063 0.40625,1.04688 l -1.125,0.15625 q -0.0781,-0.53125 -0.4375,-0.8125 -0.35937,-0.29688 -1.03125,-0.29688 -0.78125,0 -1.125,0.26563 -0.34375,0.25 -0.34375,0.60937 0,0.21875 0.14063,0.39063 0.14062,0.1875 0.4375,0.3125 0.17187,0.0625 1.01562,0.28125 1.21875,0.32812 1.6875,0.53125 0.48438,0.20312 0.75,0.60937 0.28125,0.39063 0.28125,0.96875 0,0.57813 -0.34375,1.07813 -0.32812,0.5 -0.95312,0.78125 -0.625,0.28125 -1.42188,0.28125 -1.3125,0 -2,-0.54688 -0.6875,-0.54687 -0.875,-1.625 z m 11.72656,-0.15625 1.1875,0.14063 q -0.28125,1.04687 -1.04687,1.625 -0.75,0.5625 -1.92188,0.5625 -1.48
 437,0 -2.35937,-0.90625 -0.85938,-0.92188 -0.85938,-2.5625 0,-1.70313 0.875,-2.64063 0.89063,-0.9375 2.28125,-0.9375 1.35938,0 2.20313,0.92188 0.85937,0.92187 0.85937,2.57812 0,0.10938 0,0.3125 l -5.03125,0 q 0.0625,1.10938 0.625,1.70313 0.5625,0.59375 1.40625,0.59375 0.64063,0 1.07813,-0.32813 0.45312,-0.34375 0.70312,-1.0625 z m -3.75,-1.84375 3.76563,0 q -0.0781,-0.85937 -0.4375,-1.28125 -0.54688,-0.65625 -1.40625,-0.65625 -0.79688,0 -1.32813,0.53125 -0.53125,0.51563 -0.59375,1.40625 z m 6.53748,4.01563 0,-6.73438 1.03125,0 0,1.01563 q 0.39062,-0.71875 0.71875,-0.9375 0.34375,-0.23438 0.73437,-0.23438 0.57813,0 1.17188,0.35938 l -0.39063,1.0625 q -0.42187,-0.25 -0.82812,-0.25 -0.375,0 -0.6875,0.23437 -0.29688,0.21875 -0.42188,0.625 -0.1875,0.60938 -0.1875,1.32813 l 0,3.53125 -1.14062,0 z m 3.59466,0.15625 2.70313,-9.625 0.90625,0 -2.6875,9.625 -0.92188,0 z"
-       id="path4131"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:nonzero" />
-    <path
-       d="m 530.31683,193.0336 3.57812,-9.3125 1.3125,0 3.8125,9.3125 -1.40625,0 -1.07812,-2.8125 -3.89063,0 -1.03125,2.8125 -1.29687,0 z m 2.6875,-3.82813 3.15625,0 -0.98438,-2.57812 q -0.4375,-1.17188 -0.65625,-1.92188 -0.17187,0.89063 -0.5,1.78125 l -1.01562,2.71875 z m 7.07727,3.82813 0,-6.73438 1.03125,0 0,0.95313 q 0.73438,-1.10938 2.14063,-1.10938 0.60937,0 1.10937,0.21875 0.51563,0.21875 0.76563,0.57813 0.26562,0.34375 0.35937,0.84375 0.0625,0.3125 0.0625,1.10937 l 0,4.14063 -1.14062,0 0,-4.09375 q 0,-0.70313 -0.14063,-1.04688 -0.125,-0.34375 -0.46875,-0.54687 -0.32812,-0.21875 -0.78125,-0.21875 -0.73437,0 -1.26562,0.46875 -0.53125,0.45312 -0.53125,1.75 l 0,3.6875 -1.14063,0 z m 11.8031,-0.82813 q -0.625,0.53125 -1.21875,0.76563 -0.57812,0.21875 -1.25,0.21875 -1.125,0 -1.71875,-0.54688 -0.59375,-0.54687 -0.59375,-1.39062 0,-0.48438 0.21875,-0.89063 0.23438,-0.42187 0.59375,-0.67187 0.375,-0.25 0.82813,-0.375 0.32812,-0.0781 1.01562,-0.17188 1.375,-0.15625 2.03125,-0.39062 0.
 0156,-0.23438 0.0156,-0.29688 0,-0.70312 -0.32813,-0.98437 -0.4375,-0.39063 -1.29687,-0.39063 -0.8125,0 -1.20313,0.28125 -0.375,0.28125 -0.5625,1 l -1.10937,-0.14062 q 0.14062,-0.71875 0.48437,-1.15625 0.35938,-0.45313 1.01563,-0.6875 0.67187,-0.23438 1.53125,-0.23438 0.875,0 1.40625,0.20313 0.54687,0.20312 0.79687,0.51562 0.25,0.29688 0.35938,0.76563 0.0469,0.29687 0.0469,1.0625 l 0,1.51562 q 0,1.59375 0.0781,2.01563 0.0781,0.42187 0.28125,0.8125 l -1.1875,0 q -0.17188,-0.35938 -0.23438,-0.82813 z m -0.0937,-2.5625 q -0.625,0.26563 -1.85937,0.4375 -0.70313,0.10938 -1,0.23438 -0.29688,0.125 -0.45313,0.375 -0.15625,0.23437 -0.15625,0.53125 0,0.45312 0.34375,0.76562 0.34375,0.29688 1.01563,0.29688 0.65625,0 1.17187,-0.28125 0.51563,-0.29688 0.76563,-0.79688 0.17187,-0.375 0.17187,-1.14062 l 0,-0.42188 z m 3.08435,3.39063 0,-9.3125 1.14063,0 0,9.3125 -1.14063,0 z m 2.94544,2.59375 -0.14063,-1.0625 q 0.375,0.0937 0.65625,0.0937 0.39063,0 0.60938,-0.125 0.23437,-0.125 0.375,-0.35938 0.10
 937,-0.17187 0.35937,-0.84375 0.0312,-0.0937 0.0937,-0.28125 l -2.5625,-6.75 1.23438,0 1.40625,3.89063 q 0.26562,0.75 0.48437,1.5625 0.20313,-0.78125 0.46875,-1.53125 l 1.45313,-3.92188 1.14062,0 -2.5625,6.84375 q -0.42187,1.10938 -0.64062,1.53125 -0.3125,0.5625 -0.70313,0.82813 -0.39062,0.26562 -0.9375,0.26562 -0.32812,0 -0.73437,-0.14062 z m 6.10156,-2.59375 0,-0.92188 4.29687,-4.9375 q -0.73437,0.0469 -1.29687,0.0469 l -2.73438,0 0,-0.92188 5.5,0 0,0.75 -3.64062,4.28125 -0.71875,0.78125 q 0.78125,-0.0625 1.45312,-0.0625 l 3.10938,0 0,0.98438 -5.96875,0 z m 11.88281,-2.17188 1.1875,0.14063 q -0.28125,1.04687 -1.04687,1.625 -0.75,0.5625 -1.92188,0.5625 -1.48437,0 -2.35937,-0.90625 -0.85938,-0.92188 -0.85938,-2.5625 0,-1.70313 0.875,-2.64063 0.89063,-0.9375 2.28125,-0.9375 1.35938,0 2.20313,0.92188 0.85937,0.92187 0.85937,2.57812 0,0.10938 0,0.3125 l -5.03125,0 q 0.0625,1.10938 0.625,1.70313 0.5625,0.59375 1.40625,0.59375 0.64063,0 1.07813,-0.32813 0.45312,-0.34375 0.70312,-1.0625 z
  m -3.75,-1.84375 3.76563,0 q -0.0781,-0.85937 -0.4375,-1.28125 -0.54688,-0.65625 -1.40625,-0.65625 -0.79688,0 -1.32813,0.53125 -0.53125,0.51563 -0.59375,1.40625 z m 6.53748,4.01563 0,-6.73438 1.03125,0 0,1.01563 q 0.39062,-0.71875 0.71875,-0.9375 0.34375,-0.23438 0.73437,-0.23438 0.57813,0 1.17188,0.35938 l -0.39063,1.0625 q -0.42187,-0.25 -0.82812,-0.25 -0.375,0 -0.6875,0.23437 -0.29688,0.21875 -0.42188,0.625 -0.1875,0.60938 -0.1875,1.32813 l 0,3.53125 -1.14062,0 z"
-       id="path4133"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:nonzero" />
-    <path
-       d="m 394.13516,162.28366 0,0 c 0,-5.23305 4.24222,-9.47527 9.47525,-9.47527 l 76.3251,0 c 2.513,0 4.92307,0.9983 6.70001,2.77524 1.77695,1.77696 2.77524,4.18703 2.77524,6.70003 l 0,37.89987 c 0,5.23305 -4.24222,9.47527 -9.47525,9.47527 l -76.3251,0 c -5.23303,0 -9.47525,-4.24222 -9.47525,-9.47527 z"
-       id="path4135"

<TRUNCATED>


[18/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/guc/parameter_definitions.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/guc/parameter_definitions.html.md.erb b/markdown/reference/guc/parameter_definitions.html.md.erb
new file mode 100644
index 0000000..b568da8
--- /dev/null
+++ b/markdown/reference/guc/parameter_definitions.html.md.erb
@@ -0,0 +1,3196 @@
+---
+title: Configuration Parameters
+---
+
+Descriptions of the HAWQ server configuration parameters listed alphabetically.
+
+-   **[add\_missing\_from](../../reference/guc/parameter_definitions.html#add_missing_from)**
+
+-   **[application\_name](../../reference/guc/parameter_definitions.html#application_name)**
+
+-   **[array\_nulls](../../reference/guc/parameter_definitions.html#array_nulls)**
+
+-   **[authentication\_timeout](../../reference/guc/parameter_definitions.html#authentication_timeout)**
+
+-   **[backslash\_quote](../../reference/guc/parameter_definitions.html#backslash_quote)**
+
+-   **[block\_size](../../reference/guc/parameter_definitions.html#block_size)**
+
+-   **[bonjour\_name](../../reference/guc/parameter_definitions.html#bonjour_name)**
+
+-   **[check\_function\_bodies](../../reference/guc/parameter_definitions.html#check_function_bodies)**
+
+-   **[client\_encoding](../../reference/guc/parameter_definitions.html#client_encoding)**
+
+-   **[client\_min\_messages](../../reference/guc/parameter_definitions.html#client_min_messages)**
+
+-   **[cpu\_index\_tuple\_cost](../../reference/guc/parameter_definitions.html#cpu_index_tuple_cost)**
+
+-   **[cpu\_operator\_cost](../../reference/guc/parameter_definitions.html#cpu_operator_cost)**
+
+-   **[cpu\_tuple\_cost](../../reference/guc/parameter_definitions.html#cpu_tuple_cost)**
+
+-   **[cursor\_tuple\_fraction](../../reference/guc/parameter_definitions.html#cursor_tuple_fraction)**
+
+-   **[custom\_variable\_classes](../../reference/guc/parameter_definitions.html#custom_variable_classes)**
+
+-   **[DateStyle](../../reference/guc/parameter_definitions.html#DateStyle)**
+
+-   **[db\_user\_namespace](../../reference/guc/parameter_definitions.html#db_user_namespace)**
+
+-   **[deadlock\_timeout](../../reference/guc/parameter_definitions.html#deadlock_timeout)**
+
+-   **[debug\_assertions](../../reference/guc/parameter_definitions.html#debug_assertions)**
+
+-   **[debug\_pretty\_print](../../reference/guc/parameter_definitions.html#debug_pretty_print)**
+
+-   **[debug\_print\_parse](../../reference/guc/parameter_definitions.html#debug_print_parse)**
+
+-   **[debug\_print\_plan](../../reference/guc/parameter_definitions.html#debug_print_plan)**
+
+-   **[debug\_print\_prelim\_plan](../../reference/guc/parameter_definitions.html#debug_print_prelim_plan)**
+
+-   **[debug\_print\_rewritten](../../reference/guc/parameter_definitions.html#debug_print_rewritten)**
+
+-   **[debug\_print\_slice\_table](../../reference/guc/parameter_definitions.html#debug_print_slice_table)**
+
+-   **[default\_hash\_table\_bucket\_number](../../reference/guc/parameter_definitions.html#topic_fqj_4fd_kv)**
+
+-   **[default\_statement\_mem](../../reference/guc/parameter_definitions.html#default_statement_mem)**
+   
+-   **[default\_statistics\_target](../../reference/guc/parameter_definitions.html#default_statistics_target)**
+
+-   **[default\_tablespace](../../reference/guc/parameter_definitions.html#default_tablespace)**
+
+-   **[default\_transaction\_isolation](../../reference/guc/parameter_definitions.html#default_transaction_isolation)**
+
+-   **[default\_transaction\_read\_only](../../reference/guc/parameter_definitions.html#default_transaction_read_only)**
+
+-   **[dfs\_url](../../reference/guc/parameter_definitions.html#dfs_url)**
+
+-   **[dynamic\_library\_path](../../reference/guc/parameter_definitions.html#dynamic_library_path)**
+
+-   **[effective\_cache\_size](../../reference/guc/parameter_definitions.html#effective_cache_size)**
+
+-   **[enable\_bitmapscan](../../reference/guc/parameter_definitions.html#enable_bitmapscan)**
+
+-   **[enable\_groupagg](../../reference/guc/parameter_definitions.html#enable_groupagg)**
+
+-   **[enable\_hashagg](../../reference/guc/parameter_definitions.html#enable_hashagg)**
+
+-   **[enable\_hashjoin](../../reference/guc/parameter_definitions.html#enable_hashjoin)**
+
+-   **[enable\_indexscan](../../reference/guc/parameter_definitions.html#enable_indexscan)**
+
+-   **[enable\_mergejoin](../../reference/guc/parameter_definitions.html#enable_mergejoin)**
+
+-   **[enable\_nestloop](../../reference/guc/parameter_definitions.html#enable_nestloop)**
+
+-   **[enable\_secure\_filesystem](../../reference/guc/parameter_definitions.html#enable_secure_filesystem)**
+  
+-   **[enable\_seqscan](../../reference/guc/parameter_definitions.html#enable_seqscan)**
+
+-   **[enable\_sort](../../reference/guc/parameter_definitions.html#enable_sort)**
+
+-   **[enable\_tidscan](../../reference/guc/parameter_definitions.html#enable_tidscan)**
+
+-   **[escape\_string\_warning](../../reference/guc/parameter_definitions.html#escape_string_warning)**
+  
+-   **[explain\_memory\_verbosity](../../reference/guc/parameter_definitions.html#explain_memory_verbosity)**
+
+-   **[explain\_pretty\_print](../../reference/guc/parameter_definitions.html#explain_pretty_print)**
+
+-   **[extra\_float\_digits](../../reference/guc/parameter_definitions.html#extra_float_digits)**
+
+-   **[from\_collapse\_limit](../../reference/guc/parameter_definitions.html#from_collapse_limit)**
+
+-   **[gp\_adjust\_selectivity\_for\_outerjoins](../../reference/guc/parameter_definitions.html#gp_adjust_selectivity_for_outerjoins)**
+
+-   **[gp\_analyze\_relative\_error](../../reference/guc/parameter_definitions.html#gp_analyze_relative_error)**
+
+-   **[gp\_autostats\_mode](../../reference/guc/parameter_definitions.html#gp_autostats_mode)**
+
+-   **[gp\_autostats\_on\_change\_threshold](../../reference/guc/parameter_definitions.html#topic_imj_zhf_gw)**
+
+-   **[gp\_backup\_directIO](../../reference/guc/parameter_definitions.html#gp_backup_directIO)**
+
+-   **[gp\_backup\_directIO\_read\_chunk\_mb](../../reference/guc/parameter_definitions.html#gp_backup_directIO_read_chunk_mb)**
+
+-   **[gp\_cached\_segworkers\_threshold](../../reference/guc/parameter_definitions.html#gp_cached_segworkers_threshold)**
+
+-   **[gp\_command\_count](../../reference/guc/parameter_definitions.html#gp_command_count)**
+
+-   **[gp\_connections\_per\_thread](../../reference/guc/parameter_definitions.html#gp_connections_per_thread)**
+
+-   **[gp\_debug\_linger](../../reference/guc/parameter_definitions.html#gp_debug_linger)**
+
+-   **[gp\_dynamic\_partition\_pruning](../../reference/guc/parameter_definitions.html#gp_dynamic_partition_pruning)**
+
+-   **[gp\_enable\_agg\_distinct](../../reference/guc/parameter_definitions.html#gp_enable_agg_distinct)**
+
+-   **[gp\_enable\_agg\_distinct\_pruning](../../reference/guc/parameter_definitions.html#gp_enable_agg_distinct_pruning)**
+
+-   **[gp\_enable\_direct\_dispatch](../../reference/guc/parameter_definitions.html#gp_enable_direct_dispatch)**
+
+-   **[gp\_enable\_fallback\_plan](../../reference/guc/parameter_definitions.html#gp_enable_fallback_plan)**
+
+-   **[gp\_enable\_fast\_sri](../../reference/guc/parameter_definitions.html#gp_enable_fast_sri)**
+
+-   **[gp\_enable\_groupext\_distinct\_gather](../../reference/guc/parameter_definitions.html#gp_enable_groupext_distinct_gather)**
+
+-   **[gp\_enable\_groupext\_distinct\_pruning](../../reference/guc/parameter_definitions.html#gp_enable_groupext_distinct_pruning)**
+
+-   **[gp\_enable\_multiphase\_agg](../../reference/guc/parameter_definitions.html#gp_enable_multiphase_agg)**
+
+-   **[gp\_enable\_predicate\_propagation](../../reference/guc/parameter_definitions.html#gp_enable_predicate_propagation)**
+
+-   **[gp\_enable\_preunique](../../reference/guc/parameter_definitions.html#gp_enable_preunique)**
+
+-   **[gp\_enable\_sequential\_window\_plans](../../reference/guc/parameter_definitions.html#gp_enable_sequential_window_plans)**
+
+-   **[gp\_enable\_sort\_distinct](../../reference/guc/parameter_definitions.html#gp_enable_sort_distinct)**
+
+-   **[gp\_enable\_sort\_limit](../../reference/guc/parameter_definitions.html#gp_enable_sort_limit)**
+
+-   **[gp\_external\_enable\_exec](../../reference/guc/parameter_definitions.html#gp_external_enable_exec)**
+
+-   **[gp\_external\_grant\_privileges](../../reference/guc/parameter_definitions.html#gp_external_grant_privileges)**
+
+-   **[gp\_external\_max\_segs](../../reference/guc/parameter_definitions.html#gp_external_max_segs)**
+
+-   **[gp\_filerep\_tcp\_keepalives\_count](../../reference/guc/parameter_definitions.html#gp_filerep_tcp_keepalives_count)**
+
+-   **[gp\_filerep\_tcp\_keepalives\_idle](../../reference/guc/parameter_definitions.html#gp_filerep_tcp_keepalives_idle)**
+
+-   **[gp\_filerep\_tcp\_keepalives\_interval](../../reference/guc/parameter_definitions.html#gp_filerep_tcp_keepalives_interval)**
+
+-   **[gp\_hashjoin\_tuples\_per\_bucket](../../reference/guc/parameter_definitions.html#gp_hashjoin_tuples_per_bucket)**
+
+-   **[gp\_idf\_deduplicate](../../reference/guc/parameter_definitions.html#gp_idf_deduplicate)**
+
+-   **[gp\_interconnect\_fc\_method](../../reference/guc/parameter_definitions.html#gp_interconnect_fc_method)**
+
+-   **[gp\_interconnect\_hash\_multiplier](../../reference/guc/parameter_definitions.html#gp_interconnect_hash_multiplier)**
+
+-   **[gp\_interconnect\_queue\_depth](../../reference/guc/parameter_definitions.html#gp_interconnect_queue_depth)**
+
+-   **[gp\_interconnect\_setup\_timeout](../../reference/guc/parameter_definitions.html#gp_interconnect_setup_timeout)**
+
+-   **[gp\_interconnect\_snd\_queue\_depth](../../reference/guc/parameter_definitions.html#gp_interconnect_snd_queue_depth)**
+
+-   **[gp\_interconnect\_type](../../reference/guc/parameter_definitions.html#gp_interconnect_type)**
+
+-   **[gp\_log\_format](../../reference/guc/parameter_definitions.html#gp_log_format)**
+
+-   **[gp\_max\_csv\_line\_length](../../reference/guc/parameter_definitions.html#gp_max_csv_line_length)**
+
+-   **[gp\_max\_databases](../../reference/guc/parameter_definitions.html#gp_max_databases)**
+
+-   **[gp\_max\_filespaces](../../reference/guc/parameter_definitions.html#gp_max_filespaces)**
+
+-   **[gp\_max\_packet\_size](../../reference/guc/parameter_definitions.html#gp_max_packet_size)**
+
+-   **[gp\_max\_plan\_size](../../reference/guc/parameter_definitions.html#gp_max_plan_size)**
+
+-   **[gp\_max\_tablespaces](../../reference/guc/parameter_definitions.html#gp_max_tablespaces)**
+
+-   **[gp\_motion\_cost\_per\_row](../../reference/guc/parameter_definitions.html#gp_motion_cost_per_row)**
+
+-   **[gp\_reject\_percent\_threshold](../../reference/guc/parameter_definitions.html#gp_reject_percent_threshold)**
+
+-   **[gp\_reraise\_signal](../../reference/guc/parameter_definitions.html#gp_reraise_signal)**
+
+-   **[gp\_role](../../reference/guc/parameter_definitions.html#gp_role)**
+
+-   **[gp\_safefswritesize](../../reference/guc/parameter_definitions.html#gp_safefswritesize)**
+
+-   **[gp\_segment\_connect\_timeout](../../reference/guc/parameter_definitions.html#gp_segment_connect_timeout)**
+
+-   **[gp\_segments\_for\_planner](../../reference/guc/parameter_definitions.html#gp_segments_for_planner)**
+
+-   **[gp\_session\_id](../../reference/guc/parameter_definitions.html#gp_session_id)**
+
+-   **[gp\_set\_proc\_affinity](../../reference/guc/parameter_definitions.html#gp_set_proc_affinity)**
+
+-   **[gp\_set\_read\_only](../../reference/guc/parameter_definitions.html#gp_set_read_only)**
+
+-   **[gp\_statistics\_pullup\_from\_child\_partition](../../reference/guc/parameter_definitions.html#gp_statistics_pullup_from_child_partition)**
+
+-   **[gp\_statistics\_use\_fkeys](../../reference/guc/parameter_definitions.html#gp_statistics_use_fkeys)**
+
+-   **[gp\_vmem\_idle\_resource\_timeout](../../reference/guc/parameter_definitions.html#gp_vmem_idle_resource_timeout)**
+
+-   **[gp\_vmem\_protect\_segworker\_cache\_limit](../../reference/guc/parameter_definitions.html#gp_vmem_protect_segworker_cache_limit)**
+
+-   **[gp\_workfile\_checksumming](../../reference/guc/parameter_definitions.html#gp_workfile_checksumming)**
+
+-   **[gp\_workfile\_compress\_algorithm](../../reference/guc/parameter_definitions.html#gp_workfile_compress_algorithm)**
+
+-   **[gp\_workfile\_limit\_files\_per\_query](../../reference/guc/parameter_definitions.html#gp_workfile_limit_files_per_query)**
+
+-   **[gp\_workfile\_limit\_per\_query](../../reference/guc/parameter_definitions.html#gp_workfile_limit_per_query)**
+
+-   **[gp\_workfile\_limit\_per\_segment](../../reference/guc/parameter_definitions.html#gp_workfile_limit_per_segment)**
+
+-   **[hawq\_dfs\_url](../../reference/guc/parameter_definitions.html#hawq_dfs_url)**
+
+-   **[hawq\_global\_rm\_type](../../reference/guc/parameter_definitions.html#hawq_global_rm_type)**
+
+-   **[hawq\_master\_address\_host](../../reference/guc/parameter_definitions.html#hawq_master_address_host)**
+
+-   **[hawq\_master\_address\_port](../../reference/guc/parameter_definitions.html#hawq_master_address_port)**
+
+-   **[hawq\_master\_directory](../../reference/guc/parameter_definitions.html#hawq_master_directory)**
+
+-   **[hawq\_master\_temp\_directory](../../reference/guc/parameter_definitions.html#hawq_master_temp_directory)**
+
+-   **[hawq\_re\_memory\_overcommit\_max](../../reference/guc/parameter_definitions.html#hawq_re_memory_overcommit_max)**
+
+-   **[hawq\_rm\_cluster\_report\_period](../../reference/guc/parameter_definitions.html#hawq_rm_cluster_report)**
+
+-   **[hawq\_rm\_force\_alterqueue\_cancel\_queued\_request](../../reference/guc/parameter_definitions.html#hawq_rm_force_alterqueue_cancel_queued_request)**
+
+-   **[hawq\_rm\_master\_port](../../reference/guc/parameter_definitions.html#hawq_rm_master_port)**
+
+-   **[hawq\_rm\_memory\_limit\_perseg](../../reference/guc/parameter_definitions.html#hawq_rm_memory_limit_perseg)**
+
+-   **[hawq\_rm\_min\_resource\_perseg](../../reference/guc/parameter_definitions.html#hawq_rm_min_resource_perseg)**
+
+-   **[hawq\_rm\_nresqueue\_limit](../../reference/guc/parameter_definitions.html#hawq_rm_nresqueue_limit)**
+
+-   **[hawq\_rm\_nslice\_perseg\_limit](../../reference/guc/parameter_definitions.html#hawq_rm_nslice_perseg_limit)**
+
+-   **[hawq\_rm\_nvcore\_limit\_perseg](../../reference/guc/parameter_definitions.html#hawq_rm_nvcore_limit_perseg)**
+
+-   **[hawq\_rm\_nvseg\_perquery\_limit](../../reference/guc/parameter_definitions.html#hawq_rm_nvseg_perquery_limit)**
+
+-   **[hawq\_rm\_nvseg\_perquery\_perseg\_limit](../../reference/guc/parameter_definitions.html#hawq_rm_nvseg_perquery_perseg_limit)**
+
+-   **[hawq\_rm\_nvseg\_variance\_amon\_seg\_limit](../../reference/guc/parameter_definitions.html#hawq_rm_nvseg_variance_amon_seg_limit)**
+
+-   **[hawq\_rm\_rejectrequest\_nseg\_limit](../../reference/guc/parameter_definitions.html#hawq_rm_rejectrequest_nseg_limit)**
+
+-   **[hawq\_rm\_resource\_idle\_timeout](../../reference/guc/parameter_definitions.html#hawq_rm_resource_idle_timeout)**
+
+-   **[hawq\_rm\_return\_percent\_on\_overcommit](../../reference/guc/parameter_definitions.html#hawq_rm_return_percent_on_overcommit)**
+
+-   **[hawq\_rm\_segment\_heartbeat\_interval](../../reference/guc/parameter_definitions.html#hawq_rm_segment_heartbeat_interval)**
+
+-   **[hawq\_rm\_segment\_port](../../reference/guc/parameter_definitions.html#hawq_rm_segment_port)**
+
+-   **[hawq\_rm\_stmt\_nvseg](../../reference/guc/parameter_definitions.html#hawq_rm_stmt_nvseg)**
+
+-   **[hawq\_rm\_stmt\_vseg\_memory](../../reference/guc/parameter_definitions.html#hawq_rm_stmt_vseg_memory)**
+
+-   **[hawq\_rm\_tolerate\_nseg\_limit](../../reference/guc/parameter_definitions.html#hawq_rm_tolerate_nseg_limit)**
+
+-   **[hawq\_rm\_yarn\_address](../../reference/guc/parameter_definitions.html#hawq_rm_yarn_address)**
+
+-   **[hawq\_rm\_yarn\_app\_name](../../reference/guc/parameter_definitions.html#hawq_rm_yarn_app_name)**
+
+-   **[hawq\_rm\_yarn\_queue\_name](../../reference/guc/parameter_definitions.html#hawq_rm_yarn_queue_name)**
+
+-   **[hawq\_rm\_yarn\_scheduler\_address](../../reference/guc/parameter_definitions.html#hawq_rm_yarn_scheduler_address)**
+
+-   **[hawq\_segment\_address\_port](../../reference/guc/parameter_definitions.html#hawq_segment_address_port)**
+
+-   **[hawq\_segment\_directory](../../reference/guc/parameter_definitions.html#hawq_segment_directory)**
+
+-   **[hawq\_segment\_temp\_directory](../../reference/guc/parameter_definitions.html#hawq_segment_temp_directory)**
+
+-   **[integer\_datetimes](../../reference/guc/parameter_definitions.html#integer_datetimes)**
+
+-   **[IntervalStyle](../../reference/guc/parameter_definitions.html#IntervalStyle)**
+
+-   **[join\_collapse\_limit](../../reference/guc/parameter_definitions.html#join_collapse_limit)**
+
+-   **[krb\_caseins\_users](../../reference/guc/parameter_definitions.html#krb_caseins_users)**
+
+-   **[krb\_server\_keyfile](../../reference/guc/parameter_definitions.html#krb_server_keyfile)**
+
+-   **[krb\_srvname](../../reference/guc/parameter_definitions.html#krb_srvname)**
+
+-   **[lc\_collate](../../reference/guc/parameter_definitions.html#lc_collate)**
+
+-   **[lc\_ctype](../../reference/guc/parameter_definitions.html#lc_ctype)**
+
+-   **[lc\_messages](../../reference/guc/parameter_definitions.html#lc_messages)**
+
+-   **[lc\_monetary](../../reference/guc/parameter_definitions.html#lc_monetary)**
+
+-   **[lc\_numeric](../../reference/guc/parameter_definitions.html#lc_numeric)**
+
+-   **[lc\_time](../../reference/guc/parameter_definitions.html#lc_time)**
+
+-   **[listen\_addresses](../../reference/guc/parameter_definitions.html#listen_addresses)**
+
+-   **[local\_preload\_libraries](../../reference/guc/parameter_definitions.html#local_preload_libraries)**
+
+-   **[log\_autostats](../../reference/guc/parameter_definitions.html#log_autostats)**
+
+-   **[log\_connections](../../reference/guc/parameter_definitions.html#log_connections)**
+
+-   **[log\_disconnections](../../reference/guc/parameter_definitions.html#log_disconnections)**
+
+-   **[log\_dispatch\_stats](../../reference/guc/parameter_definitions.html#log_dispatch_stats)**
+
+-   **[log\_duration](../../reference/guc/parameter_definitions.html#log_duration)**
+
+-   **[log\_error\_verbosity](../../reference/guc/parameter_definitions.html#log_error_verbosity)**
+
+-   **[log\_executor\_stats](../../reference/guc/parameter_definitions.html#log_executor_stats)**
+
+-   **[log\_hostname](../../reference/guc/parameter_definitions.html#log_hostname)**
+
+-   **[log\_min\_duration\_statement](../../reference/guc/parameter_definitions.html#log_min_duration_statement)**
+
+-   **[log\_min\_error\_statement](../../reference/guc/parameter_definitions.html#log_min_error_statement)**
+
+-   **[log\_min\_messages](../../reference/guc/parameter_definitions.html#log_min_messages)**
+
+-   **[log\_parser\_stats](../../reference/guc/parameter_definitions.html#log_parser_stats)**
+
+-   **[log\_planner\_stats](../../reference/guc/parameter_definitions.html#log_planner_stats)**
+
+-   **[log\_rotation\_age](../../reference/guc/parameter_definitions.html#log_rotation_age)**
+
+-   **[log\_rotation\_size](../../reference/guc/parameter_definitions.html#log_rotation_size)**
+
+-   **[log\_statement](../../reference/guc/parameter_definitions.html#log_statement)**
+
+-   **[log\_statement\_stats](../../reference/guc/parameter_definitions.html#log_statement_stats)**
+
+-   **[log\_timezone](../../reference/guc/parameter_definitions.html#log_timezone)**
+
+-   **[log\_truncate\_on\_rotation](../../reference/guc/parameter_definitions.html#log_truncate_on_rotation)**
+
+-   **[maintenance\_work\_mem](../../reference/guc/parameter_definitions.html#maintenance_work_mem)**
+
+-   **[max\_appendonly\_tables](../../reference/guc/parameter_definitions.html#max_appendonly_tables)**
+
+-   **[max\_connections](../../reference/guc/parameter_definitions.html#max_connections)**
+
+-   **[max\_files\_per\_process](../../reference/guc/parameter_definitions.html#max_files_per_process)**
+
+-   **[max\_fsm\_pages](../../reference/guc/parameter_definitions.html#max_fsm_pages)**
+
+-   **[max\_fsm\_relations](../../reference/guc/parameter_definitions.html#max_fsm_relations)**
+
+-   **[max\_function\_args](../../reference/guc/parameter_definitions.html#max_function_args)**
+
+-   **[max\_identifier\_length](../../reference/guc/parameter_definitions.html#max_identifier_length)**
+
+-   **[max\_index\_keys](../../reference/guc/parameter_definitions.html#max_index_keys)**
+
+-   **[max\_locks\_per\_transaction](../../reference/guc/parameter_definitions.html#max_locks_per_transaction)**
+
+-   **[max\_prepared\_transactions](../../reference/guc/parameter_definitions.html#max_prepared_transactions)**
+
+-   **[max\_stack\_depth](../../reference/guc/parameter_definitions.html#max_stack_depth)**
+
+-   **[optimizer](../../reference/guc/parameter_definitions.html#optimizer)**
+
+-   **[optimizer\_analyze\_root\_partition](../../reference/guc/parameter_definitions.html#optimizer_analyze_root_partition)**
+
+-   **[optimizer\_minidump](../../reference/guc/parameter_definitions.html#optimizer_minidump)**
+
+-   **[optimizer\_parts\_to\_force\_sort\_on\_insert](../../reference/guc/parameter_definitions.html#optimizer_parts_to_force_sort_on_insert)**
+
+-   **[optimizer\_prefer\_scalar\_dqa\_multistage\_agg](../../reference/guc/parameter_definitions.html#optimizer_prefer_scalar_dqa_multistage_agg)**
+
+-   **[password\_encryption](../../reference/guc/parameter_definitions.html#password_encryption)**
+
+-   **[password\_hash\_algorithm](../../reference/guc/parameter_definitions.html#password_hash_algorithm)**
+
+-   **[pgstat\_track\_activity\_query\_size](../../reference/guc/parameter_definitions.html#pgstat_track_activity_query_size)**
+
+-   **[pljava\_classpath](../../reference/guc/parameter_definitions.html#pljava_classpath)**
+
+-   **[pljava\_release\_lingering\_savepoints](../../reference/guc/parameter_definitions.html#pljava_release_lingering_savepoints)**
+
+-   **[pljava\_statement\_cache\_size](../../reference/guc/parameter_definitions.html#pljava_statement_cache_size)**
+
+-   **[pljava\_vmoptions](../../reference/guc/parameter_definitions.html#pljava_vmoptions)**
+
+-   **[port](../../reference/guc/parameter_definitions.html#port)**
+
+-   **[pxf\_enable\_filter\_pushdown](../../reference/guc/parameter_definitions.html#pxf_enable_filter_pushdown)**
+
+-   **[pxf\_enable\_stat\_collection](../../reference/guc/parameter_definitions.html#pxf_enable_stat_collection)**
+
+-   **[pxf\_remote\_service\_login](../../reference/guc/parameter_definitions.html#pxf_remote_service_login)**
+
+-   **[pxf\_remote\_service\_secret](../../reference/guc/parameter_definitions.html#pxf_remote_service_secret)**
+  
+-   **[pxf\_service\_address](../../reference/guc/parameter_definitions.html#pxf_service_address)**
+
+-   **[pxf\_service\_port](../../reference/guc/parameter_definitions.html#pxf_service_port)**
+
+-   **[pxf\_stat\_max\_fragments](../../reference/guc/parameter_definitions.html#pxf_stat_max_fragments)**
+
+-   **[random\_page\_cost](../../reference/guc/parameter_definitions.html#random_page_cost)**
+
+-   **[regex\_flavor](../../reference/guc/parameter_definitions.html#regex_flavor)**
+
+-   **[runaway\_detector\_activation\_percent](../../reference/guc/parameter_definitions.html#runaway_detector_activation_percent)**
+
+-   **[search\_path](../../reference/guc/parameter_definitions.html#search_path)**
+
+-   **[seg\_max\_connections](../../reference/guc/parameter_definitions.html#seg_max_connections)**
+
+-   **[seq\_page\_cost](../../reference/guc/parameter_definitions.html#seq_page_cost)**
+
+-   **[server\_encoding](../../reference/guc/parameter_definitions.html#server_encoding)**
+
+-   **[server\_version](../../reference/guc/parameter_definitions.html#server_version)**
+
+-   **[server\_version\_num](../../reference/guc/parameter_definitions.html#server_version_num)**
+
+-   **[shared\_buffers](../../reference/guc/parameter_definitions.html#shared_buffers)**
+
+-   **[shared\_preload\_libraries](../../reference/guc/parameter_definitions.html#shared_preload_libraries)**
+
+-   **[ssl](../../reference/guc/parameter_definitions.html#ssl)**
+
+-   **[ssl\_ciphers](../../reference/guc/parameter_definitions.html#ssl_ciphers)**
+
+-   **[standard\_conforming\_strings](../../reference/guc/parameter_definitions.html#standard_conforming_strings)**
+
+-   **[statement\_timeout](../../reference/guc/parameter_definitions.html#statement_timeout)**
+
+-   **[superuser\_reserved\_connections](../../reference/guc/parameter_definitions.html#superuser_reserved_connections)**
+
+-   **[tcp\_keepalives\_count](../../reference/guc/parameter_definitions.html#tcp_keepalives_count)**
+
+-   **[tcp\_keepalives\_idle](../../reference/guc/parameter_definitions.html#tcp_keepalives_idle)**
+
+-   **[tcp\_keepalives\_interval](../../reference/guc/parameter_definitions.html#tcp_keepalives_interval)**
+
+-   **[temp\_buffers](../../reference/guc/parameter_definitions.html#temp_buffers)**
+
+-   **[TimeZone](../../reference/guc/parameter_definitions.html#TimeZone)**
+
+-   **[timezone\_abbreviations](../../reference/guc/parameter_definitions.html#timezone_abbreviations)**
+
+-   **[track\_activities](../../reference/guc/parameter_definitions.html#track_activities)**
+
+-   **[track\_counts](../../reference/guc/parameter_definitions.html#track_counts)**
+
+-   **[transaction\_isolation](../../reference/guc/parameter_definitions.html#transaction_isolation)**
+
+-   **[transaction\_read\_only](../../reference/guc/parameter_definitions.html#transaction_read_only)**
+
+-   **[transform\_null\_equals](../../reference/guc/parameter_definitions.html#transform_null_equals)**
+
+-   **[unix\_socket\_directory](../../reference/guc/parameter_definitions.html#unix_socket_directory)**
+
+-   **[unix\_socket\_group](../../reference/guc/parameter_definitions.html#unix_socket_group)**
+
+-   **[unix\_socket\_permissions](../../reference/guc/parameter_definitions.html#unix_socket_permissions)**
+
+-   **[update\_process\_title](../../reference/guc/parameter_definitions.html#update_process_title)**
+
+-   **[vacuum\_cost\_delay](../../reference/guc/parameter_definitions.html#vacuum_cost_delay)**
+
+-   **[vacuum\_cost\_limit](../../reference/guc/parameter_definitions.html#vacuum_cost_limit)**
+
+-   **[vacuum\_cost\_page\_dirty](../../reference/guc/parameter_definitions.html#vacuum_cost_page_dirty)**
+
+-   **[vacuum\_cost\_page\_miss](../../reference/guc/parameter_definitions.html#vacuum_cost_page_miss)**
+
+-   **[vacuum\_freeze\_min\_age](../../reference/guc/parameter_definitions.html#vacuum_freeze_min_age)**
+
+-   **[xid\_stop\_limit](../../reference/guc/parameter_definitions.html#xid_stop_limit)**
+
+
+
+## <a name="add_missing_from"></a>add\_missing\_from 
+
+Automatically adds missing table references to FROM clauses. Present for compatibility with releases of PostgreSQL prior to 8.1, where this behavior was allowed by default.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | off     | master, session, reload |
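+
+For example, a quick sketch of the legacy behavior (the table name `foo` is illustrative only, and the parameter is assumed to have been enabled for the session):
+
+```sql
+SET add_missing_from TO on;
+-- The planner silently adds "foo" to the FROM clause and issues a NOTICE;
+-- with the default setting (off), the same query raises an error instead.
+SELECT foo.* WHERE foo.id = 1;
+SET add_missing_from TO off;
+```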
+
+
+## <a name="application_name"></a>application\_name 
+
+Sets the application name for a client session. For example, if connecting via `psql`, this will be set to `psql`. Setting an application name allows it to be reported in log messages and statistics views.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| string      | unset   | master, session, reload |
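+
+For example, a client might label its session as follows (the application name is illustrative only):
+
+```sql
+SET application_name = 'nightly_etl';
+SHOW application_name;
+```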
+
+
+
+## <a name="array_nulls"></a>array\_nulls 
+
+This controls whether the array input parser recognizes unquoted NULL as specifying a null array element. By default, this is on, allowing array values containing null values to be entered.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
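+
+A short illustration of the difference:
+
+```sql
+SET array_nulls = on;
+SELECT '{apple,NULL,cherry}'::text[];   -- second element is a true NULL
+SET array_nulls = off;
+SELECT '{apple,NULL,cherry}'::text[];   -- second element is the string 'NULL'
+```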
+
+
+
+## <a name="authentication_timeout"></a>authentication\_timeout 
+
+Maximum time to complete client authentication. This prevents hung clients from occupying a connection indefinitely.
+
+| Value Range                                 | Default | Set Classifications    |
+|---------------------------------------------|---------|------------------------|
+| Any valid time expression (number and unit) | 1min    | local, system, restart |
+
+
+## <a name="backslash_quote"></a>backslash\_quote 
+
+This controls whether a quote mark can be represented by `\'` in a string literal. The preferred, SQL-standard way to represent a quote mark is by doubling it (`''`) but PostgreSQL has historically also accepted `\'`. However, use of `\'` creates security risks because in some client character set encodings, there are multibyte characters in which the last byte is numerically equivalent to ASCII `\`.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Value Range</th>
+<th>Default</th>
+<th>Set Classifications</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>on (allow <code class="ph codeph">\'</code> always)
+<p>off (reject always)</p>
+<p>safe_encoding (allow only if client encoding does not allow ASCII <code class="ph codeph">\</code> within a multibyte character)</p></td>
+<td>safe_encoding</td>
+<td>master, session, reload</td>
+</tr>
+</tbody>
+</table>
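+
+For example, the two ways of embedding a quote mark described above:
+
+```sql
+-- Preferred, SQL-standard form: double the quote mark.
+SELECT 'Customer''s name';
+-- Accepted only when backslash_quote (and the client encoding) permit it.
+SELECT 'Customer\'s name';
+```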
+
+
+## <a name="block_size"></a>block\_size 
+
+Reports the size of a disk block.
+
+| Value Range     | Default | Set Classifications |
+|-----------------|---------|---------------------|
+| number of bytes | 32768   | read only           |
+
+
+## <a name="bonjour_name"></a>bonjour\_name 
+
+Specifies the Bonjour broadcast name. If this parameter is set to the empty string (the default), the computer name is used. This parameter is ignored if the server was not compiled with Bonjour support.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| string      | unset   | master, system, restart |
+
+
+## <a name="check_function_bodies"></a>check\_function\_bodies 
+
+When set to off, disables validation of the function body string during `CREATE FUNCTION`. Disabling validation is occasionally useful to avoid problems such as forward references when restoring function definitions from a dump.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
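+
+A minimal sketch of the restore-time use case (the function and table names are hypothetical):
+
+```sql
+SET check_function_bodies = off;
+-- The body references a table that does not exist yet; validation is skipped
+-- at CREATE time and deferred until the function is first executed.
+CREATE FUNCTION pending_orders() RETURNS bigint AS
+  'SELECT count(*) FROM orders_loaded_later'
+LANGUAGE SQL;
+```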
+
+
+## <a name="client_encoding"></a>client\_encoding 
+
+Sets the client-side encoding (character set). The default is to use the same as the database encoding. See [Supported Character Sets](http://www.postgresql.org/docs/8.1/static/multibyte.html#MULTIBYTE-CHARSET-SUPPORTED) in the PostgreSQL documentation.
+
+| Value Range   | Default | Set Classifications     |
+|---------------|---------|-------------------------|
+| character set | UTF8    | master, session, reload |
+
+
+## <a name="client_min_messages"></a>client\_min\_messages 
+
+Controls which message levels are sent to the client. Each level includes all the levels that follow it. The later the level, the fewer messages are sent.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Value Range</th>
+<th>Default</th>
+<th>Set Classifications</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>DEBUG5
+<p>DEBUG4</p>
+<p>DEBUG3</p>
+<p>DEBUG2</p>
+<p>DEBUG1</p>
+<p>LOG</p>
+<p>NOTICE</p>
+<p>WARNING</p>
+<p>ERROR</p>
+<p>FATAL</p>
+<p>PANIC</p></td>
+<td>NOTICE</td>
+<td>master, session, reload</td>
+</tr>
+</tbody>
+</table>
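+
+For example, to change the verbosity for the current session:
+
+```sql
+SET client_min_messages = DEBUG1;   -- send detailed debug output to the client
+SET client_min_messages = WARNING;  -- suppress NOTICE-level messages
+```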
+
+## <a name="cpu_index_tuple_cost"></a>cpu\_index\_tuple\_cost 
+
+For the legacy query optimizer (planner), sets the estimate of the cost of processing each index row during an index scan. This is measured as a fraction of the cost of a sequential page fetch.
+
+| Value Range    | Default | Set Classifications     |
+|----------------|---------|-------------------------|
+| floating point | 0.005   | master, session, reload |
+
+
+
+## <a name="cpu_operator_cost"></a>cpu\_operator\_cost 
+
+For the legacy query optimizer (planner), sets the estimate of the cost of processing each operator in a WHERE clause. This is measured as a fraction of the cost of a sequential page fetch.
+
+| Value Range    | Default | Set Classifications     |
+|----------------|---------|-------------------------|
+| floating point | 0.0025  | master, session, reload |
+
+
+## <a name="cpu_tuple_cost"></a>cpu\_tuple\_cost 
+
+For the legacy query optimizer (planner), sets the estimate of the cost of processing each row during a query. This is measured as a fraction of the cost of a sequential page fetch.
+
+| Value Range    | Default | Set Classifications     |
+|----------------|---------|-------------------------|
+| floating point | 0.01    | master, session, reload |
+
+
+## <a name="cursor_tuple_fraction"></a>cursor\_tuple\_fraction 
+
+Tells the legacy query optimizer (planner) how many rows are expected to be fetched in a cursor query, so that it can optimize the query plan accordingly. The default of 1 means all rows will be fetched.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| integer     | 1       | master, session, reload |
+
+
+## <a name="custom_variable_classes"></a>custom\_variable\_classes 
+
+Specifies one or several class names to be used for custom variables. A custom variable is a variable not normally known to the server but used by some add-on modules. Such variables must have names consisting of a class name, a dot, and a variable name.
+
+| Value Range                         | Default | Set Classifications    |
+|-------------------------------------|---------|------------------------|
+| comma-separated list of class names | unset   | local, system, restart |
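+
+A sketch of how such a variable is used, assuming `custom_variable_classes = 'myapp'` has already been configured and the server restarted (the class and variable names are hypothetical):
+
+```sql
+SET myapp.batch_id = '42';
+SHOW myapp.batch_id;
+```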
+
+
+## <a name="DateStyle"></a>DateStyle 
+
+Sets the display format for date and time values, as well as the rules for interpreting ambiguous date input values. This variable contains two independent components: the output format specification and the input/output specification for year/month/day ordering.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Value Range</th>
+<th>Default</th>
+<th>Set Classifications</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>&lt;format&gt;, &lt;date style&gt;
+<p>where:</p>
+<p>&lt;format&gt; is ISO, Postgres, SQL, or German</p>
+<p>&lt;date style&gt; is DMY, MDY, or YMD</p></td>
+<td>ISO, MDY</td>
+<td>master, session, reload</td>
+</tr>
+</tbody>
+</table>
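+
+For example, switching the ordering for a session changes how ambiguous dates are read:
+
+```sql
+SET DateStyle = 'SQL, DMY';
+SELECT date '13/01/2017';    -- interpreted as 13 January 2017 under DMY ordering
+SET DateStyle = 'ISO, MDY';  -- restore the documented default
+```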
+
+## <a name="db_user_namespace"></a>db\_user\_namespace 
+
+This enables per-database user names. If on, you should create users as *username@dbname*. To create ordinary global users, simply append @ when specifying the user name in the client.
+
+| Value Range | Default | Set Classifications    |
+|-------------|---------|------------------------|
+| Boolean     | off     | local, system, restart |
+
+
+## <a name="deadlock_timeout"></a>deadlock\_timeout 
+
+The time, in milliseconds, to wait on a lock before checking to see if there is a deadlock condition. On a heavily loaded server you might want to raise this value. Ideally the setting should exceed your typical transaction time, so as to improve the odds that a lock will be released before the waiter decides to check for deadlock.
+
+| Value Range            | Default | Set Classifications    |
+|------------------------|---------|------------------------|
+| integer (milliseconds) | 1000    | local, system, restart |
+
+
+## <a name="debug_assertions"></a>debug\_assertions 
+
+Turns on various assertion checks.
+
+| Value Range | Default | Set Classifications    |
+|-------------|---------|------------------------|
+| Boolean     | off     | local, system, restart |
+
+
+## <a name="debug_pretty_print"></a>debug\_pretty\_print 
+
+Indents debug output to produce a more readable but much longer output format. *client\_min\_messages* or *log\_min\_messages* must be DEBUG1 or lower.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | off     | master, session, reload |
+
+
+## <a name="debug_print_parse"></a>debug\_print\_parse 
+
+
+For each executed query, prints the resulting parse tree. *client\_min\_messages* or *log\_min\_messages* must be DEBUG1 or lower.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | off     | master, session, reload |
+
+
+## <a name="debug_print_plan"></a>debug\_print\_plan 
+
+For each executed query, prints the HAWQ parallel query execution plan. *client\_min\_messages* or *log\_min\_messages* must be DEBUG1 or lower.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | off     | master, session, reload |
+
+## <a name="debug_print_prelim_plan"></a>debug\_print\_prelim\_plan 
+
+For each executed query, prints the preliminary query plan. *client\_min\_messages* or *log\_min\_messages* must be DEBUG1 or lower.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | off     | master, session, reload |
+
+
+## <a name="debug_print_rewritten"></a>debug\_print\_rewritten 
+
+For each executed query, prints the query rewriter output. *client\_min\_messages* or *log\_min\_messages* must be DEBUG1 or lower.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | off     | master, session, reload |
+
+
+## <a name="debug_print_slice_table"></a>debug\_print\_slice\_table 
+
+For each executed query, prints the HAWQ query slice plan. *client\_min\_messages* or *log\_min\_messages* must be DEBUG1 or lower.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | off     | master, session, reload |
+
+## <a name="topic_fqj_4fd_kv"></a>default\_hash\_table\_bucket\_number 
+
+The default number of hash buckets to use when executing a query statement on a hash table. Due to dynamic allocation, when the query is actually executed, the number of virtual segments may differ from this number depending on the query's needs. The total number of segments should never exceed the maximum set in `hawq_rm_nvseg_perquery_limit`.
+
+When expanding the cluster, you should adjust this number to reflect the number of nodes in the new cluster times the number of virtual segments per node. See [Expanding a Cluster](../../admin/ClusterExpansion.html) and [Creating and Managing Tables](../../ddl/ddl-table.html) for more details on modifying this parameter.
+
+| Value Range    | Default         | Set Classifications     |
+|----------------|-----------------|-------------------------|
+| integer &gt; 0 | 6\*SegmentCount | master, session, reload |
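+
+For example, a session-level override before creating a hash-distributed table (the table definition and the value 24 are illustrative only):
+
+```sql
+SET default_hash_table_bucket_number = 24;
+CREATE TABLE sales (id int, amount numeric) DISTRIBUTED BY (id);
+```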
+
+
+## <a name="default_statement_mem"></a>default\_statement\_mem 
+
+The default amount of memory, in KB, to allocate to query statements that do not require any segment resources and are executed only on the master host. This type of query execution is rare in HAWQ. 
+
+The default value of this configuration parameter is acceptable for most deployments. Modify this value only if you are using an advanced configuration.
+ 
+
+| Value Range    | Default         | Set Classifications     |
+|----------------|-----------------|-------------------------|
+| integer &gt; 1000 | 128000 | master, session, reload |
+
+## <a name="default_statistics_target"></a>default\_statistics\_target 
+
+Sets the default statistics target for table columns that have not had a column-specific target set via `ALTER TABLE SET STATISTICS`. Larger values increase the time needed to do `ANALYZE`, but may improve the quality of the legacy query optimizer (planner) estimates.
+
+| Value Range    | Default | Set Classifications     |
+|----------------|---------|-------------------------|
+| integer &gt; 0 | 25      | master, session, reload |
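+
+For example (the table and column names are hypothetical):
+
+```sql
+SET default_statistics_target = 100;   -- larger target: slower ANALYZE, better estimates
+ANALYZE my_table;
+-- A column-specific target set with ALTER TABLE still takes precedence:
+ALTER TABLE my_table ALTER COLUMN id SET STATISTICS 200;
+```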
+
+
+## <a name="default_tablespace"></a>default\_tablespace 
+
+The default tablespace in which to create objects (tables and indexes) when a `CREATE` command does not explicitly specify a tablespace.
+
+| Value Range          | Default | Set Classifications     |
+|----------------------|---------|-------------------------|
+| name of a tablespace | unset   | master, session, reload |
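+
+For example (the tablespace name is hypothetical and must already exist):
+
+```sql
+SET default_tablespace = 'fast_ts';
+CREATE TABLE t_metrics (id int);   -- created in fast_ts without a TABLESPACE clause
+```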
+
+
+## <a name="default_transaction_isolation"></a>default\_transaction\_isolation 
+
+Controls the default isolation level of each new transaction.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Value Range</th>
+<th>Default</th>
+<th>Set Classifications</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>read committed
+<p>read uncommitted</p>
+<p>repeatable read</p>
+<p>serializable</p></td>
+<td>read committed</td>
+<td>master, session, reload</td>
+</tr>
+</tbody>
+</table>
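+
+For example, a minimal sketch of changing the session default:
+
+``` sql
+SET default_transaction_isolation = 'serializable';
+SHOW default_transaction_isolation;
+```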
+
+## <a name="default_transaction_read_only"></a>default\_transaction\_read\_only 
+
+Controls the default read-only status of each new transaction. A read-only SQL transaction cannot alter non-temporary tables.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | off     | master, session, reload |
+
+
+
+
+## <a name="dfs_url"></a>dfs\_url 
+
+See [hawq\_dfs\_url](#hawq_dfs_url).
+
+
+## <a name="dynamic_library_path"></a>dynamic\_library\_path 
+
+If a dynamically loadable module needs to be opened and the file name specified in the `CREATE FUNCTION` or `LOAD` command does not have a directory component (i.e. the name does not contain a slash), the system will search this path for the required file. The compiled-in PostgreSQL package library directory is substituted for $libdir. This is where the modules provided by the standard PostgreSQL distribution are installed.
+
+| Value Range                                            | Default | Set Classifications     |
+|--------------------------------------------------------|---------|-------------------------|
+| a list of absolute directory paths separated by colons | $libdir | master, session, reload |
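+
+For example, a minimal sketch (the directory `/usr/local/hawq_udf/lib` is a hypothetical location for custom modules):
+
+``` sql
+-- Search a custom directory first, then the compiled-in library directory.
+SET dynamic_library_path = '/usr/local/hawq_udf/lib:$libdir';
+SHOW dynamic_library_path;
+```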
+
+## <a name="effective_cache_size"></a>effective\_cache\_size 
+
+Sets the assumption about the effective size of the disk cache that is available to a single query for the legacy query optimizer (planner). This is factored into estimates of the cost of using an index; a higher value makes it more likely index scans will be used, a lower value makes it more likely sequential scans will be used. This parameter has no effect on the size of shared memory allocated by a HAWQ server instance, nor does it reserve kernel disk cache; it is used only for estimation purposes.
+
+| Value Range    | Default | Set Classifications     |
+|----------------|---------|-------------------------|
+| floating point | 512MB   | master, session, reload |
+
+
+## <a name="enable_bitmapscan"></a>enable\_bitmapscan 
+
+Enables or disables the use of bitmap-scan plan types by the legacy query optimizer (planner). Note that this is different than a Bitmap Index Scan. A Bitmap Scan means that indexes will be dynamically converted to bitmaps in memory when appropriate, giving faster index performance on complex queries against very large tables. It is used when there are multiple predicates on different indexed columns. Each bitmap per column can be compared to create a final list of selected tuples.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+
+## <a name="enable_groupagg"></a>enable\_groupagg 
+
+Enables or disables the use of group aggregation plan types by the legacy query optimizer (planner).
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="enable_hashagg"></a>enable\_hashagg 
+
+Enables or disables the use of hash aggregation plan types by the legacy query optimizer (planner).
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+
+
+## <a name="enable_hashjoin"></a>enable\_hashjoin 
+
+Enables or disables the use of hash-join plan types by the legacy query optimizer (planner).
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="enable_indexscan"></a>enable\_indexscan 
+
+Enables or disables the use of index-scan plan types by the legacy query optimizer (planner).
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="enable_mergejoin"></a>enable\_mergejoin 
+
+Enables or disables the use of merge-join plan types by the legacy query optimizer (planner). Merge join is based on the idea of sorting the left- and right-hand tables into order and then scanning them in parallel. So, both data types must be capable of being fully ordered, and the join operator must be one that can only succeed for pairs of values that fall at the 'same place' in the sort order. In practice this means that the join operator must behave like equality.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | off     | master, session, reload |
+
+## <a name="enable_nestloop"></a>enable\_nestloop 
+
+Enables or disables the use of nested-loop join plans by the legacy query optimizer (planner). It's not possible to suppress nested-loop joins entirely, but turning this variable off discourages the legacy optimizer from using one if there are other methods available.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | off     | master, session, reload |
+
+## <a name="enable_secure_filesystem"></a>enable\_secure\_filesystem 
+
+Enables or disables access to a secure HDFS file system.  To enable Kerberos security for HDFS, set this configuration parameter to `on` before starting HAWQ.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | off     | master, system, superuser |
+
+
+## <a name="enable_seqscan"></a>enable\_seqscan
+
+Enables or disables the use of sequential scan plan types by the legacy query optimizer (planner). It's not possible to suppress sequential scans entirely, but turning this variable off discourages the legacy optimizer from using one if there are other methods available.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
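+
+For example, a minimal sketch of steering the legacy planner away from sequential scans for one session (the `sales` table and its index are hypothetical):
+
+``` sql
+SET enable_seqscan = off;
+-- Compare the plan with and without the setting; the planner can still
+-- choose a sequential scan if no other access method is available.
+EXPLAIN SELECT * FROM sales WHERE id = 42;
+SET enable_seqscan = on;
+```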
+
+## <a name="enable_sort"></a>enable\_sort
+
+Enables or disables the use of explicit sort steps by the legacy query optimizer (planner). It's not possible to suppress explicit sorts entirely, but turning this variable off discourages the legacy optimizer from using one if there are other methods available.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="enable_tidscan"></a>enable\_tidscan
+
+Enables or disables the use of tuple identifier (TID) scan plan types by the legacy query optimizer (planner).
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="escape_string_warning"></a>escape\_string\_warning
+
+When on, a warning is issued if a backslash (\\) appears in an ordinary string literal ('...' syntax). Escape string syntax (E'...') should be used for escapes, because in future versions, ordinary strings will have the SQL standard-conforming behavior of treating backslashes literally.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
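+For example, a minimal sketch contrasting the two literal syntaxes:
+
+``` sql
+SET escape_string_warning = on;
+SELECT 'one\ttwo';    -- ordinary literal: issues a warning about the backslash
+SELECT E'one\ttwo';   -- escape string syntax: no warning, \t is interpreted as a tab
+```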
+
+## <a name="explain_memory_verbosity"></a>explain\_memory\_verbosity
+Controls the granularity of memory information displayed in `EXPLAIN ANALYZE` output.  `explain_memory_verbosity` takes three values:
+
+* SUPPRESS - generate only total memory information for the whole query
+* SUMMARY - generate basic memory information for each executor node
+* DETAIL - generate detailed memory information for each executor node
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="34%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Value Range</th>
+<th>Default</th>
+<th>Set Classifications</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>SUPPRESS
+<p>SUMMARY</p>
+<p>DETAIL</p></td>
+<td>SUPPRESS</td>
+<td>master</td>
+</tr>
+</tbody>
+</table>
+
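+For example, a minimal sketch, assuming the parameter can be changed with `SET` in a session (the query is an arbitrary placeholder):
+
+``` sql
+SET explain_memory_verbosity = 'summary';
+-- EXPLAIN ANALYZE output now includes per-executor-node memory information.
+EXPLAIN ANALYZE SELECT count(*) FROM pg_class;
+```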
+
+## <a name="explain_pretty_print"></a>explain\_pretty\_print
+
+Determines whether EXPLAIN VERBOSE uses the indented or non-indented format for displaying detailed query-tree dumps.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="extra_float_digits"></a>extra\_float\_digits
+
+Adjusts the number of digits displayed for floating-point values, including float4, float8, and geometric data types. The parameter value is added to the standard number of digits. The value can be set as high as 2, to include partially-significant digits; this is especially useful for dumping float data that needs to be restored exactly. Or it can be set negative to suppress unwanted digits.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| integer     | 0       | master, session, reload |
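+
+For example, a minimal sketch:
+
+``` sql
+SET extra_float_digits = 2;
+-- Extra digits appear in float output, which helps when dumping data that
+-- must be restored exactly.
+SELECT pi();
+SET extra_float_digits = 0;
+```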
+
+## <a name="from_collapse_limit"></a>from\_collapse\_limit
+
+The legacy query optimizer (planner) will merge sub-queries into upper queries if the resulting FROM list would have no more than this many items. Smaller values reduce planning time but may yield inferior query plans.
+
+| Value Range     | Default | Set Classifications     |
+|-----------------|---------|-------------------------|
+| integer (1-*n*) | 20      | master, session, reload |
+
+## <a name="gp_adjust_selectivity_for_outerjoins"></a>gp\_adjust\_selectivity\_for\_outerjoins
+
+Enables the selectivity of NULL tests over outer joins.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="gp_analyze_relative_error"></a>gp\_analyze\_relative\_error
+
+Sets the estimated acceptable error in the cardinality of the table; a value of 0.5 is equivalent to an acceptable error of 50% (this is the default value used in PostgreSQL). If the statistics collected during `ANALYZE` are not producing good estimates of cardinality for a particular table attribute, decreasing the relative error fraction (accepting less error) tells the system to sample more rows.
+
+| Value Range             | Default | Set Classifications     |
+|-------------------------|---------|-------------------------|
+| floating point &lt; 1.0 | 0.25    | master, session, reload |
+
+## <a name="gp_autostats_mode"></a>gp\_autostats\_mode
+
+Specifies the mode for triggering automatic statistics collection with `ANALYZE`. The `on_no_stats` option triggers statistics collection for `CREATE TABLE AS SELECT`, `INSERT`, or `COPY` operations on any table that has no existing statistics.
+
+**Warning:** Depending on the specific nature of your database operations, automatic statistics collection can have a negative performance impact. Carefully evaluate whether the default value of `on_no_stats` is appropriate for your system.
+
+The `on_change` option triggers statistics collection only when the number of rows affected exceeds the threshold defined by `gp_autostats_on_change_threshold`. Operations that can trigger automatic statistics collection with `on_change` are:
+
+-   `CREATE TABLE AS SELECT`
+-   `INSERT`
+-   `COPY`
+
+Default is `on_no_stats`.
+
+**Note:** For partitioned tables, automatic statistics collection is not triggered if data is inserted from the top-level parent table of a partitioned table.
+Automatic statistics collection is triggered if data is inserted directly in a leaf table (where the data is stored) of the partitioned table. Statistics are collected only on the leaf table.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Value Range</th>
+<th>Default</th>
+<th>Set Classifications</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>none
+<p>on_change</p>
+<p>on_no_stats</p></td>
+<td>on_no_stats</td>
+<td>master, session, reload</td>
+</tr>
+</tbody>
+</table>
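+
+For example, a minimal sketch of switching to threshold-based collection for a session (the 100000-row threshold is an arbitrary assumption):
+
+``` sql
+SET gp_autostats_mode = 'on_change';
+-- Collect statistics automatically only when an operation affects more than 100000 rows.
+SET gp_autostats_on_change_threshold = 100000;
+```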
+
+## <a name="topic_imj_zhf_gw"></a>gp\_autostats\_on\_change\_threshold
+
+Specifies the threshold for automatic statistics collection when `gp_autostats_mode` is set to `on_change`. When a triggering table operation affects a number of rows exceeding this threshold, `ANALYZE` is run and statistics are collected for the table.
+
+| Value Range | Default    | Set Classifications     |
+|-------------|------------|-------------------------|
+| integer     | 2147483647 | master, session, reload |
+
+## <a name="gp_backup_directIO"></a>gp\_backup\_directIO
+
+Direct I/O allows HAWQ to bypass the buffering of memory within the file system cache for backup. When Direct I/O is used for a file, data is transferred directly from the disk to the application buffer, without the use of the file buffer cache.
+
+Direct I/O is supported only on Red Hat Enterprise Linux, CentOS, and SUSE.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| on, off     | off     | master, session, reload |
+
+## <a name="gp_backup_directIO_read_chunk_mb"></a>gp\_backup\_directIO\_read\_chunk\_mb
+
+Sets the chunk size in MB when Direct I/O is enabled with [gp\_backup\_directIO](#gp_backup_directIO). The default chunk size is 20MB.
+
+The default value is the optimal setting. Decreasing it will increase the backup time and increasing it will result in little change to backup time.
+
+| Value Range | Default | Set Classifications    |
+|-------------|---------|------------------------|
+| 1-200       | 20 MB   | local, session, reload |
+
+## <a name="gp_cached_segworkers_threshold"></a>gp\_cached\_segworkers\_threshold
+
+When a user starts a session with HAWQ and issues a query, the system creates groups or 'gangs' of worker processes on each segment to do the work. After the work is done, the segment worker processes are destroyed except for a cached number which is set by this parameter. A lower setting conserves system resources on the segment hosts, but a higher setting may improve performance for power-users that want to issue many complex queries in a row.
+
+| Value Range    | Default | Set Classifications     |
+|----------------|---------|-------------------------|
+| integer &gt; 0 | 5       | master, session, reload |
+
+## <a name="gp_command_count"></a>gp\_command\_count
+
+Shows how many commands the master has received from the client. Note that a single SQL command might actually involve more than one command internally, so the counter may increment by more than one for a single query. This counter also is shared by all of the segment processes working on the command.
+
+| Value Range    | Default | Set Classifications |
+|----------------|---------|---------------------|
+| integer &gt; 0 | 1       | read only           |
+
+## <a name="gp_connections_per_thread"></a>gp\_connections\_per\_thread
+
+A value larger than or equal to the number of segments means that each slice in a query plan will get its own thread when dispatching to the segments. A value of 0 indicates that the dispatcher should use a single thread when dispatching all query plan slices to a segment. Lower values will use more threads, which utilizes more resources on the master. Typically, the default does not need to be changed unless there is a known throughput performance problem.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| integer     | 64      | master, session, reload |
+
+## <a name="gp_debug_linger"></a>gp\_debug\_linger
+
+Number of seconds for a HAWQ process to linger after a fatal internal error.
+
+| Value Range                                 | Default | Set Classifications     |
+|---------------------------------------------|---------|-------------------------|
+| Any valid time expression (number and unit) | 0       | master, session, reload |
+
+## <a name="gp_dynamic_partition_pruning"></a>gp\_dynamic\_partition\_pruning
+
+Enables plans that can dynamically eliminate the scanning of partitions.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| on/off      | on      | master, session, reload |
+
+## <a name="gp_enable_agg_distinct"></a>gp\_enable\_agg\_distinct
+
+Enables or disables two-phase aggregation to compute a single distinct-qualified aggregate. This applies only to subqueries that include a single distinct-qualified aggregate function.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="gp_enable_agg_distinct_pruning"></a>gp\_enable\_agg\_distinct\_pruning
+
+Enables or disables three-phase aggregation and join to compute distinct-qualified aggregates. This applies only to subqueries that include one or more distinct-qualified aggregate functions.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="gp_enable_direct_dispatch"></a>gp\_enable\_direct\_dispatch
+
+Enables or disables the dispatching of targeted query plans for queries that access data on a single segment. When on, queries that target rows on a single segment will only have their query plan dispatched to that segment (rather than to all segments). This significantly reduces the response time of qualifying queries as there is no interconnect setup involved. Direct dispatch does require more CPU utilization on the master.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, system, restart |
+
+## <a name="gp_enable_fallback_plan"></a>gp\_enable\_fallback\_plan
+
+Allows use of disabled plan types when a query would not be feasible without them.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="gp_enable_fast_sri"></a>gp\_enable\_fast\_sri
+
+When set to `on`, the legacy query optimizer (planner) plans single row inserts so that they are sent directly to the correct segment instance (no motion operation required). This significantly improves performance of single-row-insert statements.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="gp_enable_groupext_distinct_gather"></a>gp\_enable\_groupext\_distinct\_gather
+
+Enables or disables gathering data to a single node to compute distinct-qualified aggregates on grouping extension queries. When this parameter and `gp_enable_groupext_distinct_pruning` are both enabled, the legacy query optimizer (planner) uses the cheaper plan.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="gp_enable_groupext_distinct_pruning"></a>gp\_enable\_groupext\_distinct\_pruning
+
+Enables or disables three-phase aggregation and join to compute distinct-qualified aggregates on grouping extension queries. Usually, enabling this parameter generates a cheaper query plan that the legacy query optimizer (planner) will use in preference to the existing plan.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="gp_enable_multiphase_agg"></a>gp\_enable\_multiphase\_agg
+
+Enables or disables the use of two- or three-stage parallel aggregation plans by the legacy query optimizer (planner). This approach applies to any subquery with aggregation. If `gp_enable_multiphase_agg` is off, then `gp_enable_agg_distinct` and `gp_enable_agg_distinct_pruning` are disabled.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="gp_enable_predicate_propagation"></a>gp\_enable\_predicate\_propagation
+
+When enabled, the legacy query optimizer (planner) applies query predicates to both table expressions in cases where the tables are joined on their distribution key column(s). Filtering both tables prior to doing the join (when possible) is more efficient.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="gp_enable_preunique"></a>gp\_enable\_preunique
+
+Enables two-phase duplicate removal for `SELECT DISTINCT` queries (not `SELECT COUNT(DISTINCT)`). When enabled, it adds an extra `SORT DISTINCT` set of plan nodes before motioning. In cases where the distinct operation greatly reduces the number of rows, this extra `SORT DISTINCT` is much cheaper than the cost of sending the rows across the Interconnect.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="gp_enable_sequential_window_plans"></a>gp\_enable\_sequential\_window\_plans
+
+If on, enables non-parallel (sequential) query plans for queries containing window function calls. If off, evaluates compatible window functions in parallel and rejoins the results. This is an experimental parameter.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="gp_enable_sort_distinct"></a>gp\_enable\_sort\_distinct
+
+Enable duplicates to be removed while sorting.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="gp_enable_sort_limit"></a>gp\_enable\_sort\_limit
+
+Enable `LIMIT` operation to be performed while sorting. Sorts more efficiently when the plan requires the first *limit\_number* of rows at most.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="gp_external_enable_exec"></a>gp\_external\_enable\_exec
+
+Enables or disables the use of external tables that execute OS commands or scripts on the segment hosts (`CREATE EXTERNAL TABLE EXECUTE` syntax).
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, system, restart |
+
+## <a name="gp_external_grant_privileges"></a>gp\_external\_grant\_privileges
+
+Enables or disables the ability of non-superusers to issue a `CREATE EXTERNAL [WEB] TABLE` command in cases where the `LOCATION` clause specifies `http` or `gpfdist`. The ability to create an external table can be granted to a role using `CREATE ROLE` or `ALTER ROLE`.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | off     | master, system, restart |
+
+## <a name="gp_external_max_segs"></a>gp\_external\_max\_segs
+
+Sets the number of segments that will scan external table data during an external table operation, the purpose being not to overload the system with scanning data and take away resources from other concurrent operations. This only applies to external tables that use the `gpfdist://` protocol to access external table data.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| integer     | 64      | master, session, reload |
+
+## <a name="gp_filerep_tcp_keepalives_count"></a>gp\_filerep\_tcp\_keepalives\_count
+
+How many keepalives may be lost before the connection is considered dead. A value of 0 uses the system default. If TCP\_KEEPCNT is not supported, this parameter must be 0.
+
+| Value Range               | Default | Set Classifications    |
+|---------------------------|---------|------------------------|
+| number of lost keepalives | 2       | local, system, restart |
+
+## <a name="gp_filerep_tcp_keepalives_idle"></a>gp\_filerep\_tcp\_keepalives\_idle
+
+Number of seconds between sending keepalives on an otherwise idle connection. A value of 0 uses the system default. If TCP\_KEEPIDLE is not supported, this parameter must be 0.
+
+| Value Range       | Default | Set Classifications    |
+|-------------------|---------|------------------------|
+| number of seconds | 1 min   | local, system, restart |
+
+## <a name="gp_filerep_tcp_keepalives_interval"></a>gp\_filerep\_tcp\_keepalives\_interval
+
+How many seconds to wait for a response to a keepalive before retransmitting. A value of 0 uses the system default. If TCP\_KEEPINTVL is not supported, this parameter must be 0.
+
+| Value Range       | Default | Set Classifications    |
+|-------------------|---------|------------------------|
+| number of seconds | 30 sec  | local, system, restart |
+
+## <a name="gp_hashjoin_tuples_per_bucket"></a>gp\_hashjoin\_tuples\_per\_bucket
+
+Sets the target density of the hash table used by HashJoin operations. A smaller value will tend to produce larger hash tables, which can increase join performance.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| integer     | 5       | master, session, reload |
+
+## <a name="gp_idf_deduplicate"></a>gp\_idf\_deduplicate
+
+Changes the strategy used to compute and process MEDIAN and PERCENTILE\_DISC.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Value Range</th>
+<th>Default</th>
+<th>Set Classifications</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>auto
+<p>none</p>
+<p>force</p></td>
+<td>auto</td>
+<td>master, session, reload</td>
+</tr>
+</tbody>
+</table>
+
+## <a name="gp_interconnect_fc_method"></a>gp\_interconnect\_fc\_method
+
+Specifies the flow control method used for UDP interconnect when the value of [gp\_interconnect\_type](#gp_interconnect_type) is UDPIFC.
+
+For capacity based flow control, senders do not send packets when receivers do not have the capacity.
+
+Loss based flow control is based on capacity based flow control, and also tunes the sending speed according to packet losses.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Value Range</th>
+<th>Default</th>
+<th>Set Classifications</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>CAPACITY
+<p>LOSS</p></td>
+<td>LOSS</td>
+<td>master, session, reload</td>
+</tr>
+</tbody>
+</table>
+
+## <a name="gp_interconnect_hash_multiplier"></a>gp\_interconnect\_hash\_multiplier
+
+Sets the size of the hash table used by the UDP interconnect to track connections. This number is multiplied by the number of segments to determine the number of buckets in the hash table. Increasing the value may increase interconnect performance for complex multi-slice queries (while consuming slightly more memory on the segment hosts).
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| 2-25        | 2       | master, session, reload |
+
+## <a name="gp_interconnect_queue_depth"></a>gp\_interconnect\_queue\_depth
+
+Sets the amount of data per-peer to be queued by the UDP interconnect on receivers (when data is received but no space is available to receive it the data will be dropped, and the transmitter will need to resend it). Increasing the depth from its default value will cause the system to use more memory; but may increase performance. It is reasonable for this to be set between 1 and 10. Queries with data skew potentially perform better when this is increased. Increasing this may radically increase the amount of memory used by the system.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| 1-2048      | 4       | master, session, reload |
+
+## <a name="gp_interconnect_setup_timeout"></a>gp\_interconnect\_setup\_timeout
+
+When the interconnect type is UDP, the time to wait for the Interconnect to complete setup before it times out.
+
+This parameter is used only when [gp\_interconnect\_type](#gp_interconnect_type) is set to UDP.
+
+| Value Range                                 | Default | Set Classifications     |
+|---------------------------------------------|---------|-------------------------|
+| Any valid time expression (number and unit) | 2 hours | master, session, reload |
+
+## <a name="gp_interconnect_snd_queue_depth"></a>gp\_interconnect\_snd\_queue\_depth
+
+Sets the amount of data per-peer to be queued by the UDP interconnect on senders. Increasing the depth from its default value will cause the system to use more memory; but may increase performance. Reasonable values for this parameter are between 1 and 4. Increasing the value might radically increase the amount of memory used by the system.
+
+This parameter is used only when [gp\_interconnect\_type](#gp_interconnect_type) is set to UDPIFC.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| 1 - 4096    | 2       | master, session, reload |
+
+## <a name="gp_interconnect_type"></a>gp\_interconnect\_type
+
+Sets the networking protocol used for Interconnect traffic. With the TCP protocol, HAWQ has an upper limit of 1000 segment instances - less than that if the query workload involves complex, multi-slice queries.
+
+UDP allows for greater interconnect scalability. Note that the HAWQ software does the additional packet verification and checking not performed by UDP, so reliability and performance is equivalent to TCP.
+
+UDPIFC specifies using UDP with flow control for interconnect traffic. Specify the interconnect flow control method with [gp\_interconnect\_fc\_method](#gp_interconnect_fc_method).
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Value Range</th>
+<th>Default</th>
+<th>Set Classifications</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>TCP
+<p>UDP</p>
+<p>UDPIFC</p></td>
+<td>UDPIFC</td>
+<td>local, system, restart</td>
+</tr>
+</tbody>
+</table>
+
+## <a name="gp_log_format"></a>gp\_log\_format
+
+Specifies the format of the server log files. If using *hawq\_toolkit* administrative schema, the log files must be in CSV format.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Value Range</th>
+<th>Default</th>
+<th>Set Classifications</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>csv
+<p>text</p></td>
+<td>csv</td>
+<td>local, system, restart</td>
+</tr>
+</tbody>
+</table>
+
+## <a name="gp_max_csv_line_length"></a>gp\_max\_csv\_line\_length
+
+The maximum length of a line in a CSV formatted file that will be imported into the system. The default is 1MB (1048576 bytes). Maximum allowed is 4MB (4194184 bytes). The default may need to be increased if using the *hawq\_toolkit* administrative schema to read HAWQ log files.
+
+| Value Range     | Default | Set Classifications    |
+|-----------------|---------|------------------------|
+| number of bytes | 1048576 | local, system, restart |
+
+## <a name="gp_max_databases"></a>gp\_max\_databases
+
+The maximum number of databases allowed in a HAWQ system.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| integer     | 16      | master, system, restart |
+
+## <a name="gp_max_filespaces"></a>gp\_max\_filespaces
+
+The maximum number of filespaces allowed in a HAWQ system.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| integer     | 8       | master, system, restart |
+
+## <a name="gp_max_packet_size"></a>gp\_max\_packet\_size
+
+Sets the size (in bytes) of messages sent by the UDP interconnect, and sets the tuple-serialization chunk size for both the UDP and TCP interconnect.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| 512-65536   | 8192    | master, system, restart |
+
+## <a name="gp_max_plan_size"></a>gp\_max\_plan\_size
+
+Specifies the total maximum uncompressed size of a query execution plan multiplied by the number of Motion operators (slices) in the plan. If the size of the query plan exceeds the value, the query is cancelled and an error is returned. A value of 0 means that the size of the plan is not monitored.
+
+You can specify a value in KB, MB, or GB. The default unit is KB. For example, a value of 200 is 200KB. A value of 1GB is the same as 1024MB or 1048576KB.
+
+| Value Range | Default | Set Classifications        |
+|-------------|---------|----------------------------|
+| integer     | 0       | master, superuser, session |
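+
+For example, a minimal sketch, run as a superuser per the set classification (the 200MB limit is arbitrary):
+
+``` sql
+-- Cancel any query whose total plan size exceeds 200MB.
+SET gp_max_plan_size = '200MB';
+SHOW gp_max_plan_size;
+```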
+
+## <a name="gp_max_tablespaces"></a>gp\_max\_tablespaces
+
+The maximum number of tablespaces allowed in a HAWQ system.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| integer     | 16      | master, system, restart |
+
+## <a name="gp_motion_cost_per_row"></a>gp\_motion\_cost\_per\_row
+
+Sets the legacy query optimizer (planner) cost estimate for a Motion operator to transfer a row from one segment to another, measured as a fraction of the cost of a sequential page fetch. If 0, then the value used is two times the value of *cpu\_tuple\_cost*.
+
+| Value Range    | Default | Set Classifications     |
+|----------------|---------|-------------------------|
+| floating point | 0       | master, session, reload |
+
+## <a name="gp_reject_percent_threshold"></a>gp\_reject\_percent\_threshold
+
+For single row error handling on COPY and external table SELECTs, sets the number of rows processed before SEGMENT REJECT LIMIT *n* PERCENT starts calculating.
+
+| Value Range     | Default | Set Classifications     |
+|-----------------|---------|-------------------------|
+| integer (1-*n*) | 300     | master, session, reload |
+
+## <a name="gp_reraise_signal"></a>gp\_reraise\_signal
+
+If enabled, will attempt to dump core if a fatal server error occurs.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="gp_role"></a>gp\_role
+
+The role of this server process: set to *dispatch* for the master and *execute* for a segment.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Value Range</th>
+<th>Default</th>
+<th>Set Classifications</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>dispatch
+<p>execute</p>
+<p>utility</p></td>
+<td> </td>
+<td>read only</td>
+</tr>
+</tbody>
+</table>
+
+## <a name="gp_safefswritesize"></a>gp\_safefswritesize
+
+Specifies a minimum size for safe write operations to append-only tables in a non-mature file system. When a number of bytes greater than zero is specified, the append-only writer adds padding data up to that number in order to prevent data corruption due to file system errors. Each non-mature file system has a known safe write size that must be specified here when using HAWQ with that type of file system. This is commonly set to a multiple of the extent size of the file system; for example, Linux ext3 is 4096 bytes, so a value of 32768 is commonly used.
+
+| Value Range | Default | Set Classifications    |
+|-------------|---------|------------------------|
+| integer     | 0       | local, system, restart |
+
+## <a name="gp_segment_connect_timeout"></a>gp\_segment\_connect\_timeout
+
+Time that the HAWQ interconnect will try to connect to a segment instance over the network before timing out. Controls the network connection timeout between master and segment replication processes.
+
+| Value Range                                 | Default | Set Classifications   |
+|---------------------------------------------|---------|-----------------------|
+| Any valid time expression (number and unit) | 10min   | local, system, reload |
+
+## <a name="gp_segments_for_planner"></a>gp\_segments\_for\_planner
+
+Sets the number of segment instances for the legacy query optimizer (planner) to assume in its cost and size estimates. If 0, then the value used is the actual number of segments. This variable affects the legacy optimizer's estimates of the number of rows handled by each sending and receiving process in Motion operators.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| integer     | 0       | master, session, reload |
+
+## <a name="gp_session_id"></a>gp\_session\_id
+
+A system assigned ID number for a client session. Starts counting from 1 when the master instance is first started.
+
+| Value Range     | Default | Set Classifications |
+|-----------------|---------|---------------------|
+| integer (1-*n*) | 14      | read only           |
+
+## <a name="gp_set_proc_affinity"></a>gp\_set\_proc\_affinity
+
+If enabled, when a HAWQ server process (postmaster) is started it will bind to a CPU.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | off     | master, system, restart |
+
+## <a name="gp_set_read_only"></a>gp\_set\_read\_only
+
+Set to on to disable writes to the database. Any in-progress transactions must finish before read-only mode takes effect.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | off     | master, session, reload |
+
+## <a name="gp_statistics_pullup_from_child_partition"></a>gp\_statistics\_pullup\_from\_child\_partition
+
+Enables the use of statistics from child tables when planning queries on the parent table by the legacy query optimizer (planner).
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="gp_statistics_use_fkeys"></a>gp\_statistics\_use\_fkeys
+
+When enabled, allows the legacy query optimizer (planner) to use foreign key information stored in the system catalog to optimize joins between foreign keys and primary keys.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="gp_vmem_idle_resource_timeout"></a>gp\_vmem\_idle\_resource\_timeout
+
+Sets the time in milliseconds a session can be idle before gangs on the segment databases are released to free up resources.
+
+| Value Range    | Default | Set Classifications     |
+|----------------|---------|-------------------------|
+| number of milliseconds | 18000  | master, system, restart |
+
+
+## <a name="gp_vmem_protect_segworker_cache_limit"></a>gp\_vmem\_protect\_segworker\_cache\_limit
+
+If a query executor process consumes more than this configured amount, then the process will not be cached for use in subsequent queries after the process completes. Systems with lots of connections or idle processes may want to reduce this number to free more memory on the segments. Note that this is a local parameter and must be set for every segment.
+
+| Value Range         | Default | Set Classifications    |
+|---------------------|---------|------------------------|
+| number of megabytes | 500     | local, system, restart |
+
+## <a name="gp_workfile_checksumming"></a>gp\_workfile\_checksumming
+
+Adds a checksum value to each block of a work file (or spill file) used by `HashAgg` and `HashJoin` query operators. This adds an additional safeguard from faulty OS disk drivers writing corrupted blocks to disk. When a checksum operation fails, the query will cancel and rollback rather than potentially writing bad data to disk.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="gp_workfile_compress_algorithm"></a>gp\_workfile\_compress\_algorithm
+
+When a hash aggregation or hash join operation spills to disk during query processing, specifies the compression algorithm to use on the spill files. If using zlib, it must be in your $PATH on all segments.
+
+If your HAWQ database installation uses serial ATA (SATA) disk drives, setting the value of this parameter to `zlib` might help to avoid overloading the disk subsystem with IO operations.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Value Range</th>
+<th>Default</th>
+<th>Set Classifications</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>none
+<p>zlib</p></td>
+<td>none</td>
+<td>master, session, reload</td>
+</tr>
+</tbody>
+</table>
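+
+For example, a minimal sketch of enabling spill-file compression for a session:
+
+``` sql
+SET gp_workfile_compress_algorithm = 'zlib';
+SHOW gp_workfile_compress_algorithm;
+```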
+
+## <a name="gp_workfile_limit_files_per_query"></a>gp\_workfile\_limit\_files\_per\_query
+
+Sets the maximum number of temporary spill files (also known as workfiles) allowed per query per segment. Spill files are created when executing a query that requires more memory than it is allocated. The current query is terminated when the limit is exceeded.
+
+Set the value to 0 (zero) to allow an unlimited number of spill files.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| integer     | 3000000 | master, session, reload |
+
+## <a name="gp_workfile_limit_per_query"></a>gp\_workfile\_limit\_per\_query
+
+Sets the maximum disk size an individual query is allowed to use for creating temporary spill files at each segment. The default value is 0, which means a limit is not enforced.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| kilobytes   | 0       | master, session, reload |
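+
+For example, a minimal sketch (the 16GB cap is an arbitrary assumption; the value is in kilobytes):
+
+``` sql
+-- 16 GB = 16777216 KB of spill files per query per segment.
+SET gp_workfile_limit_per_query = 16777216;
+SHOW gp_workfile_limit_per_query;
+```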
+
+## <a name="gp_workfile_limit_per_segment"></a>gp\_workfile\_limit\_per\_segment
+
+Sets the maximum total disk size that all running queries are allowed to use for creating temporary spill files at each segment. The default value is 0, which means a limit is not enforced.
+
+| Value Range | Default | Set Classifications    |
+|-------------|---------|------------------------|
+| kilobytes   | 0       | local, system, restart |
+
+## <a name="hawq_dfs_url"></a>hawq\_dfs\_url
+
+URL for HAWQ data directories on HDFS. The directory that you specify must be writeable by the `gpadmin` user. For example 'localhost:8020/hawq\_default'. If you have high availability enabled for your HDFS NameNodes, then this configuration parameter must be set to the service ID you configured in HDFS. See "HAWQ Filespaces and High Availability Enabled HDFS" for more information.
+
+| Value Range                                                             | Default             | Set Classifications     |
+|-------------------------------------------------------------------------|---------------------|-------------------------|
+| URL in the form of *NameNode\_host name*:*port*/*data\_directory\_name* | localhost:8020/hawq | master, session, reload |
+
+## <a name="hawq_global_rm_type"></a>hawq\_global\_rm\_type
+
+HAWQ global resource manager type. Valid values are `yarn` and `none`. Setting this parameter to `none` indicates that the HAWQ resource manager manages its own resources. Setting the value to `yarn` means that HAWQ will negotiate with YARN for resources.
+
+| Value Range  | Default | Set Classifications     |
+|--------------|---------|-------------------------|
+| yarn or none | none    | master, system, restart |
+
+## <a name="hawq_master_address_host"></a>hawq\_master\_address\_host
+
+Address or hostname of HAWQ master.
+
+| Value Range     | Default   | Set Classifications     |
+|-----------------|-----------|-------------------------|
+| master hostname | localhost | master, session, reload |
+
+## <a name="hawq_master_address_port"></a>hawq\_master\_address\_port
+
+Port of the HAWQ master.
+
+| Value Range       | Default | Set Classifications     |
+|-------------------|---------|-------------------------|
+| valid port number |         | master, session, reload |
+
+## <a name="hawq_master_directory"></a>hawq\_master\_directory
+
+Master server data directory.
+
+| Value Range    | Default | Set Classifications     |
+|----------------|---------|-------------------------|
+| directory name |         | master, session, reload |
+
+## <a name="hawq_master_temp_directory"></a>hawq\_master\_temp\_directory
+
+One or more temporary directories for the HAWQ master. Separate multiple entries with commas.
+
+| Value Range                                               | Default | Set Classifications     |
+|-----------------------------------------------------------|---------|-------------------------|
+| directory name or comma-separated list of directory names | /tmp    | master, session, reload |
+
+## <a name="hawq_re_memory_overcommit_max"></a>hawq\_re\_memory\_overcommit\_max
+
+Sets the maximum quota of memory overcommit (in MB) per physical segment for resource enforcement. This parameter sets the memory quota that can be overcommitted beyond the memory quota dynamically assigned by the resource manager.
+
+Specify a larger value to prevent out of memory errors in YARN mode.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| integer     | 8192    | master, system, restart |
+
+## <a name="hawq_rm_cluster_report"></a>hawq\_rm\_cluster\_report\_period
+
+Defines the time period, in seconds, for refreshing the global resource manager's cluster report.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| 10-100      | 60      | master, session, reload |
+
+## <a name="hawq_rm_force_alterqueue_cancel_queued_request"></a>hawq\_rm\_force\_alterqueue\_cancel\_queued\_request
+
+Instructs HAWQ to cancel all resource requests that are in conflict with the new resource queue definitions supplied in an `ALTER RESOURCE QUEUE` statement.
+
+If you set this parameter to false, the actions specified in `ALTER RESOURCE QUEUE` are canceled if the resource manager finds at least one resource request that is in conflict with the new resource definitions supplied in the altering command.
+
+| Value Range | Default | Set Classifications     |
+|-------------|---------|-------------------------|
+| Boolean     | on      | master, session, reload |
+
+## <a name="hawq_rm_master_port"></a>hawq\_rm\_master\_port
+
+HAWQ resource manager master port number.
+
+| Value Range       | Default | Set Classifications     |
+|-------------------|---------|-------------------------|
+| valid port number | 5437    | master, session, reload |
+
+## <a name="hawq_rm_memory_limit_perseg"></a>hawq\_rm\_memory\_limit\_perseg
+
+Limit of memory usage by a HAWQ segment when `hawq_global_rm_type` is set to `none`. For example, `8GB`.
+
+| V

<TRUNCATED>


[16/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/COPY.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/COPY.html.md.erb b/markdown/reference/sql/COPY.html.md.erb
new file mode 100644
index 0000000..aaa2270
--- /dev/null
+++ b/markdown/reference/sql/COPY.html.md.erb
@@ -0,0 +1,256 @@
+---
+title: COPY
+---
+
+Copies data between a file and a table.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+COPY <table> [(<column> [, ...])] FROM {'<file>' | STDIN}
+      [ [WITH]
+        [OIDS]
+        [HEADER]
+        [DELIMITER [ AS ] '<delimiter>']
+        [NULL [ AS ] '<null string>']
+        [ESCAPE [ AS ] '<escape>' | 'OFF']
+        [NEWLINE [ AS ] 'LF' | 'CR' | 'CRLF']
+        [CSV [QUOTE [ AS ] '<quote>']
+             [FORCE NOT NULL <column> [, ...]]
+        [FILL MISSING FIELDS]
+        [[LOG ERRORS INTO <error_table> [KEEP]
+        SEGMENT REJECT LIMIT <count> [ROWS | PERCENT] ]
+
+COPY {<table> [(<column> [, ...])] | (<query>)} TO {'<file>' | STDOUT}
+      [ [WITH]
+        [OIDS]
+        [HEADER]
+        [DELIMITER [ AS ] '<delimiter>']
+        [NULL [ AS ] '<null string>']
+        [ESCAPE [ AS ] '<escape>' | 'OFF']
+        [CSV [QUOTE [ AS ] '<quote>']
+             [FORCE QUOTE <column> [, ...]] ]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`COPY` moves data between HAWQ tables and standard file-system files. `COPY TO` copies the contents of a table to a file, while `COPY FROM` copies data from a file to a table (appending the data to whatever is in the table already). `COPY TO` can also copy the results of a `SELECT` query.
+
+If a list of columns is specified, `COPY` will only copy the data in the specified columns to or from the file. If there are any columns in the table that are not in the column list, `COPY FROM` will insert the default values for those columns.
+
+`COPY` with a file name instructs the HAWQ master host to directly read from or write to a file. The file must be accessible to the master host and the name must be specified from the viewpoint of the master host. When `STDIN` or `STDOUT` is specified, data is transmitted via the connection between the client and the master.
+
+If `SEGMENT REJECT LIMIT` is used, then a `COPY FROM` operation will operate in single row error isolation mode. In this release, single row error isolation mode only applies to rows in the input file with format errors, for example, extra or missing attributes, attributes of a wrong data type, or invalid client encoding sequences. Constraint errors such as violation of a `NOT NULL`, `CHECK`, or `UNIQUE` constraint will still be handled in 'all-or-nothing' input mode. The user can specify the number of error rows acceptable (on a per-segment basis), after which the entire `COPY FROM` operation will be aborted and no rows will be loaded. Note that the count of error rows is per-segment, not per entire load operation. If the per-segment reject limit is not reached, all good rows will be loaded and any error rows discarded. If you would like to keep error rows for further examination, you can optionally declare an error table using the `LOG ERRORS INTO` clause. Any rows containing a format error are then logged to the specified error table.
+
+**Outputs**
+
+On successful completion, a `COPY` command returns a command tag of the following form, where \<count\> is the number of rows copied:
+
+``` pre
+COPY <count>
+```
+
+If running a `COPY FROM` command in single row error isolation mode, the following notice message will be returned if any rows were not loaded due to format errors, where \<count\> is the number of rows rejected:
+
+``` pre
+NOTICE: Rejected <count> badly formatted rows.
+```
+
+## <a id="topic1__section5"></a>Parameters
+
+<dt> \<table\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing table.</dd>
+
+<dt> \<column\>   </dt>
+<dd>An optional list of columns to be copied. If no column list is specified, all columns of the table will be copied.</dd>
+
+<dt> \<query\>   </dt>
+<dd>A `SELECT` or `VALUES` command whose results are to be copied. Note that parentheses are required around the query.</dd>
+
+<dt> \<file\>   </dt>
+<dd>The absolute path name of the input or output file.</dd>
+
+<dt>STDIN  </dt>
+<dd>Specifies that input comes from the client application.</dd>
+
+<dt>STDOUT  </dt>
+<dd>Specifies that output goes to the client application.</dd>
+
+<dt>OIDS  </dt>
+<dd>Specifies copying the OID for each row. (An error is raised if OIDS is specified for a table that does not have OIDs, or in the case of copying a query.)</dd>
+
+<dt> \<delimiter\>   </dt>
+<dd>The single ASCII character that separates columns within each row (line) of the file. The default is a tab character in text mode, a comma in `CSV` mode.</dd>
+
+<dt> \<null string\>   </dt>
+<dd>The string that represents a null value. The default is `\N` (backslash-N) in text mode, and an empty value with no quotes in `CSV` mode. You might prefer an empty string even in text mode for cases where you don't want to distinguish nulls from empty strings. When using `COPY FROM`, any data item that matches this string will be stored as a null value, so you should make sure that you use the same string as you used with `COPY TO`.</dd>
+
+<dt> \<escape\>   </dt>
+<dd>Specifies the single character that is used for C escape sequences (such as `\n`,`\t`,`\100`, and so on) and for quoting data characters that might otherwise be taken as row or column delimiters. Make sure to choose an escape character that is not used anywhere in your actual column data. The default escape character is `\` (backslash) for text files or `"` (double quote) for CSV files; however, it is possible to specify any other character to represent an escape. It is also possible to disable escaping on text-formatted files by specifying the value `'OFF'` as the escape value. This is very useful for data such as web log data that has many embedded backslashes that are not intended to be escapes.</dd>
+
+<dt>NEWLINE  </dt>
+<dd>Specifies the newline used in your data files: `LF` (Line feed, 0x0A), `CR` (Carriage return, 0x0D), or `CRLF` (Carriage return plus line feed, 0x0D 0x0A). If not specified, a HAWQ segment will detect the newline type by looking at the first row of data it receives and using the first newline type encountered.</dd>
+
+<dt>CSV  </dt>
+<dd>Selects Comma Separated Value (CSV) mode.</dd>
+
+<dt>HEADER  </dt>
+<dd>Specifies that a file contains a header line with the names of each column in the file. On output, the first line contains the column names from the table, and on input, the first line is ignored.</dd>
+
+<dt> \<quote\>   </dt>
+<dd>Specifies the quotation character in CSV mode. The default is double-quote.</dd>
+
+<dt>FORCE QUOTE  </dt>
+<dd>In `CSV COPY TO` mode, forces quoting to be used for all non-`NULL` values in each specified column. `NULL` output is never quoted.</dd>
+
+<dt>FORCE NOT NULL  </dt>
+<dd>In `CSV COPY FROM` mode, process each specified column as though it were quoted and hence not a `NULL` value. For the default null string in `CSV` mode (nothing between two delimiters), this causes missing values to be evaluated as zero-length strings.</dd>
+
+<dt>FILL MISSING FIELDS  </dt>
+<dd>In `COPY FROM` mode for both `TEXT` and `CSV`, specifying `FILL MISSING FIELDS` will set missing trailing field values to `NULL` (instead of reporting an error) when a row of data has missing data fields at the end of a line or row. Blank rows, fields with a `NOT NULL` constraint, and trailing delimiters on a line will still report an error.</dd>
+
+<dt>LOG ERRORS INTO \<error\_table\> \[KEEP\]  </dt>
+
+<dd>This is an optional clause that can precede a `SEGMENT REJECT LIMIT` clause to log information about rows with formatting errors. The `INTO <error_table>` clause specifies an error table where rows with formatting errors will be logged when running in single row error isolation mode. You can then examine this error table to see error rows that were not loaded (if any). If the \<error\_table\> specified already exists, it will be used. If it does not exist, it will be automatically generated. If the command auto-generates the error table and no errors are produced, the default is to drop the error table after the operation completes unless `KEEP` is specified. If the table is auto-generated and the error limit is exceeded, the entire transaction is rolled back and no error data is saved. If you want the error table to persist in this case, create the error table prior to running the `COPY`. An error table is defined as follows:
+
+
+``` pre
+CREATE TABLE <error_table_name> ( cmdtime timestamptz, relname text, 
+    filename text, linenum int, bytenum int, errmsg text, 
+    rawdata text, rawbytes bytea ) DISTRIBUTED RANDOMLY;
+```
+</dd>
+
+<dt>SEGMENT REJECT LIMIT \<count\> \[ROWS | PERCENT\]  </dt>
+<dd>Runs a `COPY FROM` operation in single row error isolation mode. If the input rows have format errors they will be discarded provided that the reject limit count is not reached on any HAWQ segment instance during the load operation. The reject limit count can be specified as number of rows (the default) or percentage of total rows (1-100). If `PERCENT` is used, each segment starts calculating the bad row percentage only after the number of rows specified by the parameter `gp_reject_percent_threshold` has been processed. The default for `gp_reject_percent_threshold` is 300 rows. Constraint errors such as violation of a `NOT NULL` or `CHECK` constraint will still be handled in 'all-or-nothing' input mode. If the limit is not reached, all good rows will be loaded and any error rows discarded.</dd>
+
+## <a id="topic1__section6"></a>Notes
+
+`COPY` can only be used with tables, not with views. However, you can write `COPY (SELECT * FROM viewname) TO ...`
+
+The `BINARY` key word causes all data to be stored/read as binary format rather than as text. It is somewhat faster than the normal text mode, but a binary-format file is less portable across machine architectures and HAWQ versions. Also, you cannot run `COPY FROM` in single row error isolation mode if the data is in binary format.
+
+You must have `SELECT` privilege on the table whose values are read by `COPY TO`, and `INSERT` privilege on the table into which values are inserted by `COPY FROM`.
+
+Files named in a `COPY` command are read or written directly by the database server, not by the client application. Therefore, they must reside on or be accessible to the HAWQ master host machine, not the client. They must be accessible to and readable or writable by the HAWQ system user (the user ID the server runs as), not the client. `COPY` naming a file is only allowed to database superusers, since it allows reading or writing any file that the server has privileges to access.
+
+`COPY FROM` will invoke any check constraints on the destination table. However, it will not invoke rewrite rules. Note that in this release, violations of constraints are not evaluated for single row error isolation mode.
+
+`COPY` input and output is affected by `DateStyle`. To ensure portability to other HAWQ installations that might use non-default `DateStyle` settings, `DateStyle` should be set to ISO before using `COPY TO`.
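+
+For example, a minimal sketch of pinning `DateStyle` before an export (reusing the `country` table and file path from the examples below):
+
+``` pre
+SET DateStyle TO ISO;
+COPY country TO '/home/usr1/sql/country_data' WITH DELIMITER '|';
+```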
+
+By default, `COPY` stops operation at the first error. This should not lead to problems in the event of a `COPY TO`, but the target table will already have received earlier rows in a `COPY FROM`. These rows will not be visible or accessible, but they still occupy disk space. This may amount to a considerable amount of wasted disk space if the failure happened well into a large `COPY FROM` operation. You may wish to use single row error isolation mode to filter out error rows while still loading good rows.
+
+COPY supports creating readable foreign tables with error tables. The default limit for concurrent inserts into the error table is 127. You can use error tables with foreign tables under the following circumstances:
+
+-   Multiple foreign tables can use different error tables
+-   Multiple foreign tables cannot use the same error table
+
+## <a id="topic1__section7"></a>File Formats
+
+File formats supported by `COPY`.
+
+**Text Format**
+When `COPY` is used without the `BINARY` or `CSV` options, the data read or written is a text file with one line per table row. Columns in a row are separated by the \<delimiter\> character (tab by default). The column values themselves are strings generated by the output function, or acceptable to the input function, of each attribute's data type. The specified null string is used in place of columns that are null. `COPY FROM` will raise an error if any line of the input file contains more or fewer columns than are expected. If `OIDS` is specified, the OID is read or written as the first column, preceding the user data columns.
+
+The data file has two reserved characters that have special meaning to `COPY`:
+
+-   The designated delimiter character (tab by default), which is used to separate fields in the data file.
+-   A UNIX-style line feed (`\n` or `0x0a`), which is used to designate a new row in the data file. It is strongly recommended that applications generating `COPY` data convert data line feeds to UNIX-style line feeds rather than Microsoft Windows style carriage return line feeds (`\r\n` or `0x0a 0x0d`).
+
+If your data contains either of these characters, you must escape the character so `COPY` treats it as data and not as a field separator or new row.
+
+By default, the escape character is a `\` (backslash) for text-formatted files and a `"` (double quote) for csv-formatted files. If you want to use a different escape character, you can do so using the `ESCAPE AS` clause. Make sure to choose an escape character that is not used anywhere in your data file as an actual data value. You can also disable escaping in text-formatted files by using `ESCAPE 'OFF'`.
+
+For example, suppose you have a table with three columns and you want to load the following three fields using COPY.
+
+-   percentage sign = %
+-   vertical bar = |
+-   backslash = \\
+
+Your designated \<delimiter\> character is `|` (pipe character), and your designated \<escape\> character is `*` (asterisk). The formatted row in your data file would look like this:
+
+``` pre
+percentage sign = % | vertical bar = *| | backslash = \
+```
+
+Notice how the pipe character that is part of the data has been escaped using the asterisk character (\*). Also notice that we do not need to escape the backslash since we are using an alternative escape character.
+
+The following characters must be preceded by the escape character if they appear as part of a column value: the escape character itself, newline, carriage return, and the current delimiter character. You can specify a different escape character using the `ESCAPE AS` clause.
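+
+Continuing the example above, a hedged sketch of the corresponding load command (the table name `three_col` and file path are illustrative):
+
+``` pre
+COPY three_col FROM '/home/usr1/sql/three_col_data' 
+    WITH DELIMITER '|' ESCAPE AS '*';
+```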
+
+**CSV Format**
+
+This format is used for importing and exporting the Comma Separated Value (CSV) file format used by many other programs, such as spreadsheets. Instead of the escaping used by HAWQ standard text mode, it produces and recognizes the common CSV escaping mechanism.
+
+The values in each record are separated by the `DELIMITER` character. If the value contains the delimiter character, the `QUOTE` character, the `ESCAPE` character (which is double quote by default), the `NULL` string, a carriage return, or line feed character, then the whole value is prefixed and suffixed by the `QUOTE` character. You can also use `FORCE QUOTE` to force quotes when outputting non-`NULL` values in specific columns.
+
+The CSV format has no standard way to distinguish a `NULL` value from an empty string. HAWQ `COPY` handles this by quoting. A `NULL` is output as the `NULL` string and is not quoted, while a data value matching the `NULL` string is quoted. Therefore, using the default settings, a `NULL` is written as an unquoted empty string, while an empty string is written with double quotes (""). Reading values follows similar rules. You can use `FORCE NOT NULL` to prevent `NULL` input comparisons for specific columns.
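+
+As a hedged sketch, the following load treats empty, unquoted values in the `country_name` column as empty strings rather than `NULL` (the file path is illustrative):
+
+``` pre
+COPY country FROM '/home/usr1/sql/country_data.csv' 
+    WITH CSV FORCE NOT NULL country_name;
+```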
+
+Because backslash is not a special character in the `CSV` format, `\.`, the end-of-data marker, could also appear as a data value. To avoid any misinterpretation, a `\.` data value appearing as a lone entry on a line is automatically quoted on output, and on input, if quoted, is not interpreted as the end-of-data marker. If you are loading a file created by another application that has a single unquoted column and might have a value of `\.`, you might need to quote that value in the input file.
+
+**Note:** In `CSV` mode, all characters are significant. A quoted value surrounded by white space, or any characters other than `DELIMITER`, will include those characters. This can cause errors if you import data from a system that pads CSV lines with white space out to some fixed width. If such a situation arises you might need to preprocess the CSV file to remove the trailing white space, before importing the data into HAWQ.
+
+**Note:** `CSV` mode will both recognize and produce CSV files with quoted values containing embedded carriage returns and line feeds. Thus the files are not strictly one line per table row like text-mode files.
+
+**Note:** Many programs produce strange and occasionally perverse CSV files, so the file format is more a convention than a standard. Thus you might encounter some files that cannot be imported using this mechanism, and `COPY` might produce files that other programs cannot process.
+
+**Binary Format**
+
+The `BINARY` format consists of a file header, zero or more tuples containing the row data, and a file trailer. Headers and data are in network byte order.
+
+-   **File Header**: The file header consists of 15 bytes of fixed fields, followed by a variable-length header extension area. The fixed fields are:
+    -   **Signature**: 11-byte sequence PGCOPY\\n\\377\\r\\n\\0. Note that the zero byte is a required part of the signature. (The signature is designed to allow easy identification of files that have been munged by a non-8-bit-clean transfer. This signature will be changed by end-of-line-translation filters, dropped zero bytes, dropped high bits, or parity changes.)
+    -   **Flags field**: 32-bit integer bit mask to denote important aspects of the file format. Bits are numbered from 0 (LSB) to 31 (MSB). Note that this field is stored in network byte order (most significant byte first), as are all the integer fields used in the file format. Bits 16-31 are reserved to denote critical file format issues; a reader should abort if it finds an unexpected bit set in this range. Bits 0-15 are reserved to signal backwards-compatible format issues; a reader should simply ignore any unexpected bits set in this range. Currently only one flag is defined, and the rest must be zero (Bit 16: 1 if data has OIDs, 0 if not).
+    -   **Header extension area length**: 32-bit integer, length in bytes of remainder of header, not including self. Currently, this is zero, and the first tuple follows immediately. Future changes to the format might allow additional data to be present in the header. A reader should silently skip over any header extension data it does not know what to do with. The header extension area is envisioned to contain a sequence of self-identifying chunks. The flags field is not intended to tell readers what is in the extension area. Specific design of header extension contents is left for a later release.
+-   **Tuples**: Each tuple begins with a 16-bit integer count of the number of fields in the tuple. (Presently, all tuples in a table will have the same count, but that might not always be true.) Then, repeated for each field in the tuple, there is a 32-bit length word followed by that many bytes of field data. (The length word does not include itself, and can be zero.) As a special case, -1 indicates a NULL field value. No value bytes follow in the NULL case.
+
+    There is no alignment padding or any other extra data between fields.
+
+    Presently, all data values in a COPY BINARY file are assumed to be in binary format (format code one). It is anticipated that a future extension may add a header field that allows per-column format codes to be specified.
+
+    If OIDs are included in the file, the OID field immediately follows the field-count word. It is a normal field except that it is not included in the field-count. In particular, it has a length word; this will allow handling of 4-byte vs. 8-byte OIDs without too much pain, and will allow OIDs to be shown as null if that ever proves desirable.
+
+-   **File Trailer**: The file trailer consists of a 16-bit integer word containing `-1`. This is easily distinguished from a tuple's field-count word. A reader should report an error if a field-count word is neither `-1` nor the expected number of columns. This provides an extra check against somehow getting out of sync with the data.
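+
+A hedged sketch of a binary export and reload (the file path is illustrative; recall that single row error isolation is not available for binary data):
+
+``` pre
+COPY country TO '/home/usr1/sql/country_data.bin' WITH BINARY;
+COPY country FROM '/home/usr1/sql/country_data.bin' WITH BINARY;
+```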
+
+## <a id="topic1__section11"></a>Examples
+
+Copy a table to the client using the vertical bar (|) as the field delimiter:
+
+``` pre
+COPY country TO STDOUT WITH DELIMITER '|';
+```
+
+Copy data from a file into the `country` table:
+
+``` pre
+COPY country FROM '/home/usr1/sql/country_data';
+```
+
+Copy into a file just the countries whose names start with 'A':
+
+``` pre
+COPY (SELECT * FROM country WHERE country_name LIKE 'A%') TO 
+'/home/usr1/sql/a_list_countries.copy';
+```
+
+Create an error table called `err_sales` to use with single row error isolation mode:
+
+``` pre
+CREATE TABLE err_sales ( cmdtime timestamptz, relname text, 
+filename text, linenum int, bytenum int, errmsg text, rawdata text, rawbytes bytea ) DISTRIBUTED RANDOMLY;
+```
+
+Copy data from a file into the `sales` table using single row error isolation mode:
+
+``` pre
+COPY sales FROM '/home/usr1/sql/sales_data' LOG ERRORS INTO 
+err_sales SEGMENT REJECT LIMIT 10 ROWS;
+```
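+
+A hedged variation of the previous command that uses a percentage-based reject limit instead of a fixed row count:
+
+``` pre
+COPY sales FROM '/home/usr1/sql/sales_data' LOG ERRORS INTO 
+err_sales SEGMENT REJECT LIMIT 2 PERCENT;
+```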
+
+## <a id="topic1__section12"></a>Compatibility
+
+There is no `COPY` statement in the SQL standard.
+
+## <a id="topic1__section13"></a>See Also
+
+[CREATE EXTERNAL TABLE](CREATE-EXTERNAL-TABLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-AGGREGATE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-AGGREGATE.html.md.erb b/markdown/reference/sql/CREATE-AGGREGATE.html.md.erb
new file mode 100644
index 0000000..a195224
--- /dev/null
+++ b/markdown/reference/sql/CREATE-AGGREGATE.html.md.erb
@@ -0,0 +1,162 @@
+---
+title: CREATE AGGREGATE
+---
+
+Defines a new aggregate function.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE [ORDERED] AGGREGATE <name> (<input_data_type> [ , ... ]) 
+      ( SFUNC = <sfunc>,
+        STYPE = <state_data_type>
+        [, PREFUNC = <prefunc>]
+        [, FINALFUNC = <ffunc>]
+        [, INITCOND = <initial_condition>]
+        [, SORTOP = <sort_operator>] )
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE AGGREGATE` defines a new aggregate function. Some basic and commonly-used aggregate functions such as `count`, `min`, `max`, `sum`, `avg` and so on are already provided in HAWQ. If one defines new types or needs an aggregate function not already provided, then `CREATE AGGREGATE` can be used to provide the desired features.
+
+An aggregate function is identified by its name and input data types. Two aggregate functions in the same schema can have the same name if they operate on different input types. The name and input data types of an aggregate function must also be distinct from the name and input data types of every ordinary function in the same schema.
+
+An aggregate function is made from one, two or three ordinary functions (all of which must be `IMMUTABLE` functions):
+
+-   A state transition function \<sfunc\>
+-   An optional preliminary segment-level calculation function \<prefunc\>
+-   An optional final calculation function \<ffunc\>
+
+These functions are used as follows:
+
+``` pre
+sfunc( internal-state, next-data-values ) ---> next-internal-state
+prefunc( internal-state, internal-state ) ---> next-internal-state
+ffunc( internal-state ) ---> aggregate-value
+```
+
+You can specify `PREFUNC` as a method for optimizing aggregate execution. By specifying `PREFUNC`, the aggregate can be executed in parallel on segments first and then on the master. When a two-level execution is performed, `SFUNC` is executed on the segments to generate partial aggregate results, and `PREFUNC` is executed on the master to aggregate the partial results from segments. If single-level aggregation is performed, all the rows are sent to the master and \<sfunc\> is applied to the rows.
+
+Single-level aggregation and two-level aggregation are equivalent execution strategies. Either type of aggregation can be implemented in a query plan. When you implement the functions \<prefunc\> and \<sfunc\>, you must ensure that the invocation of \<sfunc\> on the segment instances followed by \<prefunc\> on the master produce the same result as single-level aggregation that sends all the rows to the master and then applies only the \<sfunc\> to the rows.
+
+HAWQ creates a temporary variable of data type \<stype\> to hold the current internal state of the aggregate function. At each input row, the aggregate argument values are calculated and the state transition function is invoked with the current state value and the new argument values to calculate a new internal state value. After all the rows have been processed, the final function is invoked once to calculate the aggregate return value. If there is no final function then the ending state value is returned as-is.
+
+An aggregate function can provide an optional initial condition, an initial value for the internal state value. This is specified and stored in the database as a value of type text, but it must be a valid external representation of a constant of the state value data type. If it is not supplied then the state value starts out `NULL`.
+
+If the state transition function is declared `STRICT`, then it cannot be called with `NULL` inputs. With such a transition function, aggregate execution behaves as follows. Rows with any null input values are ignored (the function is not called and the previous state value is retained). If the initial state value is `NULL`, then at the first row with all non-null input values, the first argument value replaces the state value, and the transition function is invoked at subsequent rows with all non-null input values. This is useful for implementing aggregates like `max`. Note that this behavior is only available when \<state\_data\_type\> is the same as the first \<input\_data\_type\>. When these types are different, you must supply a non-null initial condition or use a nonstrict transition function.
+
+If the state transition function is not declared `STRICT`, then it will be called unconditionally at each input row, and must deal with `NULL` inputs and `NULL` transition values for itself. This allows the aggregate author to have full control over the aggregate handling of `NULL` values.
+
+If the final function is declared `STRICT`, then it will not be called when the ending state value is `NULL`; instead a `NULL` result will be returned automatically. (This is the normal behavior of `STRICT` functions.) In any case the final function has the option of returning a `NULL` value. For example, the final function for `avg` returns `NULL` when it sees there were zero input rows.
+
+Single argument aggregate functions, such as min or max, can sometimes be optimized by looking into an index instead of scanning every input row. If this aggregate can be so optimized, indicate it by specifying a sort operator. The basic requirement is that the aggregate must yield the first element in the sort ordering induced by the operator; in other words:
+
+``` pre
+SELECT agg(col) FROM tab; 
+```
+
+must be equivalent to:
+
+``` pre
+SELECT col FROM tab ORDER BY col USING sortop LIMIT 1;
+```
+
+Further assumptions are that the aggregate function ignores `NULL` inputs, and that it delivers a `NULL` result if and only if there were no non-null inputs. Ordinarily, a data type's `<` operator is the proper sort operator for `MIN`, and `>` is the proper sort operator for `MAX`. Note that the optimization will never actually take effect unless the specified operator is the "less than" or "greater than" strategy member of a B-tree index operator class.
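+
+A hedged sketch of a MIN-like aggregate that declares a sort operator; it relies on `int4smaller`, the built-in two-argument minimum for `integer`, and the aggregate name is illustrative:
+
+``` pre
+CREATE AGGREGATE my_min(integer) (
+    SFUNC = int4smaller,
+    STYPE = integer,
+    SORTOP = < );
+```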
+
+**Ordered Aggregates**
+
+If the optional qualification `ORDERED` appears, the created aggregate function is an *ordered aggregate*. In this case, the preliminary aggregation function, `prefunc`, cannot be specified.
+
+An ordered aggregate is called with the following syntax.
+
+``` pre
+<name> ( <arg> [ , ... ] [ORDER BY <sortspec> [ , ...]] ) 
+```
+
+If the optional `ORDER BY` is omitted, a system-defined ordering is used. The transition function \<sfunc\> of an ordered aggregate function is called on its input arguments in the specified order and on a single segment. There is a new column `aggordered` in the `pg_aggregate` table to indicate the aggregate function is defined as an ordered aggregate.
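+
+A hedged sketch of an ordered aggregate that collects its input in the requested order using the built-in `array_append` function (the aggregate, table, and column names are illustrative):
+
+``` pre
+CREATE ORDERED AGGREGATE array_accum_ordered (anyelement) (
+    SFUNC = array_append,
+    STYPE = anyarray,
+    INITCOND = '{}' );
+
+SELECT array_accum_ordered(amount ORDER BY amount DESC) FROM payments;
+```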
+
+## <a id="topic1__section5"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>The name (optionally schema-qualified) of the aggregate function to create.</dd>
+
+<dt> \<input\_data\_type\>   </dt>
+<dd>An input data type on which this aggregate function operates. To create a zero-argument aggregate function, write \* in place of the list of input data types. An example of such an aggregate is `count(*)`.</dd>
+
+<dt> \<sfunc\>   </dt>
+<dd>The name of the state transition function to be called for each input row. For an N-argument aggregate function, the \<sfunc\> must take N+1 arguments, the first being of type \<state\_data\_type\> and the rest matching the declared input data types of the aggregate. The function must return a value of type \<state\_data\_type\>. This function takes the current state value and the current input data values, and returns the next state value.</dd>
+
+<dt> \<state\_data\_type\>   </dt>
+<dd>The data type for the aggregate state value.</dd>
+
+<dt> \<prefunc\>   </dt>
+<dd>The name of a preliminary aggregation function. This is a function of two arguments, both of type \<state\_data\_type\>. It must return a value of \<state\_data\_type\>. A preliminary function takes two transition state values and returns a new transition state value representing the combined aggregation. In HAWQ, if the result of the aggregate function is computed in a segmented fashion, the preliminary aggregation function is invoked on the individual internal states in order to combine them into an ending internal state.
+
+Note that this function is also called in hash aggregate mode within a segment. Therefore, if you define this aggregate function without a preliminary function, hash aggregate is never chosen. Since hash aggregate is efficient, consider defining a preliminary function whenever possible.
+
+PREFUNC is optional. If defined, it is executed on the master. The input to PREFUNC is the partial results from segments, not the tuples. If PREFUNC is not defined, the aggregate cannot be executed in parallel. PREFUNC and gp\_enable\_multiphase\_agg are used as follows:
+
+-   gp\_enable\_multiphase\_agg = off: SFUNC is executed sequentially on the master. PREFUNC, even if defined, is unused.
+-   gp\_enable\_multiphase\_agg = on and PREFUNC is defined: SFUNC is executed in parallel on the segments. PREFUNC is invoked on the master to aggregate the partial results from the segments.
+
+    ``` pre
+    CREATE OR REPLACE FUNCTION my_avg_accum(bytea,bigint) returns bytea as 'int8_avg_accum' language internal strict immutable;  
+    CREATE OR REPLACE FUNCTION my_avg_merge(bytea,bytea) returns bytea as 'int8_avg_amalg' language internal strict immutable;  
+    CREATE OR REPLACE FUNCTION my_avg_final(bytea) returns numeric as 'int8_avg' language internal strict immutable;  
+    CREATE AGGREGATE my_avg(bigint) ( stype = bytea, sfunc = my_avg_accum, prefunc = my_avg_merge, finalfunc = my_avg_final, initcond = '' );
+    ```
+</dd>
+
+<dt> \<ffunc\>   </dt>
+<dd>The name of the final function called to compute the aggregate result after all input rows have been traversed. The function must take a single argument of type `state_data_type`. The return data type of the aggregate is defined as the return type of this function. If \<ffunc\> is not specified, then the ending state value is used as the aggregate result, and the return type is \<state\_data\_type\>.</dd>
+
+<dt> \<initial\_condition\>   </dt>
+<dd>The initial setting for the state value. This must be a string constant in the form accepted for the data type \<state\_data\_type\>. If not specified, the state value starts out `NULL`.</dd>
+
+<dt> \<sort\_operator\>   </dt>
+<dd>The associated sort operator for a MIN- or MAX-like aggregate function. This is just an operator name (possibly schema-qualified). The operator is assumed to have the same input data types as the aggregate function (which must be a single-argument aggregate function).</dd>
+
+## <a id="topic1__section6"></a>Notes
+
+The ordinary functions used to define a new aggregate function must be defined first. Note that in this release of HAWQ, it is required that the \<sfunc\>, \<ffunc\>, and \<prefunc\> functions used to create the aggregate are defined as `IMMUTABLE`.
+
+Any compiled code (shared library files) for custom functions must be placed in the same location on every host in your HAWQ array (master and all segments). This location must also be in the `LD_LIBRARY_PATH` so that the server can locate the files.
+
+## Examples
+
+Create a sum of cubes aggregate:
+
+``` pre
+CREATE FUNCTION scube_accum(numeric, numeric) RETURNS numeric 
+    AS 'select $1 + $2 * $2 * $2' 
+    LANGUAGE SQL 
+    IMMUTABLE 
+    RETURNS NULL ON NULL INPUT;
+CREATE AGGREGATE scube(numeric) ( 
+    SFUNC = scube_accum, 
+    STYPE = numeric, 
+    INITCOND = 0 );
+```
+
+To test this aggregate:
+
+``` pre
+CREATE TABLE x(a INT);
+INSERT INTO x VALUES (1),(2),(3);
+SELECT scube(a) FROM x;
+```
+
+Correct answer for reference:
+
+``` pre
+SELECT sum(a*a*a) FROM x;
+```
+
+## <a id="topic1__section8"></a>Compatibility
+
+`CREATE AGGREGATE` is a HAWQ language extension. The SQL standard does not provide for user-defined aggregate functions.
+
+## <a id="topic1__section9"></a>See Also
+
+[ALTER AGGREGATE](ALTER-AGGREGATE.html), [DROP AGGREGATE](DROP-AGGREGATE.html), [CREATE FUNCTION](CREATE-FUNCTION.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-DATABASE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-DATABASE.html.md.erb b/markdown/reference/sql/CREATE-DATABASE.html.md.erb
new file mode 100644
index 0000000..7ebab4e
--- /dev/null
+++ b/markdown/reference/sql/CREATE-DATABASE.html.md.erb
@@ -0,0 +1,86 @@
+---
+title: CREATE DATABASE
+---
+
+Creates a new database.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE DATABASE <database_name> [ [WITH] <database_attribute>=<value> [ ... ] ]
+```
+where \<database\_attribute\> is:
+ 
+``` pre
+    [OWNER=<database_owner>]
+    [TEMPLATE=<template>]
+    [ENCODING=<encoding>]
+    [TABLESPACE=<tablespace>]
+    [CONNECTION LIMIT=<connection_limit>]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE DATABASE` creates a new database. To create a database, you must be a superuser or have the special `CREATEDB` privilege.
+
+The creator becomes the owner of the new database by default. Superusers can create databases owned by other users by using the `OWNER` clause. They can even create databases owned by users with no special privileges. Non-superusers with `CREATEDB` privilege can only create databases owned by themselves.
+
+By default, the new database will be created by cloning the standard system database `template1`. A different template can be specified by writing `TEMPLATE <template>`. In particular, by writing `TEMPLATE template0`, you can create a clean database containing only the standard objects predefined by HAWQ. This is useful if you wish to avoid copying any installation-local objects that may have been added to `template1`.
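+
+For example, a minimal sketch of creating a clean database from `template0` (the database name is illustrative):
+
+``` pre
+CREATE DATABASE mydb TEMPLATE=template0;
+```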
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>\<database_name\></dt>
+<dd>The name of a database to create.
+
+**Note:** HAWQ reserves the database name "hcatalog" for system use.</dd>
+
+<dt>OWNER=\<database_owner\> </dt>
+<dd>The name of the database user who will own the new database, or `DEFAULT` to use the default owner (the user executing the command).</dd>
+
+<dt>TEMPLATE=\<template\> </dt>
+<dd>The name of the template from which to create the new database, or `DEFAULT` to use the default template (*template1*).</dd>
+
+<dt>ENCODING=\<encoding\> </dt>
+<dd>Character set encoding to use in the new database. Specify a string constant (such as `'SQL_ASCII'`), an integer encoding number, or `DEFAULT` to use the default encoding.</dd>
+
+<dt>TABLESPACE=\<tablespace\> </dt>
+<dd>The name of the tablespace that will be associated with the new database, or `DEFAULT` to use the template database's tablespace. This tablespace will be the default tablespace used for objects created in this database.</dd>
+
+<dt>CONNECTION LIMIT=\<connection_limit\></dt>
+<dd>The maximum number of concurrent connections possible. The default of -1 means there is no limitation.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+`CREATE DATABASE` cannot be executed inside a transaction block.
+
+When you copy a database by specifying its name as the template, no other sessions can be connected to the template database while it is being copied. New connections to the template database are locked out until `CREATE DATABASE` completes.
+
+The `CONNECTION LIMIT` is not enforced against superusers.
+
+## <a id="topic1__section6"></a>Examples
+
+To create a new database:
+
+``` pre
+CREATE DATABASE gpdb;
+```
+
+To create a database `sales` owned by user `salesapp` with a default tablespace of `salesspace`:
+
+``` pre
+CREATE DATABASE sales OWNER=salesapp TABLESPACE=salesspace;
+```
+
+To create a database `music` which supports the ISO-8859-1 character set:
+
+``` pre
+CREATE DATABASE music ENCODING='LATIN1';
+```
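+
+To create a database that allows at most 25 concurrent connections (a hedged sketch; the database name is illustrative):
+
+``` pre
+CREATE DATABASE reporting CONNECTION LIMIT=25;
+```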
+
+## <a id="topic1__section7"></a>Compatibility
+
+There is no `CREATE DATABASE` statement in the SQL standard. Databases are equivalent to catalogs, whose creation is implementation-defined.
+
+## <a id="topic1__section8"></a>See Also
+
+[DROP DATABASE](DROP-DATABASE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb b/markdown/reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb
new file mode 100644
index 0000000..3479e3e
--- /dev/null
+++ b/markdown/reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb
@@ -0,0 +1,333 @@
+---
+title: CREATE EXTERNAL TABLE
+---
+
+Defines a new external table.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE [READABLE] EXTERNAL TABLE <table_name>
+    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+    LOCATION ('gpfdist://<filehost>[:<port>]/<file_pattern>[#<transform>]' [, ...])
+        | ('gpfdists://<filehost>[:<port>]/<file_pattern>[#<transform>]' [, ...])
+        | ('pxf://<host>[:<port>]/<path-to-data><pxf parameters>')
+    FORMAT 'TEXT'
+            [( [HEADER]
+               [DELIMITER [AS] '<delimiter>' | 'OFF']
+               [NULL [AS] '<null string>']
+               [ESCAPE [AS] '<escape>' | 'OFF']
+               [NEWLINE [ AS ] 'LF' | 'CR' | 'CRLF']
+               [FILL MISSING FIELDS] )]
+          | 'CSV'
+            [( [HEADER]
+               [QUOTE [AS] '<quote>']
+               [DELIMITER [AS] '<delimiter>']
+               [NULL [AS] '<null string>']
+               [FORCE NOT NULL <column> [, ...]]
+               [ESCAPE [AS] '<escape>']
+               [NEWLINE [ AS ] 'LF' | 'CR' | 'CRLF']
+               [FILL MISSING FIELDS] )]
+          | 'CUSTOM' (Formatter=<formatter specifications>)
+    [ ENCODING '<encoding>' ]
+    [ [LOG ERRORS INTO <error_table>] SEGMENT REJECT LIMIT <count>
+      [ROWS | PERCENT] ]
+
+CREATE [READABLE] EXTERNAL WEB TABLE <table_name>
+    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+    LOCATION ('http://<webhost>[:<port>]/<path>/<file>' [, ...])
+      | EXECUTE '<command>' ON { MASTER | <number_of_segments> | SEGMENT #<num> }
+    FORMAT 'TEXT'
+            [( [HEADER]
+               [DELIMITER [AS] '<delimiter>' | 'OFF']
+               [NULL [AS] '<null string>']
+               [ESCAPE [AS] '<escape>' | 'OFF']
+               [NEWLINE [ AS ] 'LF' | 'CR' | 'CRLF']
+               [FILL MISSING FIELDS] )]
+          | 'CSV'
+            [( [HEADER]
+               [QUOTE [AS] '<quote>']
+               [DELIMITER [AS] '<delimiter>']
+               [NULL [AS] '<null string>']
+               [FORCE NOT NULL <column> [, ...]]
+               [ESCAPE [AS] '<escape>']
+               [NEWLINE [ AS ] 'LF' | 'CR' | 'CRLF']
+               [FILL MISSING FIELDS] )]
+          | 'CUSTOM' (Formatter=<formatter specifications>)
+    [ ENCODING '<encoding>' ]
+    [ [LOG ERRORS INTO <error_table>] SEGMENT REJECT LIMIT <count>
+      [ROWS | PERCENT] ]
+
+CREATE WRITABLE EXTERNAL TABLE <table_name>
+    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+    LOCATION ('gpfdist://<outputhost>[:<port>]/<filename>[#<transform>]' [, ...])
+      | ('gpfdists://<outputhost>[:<port>]/<file_pattern>[#<transform>]' [, ...])
+      | ('pxf://<host>[:<port>]/<path-to-data>?<pxf parameters>')
+    FORMAT 'TEXT'
+               [( [DELIMITER [AS] '<delimiter>']
+               [NULL [AS] '<null string>']
+               [ESCAPE [AS] '<escape>' | 'OFF'] )]
+          | 'CSV'
+               [( [QUOTE [AS] '<quote>']
+               [DELIMITER [AS] '<delimiter>']
+               [NULL [AS] '<null string>']
+               [FORCE QUOTE <column> [, ...]]
+               [ESCAPE [AS] '<escape>'] )]
+          | 'CUSTOM' (Formatter=<formatter specifications>)
+    [ ENCODING '<write_encoding>' ]
+    [ DISTRIBUTED BY (<column> [, ...] ) | DISTRIBUTED RANDOMLY ]
+
+CREATE WRITABLE EXTERNAL WEB TABLE <table_name>
+    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+    EXECUTE '<command>' ON #<num>
+    FORMAT 'TEXT'
+               [( [DELIMITER [AS] '<delimiter>']
+               [NULL [AS] '<null string>']
+               [ESCAPE [AS] '<escape>' | 'OFF'] )]
+          | 'CSV'
+               [( [QUOTE [AS] '<quote>']
+               [DELIMITER [AS] '<delimiter>']
+               [NULL [AS] '<null string>']
+               [FORCE QUOTE <column> [, ...]]
+               [ESCAPE [AS] '<escape>'] )]
+          | 'CUSTOM' (Formatter=<formatter specifications>)
+    [ ENCODING '<write_encoding>' ]
+    [ DISTRIBUTED BY (<column> [, ...] ) | DISTRIBUTED RANDOMLY ]
+```
+
+where \<pxf parameters\> is:
+
+``` pre
+   ?FRAGMENTER=<class>&ACCESSOR=<class>&RESOLVER=<class>[&<custom-option>=<value>...]
+ | ?PROFILE=<profile-name>[&<custom-option>=<value>...]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE EXTERNAL TABLE` or `CREATE EXTERNAL WEB TABLE` creates a new readable external table definition in HAWQ. Readable external tables are typically used for fast, parallel data loading. Once an external table is defined, you can query its data directly (and in parallel) using SQL commands. For example, you can select, join, or sort external table data. You can also create views for external tables. DML operations (`UPDATE`, `INSERT`, `DELETE`, or `TRUNCATE`) are not permitted on readable external tables.
+
+`CREATE WRITABLE EXTERNAL TABLE` or `CREATE WRITABLE EXTERNAL WEB TABLE` creates a new writable external table definition in HAWQ. Writable external tables are typically used for unloading data from the database into a set of files or named pipes.
+
+Writable external web tables can also be used to output data to an executable program. Once a writable external table is defined, data can be selected from database tables and inserted into the writable external table. Writable external tables only allow `INSERT` operations; `SELECT`, `UPDATE`, `DELETE`, and `TRUNCATE` are not allowed.
+
+Regular readable external tables can access static flat files or, by using HAWQ Extensions Framework (PXF), data from other sources. PXF plug-ins are included for HDFS, HBase, and Hive tables. Custom plug-ins can be created for other external data sources using the PXF API.
+
+Web external tables access dynamic data sources, either on a web server or by executing OS commands or scripts.
+
+The LOCATION clause specifies the location of the external data. The location string begins with a protocol string that specifies the storage type and protocol used to access the data. The `gpfdist://` protocol specifies data files served by one or more instances of the HAWQ file server `gpfdist`. The `http://` protocol specifies one or more HTTP URLs and is used with web tables. The `pxf://` protocol specifies data accessed through the PXF service, which provides access to data in a Hadoop system. Using the PXF API, you can create PXF plug-ins to provide HAWQ access to any other data source.
+
+**Note:** The `file://` protocol is deprecated. Instead, use the `gpfdist://`, `gpfdists://`, or `pxf://` protocol, or the `COPY` command.
+
+The `FORMAT` clause is used to describe how external table files are formatted. Valid flat file formats, including files in HDFS, are delimited text (`TEXT`) and comma separated values (`CSV`) format for `gpfdist` protocols. If the data in the file does not use the default column delimiter, escape character, null string, and so on, you must specify the additional formatting options so that the data in the external file is read correctly by HAWQ.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>READABLE | WRITABLE  </dt>
+<dd>Specifies the type of external table; readable is the default. Readable external tables are used for loading data into HAWQ. Writable external tables are used for unloading data.</dd>
+
+<dt>WEB  </dt>
+<dd>Creates a readable or writable web external table definition in HAWQ. There are two forms of readable web external tables: those that access files via the `http://` protocol, and those that access data by executing OS commands. Writable web external tables output data to an executable program that can accept an input stream of data. Web external tables are not rescannable during query execution.</dd>
+
+<dt> \<table\_name\>   </dt>
+<dd>The name of the new external table.</dd>
+
+<dt> \<column\_name\>   </dt>
+<dd>The name of a column to create in the external table definition. Unlike regular tables, external tables do not have column constraints or default values, so do not specify those.</dd>
+
+<dt>LIKE \<other\_table\>   </dt>
+<dd>The `LIKE` clause specifies a table from which the new external table automatically copies all column names, data types and HAWQ distribution policy. If the original table specifies any column constraints or default column values, those will not be copied over to the new external table definition.</dd>
+
+<dt> \<data\_type\>   </dt>
+<dd>The data type of the column.</dd>
+
+<dt>LOCATION ('\<protocol\>://\<host\>\[:\<port\>\]/\<path\>/\<file\>' \[, ...\])   </dt>
+<dd>For readable external tables, specifies the URI of the external data source(s) to be used to populate the external table or web table. Regular readable external tables allow the `file`, `gpfdist`, and `pxf` protocols. Web external tables allow the `http` protocol. If \<port\> is omitted, the `http` and `gpfdist` protocols assume port `8080` and the `pxf` protocol assumes the \<host\> is a high availability nameservice string. If using the `gpfdist` protocol, the \<path\> is relative to the directory from which `gpfdist` is serving files (the directory specified when you started the `gpfdist` program). Also, the \<path\> can use wildcards (or other C-style pattern matching) in the \<file\> name part of the location to denote multiple files in a directory. For example:
+
+``` pre
+'gpfdist://filehost:8081/*'
+'gpfdist://masterhost/my_load_file'
+'http://intranet.example.com/finance/expenses.csv'
+'pxf://mdw:41200/sales/*.csv?Profile=HDFS'
+```
+
+For writable external tables, specifies the URI location of the `gpfdist` process that will collect data output from the HAWQ segments and write it to the named file. The \<path\> is relative to the directory from which `gpfdist` is serving files (the directory specified when you started the `gpfdist` program). If multiple `gpfdist` locations are listed, the segments sending data will be evenly divided across the available output locations. For example:
+
+``` pre
+'gpfdist://outputhost:8081/data1.out',
+'gpfdist://outputhost:8081/data2.out'
+```
+
+With two `gpfdist` locations listed as in the above example, half of the segments would send their output data to the `data1.out` file and the other half to the `data2.out` file.
+
+For the `pxf` protocol, the `LOCATION` string specifies the \<host\> and \<port\> of the PXF service, the location of the data, and the PXF plug-ins (Java classes) used to convert the data between storage format and HAWQ format. If the \<port\> is omitted, the \<host\> is taken to be the logical name for the high availability name service and the \<port\> is the value of the `pxf_service_port` configuration variable, 51200 by default. The URL parameters `FRAGMENTER`, `ACCESSOR`, and `RESOLVER` are the names of PXF plug-ins (Java classes) that convert between the external data format and HAWQ data format. The `FRAGMENTER` parameter is only used with readable external tables. PXF allows combinations of these parameters to be configured as profiles so that a single `PROFILE` parameter can be specified to access external data, for example `?PROFILE=Hive`. Additional \<custom-options\> can be added to the LOCATION URI to further describe the external data format or storage options. For details about the plug-ins and profiles provided with PXF and information about creating custom plug-ins for other data sources, see [Using PXF with Unmanaged Data](../../pxf/HawqExtensionFrameworkPXF.html).</dd>
+
+<dt>EXECUTE '\<command\>' ON ...  </dt>
+<dd>Allowed for readable web external tables or writable external tables only. For readable web external tables, specifies the OS command to be executed by the segment instances. The \<command\> can be a single OS command or a script. If \<command\> executes a script, that script must reside in the same location on all of the segment hosts and be executable by the HAWQ superuser (`gpadmin`).
+
+For writable external tables, the \<command\> specified in the `EXECUTE` clause must be prepared to have data piped into it, as segments having data to send write their output to the specified program. HAWQ uses virtual elastic segments to run its queries.
+
+The `ON` clause is used to specify which segment instances will execute the given command. For writable external tables, only `ON` \<number\> is supported.
+
+**Note:** ON ALL/HOST is deprecated when creating a readable external table, as HAWQ cannot guarantee scheduling executors on a specific host. Instead, use `ON MASTER`, `ON <number>`, or `ON SEGMENT <virtual_segment>` to specify which segment instances will execute the command.
+
+-   `ON MASTER` runs the command on the master host only.
+-   `ON <number>` means the command will be executed by the specified number of virtual segments. The particular segments are chosen by the HAWQ system's Resource Manager at runtime.
+-   `ON SEGMENT <virtual_segment>` means the command will be executed only once by the specified segment.
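+
+For example, a hedged sketch of a readable web external table that runs a command on the master only (the table name and command are illustrative):
+
+``` pre
+CREATE EXTERNAL WEB TABLE master_uptime (output text)
+EXECUTE 'uptime' ON MASTER
+FORMAT 'TEXT' (DELIMITER 'OFF');
+```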
+</dd>
+
+<dt>FORMAT 'TEXT | CSV' (\<options\>)   </dt>
+<dd>Specifies the format of the external or web table data - either plain text (`TEXT`) or comma separated values (`CSV`) format.</dd>
+
+<dt>DELIMITER  </dt>
+<dd>Specifies a single ASCII character that separates columns within each row (line) of data. The default is a tab character in `TEXT` mode, a comma in `CSV` mode. In `TEXT` mode for readable external tables, the delimiter can be set to `OFF` for special use cases in which unstructured data is loaded into a single-column table.</dd>
+
+<dt>NULL  </dt>
+<dd>Specifies the string that represents a `NULL` value. The default is `\N` (backslash-N) in `TEXT` mode, and an empty value with no quotations in `CSV` mode. You might prefer an empty string even in `TEXT` mode for cases where you do not want to distinguish `NULL` values from empty strings. When using external and web tables, any data item that matches this string will be considered a `NULL` value.</dd>
+
+<dt>ESCAPE  </dt>
+<dd>Specifies the single character that is used for C escape sequences (such as `\n`, `\t`, `\100`, and so on) and for escaping data characters that might otherwise be taken as row or column delimiters. Make sure to choose an escape character that is not used anywhere in your actual column data. The default escape character is a \\ (backslash) for text-formatted files and a `"` (double quote) for csv-formatted files; however, you can specify another character to represent an escape. It is also possible to disable escaping in text-formatted files by specifying the value `'OFF'` as the escape value. This is very useful for data such as text-formatted web log data that has many embedded backslashes that are not intended to be escapes.</dd>
+
+<dt>NEWLINE  </dt>
+<dd>Specifies the newline used in your data files: `LF` (Line feed, 0x0A), `CR` (Carriage return, 0x0D), or `CRLF` (Carriage return plus line feed, 0x0D 0x0A). If not specified, a HAWQ segment will detect the newline type by looking at the first row of data it receives and using the first newline type encountered.</dd>
+
+<dt>HEADER  </dt>
+<dd>For readable external tables, specifies that the first line in the data file(s) is a header row (contains the names of the table columns) and should not be included as data for the table. If using multiple data source files, all files must have a header row.
+
+**Note:** The `HEADER` formatting option is not allowed with PXF.
+For CSV files or other files that include a header line, use an error table instead of the `HEADER` formatting option.</dd>
+
+<dt>QUOTE  </dt>
+<dd>Specifies the quotation character for `CSV` mode. The default is double-quote (`"`).</dd>
+
+<dt>FORCE NOT NULL  </dt>
+<dd>In `CSV` mode, processes each specified column as though it were quoted and hence not a `NULL` value. For the default null string in `CSV` mode (nothing between two delimiters), this causes missing values to be evaluated as zero-length strings.</dd>
+
+<dt>FORCE QUOTE  </dt>
+<dd>In `CSV` mode for writable external tables, forces quoting to be used for all non-`NULL` values in each specified column. `NULL` output is never quoted.</dd>
+
+<dt>FILL MISSING FIELDS  </dt>
+<dd>In both `TEXT` and `CSV` mode for readable external tables, specifying `FILL MISSING FIELDS` will set missing trailing field values to `NULL` (instead of reporting an error) when a row of data has missing data fields at the end of a line or row. Blank rows, fields with a `NOT NULL` constraint, and trailing delimiters on a line will still report an error.</dd>
+
+<dt>ENCODING '\<encoding\>'   </dt>
+<dd>Character set encoding to use for the external table. Specify a string constant (such as `'SQL_ASCII'`), an integer encoding number, or `DEFAULT` to use the default client encoding.</dd>
+
+<dt>LOG ERRORS INTO \<error\_table\>  </dt>
+<dd>This is an optional clause that can precede a `SEGMENT REJECT LIMIT` clause to log information about rows with formatting errors. It specifies an error table where rows with formatting errors will be logged when running in single row error isolation mode. You can then examine this \<error\_table\> to see error rows that were not loaded (if any). If the \<error\_table\> specified already exists, it will be used. If it does not exist, it will be automatically generated.</dd>
+
+<dt>SEGMENT REJECT LIMIT \<count\> \[ROWS | PERCENT\]  </dt>
+<dd>Runs a `COPY FROM` operation in single row error isolation mode. If the input rows have format errors they will be discarded provided that the reject limit \<count\> is not reached on any HAWQ segment instance during the load operation. The reject limit \<count\> can be specified as number of rows (the default) or percentage of total rows (1-100). If `PERCENT` is used, each segment starts calculating the bad row percentage only after the number of rows specified by the parameter `gp_reject_percent_threshold` has been processed. The default for `gp_reject_percent_threshold` is 300 rows. Constraint errors such as violation of a `NOT NULL` or `CHECK` constraint will still be handled in "all-or-nothing" input mode. If the limit is not reached, all good rows will be loaded and any error rows discarded.</dd>
+
+<dt>DISTRIBUTED RANDOMLY  </dt>
+<dd>Used to declare the HAWQ distribution policy for a writable external table. By default, writable external tables are distributed randomly. If the source table you are exporting data from has a hash distribution policy, defining the same distribution key column(s) for the writable external table will improve unload performance by eliminating the need to move rows over the interconnect. When you issue an unload command such as `INSERT INTO wex_table SELECT * FROM source_table`, the rows that are unloaded can be sent directly from the segments to the output location if the two tables have the same hash distribution policy.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Start the `gpfdist` file server program in the background on port `8081` serving files from directory `/var/data/staging`:
+
+``` pre
+gpfdist -p 8081 -d /var/data/staging -l /home/gpadmin/log &
+```
+
+Create a readable external table named `ext_customer` using the `gpfdist` protocol and any text formatted files (`*.txt`) found in the `gpfdist` directory. The files are formatted with a pipe (`|`) as the column delimiter and an empty space as `NULL`. Also access the external table in single row error isolation mode:
+
+``` pre
+CREATE EXTERNAL TABLE ext_customer
+   (id int, name text, sponsor text) 
+   LOCATION ( 'gpfdist://filehost:8081/*.txt' ) 
+   FORMAT 'TEXT' ( DELIMITER '|' NULL ' ')
+   LOG ERRORS INTO err_customer SEGMENT REJECT LIMIT 5;
+```
+
+Create the same readable external table definition as above, but with CSV formatted files:
+
+``` pre
+CREATE EXTERNAL TABLE ext_customer 
+   (id int, name text, sponsor text) 
+   LOCATION ( 'gpfdist://filehost:8081/*.csv' ) 
+   FORMAT 'CSV' ( DELIMITER ',' );
+```
+
+Create a readable external table using the `pxf` protocol to read data in HDFS files:
+
+``` pre
+CREATE EXTERNAL TABLE ext_customer 
+    (id int, name text, sponsor text)
+LOCATION ('pxf://mdw:51200/sales/customers/customers.tsv.gz'
+          '?Fragmenter=org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter'
+          '&Accessor=org.apache.hawq.pxf.plugins.hdfs.LineBreakAccessor'
+          '&Resolver=org.apache.hawq.pxf.plugins.hdfs.StringPassResolver')
+FORMAT 'TEXT' (DELIMITER = E'\t');
+```
+
+The `LOCATION` string in this command is equivalent to the previous example, but using a PXF Profile:
+
+``` pre
+CREATE EXTERNAL TABLE ext_customer 
+    (id int, name text, sponsor text)
+LOCATION ('pxf://mdw:51200/sales/customers/customers.tsv.gz?Profile=HdfsTextSimple')
+FORMAT 'TEXT' (DELIMITER = E'\t');
+```
+
+Create a readable web external table that executes a script on five virtual segments. (The script must reside at the same location on all segment hosts.)
+
+``` pre
+CREATE EXTERNAL WEB TABLE log_output (linenum int, message text)
+EXECUTE '/var/load_scripts/get_log_data.sh' ON 5 
+FORMAT 'TEXT' (DELIMITER '|');
+```
+
+Create a writable external table named `sales_out` that uses `gpfdist` to write output data to a file named `sales.out`. The files are formatted with a pipe (`|`) as the column delimiter and an empty space as `NULL`.
+
+``` pre
+CREATE WRITABLE EXTERNAL TABLE sales_out (LIKE sales) 
+   LOCATION ('gpfdist://etl1:8081/sales.out')
+   FORMAT 'TEXT' ( DELIMITER '|' NULL ' ')
+   DISTRIBUTED BY (txn_id);
+```
+
+The following command sequence shows how to create a writable external web table using a specified number of elastic virtual segments to run the query:
+
+``` pre
+postgres=# CREATE TABLE a (i int);
+CREATE TABLE
+postgres=# INSERT INTO a VALUES(1);
+INSERT 0 1
+postgres=# INSERT INTO a VALUES(2);
+INSERT 0 1
+postgres=# INSERT INTO a VALUES(10);
+INSERT 0 1
+postgres=# CREATE WRITABLE EXTERNAL WEB TABLE externala (output text) 
+postgres-# EXECUTE 'cat > /tmp/externala' ON 3 
+postgres-# FORMAT 'TEXT' DISTRIBUTED RANDOMLY;
+CREATE EXTERNAL TABLE
+postgres=# INSERT INTO externala SELECT * FROM a;
+INSERT 0 3
+```
+
+Create a writable external web table that pipes output data received by the segments to an executable script named `to_adreport_etl.sh`:
+
+``` pre
+CREATE WRITABLE EXTERNAL WEB TABLE campaign_out (LIKE campaign)  
+EXECUTE '/var/unload_scripts/to_adreport_etl.sh'
+FORMAT 'TEXT' (DELIMITER '|');
+```
+
+Use the writable external table defined above to unload selected data:
+
+``` pre
+INSERT INTO campaign_out 
+    SELECT * FROM campaign WHERE customer_id=123;
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+`CREATE EXTERNAL TABLE` is a HAWQ extension. The SQL standard makes no provisions for external tables.
+
+## <a id="topic1__section7"></a>See Also
+
+[CREATE TABLE](CREATE-TABLE.html), [CREATE TABLE AS](CREATE-TABLE-AS.html), [COPY](COPY.html), [INSERT](INSERT.html), [SELECT INTO](SELECT-INTO.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-FUNCTION.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-FUNCTION.html.md.erb b/markdown/reference/sql/CREATE-FUNCTION.html.md.erb
new file mode 100644
index 0000000..6675752
--- /dev/null
+++ b/markdown/reference/sql/CREATE-FUNCTION.html.md.erb
@@ -0,0 +1,190 @@
+---
+title: CREATE FUNCTION
+---
+
+Defines a new function.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE [OR REPLACE] FUNCTION <name>
+    ( [ [<argmode>] [<argname>] <argtype> [, ...] ] )
+    [ RETURNS { [ SETOF ] <rettype>
+        | TABLE ([{ <argname> <argtype> | LIKE <other table> }
+          [, ...]])
+        } ]
+    { LANGUAGE <langname>
+    | IMMUTABLE | STABLE | VOLATILE
+    | CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT
+    | [EXTERNAL] SECURITY INVOKER | [EXTERNAL] SECURITY DEFINER
+    | AS '<definition>'
+    | AS '<obj_file>', '<link_symbol>' } ...
+    [ WITH ({ DESCRIBE = <describe_function>
+           } [, ...] ) ]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE FUNCTION` defines a new function. `CREATE OR REPLACE FUNCTION` will either create a new function, or replace an existing definition.
+
+The name of the new function must not match any existing function with the same argument types in the same schema. However, functions of different argument types may share a name (overloading).
+
+To update the definition of an existing function, use `CREATE OR REPLACE FUNCTION`. It is not possible to change the name or argument types of a function this way (this would actually create a new, distinct function). Also, `CREATE OR REPLACE FUNCTION` will not let you change the return type of an existing function. To do that, you must drop and recreate the function. If you drop and then recreate a function, you will have to drop existing objects (rules, views, and so on) that refer to the old function. Use `CREATE OR REPLACE FUNCTION` to change a function definition without breaking objects that refer to the function.
+
+For more information about creating functions, see [User-Defined Functions](../../query/functions-operators.html#topic28).
+
+**Limited Use of VOLATILE and STABLE Functions**
+
+To prevent data from becoming out-of-sync across the segments in HAWQ, any function classified as `STABLE` or `VOLATILE` cannot be executed at the segment level if it contains SQL or modifies the database in any way. For example, functions such as `random()` or `timeofday()` are not allowed to execute on distributed data in HAWQ because they could potentially cause inconsistent data between the segment instances.
+
+To ensure data consistency, `VOLATILE` and `STABLE` functions can safely be used in statements that are evaluated on and execute from the master. For example, the following statements are always executed on the master (statements without a `FROM` clause):
+
+``` pre
+SELECT setval('myseq', 201);
+SELECT foo();
+```
+
+In cases where a statement has a `FROM` clause containing a distributed table and the function used in the `FROM` clause simply returns a set of rows, execution may be allowed on the segments:
+
+``` pre
+SELECT * FROM foo();
+```
+
+One exception to this rule is functions that return a table reference (`rangeFuncs`) or functions that use the `refCursor` data type. Note that you cannot return a `refcursor` from any kind of function in HAWQ.
+
+## <a id="topic1__section5"></a>Parameters
+
+<dt> \<name\>  </dt>
+<dd>The name (optionally schema-qualified) of the function to create.</dd>
+
+<dt> \<argmode\>  </dt>
+<dd>The mode of an argument: either `IN`, `OUT`, or `INOUT`. If omitted, the default is `IN`.</dd>
+
+<dt> \<argname\>  </dt>
+<dd>The name of an argument. Some languages (currently only PL/pgSQL) let you use the name in the function body. For other languages the name of an input argument is just extra documentation. But the name of an output argument is significant, since it defines the column name in the result row type. (If you omit the name for an output argument, the system will choose a default column name.)</dd>
+
+<dt> \<argtype\>  </dt>
+<dd>The data type(s) of the function's arguments (optionally schema-qualified), if any. The argument types may be base, composite, or domain types, or may reference the type of a table column.
+
+Depending on the implementation language it may also be allowed to specify pseudotypes such as `cstring`. Pseudotypes indicate that the actual argument type is either incompletely specified, or outside the set of ordinary SQL data types.
+
+The type of a column is referenced by writing `<tablename>.<columnname>%TYPE`. Using this feature can sometimes help make a function independent of changes to the definition of a table.</dd>
+
+<dt> \<rettype\>  </dt>
+<dd>The return data type (optionally schema-qualified). The return type can be a base, composite, or domain type, or may reference the type of a table column. Depending on the implementation language it may also be allowed to specify pseudotypes such as `cstring`. If the function is not supposed to return a value, specify `void` as the return type.
+
+When there are `OUT` or `INOUT` parameters, the `RETURNS` clause may be omitted. If present, it must agree with the result type implied by the output parameters: `RECORD` if there are multiple output parameters, or the same type as the single output parameter.
+
+The `SETOF` modifier indicates that the function will return a set of items, rather than a single item.
+
+The type of a column is referenced by writing `<tablename>.<columnname>%TYPE`.</dd>
+
+<dt> \<langname\>  </dt>
+<dd>The name of the language that the function is implemented in. May be `SQL`, `C`, `internal`, or the name of a user-defined procedural language. See [CREATE LANGUAGE](CREATE-LANGUAGE.html) for the procedural languages supported in HAWQ. For backward compatibility, the name may be enclosed by single quotes.</dd>
+
+<dt>IMMUTABLE  
+STABLE  
+VOLATILE  </dt>
+<dd>These attributes inform the query optimizer about the behavior of the function. At most one choice may be specified. If none of these appear, `VOLATILE` is the default assumption. Since HAWQ currently has limited use of `VOLATILE` functions, if a function is truly `IMMUTABLE`, you must declare it as such to be able to use it without restrictions.
+
+`IMMUTABLE` indicates that the function cannot modify the database and always returns the same result when given the same argument values. It does not do database lookups or otherwise use information not directly present in its argument list. If this option is given, any call of the function with all-constant arguments can be immediately replaced with the function value.
+
+`STABLE` indicates that the function cannot modify the database, and that within a single table scan it will consistently return the same result for the same argument values, but that its result could change across SQL statements. This is the appropriate selection for functions whose results depend on database lookups, parameter values (such as the current time zone), and so on. Also note that the *current\_timestamp* family of functions qualify as stable, since their values do not change within a transaction.
+
+`VOLATILE` indicates that the function value can change even within a single table scan, so no optimizations can be made. Relatively few database functions are volatile in this sense; some examples are `random()`, `currval()`, `timeofday()`. But note that any function that has side-effects must be classified volatile, even if its result is quite predictable, to prevent calls from being optimized away; an example is `setval()`.</dd>
+
+<dt>CALLED ON NULL INPUT  
+RETURNS NULL ON NULL INPUT  
+STRICT  </dt>
+<dd>`CALLED ON NULL INPUT` (the default) indicates that the function will be called normally when some of its arguments are null. It is then the function author's responsibility to check for null values if necessary and respond appropriately. `RETURNS NULL ON NULL INPUT` or `STRICT` indicates that the function always returns null whenever any of its arguments are null. If this parameter is specified, the function is not executed when there are null arguments; instead a null result is assumed automatically.</dd>
+
+<dt>\[EXTERNAL\] SECURITY INVOKER  
+\[EXTERNAL\] SECURITY DEFINER  </dt>
+<dd>`SECURITY INVOKER` (the default) indicates that the function is to be executed with the privileges of the user that calls it. `SECURITY DEFINER` specifies that the function is to be executed with the privileges of the user that created it. The key word `EXTERNAL` is allowed for SQL conformance, but it is optional since, unlike in SQL, this feature applies to all functions not just external ones.</dd>
+
+<dt> \<definition\>  </dt>
+<dd>A string constant defining the function; the meaning depends on the language. It may be an internal function name, the path to an object file, an SQL command, or text in a procedural language.</dd>
+
+<dt> \<obj\_file\>, \<link\_symbol\>  </dt>
+<dd>This form of the `AS` clause is used for dynamically loadable C language functions when the function name in the C language source code is not the same as the name of the SQL function. The string \<obj\_file\> is the name of the file containing the dynamically loadable object, and \<link\_symbol\> is the name of the function in the C language source code. If the link symbol is omitted, it is assumed to be the same as the name of the SQL function being defined. A good practice is to locate shared libraries either relative to `$libdir` (which is located at `$GPHOME/lib`) or through the dynamic library path (set by the `dynamic_library_path` server configuration parameter). This simplifies version upgrades if the new installation is at a different location.</dd>
+
+<dt> \<describe\_function\>  </dt>
+<dd>The name of a callback function to execute when a query that calls this function is parsed. The callback function returns a tuple descriptor that indicates the result type.</dd>
+
+## <a id="topic1__section6"></a>Notes
+
+Any compiled code (shared library files) for custom functions must be placed in the same location on every host in your HAWQ array (master and all segments). This location must also be in the `LD_LIBRARY_PATH` so that the server can locate the files. Consider locating shared libraries either relative to `$libdir` (which is located at `$GPHOME/lib`) or through the dynamic library path (set by the `dynamic_library_path` server configuration parameter) on all master segment instances in the HAWQ array.
+
+The full SQL type syntax is allowed for input arguments and return value. However, some details of the type specification (such as the precision field for type *numeric*) are the responsibility of the underlying function implementation and are not recognized or enforced by the `CREATE FUNCTION` command.
+
+HAWQ allows function overloading. The same name can be used for several different functions so long as they have distinct argument types. However, the C names of all functions must be different, so you must give overloaded C functions different C names (for example, use the argument types as part of the C names).
+
+Two functions are considered the same if they have the same names and input argument types, ignoring any `OUT` parameters. Thus for example these declarations conflict:
+
+``` pre
+CREATE FUNCTION foo(int) ...
+CREATE FUNCTION foo(int, out text) ...
+```
+
+When repeated `CREATE FUNCTION` calls refer to the same object file, the file is only loaded once. To unload and reload the file, use the `LOAD` command.
+
+To be able to define a function, the user must have the `USAGE` privilege on the language.
+
+It is often helpful to use dollar quoting to write the function definition string, rather than the normal single quote syntax. Without dollar quoting, any single quotes or backslashes in the function definition must be escaped by doubling them. A dollar-quoted string constant consists of a dollar sign (`$`), an optional tag of zero or more characters, another dollar sign, an arbitrary sequence of characters that makes up the string content, a dollar sign, the same tag that began this dollar quote, and a dollar sign. Inside the dollar-quoted string, single quotes, backslashes, or any character can be used without escaping. The string content is always written literally. For example, here are two different ways to specify the string "Dianne's horse" using dollar quoting:
+
+``` pre
+$$Dianne's horse$$
+$SomeTag$Dianne's horse$SomeTag$
+```
+
+## <a id="topic1__section8"></a>Examples
+
+A very simple addition function:
+
+``` pre
+CREATE FUNCTION add(integer, integer) RETURNS integer
+    AS 'select $1 + $2;'
+    LANGUAGE SQL
+    IMMUTABLE
+    RETURNS NULL ON NULL INPUT;
+```
+
+Increment an integer, making use of an argument name, in PL/pgSQL:
+
+``` pre
+CREATE OR REPLACE FUNCTION increment(i integer) RETURNS
+integer AS $$
+        BEGIN
+                RETURN i + 1;
+        END;
+$$ LANGUAGE plpgsql;
+```
+
+Return a record containing multiple output parameters:
+
+``` pre
+CREATE FUNCTION dup(in int, out f1 int, out f2 text)
+    AS $$ SELECT $1, CAST($1 AS text) || ' is text' $$
+    LANGUAGE SQL;
+SELECT * FROM dup(42);
+```
+
+You can do the same thing more verbosely with an explicitly named composite type:
+
+``` pre
+CREATE TYPE dup_result AS (f1 int, f2 text);
+CREATE FUNCTION dup(int) RETURNS dup_result
+    AS $$ SELECT $1, CAST($1 AS text) || ' is text' $$
+    LANGUAGE SQL;
+SELECT * FROM dup(42);
+```
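+
+As a further illustration, a dynamically loaded C function can be registered by pointing the `AS` clause at a shared object and link symbol, as described in the parameter list above. This is only a sketch; the library path `$libdir/myfuncs` and the symbol `add_one_float8` are hypothetical placeholders, not part of the HAWQ distribution:
+
+``` pre
+CREATE FUNCTION add_one(float8) RETURNS float8
+    AS '$libdir/myfuncs', 'add_one_float8'
+    LANGUAGE C
+    IMMUTABLE STRICT;
+```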
+
+## <a id="topic1__section9"></a>Compatibility
+
+`CREATE FUNCTION` is defined in SQL:1999 and later. The HAWQ version of the command is similar, but not fully compatible. The attributes are not portable, neither are the different available languages.
+
+For compatibility with some other database systems, \<argmode\> can be written either before or after \<argname\>. But only the first way is standard-compliant.
+
+## <a id="topic1__section10"></a>See Also
+
+[ALTER FUNCTION](ALTER-FUNCTION.html), [DROP FUNCTION](DROP-FUNCTION.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-GROUP.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-GROUP.html.md.erb b/markdown/reference/sql/CREATE-GROUP.html.md.erb
new file mode 100644
index 0000000..79cc6aa
--- /dev/null
+++ b/markdown/reference/sql/CREATE-GROUP.html.md.erb
@@ -0,0 +1,43 @@
+---
+title: CREATE GROUP
+---
+
+Defines a new database role.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE GROUP <name> [ [WITH] <option> [ ... ] ]
+```
+
+where \<option\> can be:
+
+``` pre
+      SUPERUSER | NOSUPERUSER
+    | CREATEDB | NOCREATEDB
+    | CREATEROLE | NOCREATEROLE
+    | CREATEUSER | NOCREATEUSER
+    | INHERIT | NOINHERIT
+    | LOGIN | NOLOGIN
+    | [ ENCRYPTED | UNENCRYPTED ] PASSWORD '<password>'
+    | VALID UNTIL '<timestamp>' 
+    | IN ROLE <rolename> [, ...]
+    | IN GROUP <rolename> [, ...]
+    | ROLE <rolename> [, ...]
+    | ADMIN <rolename> [, ...]
+    | USER <rolename> [, ...]
+    | SYSID <uid>
+         
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE GROUP` has been replaced by [CREATE ROLE](CREATE-ROLE.html), although it is still accepted for backwards compatibility.
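+
+For example, the following two statements are equivalent; the role name `sales_team` is only a placeholder, and the `CREATE ROLE` form is the preferred one:
+
+``` pre
+CREATE GROUP sales_team CREATEDB;
+CREATE ROLE sales_team CREATEDB;
+```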
+
+## <a id="topic1__section4"></a>Compatibility
+
+There is no `CREATE GROUP` statement in the SQL standard.
+
+## <a id="topic1__section5"></a>See Also
+
+[CREATE ROLE](CREATE-ROLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-LANGUAGE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-LANGUAGE.html.md.erb b/markdown/reference/sql/CREATE-LANGUAGE.html.md.erb
new file mode 100644
index 0000000..6643fef
--- /dev/null
+++ b/markdown/reference/sql/CREATE-LANGUAGE.html.md.erb
@@ -0,0 +1,93 @@
+---
+title: CREATE LANGUAGE
+---
+
+Defines a new procedural language.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE [PROCEDURAL] LANGUAGE <name>
+
+CREATE [TRUSTED] [PROCEDURAL] LANGUAGE <name>
+    HANDLER <call_handler> [VALIDATOR <valfunction>]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE LANGUAGE` registers a new procedural language with a HAWQ database. Subsequently, functions can be defined in this new language. You must be a superuser to register a new language.
+
+When you register a new procedural language, you effectively associate the language name with a call handler that is responsible for executing functions written in that language. For a function written in a procedural language (a language other than C or SQL), the database server has no built-in knowledge about how to interpret the function's source code. The task is passed to a special handler that knows the details of the language. The handler could either do all the work of parsing, syntax analysis, execution, and so on, or it could serve as a bridge between HAWQ and an existing implementation of a programming language. The handler itself is a C language function compiled into a shared object and loaded on demand, just like any other C function.
+
+There are two forms of the `CREATE LANGUAGE` command. In the first form, the user specifies the name of the desired language and the HAWQ server uses the `pg_pltemplate` system catalog to determine the correct parameters. In the second form, the user specifies the language parameters as well as the language name. You can use the second form to create a language that is not defined in `pg_pltemplate`.
+
+When the server finds an entry in the `pg_pltemplate` catalog for the given language name, it will use the catalog data even if the command includes language parameters. This behavior simplifies loading of old dump files, which are likely to contain out-of-date information about language support functions.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>TRUSTED  </dt>
+<dd>Ignored if the server has an entry for the specified language name in `pg_pltemplate`. Specifies that the call handler for the language is safe and does not offer an unprivileged user any functionality to bypass access restrictions. If this key word is omitted when registering the language, only users with the superuser privilege can use this language to create new functions.</dd>
+
+<dt>PROCEDURAL  </dt>
+<dd>Indicates that this is a procedural language.</dd>
+
+<dt> \<name\>   </dt>
+<dd>The name of the new procedural language. The language name is case insensitive. The name must be unique among the languages in the database. Built-in support is included for `plpgsql`, `plpython`, `plpythonu`, and `plr`. `plpgsql` is installed by default in HAWQ.</dd>
+
+<dt>HANDLER \<call\_handler\>   </dt>
+<dd>Ignored if the server has an entry for the specified language name in `pg_pltemplate`. The name of a previously registered function that will be called to execute the procedural language functions. The call handler for a procedural language must be written in a compiled language such as C with version 1 call convention and registered with HAWQ as a function taking no arguments and returning the `language_handler` type, a placeholder type that is simply used to identify the function as a call handler.</dd>
+
+<dt>VALIDATOR \<valfunction\>   </dt>
+<dd>Ignored if the server has an entry for the specified language name in `pg_pltemplate`. \<valfunction\> is the name of a previously registered function that will be called when a new function in the language is created, to validate the new function. If no validator function is specified, then a new function will not be checked when it is created. The validator function must take one argument of type `oid`, which will be the OID of the to-be-created function, and will typically return `void`.
+
+A validator function would typically inspect the function body for syntactical correctness, but it can also look at other properties of the function, for example if the language cannot handle certain argument types. To signal an error, the validator function should use the `ereport()` function. The return value of the function is ignored.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+The procedural language packages included in the standard HAWQ distribution are:
+
+-   `PL/pgSQL` - registered in all databases by default
+-   `PL/Perl`
+-   `PL/Python`
+-   `PL/Java`
+
+HAWQ supports a language handler for `PL/R`, but the `PL/R` language package is not pre-installed with HAWQ.
+
+The system catalog `pg_language` records information about the currently installed languages.
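+
+For example, a quick way to see which languages are registered in the current database (a minimal query against the catalog; the column list shown is abbreviated):
+
+``` pre
+SELECT lanname, lanispl, lanpltrusted FROM pg_language;
+```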
+
+To create functions in a procedural language, a user must have the `USAGE` privilege for the language. By default, `USAGE` is granted to `PUBLIC` (everyone) for trusted languages. This may be revoked if desired.
+
+Procedural languages are local to individual databases. However, a language can be installed into the `template1` database, which will cause it to be available automatically in all subsequently-created databases.
+
+The call handler function and the validator function (if any) must already exist if the server does not have an entry for the language in `pg_pltemplate`. But when there is an entry, the functions need not already exist; they will be automatically defined if not present in the database.
+
+Any shared library that implements a language must be located in the same `LD_LIBRARY_PATH` location on all segment hosts in your HAWQ array.
+
+## <a id="topic1__section6"></a>Examples
+
+The preferred way of creating any of the standard procedural languages in a database:
+
+``` pre
+CREATE LANGUAGE plr;
+CREATE LANGUAGE plpythonu;
+CREATE LANGUAGE plperl;
+```
+
+For a language not known in the `pg_pltemplate` catalog:
+
+``` pre
+CREATE FUNCTION plsample_call_handler() RETURNS language_handler
+    AS '$libdir/plsample'
+    LANGUAGE C;
+CREATE LANGUAGE plsample
+    HANDLER plsample_call_handler;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+`CREATE LANGUAGE` is a HAWQ extension.
+
+## <a id="topic1__section8"></a>See Also
+
+[CREATE FUNCTION](CREATE-FUNCTION.html), [DROP LANGUAGE](DROP-LANGUAGE.html)


[13/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-SEQUENCE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-SEQUENCE.html.md.erb b/markdown/reference/sql/DROP-SEQUENCE.html.md.erb
new file mode 100644
index 0000000..59c0d85
--- /dev/null
+++ b/markdown/reference/sql/DROP-SEQUENCE.html.md.erb
@@ -0,0 +1,45 @@
+---
+title: DROP SEQUENCE
+---
+
+Removes a sequence.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP SEQUENCE [IF EXISTS] <name> [, ...] [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP SEQUENCE` removes a sequence generator table. You must own the sequence to drop it (or be a superuser).
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the sequence does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name (optionally schema-qualified) of the sequence to remove.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the sequence.</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the sequence if any objects depend on it. This is the default.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the sequence `myserial`:
+
+``` pre
+DROP SEQUENCE myserial;
+```
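+
+A variant combining the options described above; `CASCADE` also removes any objects that depend on the sequence:
+
+``` pre
+DROP SEQUENCE IF EXISTS myserial CASCADE;
+```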
+
+## <a id="topic1__section6"></a>Compatibility
+
+`DROP SEQUENCE` is fully conforming with the SQL standard, except that the standard only allows one sequence to be dropped per command. Also, the `IF EXISTS` option is a HAWQ extension.
+
+## <a id="topic1__section7"></a>See Also
+
+[CREATE SEQUENCE](CREATE-SEQUENCE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-TABLE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-TABLE.html.md.erb b/markdown/reference/sql/DROP-TABLE.html.md.erb
new file mode 100644
index 0000000..b277273
--- /dev/null
+++ b/markdown/reference/sql/DROP-TABLE.html.md.erb
@@ -0,0 +1,47 @@
+---
+title: DROP TABLE
+---
+
+Removes a table.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP TABLE [IF EXISTS] <name> [, ...] [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP TABLE` removes tables from the database. Only a table's owner may drop it. To empty a table of rows without removing the table definition, use `TRUNCATE`.
+
+`DROP TABLE` always removes any indexes, rules, and constraints that exist for the target table. However, to drop a table that is referenced by a view, `CASCADE` must be specified. `CASCADE` will remove a dependent view entirely.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the table does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name (optionally schema-qualified) of the table to remove.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the table (such as views).</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the table if any objects depend on it. This is the default.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the table `mytable`:
+
+``` pre
+DROP TABLE mytable;
+```
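+
+To illustrate the `CASCADE` behavior described above, assume a view `myview` (hypothetical) was created on `mytable`; dropping the table then requires `CASCADE`, which also removes the view:
+
+``` pre
+CREATE VIEW myview AS SELECT * FROM mytable;
+DROP TABLE mytable;          -- fails: myview depends on mytable
+DROP TABLE mytable CASCADE;  -- drops mytable and myview
+```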
+
+## <a id="topic1__section6"></a>Compatibility
+
+`DROP TABLE` is fully conforming with the SQL standard, except that the standard only allows one table to be dropped per command. Also, the `IF EXISTS` option is a HAWQ extension.
+
+## <a id="topic1__section7"></a>See Also
+
+[CREATE TABLE](CREATE-TABLE.html), [ALTER TABLE](ALTER-TABLE.html), [TRUNCATE](TRUNCATE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-TABLESPACE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-TABLESPACE.html.md.erb b/markdown/reference/sql/DROP-TABLESPACE.html.md.erb
new file mode 100644
index 0000000..9ffdfef
--- /dev/null
+++ b/markdown/reference/sql/DROP-TABLESPACE.html.md.erb
@@ -0,0 +1,42 @@
+---
+title: DROP TABLESPACE
+---
+
+Removes a tablespace.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP TABLESPACE [IF EXISTS] <tablespacename>
+         
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP TABLESPACE` removes a tablespace from the system.
+
+A tablespace can only be dropped by its owner or a superuser. The tablespace must be empty of all database objects before it can be dropped. It is possible that objects in other databases may still reside in the tablespace even if no objects in the current database are using the tablespace.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the tablespace does not exist. A notice is issued in this case.</dd>
+
+<dt>\<tablespacename\>  </dt>
+<dd>The name of the tablespace to remove.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the tablespace `mystuff`:
+
+``` pre
+DROP TABLESPACE mystuff;
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+`DROP TABLESPACE` is a HAWQ extension.
+
+## <a id="topic1__section7"></a>See Also
+
+[CREATE TABLESPACE](CREATE-TABLESPACE.html), [ALTER TABLESPACE](ALTER-TABLESPACE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-TYPE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-TYPE.html.md.erb b/markdown/reference/sql/DROP-TYPE.html.md.erb
new file mode 100644
index 0000000..1ffd44a
--- /dev/null
+++ b/markdown/reference/sql/DROP-TYPE.html.md.erb
@@ -0,0 +1,45 @@
+---
+title: DROP TYPE
+---
+
+Removes a data type.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP TYPE [IF EXISTS] <name> [, ...] [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP TYPE` will remove a user-defined data type. Only the owner of a type can remove it.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the type does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>  </dt>
+<dd>The name (optionally schema-qualified) of the data type to remove.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the type (such as table columns, functions, operators).</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the type if any objects depend on it. This is the default.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the data type `box`:
+
+``` pre
+DROP TYPE box;
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+This command is similar to the corresponding command in the SQL standard, apart from the `IF EXISTS` option, which is a HAWQ extension. But note that the `CREATE TYPE` command and the data type extension mechanisms in HAWQ differ from the SQL standard.
+
+## <a id="topic1__section7"></a>See Also
+
+[ALTER TYPE](ALTER-TYPE.html), [CREATE TYPE](CREATE-TYPE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-USER.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-USER.html.md.erb b/markdown/reference/sql/DROP-USER.html.md.erb
new file mode 100644
index 0000000..6ab3992
--- /dev/null
+++ b/markdown/reference/sql/DROP-USER.html.md.erb
@@ -0,0 +1,31 @@
+---
+title: DROP USER
+---
+
+Removes a database role.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP USER [IF EXISTS] <name> [, ...]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP USER` is an obsolete command, though still accepted for backwards compatibility. Users (and groups) have been superseded by the more general concept of roles. See [DROP ROLE](DROP-ROLE.html) for more information.
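+
+For example, both of the following are accepted, but the `DROP ROLE` form is preferred (the role name `jsmith` is a placeholder):
+
+``` pre
+DROP USER IF EXISTS jsmith;
+DROP ROLE IF EXISTS jsmith;
+```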
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the role does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>  </dt>
+<dd>The name of an existing role.</dd>
+
+## <a id="topic1__section5"></a>Compatibility
+
+There is no `DROP USER` statement in the SQL standard. The SQL standard leaves the definition of users to the implementation.
+
+## <a id="topic1__section6"></a>See Also
+
+[DROP ROLE](DROP-ROLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/DROP-VIEW.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/DROP-VIEW.html.md.erb b/markdown/reference/sql/DROP-VIEW.html.md.erb
new file mode 100644
index 0000000..b8b9968
--- /dev/null
+++ b/markdown/reference/sql/DROP-VIEW.html.md.erb
@@ -0,0 +1,45 @@
+---
+title: DROP VIEW
+---
+
+Removes a view.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP VIEW [IF EXISTS] <name> [, ...] [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP VIEW` will remove an existing view. Only the owner of a view can remove it.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the view does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>  </dt>
+<dd>The name (optionally schema-qualified) of the view to remove.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the view (such as other views).</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the view if any objects depend on it. This is the default.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the view `topten`:
+
+``` pre
+DROP VIEW topten;
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+`DROP VIEW` is fully conforming with the SQL standard, except that the standard only allows one view to be dropped per command. Also, the `IF EXISTS` option is a HAWQ extension.
+
+## <a id="topic1__section7"></a>See Also
+
+[CREATE VIEW](CREATE-VIEW.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/END.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/END.html.md.erb b/markdown/reference/sql/END.html.md.erb
new file mode 100644
index 0000000..484afcf
--- /dev/null
+++ b/markdown/reference/sql/END.html.md.erb
@@ -0,0 +1,37 @@
+---
+title: END
+---
+
+Commits the current transaction.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+END [WORK | TRANSACTION]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`END` commits the current transaction. All changes made by the transaction become visible to others and are guaranteed to be durable if a crash occurs. This command is a HAWQ extension that is equivalent to [COMMIT](COMMIT.html).
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>WORK  
+TRANSACTION  </dt>
+<dd>Optional keywords. They have no effect.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Commit the current transaction:
+
+``` pre
+END;
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+`END` is a HAWQ extension that provides functionality equivalent to [COMMIT](COMMIT.html), which is specified in the SQL standard.
+
+## <a id="topic1__section7"></a>See Also
+
+[BEGIN](BEGIN.html), [ROLLBACK](ROLLBACK.html), [COMMIT](COMMIT.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/EXECUTE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/EXECUTE.html.md.erb b/markdown/reference/sql/EXECUTE.html.md.erb
new file mode 100644
index 0000000..ff57cc6
--- /dev/null
+++ b/markdown/reference/sql/EXECUTE.html.md.erb
@@ -0,0 +1,45 @@
+---
+title: EXECUTE
+---
+
+Executes a prepared SQL statement.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+EXECUTE <name> [ (<parameter> [, ...] ) ]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`EXECUTE` is used to execute a previously prepared statement. Since prepared statements only exist for the duration of a session, the prepared statement must have been created by a `PREPARE` statement executed earlier in the current session.
+
+If the `PREPARE` statement that created the statement specified some parameters, a compatible set of parameters must be passed to the `EXECUTE` statement, or else an error is raised. Note that (unlike functions) prepared statements are not overloaded based on the type or number of their parameters; the name of a prepared statement must be unique within a database session.
+
+For more information on the creation and usage of prepared statements, see `PREPARE`.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>\<name\>   </dt>
+<dd>The name of the prepared statement to execute.</dd>
+
+<dt>\<parameter\>   </dt>
+<dd>The actual value of a parameter to the prepared statement. This must be an expression yielding a value that is compatible with the data type of this parameter, as was determined when the prepared statement was created.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Create a prepared statement for an `INSERT` statement, and then execute it:
+
+``` pre
+PREPARE fooplan (int, text, bool, numeric) AS
+    INSERT INTO foo VALUES ($1, $2, $3, $4);
+EXECUTE fooplan(1, 'Hunter Valley', 't', 200.00);
+```
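+
+When the prepared statement is no longer needed, it can be released before the session ends (continuing the example above):
+
+``` pre
+DEALLOCATE fooplan;
+```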
+
+## <a id="topic1__section6"></a>Compatibility
+
+The SQL standard includes an `EXECUTE` statement, but it is only for use in embedded SQL. This version of the `EXECUTE` statement also uses a somewhat different syntax.
+
+## <a id="topic1__section7"></a>See Also
+
+[DEALLOCATE](DEALLOCATE.html), [PREPARE](PREPARE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/EXPLAIN.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/EXPLAIN.html.md.erb b/markdown/reference/sql/EXPLAIN.html.md.erb
new file mode 100644
index 0000000..ca0e908
--- /dev/null
+++ b/markdown/reference/sql/EXPLAIN.html.md.erb
@@ -0,0 +1,96 @@
+---
+title: EXPLAIN
+---
+
+Shows the query plan of a statement.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+EXPLAIN [ANALYZE] [VERBOSE] <statement>
+         
+```
+
+## <a id="topic1__section3"></a>Description
+
+`EXPLAIN` displays the query plan that the HAWQ planner generates for the supplied statement. A query plan is a tree of plan nodes. Each node in the plan represents a single operation, such as a table scan, join, aggregation, or sort.
+
+Plans should be read from the bottom up as each node feeds rows into the node directly above it. The bottom nodes of a plan are usually table scan operations. If the query requires joins, aggregations, or sorts (or other operations on the raw rows), then there will be additional nodes above the scan nodes to perform these operations. The topmost plan nodes are usually the HAWQ motion nodes (redistribute, explicit redistribute, broadcast, or gather motions). These are the operations responsible for moving rows between the segment instances during query processing.
+
+The output of `EXPLAIN` has one line for each node in the plan tree, showing the basic node type plus the following cost estimates that the planner made for the execution of that plan node:
+
+-   **cost** - measured in units of disk page fetches; that is, 1.0 equals one sequential disk page read. The first estimate is the start-up cost (cost of getting to the first row) and the second is the total cost (cost of getting all rows). Note that the total cost assumes that all rows will be retrieved, which may not always be the case (if using `LIMIT` for example).
+-   **rows** - the total number of rows output by this plan node. This is usually less than the actual number of rows processed or scanned by the plan node, reflecting the estimated selectivity of any `WHERE` clause conditions. Ideally the top-level node's estimate will approximate the number of rows actually returned, updated, or deleted by the query.
+-   **width** - total bytes of all the rows output by this plan node.
+
+It is important to note that the cost of an upper-level node includes the cost of all its child nodes. The topmost node of the plan has the estimated total execution cost for the plan. It is this number that the planner seeks to minimize. It is also important to realize that the cost only reflects things that the query planner cares about. In particular, the cost does not consider the time spent transmitting result rows to the client.
+
+`EXPLAIN ANALYZE` causes the statement to be actually executed, not only planned. The `EXPLAIN ANALYZE` plan shows the actual results along with the planner's estimates. This is useful for seeing whether the planner's estimates are close to reality. In addition to the information shown in the `EXPLAIN` plan, `EXPLAIN ANALYZE` will show the following additional information:
+
+-   The total elapsed time (in milliseconds) that it took to run the query.
+-   The number of *workers* (segments) involved in a plan node operation. Only segments that return rows are counted.
+-   The maximum number of rows returned by the segment that produced the most rows for an operation. If multiple segments produce an equal number of rows, the one with the longest *time to end* is the one chosen.
+-   The segment id number of the segment that produced the most rows for an operation.
+-   For relevant operations, the *work\_mem* used by the operation. If *work\_mem* was not sufficient to perform the operation in memory, the plan will show how much data was spilled to disk and how many passes over the data were required for the lowest performing segment. For example:
+
+    ``` pre
+    Work_mem used: 64K bytes avg, 64K bytes max (seg0).
+    Work_mem wanted: 90K bytes avg, 90K bytes max (seg0) to abate workfile 
+    I/O affecting 2 workers.
+    [seg0] pass 0: 488 groups made from 488 rows; 263 rows written to 
+    workfile
+    [seg0] pass 1: 263 groups made from 263 rows
+    ```
+    **Note:** You cannot set the *work\_mem* property. The *work\_mem* property is for information only.
+
+-   The time (in milliseconds) it took to retrieve the first row from the segment that produced the most rows, and the total time taken to retrieve all rows from that segment. The `<time> to first row` may be omitted if it is the same as the `<time> to end`.
+
+**Important:**
+Keep in mind that the statement is actually executed when `EXPLAIN ANALYZE` is used. Although `EXPLAIN ANALYZE` will discard any output that a `SELECT` would return, other side effects of the statement will happen as usual. If you wish to use `EXPLAIN ANALYZE` on a DML statement without letting the command affect your data, use this approach:
+
+``` pre
+BEGIN;
+EXPLAIN ANALYZE ...;
+ROLLBACK;
+```
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>ANALYZE  </dt>
+<dd>Carry out the command and show the actual run times.</dd>
+
+<dt>VERBOSE  </dt>
+<dd>Show the full internal representation of the plan tree, rather than just a summary.</dd>
+
+<dt>\<statement\>   </dt>
+<dd>The SQL statement whose execution plan you wish to see.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+In order to allow the query planner to make reasonably informed decisions when optimizing queries, the `ANALYZE` statement should be run to record statistics about the distribution of data within the table. If you have not done this (or if the statistical distribution of the data in the table has changed significantly since the last time `ANALYZE` was run), the estimated costs are unlikely to conform to the real properties of the query, and consequently an inferior query plan may be chosen.
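+
+For example, to collect statistics on the table used in the example below before examining its plan (a minimal illustration):
+
+``` pre
+ANALYZE names;
+```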
+
+## <a id="topic1__section6"></a>Examples
+
+To illustrate how to read an `EXPLAIN` query plan, consider the following example for a very simple query:
+
+``` pre
+EXPLAIN SELECT * FROM names WHERE name = 'Joelle';
+                     QUERY PLAN
+------------------------------------------------------------
+ Gather Motion 2:1 (slice1) (cost=0.00..20.88 rows=1 width=13)
+   -> Seq Scan on names (cost=0.00..20.88 rows=1 width=13)
+        Filter: name::text = 'Joelle'::text
+```
+
+If we read the plan from the bottom up, the query planner starts by doing a sequential scan of the `names` table. Notice that the `WHERE` clause is being applied as a *filter* condition. This means that the scan operation checks the condition for each row it scans, and outputs only the ones that pass the condition.
+
+The results of the scan operation are passed up to a *gather motion* operation. In HAWQ, a gather motion is when segments send rows up to the master. In this case we have 2 segment instances sending to 1 master instance (2:1). This operation is working on `slice1` of the parallel query execution plan. In HAWQ, a query plan is divided into *slices* so that portions of the query plan can be worked on in parallel by the segments.
+
+The estimated startup cost for this plan is `00.00` (no cost), and the total cost is `20.88` disk page fetches. The planner estimates that this query will return one row.
+
+## <a id="topic1__section7"></a>Compatibility
+
+There is no `EXPLAIN` statement defined in the SQL standard.
+
+## <a id="topic1__section8"></a>See Also
+
+[ANALYZE](ANALYZE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/FETCH.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/FETCH.html.md.erb b/markdown/reference/sql/FETCH.html.md.erb
new file mode 100644
index 0000000..bdd9292
--- /dev/null
+++ b/markdown/reference/sql/FETCH.html.md.erb
@@ -0,0 +1,146 @@
+---
+title: FETCH
+---
+
+Retrieves rows from a query using a cursor.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+FETCH [ <forward_direction> { FROM | IN } ] <cursorname>
+
+```
+
+where *forward\_direction* can be empty or one of:
+
+``` pre
+    NEXT
+    FIRST
+    LAST
+    ABSOLUTE <count>
+    RELATIVE <count>
+    <count>
+    ALL
+    FORWARD
+    FORWARD <count>
+    FORWARD ALL
+```
+
+## <a id="topic1__section3"></a>Description
+
+`FETCH` retrieves rows using a previously-created cursor.
+
+A cursor has an associated position, which is used by `FETCH`. The cursor position can be before the first row of the query result, on any particular row of the result, or after the last row of the result. When created, a cursor is positioned before the first row. After fetching some rows, the cursor is positioned on the row most recently retrieved. If `FETCH` runs off the end of the available rows then the cursor is left positioned after the last row. `FETCH ALL` will always leave the cursor positioned after the last row.
+
+The forms `NEXT`, `FIRST`, `LAST`, `ABSOLUTE`, `RELATIVE` fetch a single row after moving the cursor appropriately. If there is no such row, an empty result is returned, and the cursor is left positioned before the first row or after the last row as appropriate.
+
+The forms using `FORWARD` retrieve the indicated number of rows moving in the forward direction, leaving the cursor positioned on the last-returned row (or after all rows, if the count exceeds the number of rows available). Note that it is not possible to move a cursor position backwards in HAWQ, since scrollable cursors are not supported. You can only move a cursor forward in position using `FETCH`.
+
+`RELATIVE 0` and `FORWARD 0` request fetching the current row without moving the cursor, that is, re-fetching the most recently fetched row. This will succeed unless the cursor is positioned before the first row or after the last row, in which case no row is returned.
+
+**Outputs**
+
+On successful completion, a `FETCH` command returns a command tag of the form
+
+``` pre
+FETCH count
+
+```
+
+The count is the number of rows fetched (possibly zero). Note that in `psql`, the command tag will not actually be displayed, since `psql` displays the fetched rows instead.
+
+## <a id="topic1__section5"></a>Parameters
+
+<dt>\<forward\_direction\>  </dt>
+<dd>Defines the fetch direction and number of rows to fetch. Only forward fetches are allowed in HAWQ. It can be one of the following:</dd>
+
+<dt>NEXT  </dt>
+<dd>Fetch the next row. This is the default if direction is omitted.</dd>
+
+<dt>FIRST  </dt>
+<dd>Fetch the first row of the query (same as `ABSOLUTE 1`). Only allowed if it is the first `FETCH` operation using this cursor.</dd>
+
+<dt>LAST  </dt>
+<dd>Fetch the last row of the query (same as `ABSOLUTE -1`).</dd>
+
+<dt>ABSOLUTE \<count\>  </dt>
+<dd>Fetch the specified row of the query. Position after last row if count is out of range. Only allowed if the row specified by *count* moves the cursor position forward.</dd>
+
+<dt>RELATIVE \<count\>  </dt>
+<dd>Fetch the specified row of the query *count* rows ahead of the current cursor position. `RELATIVE 0` re-fetches the current row, if any. Only allowed if *count* moves the cursor position forward.</dd>
+
+<dt>\<count\> </dt>
+<dd>Fetch the next *count* number of rows (same as `FORWARD <count>`).</dd>
+
+<dt>ALL  </dt>
+<dd>Fetch all remaining rows (same as `FORWARD ALL`).</dd>
+
+<dt>FORWARD  </dt>
+<dd>Fetch the next row (same as `NEXT`).</dd>
+
+<dt>FORWARD \<count\>  </dt>
+<dd>Fetch the next *count* number of rows. `FORWARD 0` re-fetches the current row.</dd>
+
+<dt>FORWARD ALL  </dt>
+<dd>Fetch all remaining rows.</dd>
+
+<dt>\<cursorname\> </dt>
+<dd>The name of an open cursor.</dd>
+
+## <a id="topic1__section6"></a>Notes
+
+HAWQ does not support scrollable cursors, so you can only use `FETCH` to move the cursor position forward.
+
+`ABSOLUTE` fetches are not any faster than navigating to the desired row with a relative move: the underlying implementation must traverse all the intermediate rows anyway.
+
+Updating data via a cursor is currently not supported by HAWQ.
+
+`DECLARE` is used to define a cursor. Use `MOVE` to change cursor position without retrieving data.
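+
+For example, `MOVE` can skip ahead without retrieving rows (a minimal sketch that assumes the cursor `mycursor` declared in the examples below):
+
+``` pre
+MOVE FORWARD 5 IN mycursor;
+FETCH NEXT FROM mycursor;
+```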
+
+## <a id="topic1__section7"></a>Examples
+
+-- Start the transaction:
+
+``` pre
+BEGIN;
+```
+
+-- Set up a cursor:
+
+``` pre
+DECLARE mycursor CURSOR FOR SELECT * FROM films;
+```
+
+-- Fetch the first 5 rows in the cursor `mycursor`:
+
+``` pre
+FETCH FORWARD 5 FROM mycursor;
+ code  |          title          | did | date_prod  |   kind   |  len
+-------+-------------------------+-----+------------+----------+-------
+ BL101 | The Third Man           | 101 | 1949-12-23 | Drama    | 01:44
+ BL102 | The African Queen       | 101 | 1951-08-11 | Romantic | 01:43
+ JL201 | Une Femme est une Femme | 102 | 1961-03-12 | Romantic | 01:25
+ P_301 | Vertigo                 | 103 | 1958-11-14 | Action   | 02:08
+ P_302 | Becket                  | 103 | 1964-02-03 | Drama    | 02:28
+```
+
+-- Close the cursor and end the transaction:
+
+``` pre
+CLOSE mycursor;
+COMMIT;
+```
+
+## <a id="topic1__section8"></a>Compatibility
+
+The SQL standard allows cursors only in embedded SQL and in modules. HAWQ permits cursors to be used interactively.
+
+The variant of `FETCH` described here returns the data as if it were a `SELECT` result rather than placing it in host variables. Other than this point, `FETCH` is fully upward-compatible with the SQL standard.
+
+The `FETCH` forms involving `FORWARD`, as well as the forms `FETCH <count>` and `FETCH ALL`, in which `FORWARD` is implicit, are HAWQ extensions. `BACKWARD` is not supported.
+
+The SQL standard allows only `FROM` preceding the cursor name; the option to use `IN` is an extension.
+
+## <a id="topic1__section9"></a>See Also
+
+[DECLARE](DECLARE.html), [CLOSE](CLOSE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/GRANT.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/GRANT.html.md.erb b/markdown/reference/sql/GRANT.html.md.erb
new file mode 100644
index 0000000..1673df5
--- /dev/null
+++ b/markdown/reference/sql/GRANT.html.md.erb
@@ -0,0 +1,180 @@
+---
+title: GRANT
+---
+
+Defines access privileges.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+GRANT { {SELECT | INSERT | UPDATE | DELETE | REFERENCES} [,...] | ALL [PRIVILEGES] }
+    ON [TABLE] <tablename> [, ...]
+    TO {<rolename> | PUBLIC} [, ...] [WITH GRANT OPTION]
+
+GRANT { {USAGE | SELECT | UPDATE} [,...] | ALL [PRIVILEGES] }
+    ON SEQUENCE <sequencename> [, ...]
+    TO { <rolename> | PUBLIC } [, ...] [WITH GRANT OPTION]
+
+GRANT { {CREATE | CONNECT | TEMPORARY | TEMP} [,...] | ALL [PRIVILEGES] }
+    ON DATABASE <dbname> [, ...]
+    TO {<rolename> | PUBLIC} [, ...] [WITH GRANT OPTION]
+
+GRANT { EXECUTE | ALL [PRIVILEGES] }
+    ON FUNCTION <funcname> ( [ [<argmode>] [<argname>] <argtype> [, ...] ] ) [, ...]
+    TO {<rolename> | PUBLIC} [, ...] [WITH GRANT OPTION]
+
+GRANT { USAGE | ALL [PRIVILEGES] }
+    ON LANGUAGE <langname> [, ...]
+    TO {<rolename> | PUBLIC} [, ...] [WITH GRANT OPTION]
+
+GRANT { {CREATE | USAGE} [,...] | ALL [PRIVILEGES] }
+    ON SCHEMA <schemaname> [, ...]
+    TO {<rolename> | PUBLIC} [, ...] [WITH GRANT OPTION]
+
+GRANT { CREATE | ALL [PRIVILEGES] }
+    ON TABLESPACE <tablespacename> [, ...]
+    TO {<rolename> | PUBLIC} [, ...] [WITH GRANT OPTION]
+
+GRANT <parent_role> [, ...]
+    TO <member_role> [, ...] [WITH ADMIN OPTION]
+
+GRANT { SELECT | INSERT | ALL [PRIVILEGES] }
+    ON PROTOCOL <protocolname>
+    TO <username>
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `GRANT` command has two basic variants: one that grants privileges on a database object (table, view, sequence, database, function, procedural language, schema, or tablespace), and one that grants membership in a role.
+
+**GRANT on Database Objects**
+
+This variant of the `GRANT` command gives specific privileges on a database object to one or more roles. These privileges are added to those already granted, if any.
+
+The key word `PUBLIC` indicates that the privileges are to be granted to all roles, including those that may be created later. `PUBLIC` may be thought of as an implicitly defined group-level role that always includes all roles. Any particular role will have the sum of privileges granted directly to it, privileges granted to any role it is presently a member of, and privileges granted to `PUBLIC`.
+
+If `WITH GRANT OPTION` is specified, the recipient of the privilege may in turn grant it to others. Without a grant option, the recipient cannot do that. Grant options cannot be granted to `PUBLIC`.
+
+There is no need to grant privileges to the owner of an object (usually the role that created it), as the owner has all privileges by default. The right to drop an object, or to alter its definition in any way is not described by a grantable privilege; it is inherent in the owner, and cannot be granted or revoked. The owner implicitly has all grant options for the object, too.
+
+Depending on the type of object, the initial default privileges may include granting some privileges to `PUBLIC`. The default is no public access for tables, schemas, and tablespaces; `CONNECT` privilege and `TEMP` table creation privilege for databases; `EXECUTE` privilege for functions; and `USAGE` privilege for languages. The object owner may of course revoke these privileges.
+
+**GRANT on Roles**
+
+This variant of the `GRANT` command grants membership in a role to one or more other roles. Membership in a role is significant because it conveys the privileges granted to a role to each of its members.
+
+If `WITH ADMIN OPTION` is specified, the member may in turn grant membership in the role to others, and revoke membership in the role as well. Database superusers can grant or revoke membership in any role to anyone. Roles having `CREATEROLE` privilege can grant or revoke membership in any role that is not a superuser.
+
+Unlike the case with privileges, membership in a role cannot be granted to `PUBLIC`.
+
+## <a id="topic1__section7"></a>Parameters
+
+<dt>SELECT  </dt>
+<dd>Allows `SELECT` from any column of the specified table, view, or sequence. Also allows the use of `COPY TO`. For sequences, this privilege also allows the use of the `currval` function.</dd>
+
+<dt>INSERT  </dt>
+<dd>Allows `INSERT` of a new row into the specified table. Also allows `COPY FROM`.</dd>
+
+<dt>UPDATE  </dt>
+<dd>Allows `UPDATE` of any column of the specified table. `SELECT ... FOR UPDATE` and `SELECT ... FOR SHARE` also require this privilege (as well as the `SELECT` privilege). For sequences, this privilege allows the use of the `nextval` and `setval` functions.</dd>
+
+<dt>DELETE  </dt>
+<dd>Allows `DELETE` of a row from the specified table.</dd>
+
+<dt>REFERENCES  </dt>
+<dd>This keyword is accepted, although foreign key constraints are currently not supported in HAWQ. To create a foreign key constraint, it is necessary to have this privilege on both the referencing and referenced tables.</dd>
+
+<dt>TRIGGER  </dt>
+<dd>Allows the creation of a trigger on the specified table.
+
+**Note:** HAWQ does not support triggers.</dd>
+
+<dt>CREATE  </dt>
+<dd>For databases, allows new schemas to be created within the database.
+
+For schemas, allows new objects to be created within the schema. To rename an existing object, you must own the object and have this privilege for the containing schema.
+
+For tablespaces, allows tables and indexes to be created within the tablespace, and allows databases to be created that have the tablespace as their default tablespace. (Note that revoking this privilege will not alter the placement of existing objects.)</dd>
+
+<dt>CONNECT  </dt>
+<dd>Allows the user to connect to the specified database. This privilege is checked at connection startup (in addition to checking any restrictions imposed by `pg_hba.conf`).</dd>
+
+<dt>TEMPORARY  
+TEMP  </dt>
+<dd>Allows temporary tables to be created while using the database.</dd>
+
+<dt>EXECUTE  </dt>
+<dd>Allows the use of the specified function and the use of any operators that are implemented on top of the function. This is the only type of privilege that is applicable to functions. (This syntax works for aggregate functions, as well.)</dd>
+
+<dt>USAGE  </dt>
+<dd>For procedural languages, allows the use of the specified language for the creation of functions in that language. This is the only type of privilege that is applicable to procedural languages.
+
+For schemas, allows access to objects contained in the specified schema (assuming that the objects' own privilege requirements are also met). Essentially this allows the grantee to look up objects within the schema.
+
+For sequences, this privilege allows the use of the `currval` and `nextval` functions.</dd>
+
+<dt>ALL PRIVILEGES  </dt>
+<dd>Grant all of the available privileges at once. The `PRIVILEGES` key word is optional in HAWQ, though it is required by strict SQL.</dd>
+
+<dt>PUBLIC  </dt>
+<dd>A special group-level role that denotes that the privileges are to be granted to all roles, including those that may be created later.</dd>
+
+<dt>WITH GRANT OPTION  </dt>
+<dd>The recipient of the privilege may in turn grant it to others.</dd>
+
+<dt>WITH ADMIN OPTION  </dt>
+<dd>The member of a role may in turn grant membership in the role to others.</dd>
+
+## <a id="topic1__section8"></a>Notes
+
+Database superusers can access all objects regardless of object privilege settings. One exception to this rule is view objects. Access to tables referenced in the view is determined by permissions of the view owner not the current user (even if the current user is a superuser).
+
+If a superuser chooses to issue a `GRANT` or `REVOKE` command, the command is performed as though it were issued by the owner of the affected object. In particular, privileges granted via such a command will appear to have been granted by the object owner. For role membership, the membership appears to have been granted by the containing role itself.
+
+`GRANT` and `REVOKE` can also be done by a role that is not the owner of the affected object, but is a member of the role that owns the object, or is a member of a role that holds privileges `WITH GRANT OPTION` on the object. In this case the privileges will be recorded as having been granted by the role that actually owns the object or holds the privileges `WITH GRANT OPTION`.
+
+Granting permission on a table does not automatically extend permissions to any sequences used by the table, including sequences tied to `SERIAL` columns. Permissions on a sequence must be set separately.
+
+HAWQ does not support granting or revoking privileges for individual columns of a table. One possible workaround is to create a view having just the desired columns and then grant privileges to that view.
+
+Use psql's `\z` meta-command to obtain information about existing privileges for an object.
+
+## <a id="topic1__section9"></a>Examples
+
+Grant insert privilege to all roles on table `mytable`:
+
+``` pre
+GRANT INSERT ON mytable TO PUBLIC;
+```
+
+Grant all available privileges to role `sally` on the view `topten`. Note that while this command will indeed grant all privileges if executed by a superuser or the owner of `topten`, when executed by someone else it will only grant those permissions for which the granting role has grant options.
+
+``` pre
+GRANT ALL PRIVILEGES ON topten TO sally;
+```
+
+Grant membership in role `admins` to user `joe`:
+
+``` pre
+GRANT admins TO joe;
+```
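+
+Grant select privilege on table `mytable` to role `sally`, together with the right to grant that privilege to others (a sketch that assumes `mytable` and `sally` exist):
+
+``` pre
+GRANT SELECT ON mytable TO sally WITH GRANT OPTION;
+```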
+
+## <a id="topic1__section10"></a>Compatibility
+
+The `PRIVILEGES` key word is required in the SQL standard, but optional in HAWQ. The SQL standard does not support setting the privileges on more than one object per command.
+
+HAWQ allows an object owner to revoke his own ordinary privileges: for example, a table owner can make the table read-only to himself by revoking his own `INSERT` privileges. This is not possible according to the SQL standard. HAWQ treats the owner's privileges as having been granted by the owner to himself; therefore he can revoke them too. In the SQL standard, the owner's privileges are granted by an assumed *system* entity.
+
+The SQL standard allows setting privileges for individual columns within a table.
+
+The SQL standard provides for a `USAGE` privilege on other kinds of objects: character sets, collations, translations, domains.
+
+Privileges on databases, tablespaces, schemas, and languages are HAWQ extensions.
+
+## <a id="topic1__section11"></a>See Also
+
+[REVOKE](REVOKE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/INSERT.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/INSERT.html.md.erb b/markdown/reference/sql/INSERT.html.md.erb
new file mode 100644
index 0000000..d23a2aa
--- /dev/null
+++ b/markdown/reference/sql/INSERT.html.md.erb
@@ -0,0 +1,111 @@
+---
+title: INSERT
+---
+
+Creates new rows in a table.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+INSERT INTO <table> [( <column> [, ...] )]
+   {DEFAULT VALUES | VALUES ( {<expression> | DEFAULT} [, ...] ) 
+   [, ...] | <query>}
+```
+
+## <a id="topic1__section3"></a>Description
+
+`INSERT` inserts new rows into a table. One can insert one or more rows specified by value expressions, or zero or more rows resulting from a query.
+
+The target column names may be listed in any order. If no list of column names is given at all, the default is the columns of the table in their declared order. The values supplied by the `VALUES` clause or query are associated with the explicit or implicit column list left-to-right.
+
+Each column not present in the explicit or implicit column list will be filled with a default value, either its declared default value or null if there is no default.
+
+If the expression for any column is not of the correct data type, automatic type conversion will be attempted.
+
+You must have `INSERT` privilege on a table in order to insert into it.
+
+**Note:** HAWQ currently supports a maximum of 127 concurrent inserts.
+
+**Outputs**
+On successful completion, an `INSERT` command returns a command tag of the form:
+
+``` pre
+INSERT oid count
+```
+
+The *count* is the number of rows inserted. If count is exactly one, and the target table has OIDs, then *oid* is the OID assigned to the inserted row. Otherwise *oid* is zero.
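+
+For example, inserting a single row into a table without OIDs typically produces a command tag of the following form:
+
+``` pre
+INSERT 0 1
+```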
+
+## <a id="topic1__section5"></a>Parameters
+
+<dt> \<table\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing table.</dd>
+
+<dt> \<column\>   </dt>
+<dd>The name of a column in table. The column name can be qualified with a subfield name or array subscript, if needed. (Inserting into only some fields of a composite column leaves the other fields null.)</dd>
+
+<dt>DEFAULT VALUES  </dt>
+<dd>All columns will be filled with their default values.</dd>
+
+<dt> \<expression\>   </dt>
+<dd>An expression or value to assign to the corresponding column.</dd>
+
+<dt>DEFAULT  </dt>
+<dd>The corresponding column will be filled with its default value.</dd>
+
+<dt> \<query\>   </dt>
+<dd>A query (`SELECT` statement) that supplies the rows to be inserted. Refer to the [SELECT](SELECT.html) statement for a description of the syntax.</dd>
+
+## <a id="topic1__section7"></a>Examples
+
+Insert a single row into table `films`:
+
+``` pre
+INSERT INTO films VALUES ('UA502', 'Bananas', 105, 
+'1971-07-13', 'Comedy', '82 minutes');
+```
+
+In this example, the `length` column is omitted and therefore it will have the default value:
+
+``` pre
+INSERT INTO films (code, title, did, date_prod, kind) VALUES 
+('T_601', 'Yojimbo', 106, '1961-06-16', 'Drama');
+```
+
+This example uses the `DEFAULT` clause for the `date_prod` column rather than specifying a value:
+
+``` pre
+INSERT INTO films VALUES ('UA502', 'Bananas', 105, DEFAULT, 
+'Comedy', '82 minutes');
+```
+
+To insert a row consisting entirely of default values:
+
+``` pre
+INSERT INTO films DEFAULT VALUES;
+```
+
+To insert multiple rows using the multirow `VALUES` syntax:
+
+``` pre
+INSERT INTO films (code, title, did, date_prod, kind) VALUES
+    ('B6717', 'Tampopo', 110, '1985-02-10', 'Comedy'),
+    ('HG120', 'The Dinner Game', 140, DEFAULT, 'Comedy');
+```
+
+This example inserts some rows into table `films` from a table `tmp_films` with the same column layout as `films`:
+
+``` pre
+INSERT INTO films SELECT * FROM tmp_films WHERE date_prod < 
+'2004-05-07';
+```
+
+## <a id="topic1__section8"></a>Compatibility
+
+`INSERT` conforms to the SQL standard. The case in which a column name list is omitted, but not all the columns are filled from the `VALUES` clause or query, is disallowed by the standard.
+
+Possible limitations of the *query* clause are documented under `SELECT`.
+
+## <a id="topic1__section9"></a>See Also
+
+[COPY](COPY.html), [SELECT](SELECT.html), [CREATE EXTERNAL TABLE](CREATE-EXTERNAL-TABLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/PREPARE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/PREPARE.html.md.erb b/markdown/reference/sql/PREPARE.html.md.erb
new file mode 100644
index 0000000..c633f14
--- /dev/null
+++ b/markdown/reference/sql/PREPARE.html.md.erb
@@ -0,0 +1,67 @@
+---
+title: PREPARE
+---
+
+Prepare a statement for execution.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+PREPARE <name> [ (<datatype> [, ...] ) ] AS <statement>
+         
+```
+
+## <a id="topic1__section3"></a>Description
+
+`PREPARE` creates a prepared statement, possibly with unbound parameters. A prepared statement is a server-side object that can be used to optimize performance. A prepared statement may be subsequently executed with a binding for its parameters. HAWQ may choose to replan the query for different executions of the same prepared statement.
+
+Prepared statements can take parameters: values that are substituted into the statement when it is executed. When creating the prepared statement, refer to parameters by position, using `$1`, `$2`, etc. A corresponding list of parameter data types can optionally be specified. When a parameter's data type is not specified or is declared as unknown, the type is inferred from the context in which the parameter is used (if possible). When executing the statement, specify the actual values for these parameters in the `EXECUTE` statement.
+
+Prepared statements only last for the duration of the current database session. When the session ends, the prepared statement is forgotten, so it must be recreated before being used again. This also means that a single prepared statement cannot be used by multiple simultaneous database clients; however, each client can create their own prepared statement to use. The prepared statement can be manually cleaned up using the [DEALLOCATE](DEALLOCATE.html) command.
+
+Prepared statements have the largest performance advantage when a single session is being used to execute a large number of similar statements. The performance difference will be particularly significant if the statements are complex to plan or rewrite, for example, if the query involves a join of many tables or requires the application of several rules. If the statement is relatively simple to plan and rewrite but relatively expensive to execute, the performance advantage of prepared statements will be less noticeable.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>An arbitrary name given to this particular prepared statement. It must be unique within a single session and is subsequently used to execute or deallocate a previously prepared statement.</dd>
+
+<dt> \<datatype\>   </dt>
+<dd>The data type of a parameter to the prepared statement. If the data type of a particular parameter is unspecified or is specified as unknown, it will be inferred from the context in which the parameter is used. To refer to the parameters in the prepared statement itself, use `$1`, `$2`, etc.</dd>
+
+<dt> \<statement\>   </dt>
+<dd>Any `SELECT`, `INSERT`, or `VALUES` statement.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+In some situations, the query plan produced for a prepared statement will be inferior to the query plan that would have been chosen if the statement had been submitted and executed normally. This is because when the statement is planned and the planner attempts to determine the optimal query plan, the actual values of any parameters specified in the statement are unavailable. HAWQ collects statistics on the distribution of data in the table, and can use constant values in a statement to make guesses about the likely result of executing the statement. Since this data is unavailable when planning prepared statements with parameters, the chosen plan may be suboptimal. To examine the query plan HAWQ has chosen for a prepared statement, use `EXPLAIN`.
+
+For more information on query planning and the statistics collected by HAWQ for that purpose, see the `ANALYZE` documentation.
+
+You can see all available prepared statements of a session by querying the `pg_prepared_statements` system view.
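+
+For example, the following query lists the prepared statements in the current session (a sketch):
+
+``` pre
+SELECT name, statement, parameter_types FROM pg_prepared_statements;
+```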
+
+## <a id="topic1__section6"></a>Examples
+
+Create a prepared statement for an `INSERT` statement, and then execute it:
+
+``` pre
+PREPARE fooplan (int, text, bool, numeric) AS INSERT INTO 
+foo VALUES($1, $2, $3, $4);
+EXECUTE fooplan(1, 'Hunter Valley', 't', 200.00);
+```
+
+Create a prepared statement for a `SELECT` statement, and then execute it. Note that the data type of the second parameter is not specified, so it is inferred from the context in which `$2` is used:
+
+``` pre
+PREPARE usrrptplan (int) AS SELECT * FROM users u, logs l 
+WHERE u.usrid=$1 AND u.usrid=l.usrid AND l.date = $2;
+EXECUTE usrrptplan(1, current_date);
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+The SQL standard includes a `PREPARE` statement, but it is only for use in embedded SQL. This version of the `PREPARE` statement also uses a somewhat different syntax.
+
+## <a id="topic1__section8"></a>See Also
+
+[EXECUTE](EXECUTE.html), [DEALLOCATE](DEALLOCATE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/REASSIGN-OWNED.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/REASSIGN-OWNED.html.md.erb b/markdown/reference/sql/REASSIGN-OWNED.html.md.erb
new file mode 100644
index 0000000..c037bfe
--- /dev/null
+++ b/markdown/reference/sql/REASSIGN-OWNED.html.md.erb
@@ -0,0 +1,48 @@
+---
+title: REASSIGN OWNED
+---
+
+Changes the ownership of database objects owned by a database role.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+REASSIGN OWNED BY <old_role> [, ...] TO <new_role>
+         
+```
+
+## <a id="topic1__section3"></a>Description
+
+`REASSIGN OWNED` reassigns all the objects in the current database that are owned by \<old\_role\> to \<new\_role\>. Note that it does not change the ownership of the database itself.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<old\_role\>   </dt>
+<dd>The name of a role. The ownership of all the objects in the current database owned by this role will be reassigned to \<new\_role\>.</dd>
+
+<dt> \<new\_role\>   </dt>
+<dd>The name of the role that will be made the new owner of the affected objects.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+`REASSIGN OWNED` is often used to prepare for the removal of one or more roles. Because `REASSIGN OWNED` only affects the objects in the current database, it is usually necessary to execute this command in each database that contains objects owned by a role that is to be removed.
+
+The `DROP OWNED` command is an alternative that drops all the database objects owned by one or more roles.
+
+The `REASSIGN OWNED` command does not affect the privileges granted to the old roles in objects that are not owned by them. Use `DROP OWNED` to revoke those privileges.
+
+## <a id="topic1__section6"></a>Examples
+
+Reassign any database objects owned by the roles `sally` and `bob` to `admin`:
+
+``` pre
+REASSIGN OWNED BY sally, bob TO admin;
+```
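+
+A typical sequence when preparing to remove a role entirely might look like the following sketch (run the `REASSIGN OWNED` step in every database that contains objects owned by the role):
+
+``` pre
+REASSIGN OWNED BY sally TO admin;
+DROP OWNED BY sally;
+DROP ROLE sally;
+```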
+
+## <a id="topic1__section7"></a>Compatibility
+
+The `REASSIGN OWNED` statement is a HAWQ extension.
+
+## <a id="topic1__section8"></a>See Also
+
+[DROP OWNED](DROP-OWNED.html), [DROP ROLE](DROP-ROLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/RELEASE-SAVEPOINT.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/RELEASE-SAVEPOINT.html.md.erb b/markdown/reference/sql/RELEASE-SAVEPOINT.html.md.erb
new file mode 100644
index 0000000..ca25d9e
--- /dev/null
+++ b/markdown/reference/sql/RELEASE-SAVEPOINT.html.md.erb
@@ -0,0 +1,48 @@
+---
+title: RELEASE SAVEPOINT
+---
+
+Destroys a previously defined savepoint.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+RELEASE [SAVEPOINT] <savepoint_name>
+         
+```
+
+## <a id="topic1__section3"></a>Description
+
+`RELEASE SAVEPOINT` destroys a savepoint previously defined in the current transaction.
+
+Destroying a savepoint makes it unavailable as a rollback point, but it has no other user visible behavior. It does not undo the effects of commands executed after the savepoint was established. (To do that, see [ROLLBACK TO SAVEPOINT](ROLLBACK-TO-SAVEPOINT.html).) Destroying a savepoint when it is no longer needed may allow the system to reclaim some resources earlier than transaction end.
+
+`RELEASE SAVEPOINT` also destroys all savepoints that were established *after* the named savepoint was established.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<savepoint\_name\>   </dt>
+<dd>The name of the savepoint to destroy.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+To establish and later destroy a savepoint:
+
+``` pre
+BEGIN;
+    INSERT INTO table1 VALUES (3);
+    SAVEPOINT my_savepoint;
+    INSERT INTO table1 VALUES (4);
+    RELEASE SAVEPOINT my_savepoint;
+COMMIT;
+```
+
+The above transaction will insert both 3 and 4.
+
+## <a id="topic1__section6"></a>Compatibility
+
+This command conforms to the SQL standard. The standard specifies that the key word `SAVEPOINT` is mandatory, but HAWQ allows it to be omitted.
+
+## <a id="topic1__section7"></a>See Also
+
+[BEGIN](BEGIN.html), [SAVEPOINT](SAVEPOINT.html), [ROLLBACK TO SAVEPOINT](ROLLBACK-TO-SAVEPOINT.html), [COMMIT](COMMIT.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/RESET.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/RESET.html.md.erb b/markdown/reference/sql/RESET.html.md.erb
new file mode 100644
index 0000000..cb04d32
--- /dev/null
+++ b/markdown/reference/sql/RESET.html.md.erb
@@ -0,0 +1,45 @@
+---
+title: RESET
+---
+
+Restores the value of a system configuration parameter to the default value.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+RESET <configuration_parameter>
+
+RESET ALL
+```
+
+## <a id="topic1__section3"></a>Description
+
+`RESET` restores system configuration parameters to their default values. `RESET` is an alternative spelling for `SET <configuration_parameter> TO DEFAULT`.
+
+The default value is defined as the value that the parameter would have had, had no `SET` ever been issued for it in the current session. The actual source of this value might be a compiled-in default, the master `hawq-site.xml` configuration file, command-line options, or per-database or per-user default settings.
+
+See [Server Configuration Parameter Reference](../HAWQSiteConfig.html) for more information.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<configuration\_parameter\>   </dt>
+<dd>The name of a system configuration parameter. See [Server Configuration Parameter Reference](../HAWQSiteConfig.html) for a list of configuration parameters.</dd>
+
+<dt>ALL  </dt>
+<dd>Resets all settable configuration parameters to their default values.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Set the `hawq_rm_stmt_vseg_memory` configuration parameter to its default value:
+
+``` sql
+RESET hawq_rm_stmt_vseg_memory; 
+```
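+
+Reset all settable configuration parameters in the current session to their default values:
+
+``` sql
+RESET ALL;
+```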
+
+## <a id="topic1__section6"></a>Compatibility
+
+`RESET` is a HAWQ extension.
+
+## <a id="topic1__section7"></a>See Also
+
+[SET](SET.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/REVOKE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/REVOKE.html.md.erb b/markdown/reference/sql/REVOKE.html.md.erb
new file mode 100644
index 0000000..cad809a
--- /dev/null
+++ b/markdown/reference/sql/REVOKE.html.md.erb
@@ -0,0 +1,101 @@
+---
+title: REVOKE
+---
+
+Removes access privileges.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+REVOKE [GRANT OPTION FOR] { {SELECT | INSERT | UPDATE | DELETE 
+       | REFERENCES | TRUNCATE } [,...] | ALL [PRIVILEGES] }
+       ON [TABLE] <tablename> [, ...]
+       FROM {<rolename> | PUBLIC} [, ...]
+       [CASCADE | RESTRICT]
+
+REVOKE [GRANT OPTION FOR] { {USAGE | SELECT | UPDATE} [,...] 
+       | ALL [PRIVILEGES] }
+       ON SEQUENCE <sequencename> [, ...]
+       FROM { <rolename> | PUBLIC } [, ...]
+       [CASCADE | RESTRICT]
+
+REVOKE [GRANT OPTION FOR] { {CREATE | CONNECT 
+       | TEMPORARY | TEMP} [,...] | ALL [PRIVILEGES] }
+       ON DATABASE <dbname> [, ...]
+       FROM {<rolename> | PUBLIC} [, ...]
+       [CASCADE | RESTRICT]
+
+REVOKE [GRANT OPTION FOR] {EXECUTE | ALL [PRIVILEGES]}
+       ON FUNCTION <funcname> ( [[<argmode>] [<argname>] <argtype>
+                               [, ...]] ) [, ...]
+       FROM {<rolename> | PUBLIC} [, ...]
+       [CASCADE | RESTRICT]
+
+REVOKE [GRANT OPTION FOR] {USAGE | ALL [PRIVILEGES]}
+       ON LANGUAGE <langname> [, ...]
+       FROM {<rolename> | PUBLIC} [, ...]
+       [ CASCADE | RESTRICT ]
+
+REVOKE [GRANT OPTION FOR] { {CREATE | USAGE} [,...] 
+       | ALL [PRIVILEGES] }
+       ON SCHEMA <schemaname> [, ...]
+       FROM {<rolename> | PUBLIC} [, ...]
+       [CASCADE | RESTRICT]
+
+REVOKE [GRANT OPTION FOR] { CREATE | ALL [PRIVILEGES] }
+       ON TABLESPACE <tablespacename> [, ...]
+       FROM { <rolename> | PUBLIC } [, ...]
+       [CASCADE | RESTRICT]
+
+REVOKE [ADMIN OPTION FOR] <parent_role> [, ...] 
+       FROM <member_role> [, ...]
+       [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `REVOKE` command revokes previously granted privileges from one or more roles. The key word `PUBLIC` refers to the implicitly defined group of all roles.
+
+See the description of the [GRANT](GRANT.html) command for the meaning of the privilege types.
+
+Note that any particular role will have the sum of privileges granted directly to it, privileges granted to any role it is presently a member of, and privileges granted to `PUBLIC`. Thus, for example, revoking `SELECT` privilege from `PUBLIC` does not necessarily mean that all roles have lost `SELECT` privilege on the object: those who have it granted directly or via another role will still have it.
+
+If `GRANT OPTION FOR` is specified, only the grant option for the privilege is revoked, not the privilege itself. Otherwise, both the privilege and the grant option are revoked.
+
+If a role holds a privilege with grant option and has granted it to other roles then the privileges held by those other roles are called dependent privileges. If the privilege or the grant option held by the first role is being revoked and dependent privileges exist, those dependent privileges are also revoked if `CASCADE` is specified, else the revoke action will fail. This recursive revocation only affects privileges that were granted through a chain of roles that is traceable to the role that is the subject of this `REVOKE` command. Thus, the affected roles may effectively keep the privilege if it was also granted through other roles.
+
+When revoking membership in a role, `GRANT OPTION` is instead called `ADMIN OPTION`, but the behavior is similar.
+
+## <a id="topic1__section4"></a>Parameters
+
+See [GRANT](GRANT.html).
+
+## <a id="topic1__section5"></a>Examples
+
+Revoke insert privilege for the public on table `films`:
+
+``` sql
+REVOKE INSERT ON films FROM PUBLIC;
+```
+
+Revoke all privileges from role `sally` on the view `topten`. Note that this actually means revoking all privileges that the current role has granted (if the current role is not a superuser).
+
+``` sql
+REVOKE ALL PRIVILEGES ON topten FROM sally;
+```
+
+Revoke membership in role `admins` from user `joe`:
+
+``` sql
+REVOKE admins FROM joe;
+```
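+
+Revoke only the grant option (not the privilege itself) for select privilege on table `mytable` from role `sally`, cascading to any dependent privileges (a sketch that assumes `mytable` and `sally` exist):
+
+``` sql
+REVOKE GRANT OPTION FOR SELECT ON mytable FROM sally CASCADE;
+```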
+
+## <a id="topic1__section6"></a>Compatibility
+
+The compatibility notes of the [GRANT](GRANT.html) command also apply to `REVOKE`.
+
+Either `RESTRICT` or `CASCADE` is required according to the standard, but HAWQ assumes `RESTRICT` by default.
+
+## <a id="topic1__section7"></a>See Also
+
+[GRANT](GRANT.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/ROLLBACK-TO-SAVEPOINT.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/ROLLBACK-TO-SAVEPOINT.html.md.erb b/markdown/reference/sql/ROLLBACK-TO-SAVEPOINT.html.md.erb
new file mode 100644
index 0000000..33c771b
--- /dev/null
+++ b/markdown/reference/sql/ROLLBACK-TO-SAVEPOINT.html.md.erb
@@ -0,0 +1,77 @@
+---
+title: ROLLBACK TO SAVEPOINT
+---
+
+Rolls back the current transaction to a savepoint.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+ROLLBACK [WORK | TRANSACTION] TO [SAVEPOINT] <savepoint_name>
+
+```
+
+## <a id="topic1__section3"></a>Description
+
+This command will roll back all commands that were executed after the savepoint was established. The savepoint remains valid and can be rolled back to again later, if needed.
+
+`ROLLBACK TO SAVEPOINT` implicitly destroys all savepoints that were established after the named savepoint.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>WORK  
+TRANSACTION  </dt>
+<dd>Optional key words. They have no effect.</dd>
+
+<dt> \<savepoint\_name\>  </dt>
+<dd>The name of a savepoint to roll back to.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Use `RELEASE SAVEPOINT` to destroy a savepoint without discarding the effects of commands executed after it was established.
+
+Specifying a savepoint name that has not been established is an error.
+
+Cursors have somewhat non-transactional behavior with respect to savepoints. Any cursor that is opened inside a savepoint will be closed when the savepoint is rolled back. If a previously opened cursor is affected by a `FETCH` command inside a savepoint that is later rolled back, the cursor position remains at the position that `FETCH` left it pointing to (that is, `FETCH` is not rolled back). Closing a cursor is not undone by rolling back, either. A cursor whose execution causes a transaction to abort is put in a can't-execute state, so while the transaction can be restored using `ROLLBACK TO SAVEPOINT`, the cursor can no longer be used.
+
+## <a id="topic1__section6"></a>Examples
+
+To undo the effects of the commands executed after `my_savepoint` was established:
+
+``` sql
+ROLLBACK TO SAVEPOINT my_savepoint;
+```
+
+Cursor positions are not affected by a savepoint rollback:
+
+``` sql
+BEGIN;
+DECLARE foo CURSOR FOR SELECT 1 UNION SELECT 2;
+SAVEPOINT foo;
+FETCH 1 FROM foo;
+```
+``` pre
+column
+----------
+        1
+```
+``` sql
+ROLLBACK TO SAVEPOINT foo;
+FETCH 1 FROM foo;
+```
+``` pre
+column
+----------
+        2
+```
+``` sql
+COMMIT;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+The SQL standard specifies that the key word `SAVEPOINT` is mandatory, but HAWQ (and Oracle) allow it to be omitted. SQL allows only `WORK`, not `TRANSACTION`, as a stopword after `ROLLBACK`. Also, SQL has an optional clause `AND [NO] CHAIN` which is not currently supported by HAWQ. Otherwise, this command conforms to the SQL standard.
+
+## <a id="topic1__section8"></a>See Also
+
+[BEGIN](BEGIN.html), [COMMIT](COMMIT.html), [SAVEPOINT](SAVEPOINT.html), [RELEASE SAVEPOINT](RELEASE-SAVEPOINT.html), [ROLLBACK](ROLLBACK.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/ROLLBACK.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/ROLLBACK.html.md.erb b/markdown/reference/sql/ROLLBACK.html.md.erb
new file mode 100644
index 0000000..eb1345d
--- /dev/null
+++ b/markdown/reference/sql/ROLLBACK.html.md.erb
@@ -0,0 +1,43 @@
+---
+title: ROLLBACK
+---
+
+Aborts the current transaction.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+ROLLBACK [WORK | TRANSACTION]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`ROLLBACK` rolls back the current transaction and causes all the updates made by the transaction to be discarded.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>WORK  
+TRANSACTION  </dt>
+<dd>Optional key words. They have no effect.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Use `COMMIT` to successfully end the current transaction.
+
+Issuing `ROLLBACK` when not inside a transaction does no harm, but it will provoke a warning message.
+
+## <a id="topic1__section6"></a>Examples
+
+To discard all changes made in the current transaction:
+
+``` sql
+ROLLBACK;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+The SQL standard only specifies the two forms `ROLLBACK` and `ROLLBACK WORK`. Otherwise, this command is fully conforming.
+
+## <a id="topic1__section8"></a>See Also
+
+[BEGIN](BEGIN.html), [COMMIT](COMMIT.html), [SAVEPOINT](SAVEPOINT.html), [ROLLBACK TO SAVEPOINT](ROLLBACK-TO-SAVEPOINT.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/SAVEPOINT.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/SAVEPOINT.html.md.erb b/markdown/reference/sql/SAVEPOINT.html.md.erb
new file mode 100644
index 0000000..c2f6917
--- /dev/null
+++ b/markdown/reference/sql/SAVEPOINT.html.md.erb
@@ -0,0 +1,66 @@
+---
+title: SAVEPOINT
+---
+
+Defines a new savepoint within the current transaction.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+SAVEPOINT <savepoint_name>
+         
+```
+
+## <a id="topic1__section3"></a>Description
+
+`SAVEPOINT` establishes a new savepoint within the current transaction.
+
+A savepoint is a special mark inside a transaction that allows all commands that are executed after it was established to be rolled back, restoring the transaction state to what it was at the time of the savepoint.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<savepoint\_name\>   </dt>
+<dd>The name of the new savepoint.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Use [ROLLBACK TO SAVEPOINT](ROLLBACK-TO-SAVEPOINT.html) to rollback to a savepoint. Use [RELEASE SAVEPOINT](RELEASE-SAVEPOINT.html) to destroy a savepoint, keeping the effects of commands executed after it was established.
+
+Savepoints can only be established when inside a transaction block. There can be multiple savepoints defined within a transaction.
+
+## <a id="topic1__section6"></a>Examples
+
+To establish a savepoint and later undo the effects of all commands executed after it was established:
+
+``` pre
+BEGIN;
+    INSERT INTO table1 VALUES (1);
+    SAVEPOINT my_savepoint;
+    INSERT INTO table1 VALUES (2);
+    ROLLBACK TO SAVEPOINT my_savepoint;
+    INSERT INTO table1 VALUES (3);
+COMMIT;
+```
+
+The above transaction will insert the values 1 and 3, but not 2.
+
+To establish and later destroy a savepoint:
+
+``` pre
+BEGIN;
+    INSERT INTO table1 VALUES (3);
+    SAVEPOINT my_savepoint;
+    INSERT INTO table1 VALUES (4);
+    RELEASE SAVEPOINT my_savepoint;
+COMMIT;
+```
+
+The above transaction will insert both 3 and 4.
+
+## <a id="topic1__section7"></a>Compatibility
+
+SQL requires a savepoint to be destroyed automatically when another savepoint with the same name is established. In HAWQ, the old savepoint is kept, though only the more recent one will be used when rolling back or releasing. (Releasing the newer savepoint will cause the older one to again become accessible to [ROLLBACK TO SAVEPOINT](ROLLBACK-TO-SAVEPOINT.html) and [RELEASE SAVEPOINT](RELEASE-SAVEPOINT.html).) Otherwise, `SAVEPOINT` is fully SQL conforming.
+
+## <a id="topic1__section8"></a>See Also
+
+[BEGIN](BEGIN.html), [COMMIT](COMMIT.html), [ROLLBACK](ROLLBACK.html), [RELEASE SAVEPOINT](RELEASE-SAVEPOINT.html), [ROLLBACK TO SAVEPOINT](ROLLBACK-TO-SAVEPOINT.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/SELECT-INTO.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/SELECT-INTO.html.md.erb b/markdown/reference/sql/SELECT-INTO.html.md.erb
new file mode 100644
index 0000000..524a5f1
--- /dev/null
+++ b/markdown/reference/sql/SELECT-INTO.html.md.erb
@@ -0,0 +1,55 @@
+---
+title: SELECT INTO
+---
+
+Defines a new table from the results of a query.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+SELECT [ALL | DISTINCT [ON ( <expression> [, ...] )]]
+    * | <expression> [AS <output_name>] [, ...]
+    INTO [TEMPORARY | TEMP] [TABLE] <new_table>
+    [FROM <from_item> [, ...]]
+    [WHERE <condition>]
+    [GROUP BY <expression> [, ...]]
+    [HAVING <condition> [, ...]]
+    [{UNION | INTERSECT | EXCEPT} [ALL] <select>]
+    [ORDER BY <expression> [ASC | DESC | USING <operator>] [, ...]]
+    [LIMIT {<count> | ALL}]
+    [OFFSET <start>]
+    [FOR {UPDATE | SHARE} [OF <table_name> [, ...]] [NOWAIT]
+    [...]]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`SELECT INTO` creates a new table and fills it with data computed by a query. The data is not returned to the client, as it is with a normal `SELECT`. The new table's columns have the names and data types associated with the output columns of the `SELECT`. Data is always distributed randomly.
+
+## <a id="topic1__section4"></a>Parameters
+
+The majority of parameters for `SELECT INTO` are the same as [SELECT](SELECT.html).
+
+<dt>TEMPORARY,  
+TEMP  </dt>
+<dd>If specified, the table is created as a temporary table.</dd>
+
+<dt> \<new\_table\>  </dt>
+<dd>The name (optionally schema-qualified) of the table to be created.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Create a new table `films_recent` consisting of only recent entries from the table `films`:
+
+``` sql
+SELECT * INTO films_recent FROM films WHERE date_prod >=
+'2006-01-01';
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+The SQL standard uses `SELECT INTO` to represent selecting values into scalar variables of a host program, rather than creating a new table. The HAWQ usage of `SELECT INTO` to represent table creation is historical. It is best to use [CREATE TABLE AS](CREATE-TABLE-AS.html) for this purpose in new applications.
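+
+For example, the `CREATE TABLE AS` form of the query above would be (a sketch):
+
+``` sql
+CREATE TABLE films_recent AS SELECT * FROM films WHERE date_prod >= '2006-01-01';
+```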
+
+## <a id="topic1__section7"></a>See Also
+
+[SELECT](SELECT.html), [CREATE TABLE AS](CREATE-TABLE-AS.html)


[38/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/creating-external-tables-examples.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/creating-external-tables-examples.html.md.erb b/markdown/datamgmt/load/creating-external-tables-examples.html.md.erb
new file mode 100644
index 0000000..8cdbff1
--- /dev/null
+++ b/markdown/datamgmt/load/creating-external-tables-examples.html.md.erb
@@ -0,0 +1,117 @@
+---
+title: Creating External Tables - Examples
+---
+
+The following examples show how to define external data with different protocols. Each `CREATE EXTERNAL TABLE` command can contain only one protocol.
+
+**Note:** When using IPv6, always enclose the numeric IP addresses in square brackets.
+
+Start `gpfdist` before you create external tables with the `gpfdist` protocol. The following code starts the `gpfdist` file server program in the background on port *8081* serving files from directory `/var/data/staging`. The logs are saved in `/home/gpadmin/log`.
+
+``` shell
+$ gpfdist -p 8081 -d /var/data/staging -l /home/gpadmin/log &
+```
+
+## <a id="ex1"></a>Example 1 - Single gpfdist instance on single-NIC machine
+
+Creates a readable external table, `ext_expenses`, using the `gpfdist` protocol. The files are formatted with a pipe (|) as the column delimiter.
+
+``` sql
+=# CREATE EXTERNAL TABLE ext_expenses
+        ( name text, date date, amount float4, category text, desc1 text )
+    LOCATION ('gpfdist://etlhost-1:8081/*', 'gpfdist://etlhost-1:8082/*')
+    FORMAT 'TEXT' (DELIMITER '|');
+```
+
+## <a id="ex2"></a>Example 2 - Multiple gpfdist instances
+
+Creates a readable external table, *ext\_expenses*, using the `gpfdist` protocol from all files with the *txt* extension. The column delimiter is a pipe ( | ) and NULL is a space (' ').
+
+``` sql
+=# CREATE EXTERNAL TABLE ext_expenses
+        ( name text, date date, amount float4, category text, desc1 text )
+    LOCATION ('gpfdist://etlhost-1:8081/*.txt', 'gpfdist://etlhost-2:8081/*.txt')
+    FORMAT 'TEXT' ( DELIMITER '|' NULL ' ') ;
+    
+```
+
+## <a id="ex3"></a>Example 3 - Multiple gpfdists instances
+
+Creates a readable external table, *ext\_expenses,* from all files with the *txt* extension using the `gpfdists` protocol. The column delimiter is a pipe ( | ) and NULL is a space (' '). For information about the location of security certificates, see [gpfdists Protocol](g-gpfdists-protocol.html).
+
+1.  Run `gpfdist` with the `--ssl` option.
+2.  Run the following command.
+
+    ``` sql
+    =# CREATE EXTERNAL TABLE ext_expenses
+             ( name text, date date, amount float4, category text, desc1 text )
+        LOCATION ('gpfdists://etlhost-1:8081/*.txt', 'gpfdists://etlhost-2:8082/*.txt')
+        FORMAT 'TEXT' ( DELIMITER '|' NULL ' ') ;
+        
+    ```
+
+## <a id="ex4"></a>Example 4 - Single gpfdist instance with error logging
+
+Uses the `gpfdist` protocol to create a readable external table, `ext_expenses`, from all files with the *txt* extension. The column delimiter is a pipe ( | ) and NULL is a space (' ').
+
+Access to the external table is in single row error isolation mode. Input data formatting errors can be captured so that you can view the errors, fix the issues, and then reload the rejected data. If the error count on a segment is greater than five (the `SEGMENT REJECT LIMIT` value), the entire external table operation fails and no rows are processed.
+
+``` sql
+=# CREATE EXTERNAL TABLE ext_expenses
+         ( name text, date date, amount float4, category text, desc1 text )
+    LOCATION ('gpfdist://etlhost-1:8081/*.txt', 'gpfdist://etlhost-2:8082/*.txt')
+    FORMAT 'TEXT' ( DELIMITER '|' NULL ' ')
+    LOG ERRORS INTO expenses_errs SEGMENT REJECT LIMIT 5;
+    
+```
+
+To create the readable `ext_expenses` table from CSV-formatted text files:
+
+``` sql
+=# CREATE EXTERNAL TABLE ext_expenses
+         ( name text, date date, amount float4, category text, desc1 text )
+    LOCATION ('gpfdist://etlhost-1:8081/*.txt', 'gpfdist://etlhost-2:8082/*.txt')
+    FORMAT 'CSV' ( DELIMITER ',' )
+    LOG ERRORS INTO expenses_errs SEGMENT REJECT LIMIT 5;
+    
+```
+
+## <a id="ex5"></a>Example 5 - Readable Web External Table with Script
+
+Creates a readable web external table that executes a script once on five virtual segments:
+
+``` sql
+=# CREATE EXTERNAL WEB TABLE log_output (linenum int, message text)
+    EXECUTE '/var/load_scripts/get_log_data.sh' ON 5
+    FORMAT 'TEXT' (DELIMITER '|');
+    
+```
+
+## <a id="ex6"></a>Example 6 - Writable External Table with gpfdist
+
+Creates a writable external table, *sales\_out*, that uses `gpfdist` to write output data to the file *sales.out*. The column delimiter is a pipe ( | ) and NULL is a space (' '). The file will be created in the directory specified when you started the gpfdist file server.
+
+``` sql
+=# CREATE WRITABLE EXTERNAL TABLE sales_out (LIKE sales)
+    LOCATION ('gpfdist://etl1:8081/sales.out')
+    FORMAT 'TEXT' ( DELIMITER '|' NULL ' ')
+    DISTRIBUTED BY (txn_id);
+    
+```
+
+## <a id="ex7"></a>Example 7 - Writable External Web Table with Script
+
+Creates a writable external web table, `campaign_out`, that pipes output data received by the segments to an executable script, `to_adreport_etl.sh`:
+
+``` sql
+=# CREATE WRITABLE EXTERNAL WEB TABLE campaign_out
+        (LIKE campaign)
+        EXECUTE '/var/unload_scripts/to_adreport_etl.sh' ON 6
+        FORMAT 'TEXT' (DELIMITER '|');
+```
+
+## <a id="ex8"></a>Example 8 - Readable and Writable External Tables with XML Transformations
+
+HAWQ can read and write XML data to and from external tables with gpfdist. For information about setting up an XML transform, see [Transforming XML Data](g-transforming-xml-data.html#topic75).
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-about-gpfdist-setup-and-performance.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-about-gpfdist-setup-and-performance.html.md.erb b/markdown/datamgmt/load/g-about-gpfdist-setup-and-performance.html.md.erb
new file mode 100644
index 0000000..28a0bfe
--- /dev/null
+++ b/markdown/datamgmt/load/g-about-gpfdist-setup-and-performance.html.md.erb
@@ -0,0 +1,22 @@
+---
+title: About gpfdist Setup and Performance
+---
+
+Consider the following scenarios for optimizing your ETL network performance.
+
+-   Allow network traffic to use all ETL host Network Interface Cards (NICs) simultaneously. Run one instance of `gpfdist` on the ETL host, then declare the host name of each NIC in the `LOCATION` clause of your external table definition (see [Creating External Tables - Examples](creating-external-tables-examples.html#topic44)).
+
+<a id="topic14__du165872"></a>
+<span class="figtitleprefix">Figure: </span>External Table Using Single gpfdist Instance with Multiple NICs
+
+<img src="../../images/ext_tables_multinic.jpg" class="image" width="472" height="271" />
+
+-   Divide external table data equally among multiple `gpfdist` instances on the ETL host. For example, on an ETL system with two NICs, run two `gpfdist` instances (one on each NIC) to optimize data load performance and divide the external table data files evenly between the two `gpfdists`.
+
+<a id="topic14__du165882"></a>
+
+<span class="figtitleprefix">Figure: </span>External Tables Using Multiple gpfdist Instances with Multiple NICs
+
+<img src="../../images/ext_tables.jpg" class="image" width="467" height="282" />
+
+**Note:** Use pipes (|) to separate formatted text when you submit files to `gpfdist`. HAWQ encloses comma-separated text strings in single or double quotes. `gpfdist` has to remove the quotes to parse the strings. Using pipes to separate formatted text avoids the extra step and improves performance.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-character-encoding.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-character-encoding.html.md.erb b/markdown/datamgmt/load/g-character-encoding.html.md.erb
new file mode 100644
index 0000000..9f3756d
--- /dev/null
+++ b/markdown/datamgmt/load/g-character-encoding.html.md.erb
@@ -0,0 +1,11 @@
+---
+title: Character Encoding
+---
+
+Character encoding systems consist of a code that pairs each character from a character set with something else, such as a sequence of numbers or octets, to facilitate data transmission and storage. HAWQ supports a variety of character sets, including single-byte character sets such as the ISO 8859 series and multiple-byte character sets such as EUC (Extended UNIX Code), UTF-8, and Mule internal code. Clients can use all supported character sets transparently, but a few are not supported for use within the server as a server-side encoding.
+
+Data files must be in a character encoding recognized by HAWQ. Data files that contain invalid or unsupported encoding sequences encounter errors when loading into HAWQ.
+
+**Note:** On data files generated on a Microsoft Windows operating system, run the `dos2unix` system command to remove any Windows-only characters before loading into HAWQ.
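+
+For example, to convert a hypothetical data file named `expenses.txt` in place before loading it (a sketch):
+
+``` shell
+$ dos2unix expenses.txt
+```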
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-command-based-web-external-tables.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-command-based-web-external-tables.html.md.erb b/markdown/datamgmt/load/g-command-based-web-external-tables.html.md.erb
new file mode 100644
index 0000000..7830cc3
--- /dev/null
+++ b/markdown/datamgmt/load/g-command-based-web-external-tables.html.md.erb
@@ -0,0 +1,26 @@
+---
+title: Command-based Web External Tables
+---
+
+The output of a shell command or script defines command-based web table data. Specify the command in the `EXECUTE` clause of `CREATE EXTERNAL WEB TABLE`. The data is current as of the time the command runs. The `EXECUTE` clause runs the shell command or script on the specified master or virtual segments. The virtual segments run the command in parallel. Scripts must be executable by the `gpadmin` user and reside in the same location on the master or the hosts of virtual segments.
+
+The command that you specify in the external table definition executes from the database and cannot access environment variables from `.bashrc` or `.profile`. Set environment variables in the `EXECUTE` clause. The following external web table, for example, runs a command on the HAWQ master host:
+
+``` sql
+CREATE EXTERNAL WEB TABLE output (output text)
+EXECUTE 'PATH=/home/gpadmin/programs; export PATH; myprogram.sh'
+    ON MASTER 
+FORMAT 'TEXT';
+```
+
+The following command defines a web table that runs a script on five virtual segments.
+
+``` sql
+CREATE EXTERNAL WEB TABLE log_output (linenum int, message text) 
+EXECUTE '/var/load_scripts/get_log_data.sh' ON 5 
+FORMAT 'TEXT' (DELIMITER '|');
+```
+
+The virtual segments are selected by the resource manager at runtime.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-configuration-file-format.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-configuration-file-format.html.md.erb b/markdown/datamgmt/load/g-configuration-file-format.html.md.erb
new file mode 100644
index 0000000..73f51a9
--- /dev/null
+++ b/markdown/datamgmt/load/g-configuration-file-format.html.md.erb
@@ -0,0 +1,66 @@
+---
+title: Configuration File Format
+---
+
+The `gpfdist` configuration file uses the YAML 1.1 document format and implements a schema for defining the transformation parameters. The configuration file must be a valid YAML document.
+
+The `gpfdist` program processes the document in order and uses indentation (spaces) to determine the document hierarchy and relationships of the sections to one another. The use of white space is significant. Do not use white space for formatting and do not use tabs.
+
+The following is the basic structure of a configuration file.
+
+``` pre
+---
+VERSION:   1.0.0.1
+TRANSFORMATIONS: 
+  transformation_name1:
+    TYPE:      input | output
+    COMMAND:   command
+    CONTENT:   data | paths
+    SAFE:      posix-regex
+    STDERR:    server | console
+  transformation_name2:
+    TYPE:      input | output
+    COMMAND:   command
+...
+```
+
+VERSION  
+Required. The version of the `gpfdist` configuration file schema. The current version is 1.0.0.1.
+
+TRANSFORMATIONS  
+Required. Begins the transformation specification section. A configuration file must have at least one transformation. When `gpfdist` receives a transformation request, it looks in this section for an entry with the matching transformation name.
+
+TYPE  
+Required. Specifies the direction of transformation. Values are `input` or `output`.
+
+-   `input`: `gpfdist` treats the standard output of the transformation process as a stream of records to load into HAWQ.
+-   `output`: `gpfdist` treats the standard input of the transformation process as a stream of records from HAWQ to transform and write to the appropriate output.
+
+COMMAND  
+Required. Specifies the command `gpfdist` will execute to perform the transformation.
+
+For input transformations, `gpfdist` invokes the command specified in the `CONTENT` setting. The command is expected to open the underlying file(s) as appropriate and produce one line of `TEXT` for each row to load into HAWQ. The input transform determines whether the entire content should be converted to one row or to multiple rows.
+
+For output transformations, `gpfdist` invokes this command as specified in the `CONTENT` setting. The output command is expected to open and write to the underlying file(s) as appropriate. The output transformation determines the final placement of the converted output.
+
+CONTENT  
+Optional. The values are `data` and `paths`. The default value is `data`.
+
+-   When `CONTENT` specifies `data`, the text `%filename%` in the `COMMAND` section is replaced by the path to the file to read or write.
+-   When `CONTENT` specifies `paths`, the text `%filename%` in the `COMMAND` section is replaced by the path to the temporary file that contains the list of files to read or write.
+
+The following is an example of a `COMMAND` section showing the text `%filename%` that is replaced.
+
+``` pre
+COMMAND: /bin/bash input_transform.sh %filename%
+```
+
+SAFE  
+Optional. A POSIX regular expression that the paths must match to be passed to the transformation. Specify `SAFE` when there is a concern about injection or improper interpretation of paths passed to the command. The default is no restriction on paths.
+
+STDERR  
+Optional. The values are `server` and `console`.
+
+This setting specifies how to handle standard error output from the transformation. The default, `server`, specifies that `gpfdist` will capture the standard error output from the transformation in a temporary file and send the first 8k of that file to HAWQ as an error message. The error message will appear as a SQL error. `console` specifies that `gpfdist` does not redirect or transmit the standard error output from the transformation.
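+
+The following is a sketch of a complete configuration file that defines a single input transformation; the transformation name, script path, and regular expression are examples only:
+
+``` pre
+---
+VERSION:   1.0.0.1
+TRANSFORMATIONS:
+  books_input:
+    TYPE:      input
+    COMMAND:   /bin/bash input_transform.sh %filename%
+    CONTENT:   data
+    SAFE:      .*\.xml$
+    STDERR:    server
+```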
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-controlling-segment-parallelism.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-controlling-segment-parallelism.html.md.erb b/markdown/datamgmt/load/g-controlling-segment-parallelism.html.md.erb
new file mode 100644
index 0000000..4e0096c
--- /dev/null
+++ b/markdown/datamgmt/load/g-controlling-segment-parallelism.html.md.erb
@@ -0,0 +1,11 @@
+---
+title: Controlling Segment Parallelism
+---
+
+The `gp_external_max_segs` server configuration parameter controls the number of virtual segments that can simultaneously access a single `gpfdist` instance. The default is 64. You can set the number of segments such that some segments process external data files and some perform other database processing. Set this parameter in the `hawq-site.xml` file of your master instance.
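+
+For example, one way to change this value cluster-wide is with the `hawq config` utility and then reloading or restarting the cluster configuration (a sketch; the value shown is arbitrary):
+
+``` shell
+$ hawq config -c gp_external_max_segs -v 48
+```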
+
+The number of `gpfdist` locations in the external table's location list specifies the minimum number of virtual segments required to serve data to a `gpfdist` external table.
+
+The `hawq_rm_nvseg_perquery_perseg_limit` and `hawq_rm_nvseg_perquery_limit` parameters also control segment parallelism by specifying the maximum number of segments used in running queries on a `gpfdist` external table on the cluster.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-create-an-error-table-and-declare-a-reject-limit.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-create-an-error-table-and-declare-a-reject-limit.html.md.erb b/markdown/datamgmt/load/g-create-an-error-table-and-declare-a-reject-limit.html.md.erb
new file mode 100644
index 0000000..ade14ea
--- /dev/null
+++ b/markdown/datamgmt/load/g-create-an-error-table-and-declare-a-reject-limit.html.md.erb
@@ -0,0 +1,11 @@
+---
+title: Capture Row Formatting Errors and Declare a Reject Limit
+---
+
+The following SQL fragment captures formatting errors internally in HAWQ and declares a reject limit of 10 rows.
+
+``` sql
+LOG ERRORS INTO errortable SEGMENT REJECT LIMIT 10 ROWS
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-creating-and-using-web-external-tables.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-creating-and-using-web-external-tables.html.md.erb b/markdown/datamgmt/load/g-creating-and-using-web-external-tables.html.md.erb
new file mode 100644
index 0000000..4ef6cab
--- /dev/null
+++ b/markdown/datamgmt/load/g-creating-and-using-web-external-tables.html.md.erb
@@ -0,0 +1,13 @@
+---
+title: Creating and Using Web External Tables
+---
+
+`CREATE EXTERNAL WEB TABLE` creates a web table definition. Web external tables allow HAWQ to treat dynamic data sources like regular database tables. Because web table data can change as a query runs, the data is not rescannable.
+
+You can define command-based or URL-based web external tables. The definition forms are distinct: you cannot mix command-based and URL-based definitions.
+
+-   **[Command-based Web External Tables](../../datamgmt/load/g-command-based-web-external-tables.html)**
+
+-   **[URL-based Web External Tables](../../datamgmt/load/g-url-based-web-external-tables.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-define-an-external-table-with-single-row-error-isolation.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-define-an-external-table-with-single-row-error-isolation.html.md.erb b/markdown/datamgmt/load/g-define-an-external-table-with-single-row-error-isolation.html.md.erb
new file mode 100644
index 0000000..e0c3c17
--- /dev/null
+++ b/markdown/datamgmt/load/g-define-an-external-table-with-single-row-error-isolation.html.md.erb
@@ -0,0 +1,24 @@
+---
+title: Define an External Table with Single Row Error Isolation
+---
+
+The following example logs errors internally in HAWQ and sets an error threshold of 10 errors.
+
+``` sql
+=# CREATE EXTERNAL TABLE ext_expenses ( name text, date date, amount float4, category text, desc1 text )
+   LOCATION ('gpfdist://etlhost-1:8081/*', 'gpfdist://etlhost-2:8082/*')
+   FORMAT 'TEXT' (DELIMITER '|')
+   LOG ERRORS INTO errortable SEGMENT REJECT LIMIT 10 ROWS;
+```
+
+The following example creates an external table, *ext\_expenses*, sets an error threshold of 10 errors, and writes error rows to the table *err\_expenses*.
+
+``` sql
+=# CREATE EXTERNAL TABLE ext_expenses
+     ( name text, date date, amount float4, category text, desc1 text )
+   LOCATION ('gpfdist://etlhost-1:8081/*', 'gpfdist://etlhost-2:8082/*')
+   FORMAT 'TEXT' (DELIMITER '|')
+   LOG ERRORS INTO err_expenses SEGMENT REJECT LIMIT 10 ROWS;
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-defining-a-command-based-writable-external-web-table.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-defining-a-command-based-writable-external-web-table.html.md.erb b/markdown/datamgmt/load/g-defining-a-command-based-writable-external-web-table.html.md.erb
new file mode 100644
index 0000000..8a24474
--- /dev/null
+++ b/markdown/datamgmt/load/g-defining-a-command-based-writable-external-web-table.html.md.erb
@@ -0,0 +1,43 @@
+---
+title: Defining a Command-Based Writable External Web Table
+---
+
+You can define writable external web tables to send output rows to an application or script. The application must accept an input stream, reside in the same location on all of the HAWQ segment hosts, and be executable by the `gpadmin` user. All segments in the HAWQ system run the application or script, whether or not a segment has output rows to process.
+
+Use `CREATE WRITABLE EXTERNAL WEB TABLE` to define the external table and specify the application or script to run on the segment hosts. Commands execute from within the database and cannot access environment variables (such as `$PATH`). Set environment variables in the `EXECUTE` clause of your writable external table definition. For example:
+
+``` sql
+=# CREATE WRITABLE EXTERNAL WEB TABLE output (output text) 
+    EXECUTE 'export PATH=$PATH:/home/gpadmin/programs; myprogram.sh' 
+    ON 6
+    FORMAT 'TEXT'
+    DISTRIBUTED RANDOMLY;
+```
+
+The following HAWQ variables are available for use in OS commands executed by a web or writable external table. Set these variables as environment variables in the shell that executes the command(s). They can be used to identify a set of requests made by an external table statement across the HAWQ array of hosts and segment instances.
+
+<caption><span class="tablecap">Table 1. External Table EXECUTE Variables</span></caption>
+
+<a id="topic71__du224024"></a>
+
+| Variable            | Description                                                                                                                |
+|---------------------|----------------------------------------------------------------------------------------------------------------------------|
+| $GP\_CID            | Command count of the transaction executing the external table statement.                                                   |
+| $GP\_DATABASE       | The database in which the external table definition resides.                                                               |
+| $GP\_DATE           | The date on which the external table command ran.                                                                          |
+| $GP\_MASTER\_HOST   | The host name of the HAWQ master host from which the external table statement was dispatched.                              |
+| $GP\_MASTER\_PORT   | The port number of the HAWQ master instance from which the external table statement was dispatched.                        |
+| $GP\_SEG\_DATADIR   | The location of the data directory of the segment instance executing the external table command.                           |
+| $GP\_SEG\_PG\_CONF  | The location of the `hawq-site.xml` file of the segment instance executing the external table command.                     |
+| $GP\_SEG\_PORT      | The port number of the segment instance executing the external table command.                                              |
+| $GP\_SEGMENT\_COUNT | The total number of segment instances in the HAWQ system.                                                                  |
+| $GP\_SEGMENT\_ID    | The ID number of the segment instance executing the external table command (same as `dbid` in `gp_segment_configuration`). |
+| $GP\_SESSION\_ID    | The database session identifier number associated with the external table statement.                                       |
+| $GP\_SN             | Serial number of the external table scan node in the query plan of the external table statement.                           |
+| $GP\_TIME           | The time the external table command was executed.                                                                          |
+| $GP\_USER           | The database user executing the external table statement.                                                                  |
+| $GP\_XID            | The transaction ID of the external table statement.                                                                        |
+
+-   **[Disabling EXECUTE for Web or Writable External Tables](../../datamgmt/load/g-disabling-execute-for-web-or-writable-external-tables.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-defining-a-file-based-writable-external-table.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-defining-a-file-based-writable-external-table.html.md.erb b/markdown/datamgmt/load/g-defining-a-file-based-writable-external-table.html.md.erb
new file mode 100644
index 0000000..fa1ddfa
--- /dev/null
+++ b/markdown/datamgmt/load/g-defining-a-file-based-writable-external-table.html.md.erb
@@ -0,0 +1,16 @@
+---
+title: Defining a File-Based Writable External Table
+---
+
+Writable external tables that output data to files use the HAWQ parallel file server program, `gpfdist`, or HAWQ Extensions Framework (PXF).
+
+Use the `CREATE WRITABLE EXTERNAL TABLE` command to define the external table and specify the location and format of the output files.
+
+-   With a writable external table using the `gpfdist` protocol, the HAWQ segments send their data to `gpfdist`, which writes the data to the named file. `gpfdist` must run on a host that the HAWQ segments can access over the network. `gpfdist` points to a file location on the output host and writes data received from the HAWQ segments to the file. To divide the output data among multiple files, list multiple `gpfdist` URIs in your writable external table definition.
+-   A writable external web table sends data to an application as a stream of data. For example, unload data from HAWQ and send it to an application that connects to another database or ETL tool to load the data elsewhere. Writable external web tables use the `EXECUTE` clause to specify a shell command, script, or application to run on the segment hosts and accept an input stream of data. See [Defining a Command-Based Writable External Web Table](g-defining-a-command-based-writable-external-web-table.html#topic71) for more information about using `EXECUTE` commands in a writable external table definition.
+
+You can optionally declare a distribution policy for your writable external tables. By default, writable external tables use a random distribution policy. If the source table you are exporting data from has a hash distribution policy, defining the same distribution key column(s) for the writable external table improves unload performance by eliminating the requirement to move rows over the interconnect. If you unload data from a particular table, you can use the `LIKE` clause to copy the column definitions and distribution policy from the source table.
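+
+For example, a minimal sketch, assuming a hypothetical `sales` table that is hash-distributed on `sale_id` and a hypothetical `gpfdist` host, that declares a matching distribution key for the unload:
+
+``` sql
+=# CREATE WRITABLE EXTERNAL TABLE unload_sales ( LIKE sales )
+   LOCATION ('gpfdist://etlhost-1:8081/sales_unload.out')
+   FORMAT 'TEXT' (DELIMITER '|')
+   DISTRIBUTED BY (sale_id);
+```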
+
+-   **[Example - HAWQ file server (gpfdist)](../../datamgmt/load/g-example-hawq-file-server-gpfdist.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-determine-the-transformation-schema.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-determine-the-transformation-schema.html.md.erb b/markdown/datamgmt/load/g-determine-the-transformation-schema.html.md.erb
new file mode 100644
index 0000000..1a4eb9b
--- /dev/null
+++ b/markdown/datamgmt/load/g-determine-the-transformation-schema.html.md.erb
@@ -0,0 +1,33 @@
+---
+title: Determine the Transformation Schema
+---
+
+To prepare for the transformation project:
+
+1.  Determine the goal of the project, such as indexing data, analyzing data, combining data, and so on.
+2.  Examine the XML file and note the file structure and element names.
+3.  Choose the elements to import and decide if any other limits are appropriate.
+
+For example, the following XML file, *prices.xml*, is a simple, short file that contains price records. Each price record contains two fields: an item number and a price.
+
+``` xml
+<?xml version="1.0" encoding="ISO-8859-1" ?>
+<prices>
+  <pricerecord>
+    <itemnumber>708421</itemnumber>
+    <price>19.99</price>
+  </pricerecord>
+  <pricerecord>
+    <itemnumber>708466</itemnumber>
+    <price>59.25</price>
+  </pricerecord>
+  <pricerecord>
+    <itemnumber>711121</itemnumber>
+    <price>24.99</price>
+  </pricerecord>
+</prices>
+```
+
+The goal is to import all the data into a HAWQ table with an integer `itemnumber` column and a decimal `price` column.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-disabling-execute-for-web-or-writable-external-tables.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-disabling-execute-for-web-or-writable-external-tables.html.md.erb b/markdown/datamgmt/load/g-disabling-execute-for-web-or-writable-external-tables.html.md.erb
new file mode 100644
index 0000000..f0332b5
--- /dev/null
+++ b/markdown/datamgmt/load/g-disabling-execute-for-web-or-writable-external-tables.html.md.erb
@@ -0,0 +1,11 @@
+---
+title: Disabling EXECUTE for Web or Writable External Tables
+---
+
+There is a security risk associated with allowing external tables to execute OS commands or scripts. To disable the use of `EXECUTE` in web and writable external table definitions, set the `gp_external_enable_exec` server configuration parameter to `off` in your master `hawq-site.xml` file:
+
+``` pre
+gp_external_enable_exec = off
+```
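+
+To confirm the active setting from a `psql` session (a quick check only, not a way to change the value), you might run:
+
+``` sql
+=# SHOW gp_external_enable_exec;
+```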
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-escaping-in-csv-formatted-files.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-escaping-in-csv-formatted-files.html.md.erb b/markdown/datamgmt/load/g-escaping-in-csv-formatted-files.html.md.erb
new file mode 100644
index 0000000..d07b463
--- /dev/null
+++ b/markdown/datamgmt/load/g-escaping-in-csv-formatted-files.html.md.erb
@@ -0,0 +1,29 @@
+---
+title: Escaping in CSV Formatted Files
+---
+
+By default, the escape character is a `"` (double quote) for CSV-formatted files. If you want to use a different escape character, use the `ESCAPE` clause of `COPY`, `CREATE EXTERNAL TABLE` or the `hawq load` control file to declare a different escape character. In cases where your selected escape character is present in your data, you can use it to escape itself.
+
+For example, suppose you have a table with three columns and you want to load the following three fields:
+
+-   `Free trip to A,B`
+-   `5.89`
+-   `Special rate "1.79"`
+
+Your designated delimiter character is `,` (comma), and your designated escape character is `"` (double quote). The formatted row in your data file looks like this:
+
+``` pre
+"Free trip to A,B","5.89","Special rate ""1.79"""
+```
+
+The data value with a comma character that is part of the data is enclosed in double quotes. The double quotes that are part of the data are escaped with a double quote even though the field value is enclosed in double quotes.
+
+Embedding the entire field inside a set of double quotes guarantees preservation of leading and trailing whitespace characters:
+
+`"`Free trip to A,B `"`,`"`5.89 `"`,`"`Special rate `""`1.79`""             "`
+
+**Note:** In CSV mode, all characters are significant. A quoted value surrounded by white space, or any characters other than `DELIMITER`, includes those characters. This can cause errors if you import data from a system that pads CSV lines with white space to some fixed width. In this case, preprocess the CSV file to remove the trailing white space before importing the data into HAWQ.
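+
+As a sketch, assuming hypothetical table, host, and file names, a readable external table that declares an alternative escape character for CSV data might look like:
+
+``` sql
+=# CREATE EXTERNAL TABLE ext_sales (id int, note text, price numeric)
+   LOCATION ('gpfdist://etlhost-1:8081/sales.csv')
+   FORMAT 'CSV' (DELIMITER ',' QUOTE '"' ESCAPE E'\\');
+```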
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-escaping-in-text-formatted-files.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-escaping-in-text-formatted-files.html.md.erb b/markdown/datamgmt/load/g-escaping-in-text-formatted-files.html.md.erb
new file mode 100644
index 0000000..e24a2b7
--- /dev/null
+++ b/markdown/datamgmt/load/g-escaping-in-text-formatted-files.html.md.erb
@@ -0,0 +1,31 @@
+---
+title: Escaping in Text Formatted Files
+---
+
+By default, the escape character is a \\ (backslash) for text-formatted files. You can declare a different escape character in the `ESCAPE` clause of `COPY`, `CREATE EXTERNAL TABLE`, or the `hawq load` control file. If your escape character appears in your data, use it to escape itself.
+
+For example, suppose you have a table with three columns and you want to load the following three fields:
+
+-   `backslash = \`
+-   `vertical bar = |`
+-   `exclamation point = !`
+
+Your designated delimiter character is `|` (pipe character), and your designated escape character is `\` (backslash). The formatted row in your data file looks like this:
+
+``` pre
+backslash = \\ | vertical bar = \| | exclamation point = !
+```
+
+Notice how the backslash character that is part of the data is escaped with another backslash character, and the pipe character that is part of the data is escaped with a backslash character.
+
+You can use the escape character to escape octal and hexadecimal sequences. The escaped value is converted to the equivalent character when loaded into HAWQ. For example, to load the ampersand character (`&`), use the escape character to escape its equivalent hexadecimal (`\x26`) or octal (`\046`) representation.
+
+You can disable escaping in `TEXT`-formatted files using the `ESCAPE` clause of `COPY`, `CREATE EXTERNAL TABLE` or the `hawq load` control file as follows:
+
+``` pre
+ESCAPE 'OFF'
+```
+
+This is useful for input data that contains many backslash characters, such as web log data.
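+
+For example, a minimal sketch (hypothetical host and file names) that turns escaping off for web log data:
+
+``` sql
+=# CREATE EXTERNAL TABLE ext_weblogs (logline text)
+   LOCATION ('gpfdist://etlhost-1:8081/logs/*.log')
+   FORMAT 'TEXT' (ESCAPE 'OFF');
+```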
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-escaping.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-escaping.html.md.erb b/markdown/datamgmt/load/g-escaping.html.md.erb
new file mode 100644
index 0000000..0a1e62a
--- /dev/null
+++ b/markdown/datamgmt/load/g-escaping.html.md.erb
@@ -0,0 +1,16 @@
+---
+title: Escaping
+---
+
+There are two reserved characters that have special meaning to HAWQ:
+
+-   The designated delimiter character separates columns or fields in the data file.
+-   The newline character designates a new row in the data file.
+
+If your data contains either of these characters, you must escape the character so that HAWQ treats it as data and not as a field separator or new row. By default, the escape character is a \\ (backslash) for text-formatted files and a double quote (") for csv-formatted files.
+
+-   **[Escaping in Text Formatted Files](../../datamgmt/load/g-escaping-in-text-formatted-files.html)**
+
+-   **[Escaping in CSV Formatted Files](../../datamgmt/load/g-escaping-in-csv-formatted-files.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-example-1-dblp-database-publications-in-demo-directory.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-example-1-dblp-database-publications-in-demo-directory.html.md.erb b/markdown/datamgmt/load/g-example-1-dblp-database-publications-in-demo-directory.html.md.erb
new file mode 100644
index 0000000..4f61396
--- /dev/null
+++ b/markdown/datamgmt/load/g-example-1-dblp-database-publications-in-demo-directory.html.md.erb
@@ -0,0 +1,29 @@
+---
+title: Command-based Web External Tables
+---
+
+The output of a shell command or script defines command-based web table data. Specify the command in the `EXECUTE` clause of `CREATE EXTERNAL WEB TABLE`. The data is current as of the time the command runs. The `EXECUTE` clause runs the shell command or script on the specified master and/or segment hosts. The command or script must reside on the hosts that correspond to the host(s) defined in the `EXECUTE` clause.
+
+By default, the command is run on segment hosts when active segments have output rows to process. For example, if each segment host runs four primary segment instances that have output rows to process, the command runs four times per segment host. You can optionally limit the number of segment instances that execute the web table command. All segments included in the web table definition in the `ON` clause run the command in parallel.
+
+The command that you specify in the external table definition executes from the database and cannot access environment variables from `.bashrc` or `.profile`. Set environment variables in the `EXECUTE` clause. For example:
+
+``` sql
+=# CREATE EXTERNAL WEB TABLE output (output text)
+EXECUTE 'PATH=/home/gpadmin/programs; export PATH; myprogram.sh'
+    ON MASTER
+FORMAT 'TEXT';
+```
+
+Scripts must be executable by the `gpadmin` user and reside in the same location on the master or segment hosts.
+
+The following command defines a web table that runs a script. The script runs on five virtual segments selected by the resource manager at runtime.
+
+``` sql
+=# CREATE EXTERNAL WEB TABLE log_output
+(linenum int, message text)
+EXECUTE '/var/load_scripts/get_log_data.sh' ON 5
+FORMAT 'TEXT' (DELIMITER '|');
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-example-hawq-file-server-gpfdist.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-example-hawq-file-server-gpfdist.html.md.erb b/markdown/datamgmt/load/g-example-hawq-file-server-gpfdist.html.md.erb
new file mode 100644
index 0000000..a0bf669
--- /dev/null
+++ b/markdown/datamgmt/load/g-example-hawq-file-server-gpfdist.html.md.erb
@@ -0,0 +1,13 @@
+---
+title: Example - HAWQ file server (gpfdist)
+---
+
+``` sql
+=# CREATE WRITABLE EXTERNAL TABLE unload_expenses
+( LIKE expenses )
+LOCATION ('gpfdist://etlhost-1:8081/expenses1.out',
+'gpfdist://etlhost-2:8081/expenses2.out')
+FORMAT 'TEXT' (DELIMITER ',');
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-example-irs-mef-xml-files-in-demo-directory.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-example-irs-mef-xml-files-in-demo-directory.html.md.erb b/markdown/datamgmt/load/g-example-irs-mef-xml-files-in-demo-directory.html.md.erb
new file mode 100644
index 0000000..6f5b9e3
--- /dev/null
+++ b/markdown/datamgmt/load/g-example-irs-mef-xml-files-in-demo-directory.html.md.erb
@@ -0,0 +1,54 @@
+---
+title: Example using IRS MeF XML Files (In demo Directory)
+---
+
+This example demonstrates loading a sample IRS Modernized eFile tax return using a Joost STX transformation. The data is in the form of a complex XML file.
+
+The U.S. Internal Revenue Service (IRS) made a significant commitment to XML and specifies its use in its Modernized e-File (MeF) system. In MeF, each tax return is an XML document with a deep hierarchical structure that closely reflects the particular form of the underlying tax code.
+
+XML, XML Schema and stylesheets play a role in their data representation and business workflow. The actual XML data is extracted from a ZIP file attached to a MIME "transmission file" message. For more information about MeF, see [Modernized e-File (Overview)](http://www.irs.gov/uac/Modernized-e-File-Overview) on the IRS web site.
+
+The sample XML document, *RET990EZ\_2006.xml*, is about 350KB in size with two elements:
+
+-   ReturnHeader
+-   ReturnData
+
+The &lt;ReturnHeader&gt; element contains general details about the tax return such as the taxpayer's name, the tax year of the return, and the preparer. The &lt;ReturnData&gt; element contains multiple sections with specific details about the tax return and associated schedules.
+
+The following is an abridged sample of the XML file.
+
+``` xml
+<?xml version="1.0" encoding="UTF-8"?> 
+<Return returnVersion="2006v2.0"
+   xmlns="http://www.irs.gov/efile" 
+   xmlns:efile="http://www.irs.gov/efile"
+   xsi:schemaLocation="http://www.irs.gov/efile"
+   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> 
+   <ReturnHeader binaryAttachmentCount="1">
+     <ReturnId>AAAAAAAAAAAAAAAAAAAA</ReturnId>
+     <Timestamp>1999-05-30T12:01:01+05:01</Timestamp>
+     <ReturnType>990EZ</ReturnType>
+     <TaxPeriodBeginDate>2005-01-01</TaxPeriodBeginDate>
+     <TaxPeriodEndDate>2005-12-31</TaxPeriodEndDate>
+     <Filer>
+       <EIN>011248772</EIN>
+       ... more data ...
+     </Filer>
+     <Preparer>
+       <Name>Percy Polar</Name>
+       ... more data ...
+     </Preparer>
+     <TaxYear>2005</TaxYear>
+   </ReturnHeader>
+   ... more data ..
+```
+
+The goal is to import all the data into a HAWQ database. First, convert the XML document into text with newlines "escaped", with two columns: `ReturnId` and a second column that holds the entire MeF tax return. For example:
+
+``` pre
+AAAAAAAAAAAAAAAAAAAA|<Return returnVersion="2006v2.0"... 
+```
+
+Load the data into HAWQ.
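+
+A sketch of a possible target table (the table name is hypothetical, and the full return is stored as text here):
+
+``` sql
+=# CREATE TABLE irs_returns (
+     return_id  text,
+     return_xml text
+   )
+   DISTRIBUTED BY (return_id);
+```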
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-example-witsml-files-in-demo-directory.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-example-witsml-files-in-demo-directory.html.md.erb b/markdown/datamgmt/load/g-example-witsml-files-in-demo-directory.html.md.erb
new file mode 100644
index 0000000..0484523
--- /dev/null
+++ b/markdown/datamgmt/load/g-example-witsml-files-in-demo-directory.html.md.erb
@@ -0,0 +1,54 @@
+---
+title: Example using WITSML™ Files (In demo Directory)
+---
+
+This example demonstrates loading sample data describing an oil rig using a Joost STX transformation. The data is in the form of a complex XML file downloaded from energistics.org.
+
+The Wellsite Information Transfer Standard Markup Language (WITSML™) is an oil industry initiative to provide open, non-proprietary, standard interfaces for technology and software to share information among oil companies, service companies, drilling contractors, application vendors, and regulatory agencies. For more information about WITSML™, see [http://www.witsml.org](http://www.witsml.org).
+
+The oil rig information consists of a top level `<rigs>` element with multiple child elements such as `<documentInfo>`, `<rig>`, and so on. The following excerpt from the file shows the type of information in the `<rig>` tag.
+
+``` xml
+<?xml version="1.0" encoding="UTF-8"?>
+<?xml-stylesheet href="../stylesheets/rig.xsl" type="text/xsl" media="screen"?>
+<rigs 
+ xmlns="http://www.witsml.org/schemas/131" 
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
+ xsi:schemaLocation="http://www.witsml.org/schemas/131 ../obj_rig.xsd" 
+ version="1.3.1.1">
+ <documentInfo>
+ ... misc data ...
+ </documentInfo>
+ <rig uidWell="W-12" uidWellbore="B-01" uid="xr31">
+     <nameWell>6507/7-A-42</nameWell>
+     <nameWellbore>A-42</nameWellbore>
+     <name>Deep Drill #5</name>
+     <owner>Deep Drilling Co.</owner>
+     <typeRig>floater</typeRig>
+     <manufacturer>Fitsui Engineering</manufacturer>
+     <yearEntService>1980</yearEntService>
+     <classRig>ABS Class A1 M CSDU AMS ACCU</classRig>
+     <approvals>DNV</approvals>
+ ... more data ...
+```
+
+The goal is to import the information for this rig into HAWQ.
+
+The sample document, *rig.xml*, is about 11KB in size. The input does not contain tabs so the relevant information can be converted into records delimited with a pipe (|).
+
+`W-12|6507/7-A-42|xr31|Deep Drill #5|Deep Drilling Co.|John Doe|John.Doe@example.com|`
+
+With the columns:
+
+-   `well_uid text`, -- e.g. W-12
+-   `well_name text`, -- e.g. 6507/7-A-42
+-   `rig_uid text`, -- e.g. xr31
+-   `rig_name text`, -- e.g. Deep Drill \#5
+-   `rig_owner text`, -- e.g. Deep Drilling Co.
+-   `rig_contact text`, -- e.g. John Doe
+-   `rig_email text`, -- e.g. John.Doe@example.com
+-   `doc xml`
+
+Then, load the data into HAWQ.
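+
+A sketch of a target table that matches the columns above (the table name and distribution key are hypothetical):
+
+``` sql
+=# CREATE TABLE rigs (
+     well_uid    text,
+     well_name   text,
+     rig_uid     text,
+     rig_name    text,
+     rig_owner   text,
+     rig_contact text,
+     rig_email   text,
+     doc         xml
+   )
+   DISTRIBUTED BY (well_uid);
+```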
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-examples-read-fixed-width-data.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-examples-read-fixed-width-data.html.md.erb b/markdown/datamgmt/load/g-examples-read-fixed-width-data.html.md.erb
new file mode 100644
index 0000000..174529a
--- /dev/null
+++ b/markdown/datamgmt/load/g-examples-read-fixed-width-data.html.md.erb
@@ -0,0 +1,37 @@
+---
+title: Examples - Read Fixed-Width Data
+---
+
+The following examples show how to read fixed-width data.
+
+## Example 1 – Loading a table with PRESERVED\_BLANKS on
+
+``` sql
+CREATE READABLE EXTERNAL TABLE students (
+  name varchar(20), address varchar(30), age int)
+LOCATION ('gpfdist://host:port/file/path/')
+FORMAT 'CUSTOM' (formatter=fixedwidth_in, name=20, address=30, age=4,
+        preserve_blanks='on',null='NULL');
+```
+
+## Example 2 – Loading data with no line delimiter
+
+``` sql
+CREATE READABLE EXTERNAL TABLE students (
+  name varchar(20), address varchar(30), age int)
+LOCATION ('gpfdist://host:port/file/path/')
+FORMAT 'CUSTOM' (formatter=fixedwidth_in, name='20', address='30', age='4', 
+        line_delim='?@');
+```
+
+## Example 3 – Create a writable external table with a \\r\\n line delimiter
+
+``` sql
+CREATE WRITABLE EXTERNAL TABLE students_out (
+  name varchar(20), address varchar(30), age int)
+LOCATION ('gpfdist://host:port/file/path/filename')     
+FORMAT 'CUSTOM' (formatter=fixedwidth_out, 
+   name=20, address=30, age=4, line_delim=E'\r\n');
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-external-tables.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-external-tables.html.md.erb b/markdown/datamgmt/load/g-external-tables.html.md.erb
new file mode 100644
index 0000000..4142a07
--- /dev/null
+++ b/markdown/datamgmt/load/g-external-tables.html.md.erb
@@ -0,0 +1,44 @@
+---
+title: Accessing File-Based External Tables
+---
+
+External tables enable accessing external files as if they are regular database tables. They are often used to move data into and out of a HAWQ database.
+
+To create an external table definition, you specify the format of your input files and the location of your external data sources. For information about input file formats, see [Formatting Data Files](g-formatting-data-files.html#topic95).
+
+Use one of the following protocols to access external table data sources. You cannot mix protocols in `CREATE EXTERNAL TABLE` statements:
+
+-   `gpfdist://` points to a directory on the file host and serves external data files to all HAWQ segments in parallel. See [gpfdist Protocol](g-gpfdist-protocol.html#topic_sny_yph_kr).
+-   `gpfdists://` is the secure version of `gpfdist`. See [gpfdists Protocol](g-gpfdists-protocol.html#topic_sny_yph_kr).
+-   `pxf://` specifies data accessed through the HAWQ Extensions Framework (PXF). PXF is a service that uses plug-in Java classes to read and write data in external data sources. PXF includes plug-ins to access data in HDFS, HBase, and Hive. Custom plug-ins can be written to access other external data sources.
+
+External tables allow you to access external files from within the database as if they are regular database tables. Used with `gpfdist`, the HAWQ parallel file distribution program, or HAWQ Extensions Framework (PXF), external tables provide full parallelism by using the resources of all HAWQ segments to load or unload data.
+
+You can query external table data directly and in parallel using SQL commands such as `SELECT`, `JOIN`, or `SORT`, and you can create views for external tables.
+
+The steps for using external tables are:
+
+1.  Define the external table.
+2.  Start the gpfdist file server(s) if you plan to use the `gpfdist` or `gpfdists` protocols.
+3.  Place the data files in the correct locations.
+4.  Query the external table with SQL commands.
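+
+For example, a minimal sketch of steps 1 and 4 (the table, host, and file names are hypothetical, and a `gpfdist` instance is assumed to be serving the files):
+
+``` sql
+=# CREATE EXTERNAL TABLE ext_expenses (name text, date date, amount float4)
+   LOCATION ('gpfdist://etlhost-1:8081/*.txt')
+   FORMAT 'TEXT' (DELIMITER '|');
+
+=# SELECT name, sum(amount) FROM ext_expenses GROUP BY name;
+```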
+
+HAWQ provides readable and writable external tables:
+
+-   Readable external tables for data loading. Readable external tables support basic extraction, transformation, and loading (ETL) tasks common in data warehousing. HAWQ segment instances read external table data in parallel to optimize large load operations. You cannot modify readable external tables.
+-   Writable external tables for data unloading. Writable external tables support:
+
+    -   Selecting data from database tables to insert into the writable external table.
+    -   Sending data to an application as a stream of data. For example, unload data from HAWQ and send it to an application that connects to another database or ETL tool to load the data elsewhere.
+    -   Receiving output from HAWQ parallel MapReduce calculations.
+
+    Writable external tables allow only `INSERT` operations.
+
+External tables can be file-based or web-based.
+
+-   Regular (file-based) external tables access static flat files. Regular external tables are rescannable: the data is static while the query runs.
+-   Web (web-based) external tables access dynamic data sources, either on a web server with the `http://` protocol or by executing OS commands or scripts. Web external tables are not rescannable: the data can change while the query runs.
+
+Dump and restore operate only on external and web external table *definitions*, not on the data sources.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-formatting-columns.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-formatting-columns.html.md.erb b/markdown/datamgmt/load/g-formatting-columns.html.md.erb
new file mode 100644
index 0000000..b828212
--- /dev/null
+++ b/markdown/datamgmt/load/g-formatting-columns.html.md.erb
@@ -0,0 +1,19 @@
+---
+title: Formatting Columns
+---
+
+The default column or field delimiter is the horizontal `TAB` character (`0x09`) for text files and the comma character (`0x2C`) for CSV files. You can declare a single character delimiter using the `DELIMITER` clause of `COPY`, `CREATE EXTERNAL TABLE`, or the `hawq load` control file when you define your data format. The delimiter character must appear between any two data value fields. Do not place a delimiter at the beginning or end of a row. For example, if the pipe character ( | ) is your delimiter:
+
+``` pre
+data value 1|data value 2|data value 3
+```
+
+The following command shows the use of the pipe character as a column delimiter:
+
+``` sql
+=# CREATE EXTERNAL TABLE ext_table (name text, date date)
+LOCATION ('gpfdist://host:port/filename.txt')
+FORMAT 'TEXT' (DELIMITER '|');
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-formatting-data-files.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-formatting-data-files.html.md.erb b/markdown/datamgmt/load/g-formatting-data-files.html.md.erb
new file mode 100644
index 0000000..6c929ad
--- /dev/null
+++ b/markdown/datamgmt/load/g-formatting-data-files.html.md.erb
@@ -0,0 +1,17 @@
+---
+title: Formatting Data Files
+---
+
+When you use the HAWQ tools for loading and unloading data, you must specify how your data is formatted. `COPY`, `CREATE EXTERNAL TABLE`, and `hawq load` have clauses that allow you to specify how your data is formatted. Data can be delimited text (`TEXT`) or comma separated values (`CSV`) format. External data must be formatted correctly to be read by HAWQ. This topic explains the format of data files expected by HAWQ.
+
+-   **[Formatting Rows](../../datamgmt/load/g-formatting-rows.html)**
+
+-   **[Formatting Columns](../../datamgmt/load/g-formatting-columns.html)**
+
+-   **[Representing NULL Values](../../datamgmt/load/g-representing-null-values.html)**
+
+-   **[Escaping](../../datamgmt/load/g-escaping.html)**
+
+-   **[Character Encoding](../../datamgmt/load/g-character-encoding.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-formatting-rows.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-formatting-rows.html.md.erb b/markdown/datamgmt/load/g-formatting-rows.html.md.erb
new file mode 100644
index 0000000..ea9b416
--- /dev/null
+++ b/markdown/datamgmt/load/g-formatting-rows.html.md.erb
@@ -0,0 +1,7 @@
+---
+title: Formatting Rows
+---
+
+HAWQ expects rows of data to be separated by the `LF` character (Line feed, `0x0A`), `CR` (Carriage return, `0x0D`), or `CR` followed by `LF` (`CR+LF`, `0x0D 0x0A`). `LF` is the standard newline representation on UNIX or UNIX-like operating systems. Operating systems such as Windows or Mac OS X use `CR` or `CR+LF`. All of these representations of a newline are supported by HAWQ as a row delimiter. For more information, see [Importing and Exporting Fixed Width Data](g-importing-and-exporting-fixed-width-data.html#topic37).
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-gpfdist-protocol.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-gpfdist-protocol.html.md.erb b/markdown/datamgmt/load/g-gpfdist-protocol.html.md.erb
new file mode 100644
index 0000000..ee98609
--- /dev/null
+++ b/markdown/datamgmt/load/g-gpfdist-protocol.html.md.erb
@@ -0,0 +1,15 @@
+---
+title: gpfdist Protocol
+---
+
+The `gpfdist://` protocol is used in a URI to reference a running `gpfdist` instance. The `gpfdist` utility serves external data files from a directory on a file host to all HAWQ segments in parallel.
+
+`gpfdist` is located in the `$GPHOME/bin` directory on your HAWQ master host and on each segment host.
+
+Run `gpfdist` on the host where the external data files reside. `gpfdist` uncompresses `gzip` (`.gz`) and `bzip2` (`.bz2`) files automatically. You can use the wildcard character (\*) or other C-style pattern matching to denote multiple files to read. The files specified are assumed to be relative to the directory that you specified when you started the `gpfdist` instance.
+
+All virtual segments access the external file(s) in parallel, subject to the number of segments set in the `gp_external_max_segments` parameter, the length of the `gpfdist` location list, and the limits specified by the `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit` parameters. Use multiple `gpfdist` data sources in a `CREATE EXTERNAL TABLE` statement to scale the external table's scan performance. For more information about configuring `gpfdist`, see [Using the HAWQ File Server (gpfdist)](g-using-the-hawq-file-server--gpfdist-.html#topic13).
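+
+For example, a sketch (hypothetical hosts and file names) that lists two `gpfdist` data sources to scale scan performance:
+
+``` sql
+=# CREATE EXTERNAL TABLE ext_sales (txn_id int, amount numeric, region text)
+   LOCATION ('gpfdist://etlhost-1:8081/sales/*.txt',
+             'gpfdist://etlhost-2:8081/sales/*.txt')
+   FORMAT 'TEXT' (DELIMITER '|');
+```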
+
+See the `gpfdist` reference documentation for more information about using `gpfdist` with external tables.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-gpfdists-protocol.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-gpfdists-protocol.html.md.erb b/markdown/datamgmt/load/g-gpfdists-protocol.html.md.erb
new file mode 100644
index 0000000..2f5641d
--- /dev/null
+++ b/markdown/datamgmt/load/g-gpfdists-protocol.html.md.erb
@@ -0,0 +1,37 @@
+---
+title: gpfdists Protocol
+---
+
+The `gpfdists://` protocol is a secure version of the `gpfdist://` protocol. To use it, you run the `gpfdist` utility with the `--ssl` option. When specified in a URI, the `gpfdists://` protocol enables encrypted communication and secure identification of the file server and HAWQ to protect against attacks such as eavesdropping and man-in-the-middle attacks.
+
+`gpfdists` implements SSL security in a client/server scheme with the following attributes and limitations:
+
+-   Client certificates are required.
+-   Multilingual certificates are not supported.
+-   A Certificate Revocation List (CRL) is not supported.
+-   The `TLSv1` protocol is used with the `TLS_RSA_WITH_AES_128_CBC_SHA` encryption algorithm.
+-   SSL parameters cannot be changed.
+-   SSL renegotiation is supported.
+-   The SSL ignore host mismatch parameter is set to `false`.
+-   Private keys containing a passphrase are not supported for the `gpfdist` file server (server.key) or for HAWQ (client.key).
+-   Issuing certificates that are appropriate for the operating system in use is the user's responsibility. Generally, converting certificates as shown in [https://www.sslshopper.com/ssl-converter.html](https://www.sslshopper.com/ssl-converter.html) is supported.
+
+    **Note:** A server started with the `gpfdist --ssl` option can only communicate with the `gpfdists` protocol. A server that was started with `gpfdist` without the `--ssl` option can only communicate with the `gpfdist` protocol.
+
+Use one of the following methods to invoke the `gpfdists` protocol.
+
+-   Run `gpfdist` with the `--ssl` option and then use the `gpfdists` protocol in the `LOCATION` clause of a `CREATE EXTERNAL TABLE` statement.
+-   Use a `hawq load` YAML control file with the `SSL` option set to true.
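+
+For example, a sketch of the first method (hypothetical host and file names, with the `gpfdist` instance assumed to have been started with `--ssl`):
+
+``` sql
+=# CREATE EXTERNAL TABLE ext_secure_expenses (name text, date date, amount float4)
+   LOCATION ('gpfdists://etlhost-1:8081/expenses/*.txt')
+   FORMAT 'TEXT' (DELIMITER '|');
+```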
+
+Using `gpfdists` requires that the following client certificates reside in the `$PGDATA/gpfdists` directory on each segment.
+
+-   The client certificate file, `client.crt`
+-   The client private key file, `client.key`
+-   The trusted certificate authorities, `root.crt`
+
+For an example of loading data securely into an external table, see [Example 3 - Multiple gpfdists instances](creating-external-tables-examples.html#topic47).
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-handling-errors-ext-table-data.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-handling-errors-ext-table-data.html.md.erb b/markdown/datamgmt/load/g-handling-errors-ext-table-data.html.md.erb
new file mode 100644
index 0000000..2b8dc78
--- /dev/null
+++ b/markdown/datamgmt/load/g-handling-errors-ext-table-data.html.md.erb
@@ -0,0 +1,9 @@
+---
+title: Handling Errors in External Table Data
+---
+
+By default, if external table data contains an error, the command fails and no data loads into the target database table. Define the external table with single row error handling to enable loading correctly formatted rows and to isolate data errors in external table data. See [Handling Load Errors](g-handling-load-errors.html#topic55).
+
+The `gpfdist` file server uses the `HTTP` protocol. External table queries that use `LIMIT` end the connection after retrieving the rows, causing an HTTP socket error. If you use `LIMIT` in queries of external tables that use the `gpfdist://` or `http://` protocols, ignore these errors; data is returned to the database as expected.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-handling-load-errors.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-handling-load-errors.html.md.erb b/markdown/datamgmt/load/g-handling-load-errors.html.md.erb
new file mode 100644
index 0000000..6faf7a5
--- /dev/null
+++ b/markdown/datamgmt/load/g-handling-load-errors.html.md.erb
@@ -0,0 +1,28 @@
+---
+title: Handling Load Errors
+---
+
+Readable external tables are most commonly used to select data to load into regular database tables. You use the `CREATE TABLE AS SELECT` or `INSERT INTO` commands to query the external table data. By default, if the data contains an error, the entire command fails and the data is not loaded into the target database table.
+
+The `SEGMENT REJECT LIMIT` clause allows you to isolate format errors in external table data and to continue loading correctly formatted rows. Use `SEGMENT REJECT LIMIT` to set an error threshold, specifying the reject limit `count` as a number of `ROWS` (the default) or as a `PERCENT` of total rows (1-100).
+
+If the number of error rows reaches the `SEGMENT REJECT LIMIT`, the entire external table operation is aborted and no rows are processed. The limit of error rows is per-segment, not per operation. If the number of error rows does not reach the `SEGMENT REJECT LIMIT`, the operation processes all good rows and discards, and optionally logs, formatting errors for erroneous rows.
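+
+For example, a sketch (hypothetical table, host, and file names) that isolates format errors and aborts the operation only if bad rows on any one segment reach 10:
+
+``` sql
+=# CREATE EXTERNAL TABLE ext_expenses (name text, date date, amount float4)
+   LOCATION ('gpfdist://etlhost-1:8081/*.txt')
+   FORMAT 'TEXT' (DELIMITER '|')
+   SEGMENT REJECT LIMIT 10 ROWS;
+```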
+
+The `LOG ERRORS` clause allows you to keep error rows for further examination. For information about the `LOG ERRORS` clause, see the `CREATE EXTERNAL TABLE` command.
+
+When you set `SEGMENT REJECT LIMIT`, HAWQ scans the external data in single row error isolation mode. Single row error isolation mode applies to external data rows with format errors such as extra or missing attributes, attributes of a wrong data type, or invalid client encoding sequences. HAWQ does not check constraint errors, but you can filter constraint errors by limiting the `SELECT` from an external table at runtime. For example, to eliminate duplicate key errors:
+
+``` sql
+=# INSERT INTO table_with_pkeys 
+SELECT DISTINCT * FROM external_table;
+```
+
+-   **[Define an External Table with Single Row Error Isolation](../../datamgmt/load/g-define-an-external-table-with-single-row-error-isolation.html)**
+
+-   **[Capture Row Formatting Errors and Declare a Reject Limit](../../datamgmt/load/g-create-an-error-table-and-declare-a-reject-limit.html)**
+
+-   **[Identifying Invalid CSV Files in Error Table Data](../../datamgmt/load/g-identifying-invalid-csv-files-in-error-table-data.html)**
+
+-   **[Moving Data between Tables](../../datamgmt/load/g-moving-data-between-tables.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-identifying-invalid-csv-files-in-error-table-data.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-identifying-invalid-csv-files-in-error-table-data.html.md.erb b/markdown/datamgmt/load/g-identifying-invalid-csv-files-in-error-table-data.html.md.erb
new file mode 100644
index 0000000..534d530
--- /dev/null
+++ b/markdown/datamgmt/load/g-identifying-invalid-csv-files-in-error-table-data.html.md.erb
@@ -0,0 +1,7 @@
+---
+title: Identifying Invalid CSV Files in Error Table Data
+---
+
+If a CSV file contains invalid formatting, the *rawdata* field in the error table can contain several combined rows. For example, if a closing quote for a specific field is missing, all the following newlines are treated as embedded newlines. When this happens, HAWQ stops parsing a row when it reaches 64K, puts that 64K of data into the error table as a single row, resets the quote flag, and continues. If this happens three times during load processing, the load file is considered invalid and the entire load fails with the message `rejected N or more rows`. See [Escaping in CSV Formatted Files](g-escaping-in-csv-formatted-files.html#topic101) for more information on the correct use of quotes in CSV files.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-importing-and-exporting-fixed-width-data.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-importing-and-exporting-fixed-width-data.html.md.erb b/markdown/datamgmt/load/g-importing-and-exporting-fixed-width-data.html.md.erb
new file mode 100644
index 0000000..f49cae0
--- /dev/null
+++ b/markdown/datamgmt/load/g-importing-and-exporting-fixed-width-data.html.md.erb
@@ -0,0 +1,38 @@
+---
+title: Importing and Exporting Fixed Width Data
+---
+
+Specify custom formats for fixed-width data with the HAWQ functions `fixedwidth_in` and `fixedwidth_out`. These functions already exist in the file `$GPHOME/share/postgresql/cdb_external_extensions.sql`. The following example declares a custom format, then calls the `fixedwidth_in` function to format the data.
+
+``` sql
+CREATE READABLE EXTERNAL TABLE students (
+  name varchar(20), address varchar(30), age int)
+LOCATION ('gpfdist://mdw:8081/students.txt')
+FORMAT 'CUSTOM' (formatter=fixedwidth_in, name='20', address='30', age='4');
+```
+
+The following options specify how to import fixed width data.
+
+-   Read all the data.
+
+    To load all the fields on a line of fixed-width data, you must load them in their physical order. You must specify the field length, but cannot specify a starting and ending position. The field names in the fixed width arguments must match the order in the field list at the beginning of the `CREATE TABLE` command.
+
+-   Set options for blank and null characters.
+
+    Trailing blanks are trimmed by default. To keep trailing blanks, use the `preserve_blanks=on` option. You can reset the trailing blanks option to the default with the `preserve_blanks=off` option.
+
+    Use the `null='null_string_value'` option to specify a value for null characters.
+
+-   If you specify `preserve_blanks=on`, you must also define a value for null characters.
+-   If you specify `preserve_blanks=off`, null is not defined, and the field contains only blanks, HAWQ writes a null to the table. If null is defined, HAWQ writes an empty string to the table.
+
+    Use the `line_delim='line_ending'` parameter to specify the line ending character. The following examples cover most cases. The `E` specifies an escape string constant.
+
+    ``` pre
+    line_delim=E'\n'
+    line_delim=E'\r'
+    line_delim=E'\r\n'
+    line_delim='abc'
+    ```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-installing-gpfdist.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-installing-gpfdist.html.md.erb b/markdown/datamgmt/load/g-installing-gpfdist.html.md.erb
new file mode 100644
index 0000000..85549df
--- /dev/null
+++ b/markdown/datamgmt/load/g-installing-gpfdist.html.md.erb
@@ -0,0 +1,7 @@
+---
+title: Installing gpfdist
+---
+
+You may choose to run `gpfdist` from a machine other than the HAWQ master, such as on a machine devoted to ETL processing. To install `gpfdist` on your ETL server, refer to [Client-Based HAWQ Load Tools](client-loadtools.html) for information related to Linux and Windows load tools installation and configuration.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-load-the-data.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-load-the-data.html.md.erb b/markdown/datamgmt/load/g-load-the-data.html.md.erb
new file mode 100644
index 0000000..4c88c9f
--- /dev/null
+++ b/markdown/datamgmt/load/g-load-the-data.html.md.erb
@@ -0,0 +1,17 @@
+---
+title: Load the Data
+---
+
+Create the tables with SQL statements based on the appropriate schema.
+
+There are no special requirements for the HAWQ tables that hold loaded data. In the prices example, the following command creates the appropriate table.
+
+``` sql
+CREATE TABLE prices (
+  itemnumber integer,       
+  price       decimal        
+) 
+DISTRIBUTED BY (itemnumber);
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-loading-and-unloading-data.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-loading-and-unloading-data.html.md.erb b/markdown/datamgmt/load/g-loading-and-unloading-data.html.md.erb
new file mode 100644
index 0000000..8ea43d5
--- /dev/null
+++ b/markdown/datamgmt/load/g-loading-and-unloading-data.html.md.erb
@@ -0,0 +1,55 @@
+---
+title: Loading and Unloading Data
+---
+
+The topics in this section describe methods for loading and writing data into and out of HAWQ, and how to format data files. It also covers registering HDFS files and folders directly into HAWQ internal tables.
+
+HAWQ supports high-performance parallel data loading and unloading, and for smaller amounts of data, single file, non-parallel data import and export.
+
+HAWQ can read from and write to several types of external data sources, including text files, Hadoop file systems, and web servers.
+
+-   The `COPY` SQL command transfers data between an external text file on the master host and a HAWQ database table.
+-   External tables allow you to query data outside of the database directly and in parallel using SQL commands such as `SELECT`, `JOIN`, or `SORT`, and you can create views for external tables. External tables are often used to load external data into a regular database table using a command such as `CREATE TABLE table AS SELECT * FROM ext_table`.
+-   External web tables provide access to dynamic data. They can be backed with data from URLs accessed using the HTTP protocol or by the output of an OS script running on one or more segments.
+-   The `gpfdist` utility is the HAWQ parallel file distribution program. It is an HTTP server that is used with external tables to allow HAWQ segments to load external data in parallel, from multiple file systems. You can run multiple instances of `gpfdist` on different hosts and network interfaces and access them in parallel.
+-   The `hawq load` utility automates the steps of a load task using a YAML-formatted control file.
+
+The method you choose to load data depends on the characteristics of the source data: its location, size, format, and any transformations required.
+
+In the simplest case, the `COPY` SQL command loads data into a table from a text file that is accessible to the HAWQ master instance. This requires no setup and provides good performance for smaller amounts of data. With the `COPY` command, the data copied into or out of the database passes between a single file on the master host and the database. This limits the total size of the dataset to the capacity of the file system where the external file resides and limits the data transfer to a single file write stream.
+
+More efficient data loading options for large datasets take advantage of the HAWQ MPP architecture, using the HAWQ segments to load data in parallel. These methods allow data to load simultaneously from multiple file systems, through multiple NICs, on multiple hosts, achieving very high data transfer rates. External tables allow you to access external files from within the database as if they are regular database tables. When used with `gpfdist`, the HAWQ parallel file distribution program, external tables provide full parallelism by using the resources of all HAWQ segments to load or unload data.
+
+HAWQ leverages the parallel architecture of the Hadoop Distributed File System to access files on that system.
+
+-   **[Working with File-Based External Tables](../../datamgmt/load/g-working-with-file-based-ext-tables.html)**
+
+-   **[Using the HAWQ File Server (gpfdist)](../../datamgmt/load/g-using-the-hawq-file-server--gpfdist-.html)**
+
+-   **[Creating and Using Web External Tables](../../datamgmt/load/g-creating-and-using-web-external-tables.html)**
+
+-   **[Loading Data Using an External Table](../../datamgmt/load/g-loading-data-using-an-external-table.html)**
+
+-   **[Registering Files into HAWQ Internal Tables](../../datamgmt/load/g-register_files.html)**
+
+-   **[Loading and Writing Non-HDFS Custom Data](../../datamgmt/load/g-loading-and-writing-non-hdfs-custom-data.html)**
+
+-   **[Creating External Tables - Examples](../../datamgmt/load/creating-external-tables-examples.html#topic44)**
+
+-   **[Handling Load Errors](../../datamgmt/load/g-handling-load-errors.html)**
+
+-   **[Loading Data with hawq load](../../datamgmt/load/g-loading-data-with-hawqload.html)**
+
+-   **[Loading Data with COPY](../../datamgmt/load/g-loading-data-with-copy.html)**
+
+-   **[Running COPY in Single Row Error Isolation Mode](../../datamgmt/load/g-running-copy-in-single-row-error-isolation-mode.html)**
+
+-   **[Optimizing Data Load and Query Performance](../../datamgmt/load/g-optimizing-data-load-and-query-performance.html)**
+
+-   **[Unloading Data from HAWQ](../../datamgmt/load/g-unloading-data-from-hawq-database.html)**
+
+-   **[Transforming XML Data](../../datamgmt/load/g-transforming-xml-data.html)**
+
+-   **[Formatting Data Files](../../datamgmt/load/g-formatting-data-files.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-loading-and-writing-non-hdfs-custom-data.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-loading-and-writing-non-hdfs-custom-data.html.md.erb b/markdown/datamgmt/load/g-loading-and-writing-non-hdfs-custom-data.html.md.erb
new file mode 100644
index 0000000..e826963
--- /dev/null
+++ b/markdown/datamgmt/load/g-loading-and-writing-non-hdfs-custom-data.html.md.erb
@@ -0,0 +1,9 @@
+---
+title: Loading and Writing Non-HDFS Custom Data
+---
+
+HAWQ supports `TEXT` and `CSV` formats for importing and exporting data. You can load and write the data in other formats by defining and using a custom format or custom protocol.
+
+-   **[Using a Custom Format](../../datamgmt/load/g-using-a-custom-format.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-loading-data-using-an-external-table.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-loading-data-using-an-external-table.html.md.erb b/markdown/datamgmt/load/g-loading-data-using-an-external-table.html.md.erb
new file mode 100644
index 0000000..32a741a
--- /dev/null
+++ b/markdown/datamgmt/load/g-loading-data-using-an-external-table.html.md.erb
@@ -0,0 +1,18 @@
+---
+title: Loading Data Using an External Table
+---
+
+Use SQL commands such as `INSERT` and `SELECT` to query a readable external table, the same way that you query a regular database table. For example, to load travel expense data from an external table, `ext_expenses`, into a database table, `expenses_travel`:
+
+``` sql
+=# INSERT INTO expenses_travel 
+SELECT * FROM ext_expenses WHERE category='travel';
+```
+
+To load all data into a new database table:
+
+``` sql
+=# CREATE TABLE expenses AS SELECT * FROM ext_expenses;
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-loading-data-with-copy.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-loading-data-with-copy.html.md.erb b/markdown/datamgmt/load/g-loading-data-with-copy.html.md.erb
new file mode 100644
index 0000000..72e5ac6
--- /dev/null
+++ b/markdown/datamgmt/load/g-loading-data-with-copy.html.md.erb
@@ -0,0 +1,11 @@
+---
+title: Loading Data with COPY
+---
+
+`COPY FROM` copies data from a file or standard input into a table and appends the data to the table contents. `COPY` is non-parallel: data is loaded in a single process using the HAWQ master instance. Using `COPY` is only recommended for very small data files.
+
+The `COPY` source file must be accessible to the master host. Specify the `COPY` source file name relative to the master host location.
+
+HAWQ copies data from `STDIN` or to `STDOUT` using the connection between the client and the master server.
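+
+For example, a minimal sketch (hypothetical table and file path), run from a `psql` session connected to the master:
+
+``` sql
+=# COPY expenses FROM '/data/expenses.txt' WITH DELIMITER '|';
+```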
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-loading-data-with-hawqload.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-loading-data-with-hawqload.html.md.erb b/markdown/datamgmt/load/g-loading-data-with-hawqload.html.md.erb
new file mode 100644
index 0000000..68e4459
--- /dev/null
+++ b/markdown/datamgmt/load/g-loading-data-with-hawqload.html.md.erb
@@ -0,0 +1,56 @@
+---
+title: Loading Data with hawq load
+---
+
+The HAWQ `hawq load` utility loads data using readable external tables and the HAWQ parallel file server (`gpfdist` or `gpfdists`). It handles parallel file-based external table setup and allows users to configure their data format, external table definition, and `gpfdist` or `gpfdists` setup in a single configuration file.
+
+## <a id="topic62__du168147"></a>To use hawq load
+
+1.  Ensure that your environment is set up to run `hawq load`. Some dependent files from your HAWQ installation are required, such as `gpfdist` and Python, as well as network access to the HAWQ segment hosts.
+2.  Create your load control file. This is a YAML-formatted file that specifies the HAWQ connection information, `gpfdist` configuration information, external table options, and data format.
+
+    For example:
+
+    ``` pre
+    ---
+    VERSION: 1.0.0.1
+    DATABASE: ops
+    USER: gpadmin
+    HOST: mdw-1
+    PORT: 5432
+    GPLOAD:
+       INPUT:
+        - SOURCE:
+             LOCAL_HOSTNAME:
+               - etl1-1
+               - etl1-2
+               - etl1-3
+               - etl1-4
+             PORT: 8081
+             FILE: 
+               - /var/load/data/*
+        - COLUMNS:
+               - name: text
+               - amount: float4
+               - category: text
+               - description: text
+               - date: date
+        - FORMAT: text
+        - DELIMITER: '|'
+        - ERROR_LIMIT: 25
+        - ERROR_TABLE: payables.err_expenses
+       OUTPUT:
+        - TABLE: payables.expenses
+        - MODE: INSERT
+    SQL:
+       - BEFORE: "INSERT INTO audit VALUES('start', current_timestamp)"
+       - AFTER: "INSERT INTO audit VALUES('end', current_timestamp)"
+    ```
+
+3.  Run `hawq load`, passing in the load control file. For example:
+
+    ``` shell
+    $ hawq load -f my_load.yml
+    ```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-moving-data-between-tables.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-moving-data-between-tables.html.md.erb b/markdown/datamgmt/load/g-moving-data-between-tables.html.md.erb
new file mode 100644
index 0000000..2603ae4
--- /dev/null
+++ b/markdown/datamgmt/load/g-moving-data-between-tables.html.md.erb
@@ -0,0 +1,12 @@
+---
+title: Moving Data between Tables
+---
+
+You can use `CREATE TABLE AS` or `INSERT...SELECT` to load external and web external table data into another (non-external) database table, and the data will be loaded in parallel according to the external or web external table definition.
+
+If an external table file or web external table data source has an error, one of the following will happen, depending on the isolation mode used:
+
+-   **Tables without error isolation mode**: any operation that reads from that table fails. Loading from external and web external tables without error isolation mode is an all or nothing operation.
+-   **Tables with error isolation mode**: the entire file will be loaded, except for the problematic rows (subject to the configured REJECT\_LIMIT).
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-optimizing-data-load-and-query-performance.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-optimizing-data-load-and-query-performance.html.md.erb b/markdown/datamgmt/load/g-optimizing-data-load-and-query-performance.html.md.erb
new file mode 100644
index 0000000..ff1c230
--- /dev/null
+++ b/markdown/datamgmt/load/g-optimizing-data-load-and-query-performance.html.md.erb
@@ -0,0 +1,10 @@
+---
+title: Optimizing Data Load and Query Performance
+---
+
+Use the following tip to help optimize your data load and subsequent query performance.
+
+-   Run `ANALYZE` after loading data. If you significantly altered the data in a table, run `ANALYZE` or `VACUUM ANALYZE` (system catalog tables only) to update table statistics for the query optimizer. Current statistics ensure that the optimizer makes the best decisions during query planning and avoids poor performance due to inaccurate or nonexistent statistics.
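+
+    For example (the table name is hypothetical):
+
+    ``` sql
+    =# ANALYZE expenses;
+    ```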
+
+
+


[07/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/svg/hawq_resource_queues.svg
----------------------------------------------------------------------
diff --git a/mdimages/svg/hawq_resource_queues.svg b/mdimages/svg/hawq_resource_queues.svg
deleted file mode 100644
index 4fdf655..0000000
--- a/mdimages/svg/hawq_resource_queues.svg
+++ /dev/null
@@ -1,340 +0,0 @@
-[SVG markup omitted: Inkscape-generated drawing of the HAWQ resource queues diagram, 1033 x 550 px]
 54687,-0.29688 -0.90625,-0.75 l 0,4.04687 -1.40625,0 z m 1.26563,-7.29687 q 0,1.60937 0.64062,2.375 0.65625,0.76562 1.57813,0.76562 0.9375,0 1.60937,-0.79687 0.67188,-0.79688 0.67188,-2.45313 0,-1.59375 -0.65625,-2.375 -0.65625,-0.79687 -1.5625,-0.79687 -0.89063,0 -1.59375,0.84375 -0.6875,0.84375 -0.6875,2.4375 z m 7.10156,-0.0469 q 0,-2.29687 1.28125,-3.40625 1.07813,-0.92187 2.60938,-0.92187 1.71875,0 2.79687,1.125 1.09375,1.10937 1.09375,3.09375 0,1.59375 -0.48437,2.51562 -0.48438,0.92188 -1.40625,1.4375 -0.90625,0.5 -2,0.5 -1.73438,0 -2.8125,-1.10937 -1.07813,-1.125 -1.07813,-3.23438 z m 1.45313,0 q 0,1.59375 0.6875,2.39063 0.70312,0.79687 1.75,0.79687 1.04687,0 1.73437,-0.79687 0.70313,-0.79688 0.70313,-2.4375 0,-1.53125 -0.70313,-2.32813 -0.6875,-0.79687 -1.73437,-0.79687 -1.04688,0 -1.75,0.79687 -0.6875,0.78125 -0.6875,2.375 z m 7.96093,4.15625 0,-8.29687 1.26563,0 0,1.25 q 0.48437,-0.875 0.89062,-1.15625 0.40625,-0.28125 0.90625,-0.28125 0.70313,0 1.4375,0.45312 l -0.48437,1
 .29688 q -0.51563,-0.29688 -1.03125,-0.29688 -0.45313,0 -0.82813,0.28125 -0.35937,0.26563 -0.51562,0.76563 -0.23438,0.75 -0.23438,1.64062 l 0,4.34375 -1.40625,0 z m 8.40625,-1.26562 0.20313,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76563,0 -1.1875,-0.23438 -0.42188,-0.25 -0.59375,-0.64062 -0.17188,-0.40625 -0.17188,-1.67188 l 0,-4.76562 -1.03125,0 0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.60937 0.0625,0.78125 0.0781,0.17187 0.25,0.28125 0.17188,0.0937 0.48438,0.0937 0.23437,0 0.60937,-0.0625 z"
-       id="path101"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:nonzero" />
-    <path
-       d="M 613.6657,134.74803 959.61846,249.51968"
-       id="path103"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
-    <path
-       d="M 613.6657,134.74803 959.61846,249.51968"
-       id="path105"
-       inkscape:connector-curvature="0"
-       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 96.80315,249.52692 125.60629,0 0,76.56692 -125.60629,0 z"
-       id="path107"
-       inkscape:connector-curvature="0"
-       style="fill:#cccccc;fill-rule:nonzero" />
-    <path
-       d="m 96.80315,249.52692 125.60629,0 0,76.56692 -125.60629,0 z"
-       id="path109"
-       inkscape:connector-curvature="0"
-       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-    <path
-       d="m 123.3016,296.85788 0,-11.48437 1.28125,0 0,1.07812 q 0.45313,-0.64062 1.01563,-0.95312 0.57812,-0.3125 1.39062,-0.3125 1.0625,0 1.875,0.54687 0.8125,0.54688 1.21875,1.54688 0.42188,0.98437 0.42188,2.17187 0,1.28125 -0.46875,2.29688 -0.45313,1.01562 -1.32813,1.5625 -0.85937,0.54687 -1.82812,0.54687 -0.70313,0 -1.26563,-0.29687 -0.54687,-0.29688 -0.90625,-0.75 l 0,4.04687 -1.40625,0 z m 1.26563,-7.29687 q 0,1.60937 0.64062,2.375 0.65625,0.76562 1.57813,0.76562 0.9375,0 1.60937,-0.79687 0.67188,-0.79688 0.67188,-2.45313 0,-1.59375 -0.65625,-2.375 -0.65625,-0.79687 -1.5625,-0.79687 -0.89063,0 -1.59375,0.84375 -0.6875,0.84375 -0.6875,2.4375 z m 7.36719,4.79687 1.375,0.20313 q 0.0781,0.64062 0.46875,0.92187 0.53125,0.39063 1.4375,0.39063 0.96875,0 1.5,-0.39063 0.53125,-0.39062 0.71875,-1.09375 0.10937,-0.42187 0.10937,-1.8125 -0.92187,1.09375 -2.29687,1.09375 -1.71875,0 -2.65625,-1.23437 -0.9375,-1.23438 -0.9375,-2.96875 0,-1.1875 0.42187,-2.1875 0.4375,-1 1.25,-1.54688 0.8281
 3,-0.54687 1.92188,-0.54687 1.46875,0 2.42187,1.1875 l 0,-1 1.29688,0 0,7.17187 q 0,1.9375 -0.39063,2.75 -0.39062,0.8125 -1.25,1.28125 -0.85937,0.46875 -2.10937,0.46875 -1.48438,0 -2.40625,-0.67187 -0.90625,-0.67188 -0.875,-2.01563 z m 1.17187,-4.98437 q 0,1.625 0.64063,2.375 0.65625,0.75 1.625,0.75 0.96875,0 1.625,-0.73438 0.65625,-0.75 0.65625,-2.34375 0,-1.53125 -0.67188,-2.29687 -0.67187,-0.78125 -1.625,-0.78125 -0.9375,0 -1.59375,0.76562 -0.65625,0.76563 -0.65625,2.26563 z m 6.67969,7.48437 0,-1.01562 9.32812,0 0,1.01562 -9.32812,0 z m 15.58594,-3.1875 0,-1.04687 q -0.78125,1.23437 -2.3125,1.23437 -1,0 -1.82813,-0.54687 -0.82812,-0.54688 -1.29687,-1.53125 -0.45313,-0.98438 -0.45313,-2.25 0,-1.25 0.40625,-2.25 0.42188,-1.01563 1.25,-1.54688 0.82813,-0.54687 1.85938,-0.54687 0.75,0 1.32812,0.3125 0.59375,0.3125 0.95313,0.82812 l 0,-4.10937 1.40625,0 0,11.45312 -1.3125,0 z m -4.4375,-4.14062 q 0,1.59375 0.67187,2.39062 0.67188,0.78125 1.57813,0.78125 0.92187,0 1.5625,-0.75 0.65625
 ,-0.76562 0.65625,-2.3125 0,-1.70312 -0.65625,-2.5 -0.65625,-0.79687 -1.625,-0.79687 -0.9375,0 -1.5625,0.76562 -0.625,0.76563 -0.625,2.42188 z m 13.63281,1.46875 1.45312,0.17187 q -0.34375,1.28125 -1.28125,1.98438 -0.92187,0.70312 -2.35937,0.70312 -1.82813,0 -2.89063,-1.125 -1.0625,-1.125 -1.0625,-3.14062 0,-2.09375 1.07813,-3.25 1.07812,-1.15625 2.79687,-1.15625 1.65625,0 2.70313,1.14062 1.0625,1.125 1.0625,3.17188 0,0.125 0,0.375 l -6.1875,0 q 0.0781,1.375 0.76562,2.10937 0.70313,0.71875 1.73438,0.71875 0.78125,0 1.32812,-0.40625 0.54688,-0.40625 0.85938,-1.29687 z m -4.60938,-2.28125 4.625,0 q -0.0937,-1.04688 -0.53125,-1.5625 -0.67187,-0.8125 -1.73437,-0.8125 -0.96875,0 -1.64063,0.65625 -0.65625,0.64062 -0.71875,1.71875 z m 8.16407,4.95312 0,-7.20312 -1.23438,0 0,-1.09375 1.23438,0 0,-0.89063 q 0,-0.82812 0.15625,-1.23437 0.20312,-0.54688 0.70312,-0.89063 0.51563,-0.34375 1.4375,-0.34375 0.59375,0 1.3125,0.14063 l -0.20312,1.23437 q -0.4375,-0.0781 -0.82813,-0.0781 -0.64062,0 -0
 .90625,0.28125 -0.26562,0.26562 -0.26562,1.01562 l 0,0.76563 1.60937,0 0,1.09375 -1.60937,0 0,7.20312 -1.40625,0 z m 9.52343,-1.03125 q -0.78125,0.67188 -1.5,0.95313 -0.71875,0.26562 -1.54687,0.26562 -1.375,0 -2.10938,-0.67187 -0.73437,-0.67188 -0.73437,-1.70313 0,-0.60937 0.28125,-1.10937 0.28125,-0.51563 0.71875,-0.8125 0.45312,-0.3125 1.01562,-0.46875 0.42188,-0.10938 1.25,-0.20313 1.70313,-0.20312 2.51563,-0.48437 0,-0.29688 0,-0.375 0,-0.85938 -0.39063,-1.20313 -0.54687,-0.48437 -1.60937,-0.48437 -0.98438,0 -1.46875,0.35937 -0.46875,0.34375 -0.6875,1.21875 l -1.375,-0.1875 q 0.1875,-0.875 0.60937,-1.42187 0.4375,-0.54688 1.25,-0.82813 0.8125,-0.29687 1.875,-0.29687 1.0625,0 1.71875,0.25 0.67188,0.25 0.98438,0.625 0.3125,0.375 0.4375,0.95312 0.0781,0.35938 0.0781,1.29688 l 0,1.875 q 0,1.96875 0.0781,2.48437 0.0937,0.51563 0.35937,1 l -1.46875,0 q -0.21875,-0.4375 -0.28125,-1.03125 z m -0.10937,-3.14062 q -0.76563,0.3125 -2.29688,0.53125 -0.875,0.125 -1.23437,0.28125 -0.35938,0.1
 5625 -0.5625,0.46875 -0.1875,0.29687 -0.1875,0.65625 0,0.5625 0.42187,0.9375 0.4375,0.375 1.25,0.375 0.8125,0 1.4375,-0.34375 0.64063,-0.35938 0.9375,-0.98438 0.23438,-0.46875 0.23438,-1.40625 l 0,-0.51562 z m 9.03906,4.17187 0,-1.21875 q -0.96875,1.40625 -2.64062,1.40625 -0.73438,0 -1.375,-0.28125 -0.625,-0.28125 -0.9375,-0.70312 -0.3125,-0.4375 -0.4375,-1.04688 -0.0781,-0.42187 -0.0781,-1.3125 l 0,-5.14062 1.40625,0 0,4.59375 q 0,1.10937 0.0781,1.48437 0.14062,0.5625 0.5625,0.875 0.4375,0.3125 1.0625,0.3125 0.64062,0 1.1875,-0.3125 0.5625,-0.32812 0.78125,-0.89062 0.23437,-0.5625 0.23437,-1.625 l 0,-4.4375 1.40625,0 0,8.29687 -1.25,0 z m 3.42969,0 0,-11.45312 1.40625,0 0,11.45312 -1.40625,0 z m 6.64844,-1.26562 0.20312,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76562,0 -1.1875,-0.23438 -0.42187,-0.25 -0.59375,-0.64062 -0.17187,-0.40625 -0.17187,-1.67188 l 0,-4.76562 -1.03125,0 0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.6093
 7 0.0625,0.78125 0.0781,0.17187 0.25,0.28125 0.17187,0.0937 0.48437,0.0937 0.23438,0 0.60938,-0.0625 z"
-       id="path111"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-rule:nonzero" />
-    <path
-       d="M 613.6657,134.74803 159.61846,249.51968"
-       id="path113"
-       inkscape:connector-curvature="0"
-       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
-    <path
-       d="M 613.6657,134.74803 159.61846,249.51968"
-       id="path115"
-       inkscape:connector-curvature="0"
-       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
-  </g>
-</svg>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/overview/ElasticSegments.html.md.erb
----------------------------------------------------------------------
diff --git a/overview/ElasticSegments.html.md.erb b/overview/ElasticSegments.html.md.erb
deleted file mode 100755
index 383eab5..0000000
--- a/overview/ElasticSegments.html.md.erb
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Elastic Query Execution Runtime
----
-
-HAWQ uses dynamically allocated virtual segments to provide resources for query execution.
-
-In HAWQ 1.x, the number of segments \(the compute resource carriers\) used to run a query is fixed, regardless of whether the underlying query is a big query requiring many resources or a small query requiring few. This architecture is simple; however, it uses resources inefficiently.
-
-To address this issue, HAWQ now uses the elastic query execution runtime feature, which is based on virtual segments. HAWQ allocates virtual segments on demand based on the costs of queries. In other words, for big queries, HAWQ starts a large number of virtual segments, while for small queries HAWQ starts fewer virtual segments.
-
-## Storage
-
-In HAWQ, the number of segments invoked varies based on the cost of the query. To simplify table data management, all data for one relation is saved under one HDFS folder.
-
-For both HAWQ table storage formats, AO \(Append-Only\) and Parquet, the data files are splittable, so HAWQ can assign multiple virtual segments to consume one data file concurrently and thereby increase the parallelism of a query.
-
-## Physical Segments and Virtual Segments
-
-In HAWQ, only one physical segment needs to be installed on each host, and multiple virtual segments can be started within it to run queries. HAWQ allocates multiple virtual segments distributed across different hosts on demand to run one query. Virtual segments are carriers \(containers\) for resources such as memory and CPU. Queries are executed by query executors in virtual segments.
-
-**Note:** In this documentation, when we refer to segment by itself, we mean a *physical segment*.
-
-## Virtual Segment Allocation Policy
-
-Different numbers of virtual segments are allocated based on the virtual segment allocation policy. The following factors determine the number of virtual segments that are used for a query:
-
-   Resources available at query runtime
-   The cost of the query
-   The distribution of the table; that is, whether it is randomly distributed or hash distributed
-   Whether the query involves UDFs and external tables
-   Specific server configuration parameters, such as `default_hash_table_bucket_number` for hash table queries and `hawq_rm_nvseg_perquery_limit`; see the example following this list
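-
-For example, you can inspect or change these server configuration parameters with the `hawq config` utility. The following is a minimal sketch; the value shown is illustrative, not a recommendation:
-
-```shell
-# Show the current settings that influence virtual segment allocation
-hawq config -s default_hash_table_bucket_number
-hawq config -s hawq_rm_nvseg_perquery_limit
-
-# Change a value cluster-wide (illustrative value), then restart
-hawq config -c default_hash_table_bucket_number -v 6
-hawq restart cluster
-```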

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/overview/HAWQArchitecture.html.md.erb
----------------------------------------------------------------------
diff --git a/overview/HAWQArchitecture.html.md.erb b/overview/HAWQArchitecture.html.md.erb
deleted file mode 100755
index d42d241..0000000
--- a/overview/HAWQArchitecture.html.md.erb
+++ /dev/null
@@ -1,69 +0,0 @@
----
-title: HAWQ Architecture
----
-
-This topic presents HAWQ architecture and its main components.
-
-In a typical HAWQ deployment, each slave node has one physical HAWQ segment, an HDFS DataNode and a NodeManager installed. Masters for HAWQ, HDFS and YARN are hosted on separate nodes.
-
-The following diagram provides a high-level architectural view of a typical HAWQ deployment.
-
-![](../mdimages/hawq_high_level_architecture.png)
-
-HAWQ is tightly integrated with YARN, the Hadoop resource management framework, for query resource management. HAWQ caches containers from YARN in a resource pool and then manages those resources locally by leveraging HAWQ's own finer-grained resource management for users and groups. To execute a query, HAWQ allocates a set of virtual segments according to the cost of a query, resource queue definitions, data locality and the current resource usage in the system. Then the query is dispatched to corresponding physical hosts, which can be a subset of nodes or the whole cluster. The HAWQ resource enforcer on each node monitors and controls the real time resources used by the query to avoid resource usage violations.
-
-The following diagram provides another view of the software components that constitute HAWQ.
-
-![](../mdimages/hawq_architecture_components.png)
-
-## <a id="hawqmaster"></a>HAWQ Master 
-
-The HAWQ *master* is the entry point to the system. It is the database process that accepts client connections and processes the SQL commands issued. The HAWQ master parses queries, optimizes queries, dispatches queries to segments and coordinates the query execution.
-
-End-users interact with HAWQ through the master and can connect to the database using client programs such as psql or application programming interfaces \(APIs\) such as JDBC or ODBC.
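-
-For example, a minimal `psql` connection to the HAWQ master might look like the following \(the host name, database, and role are placeholders; 5432 is the default master port\):
-
-```shell
-# Connect to the database through the HAWQ master; host, database, and role are placeholders
-psql -h hawq-master.example.com -p 5432 -d postgres -U gpadmin
-```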
-
-The master is where the `global system catalog` resides. The global system catalog is the set of system tables that contain metadata about the HAWQ system itself. The master does not contain any user data; data resides only on *HDFS*. The master authenticates client connections, processes incoming SQL commands, distributes workload among segments, coordinates the results returned by each segment, and presents the final results to the client program.
-
-## <a id="hawqsegment"></a>HAWQ Segment 
-
-In HAWQ, the *segments* are the units that process data simultaneously.
-
-There is only one *physical segment* on each host. Each segment can start many Query Executors \(QEs\) for each query slice. This makes a single segment act like multiple virtual segments, which enables HAWQ to better utilize all available resources.
-
-**Note:** In this documentation, when we refer to segment by itself, we mean a *physical segment*.
-
-A *virtual segment* behaves like a container for QEs. Each virtual segment has one QE for each slice of a query. The number of virtual segments used determines the degree of parallelism \(DOP\) of a query.
-
-A segment differs from a master because it:
-
--   Is stateless.
--   Does not store the metadata for each database and table.
--   Does not store data on the local file system.
-
-The master dispatches SQL requests to the segments along with the related metadata required to process them. The metadata contains the HDFS URL for the required table. The segment accesses the corresponding data using this URL.
-
-## <a id="hawqinterconnect"></a>HAWQ Interconnect 
-
-The *interconnect* is the networking layer of HAWQ. When a user connects to a database and issues a query, processes are created on each segment to handle the query. The *interconnect* refers to the inter-process communication between the segments, as well as the network infrastructure on which this communication relies. The interconnect uses standard Ethernet switching fabric.
-
-By default, the interconnect uses UDP \(User Datagram Protocol\) to send messages over the network. The HAWQ software performs additional packet verification beyond what is provided by UDP. This means the reliability is equivalent to Transmission Control Protocol \(TCP\), and the performance and scalability exceed that of TCP. If the interconnect used TCP, HAWQ would have a scalability limit of 1000 segment instances. With UDP as the current default protocol for the interconnect, this limit is not applicable.
-
-## <a id="topic_jjf_11m_g5"></a>HAWQ Resource Manager 
-
-The HAWQ resource manager obtains resources from YARN and responds to resource requests. Resources are buffered by the HAWQ resource manager to support low latency queries. The HAWQ resource manager can also run in standalone mode. In these deployments, HAWQ manages resources by itself without YARN.
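-
-The mode is selected with the `hawq_global_rm_type` server configuration parameter. The following minimal sketch shows checking and switching it \(treat the exact workflow as illustrative\):
-
-```shell
-hawq config -s hawq_global_rm_type          # current mode: 'yarn' or 'none'
-hawq config -c hawq_global_rm_type -v none  # standalone resource management, without YARN
-hawq restart cluster                        # restart so the change takes effect
-```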
-
-See [How HAWQ Manages Resources](../resourcemgmt/HAWQResourceManagement.html) for more details on HAWQ resource management.
-
-## <a id="topic_mrl_psq_f5"></a>HAWQ Catalog Service 
-
-The HAWQ catalog service stores all metadata, such as UDF/UDT information, relation information, security information and data file locations.
-
-## <a id="topic_dcs_rjm_g5"></a>HAWQ Fault Tolerance Service 
-
-The HAWQ fault tolerance service \(FTS\) is responsible for detecting segment failures and accepting heartbeats from segments.
-
-See [Understanding the Fault Tolerance Service](../admin/FaultTolerance.html) for more information on this service.
-
-## <a id="topic_jtc_nkm_g5"></a>HAWQ Dispatcher 
-
-The HAWQ dispatcher dispatches query plans to a selected subset of segments and coordinates the execution of the query. The dispatcher and the HAWQ resource manager are the main components responsible for the dynamic scheduling of queries and the resources required to execute them.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/overview/HAWQOverview.html.md.erb
----------------------------------------------------------------------
diff --git a/overview/HAWQOverview.html.md.erb b/overview/HAWQOverview.html.md.erb
deleted file mode 100755
index c41f3d9..0000000
--- a/overview/HAWQOverview.html.md.erb
+++ /dev/null
@@ -1,43 +0,0 @@
----
-title: What is HAWQ?
----
-
-HAWQ is a Hadoop native SQL query engine that combines the key technological advantages of an MPP database with the scalability and convenience of Hadoop. HAWQ reads data from and writes data to HDFS natively.
-
-HAWQ delivers industry-leading performance and linear scalability. It provides users the tools to confidently and successfully interact with petabyte-range data sets. HAWQ provides users with a complete, standards-compliant SQL interface. More specifically, HAWQ has the following features:
-
--   On-premise or cloud deployment
--   Robust ANSI SQL compliance: SQL-92, SQL-99, SQL-2003, OLAP extension
--   Extremely high performance: many times faster than other Hadoop SQL engines
--   World-class parallel optimizer
--   Full transaction capability and consistency guarantee: ACID
--   Dynamic data flow engine through high speed UDP based interconnect
--   Elastic execution engine based on on-demand virtual segments and data locality
--   Support for multi-level partitioning and List/Range partitioned tables.
--   Multiple compression method support: snappy, gzip
--   Multi-language user defined function support: Python, Perl, Java, C/C++, R
--   Advanced machine learning and data mining functionalities through MADLib
--   Dynamic node expansion: in seconds
--   Most advanced three-level resource management: integration with YARN and hierarchical resource queues.
--   Easy access to all HDFS data and external system data \(for example, HBase\)
--   Hadoop native: from storage \(HDFS\) and resource management \(YARN\) to deployment \(Ambari\).
--   Authentication & granular authorization: Kerberos, SSL and role-based access
--   Advanced C/C++ access library to HDFS and YARN: libhdfs3 and libYARN
--   Support for most third-party tools: Tableau, SAS, et al.
--   Standard connectivity: JDBC/ODBC
-
-HAWQ breaks complex queries into small tasks and distributes them to MPP query processing units for execution.
-
-HAWQ's basic unit of parallelism is the segment instance. Multiple segment instances on commodity servers work together to form a single parallel query processing system. A query submitted to HAWQ is optimized, broken into smaller components, and dispatched to segments that work together to deliver a single result set. All relational operations - such as table scans, joins, aggregations, and sorts - simultaneously execute in parallel across the segments. Data from upstream components in the dynamic pipeline are transmitted to downstream components through the scalable User Datagram Protocol \(UDP\) interconnect.
-
-Based on Hadoop's distributed storage, HAWQ has no single point of failure and supports fully automatic online recovery. System states are continuously monitored; therefore, if a segment fails, it is automatically removed from the cluster. During this process, the system continues serving customer queries, and the segments can be added back to the system when necessary.
-
-These topics provide more information about HAWQ and its main components:
-
-* <a class="subnav" href="./HAWQArchitecture.html">HAWQ Architecture</a>
-* <a class="subnav" href="./TableDistributionStorage.html">Table Distribution and Storage</a>
-* <a class="subnav" href="./ElasticSegments.html">Elastic Segments</a>
-* <a class="subnav" href="./ResourceManagement.html">Resource Management</a>
-* <a class="subnav" href="./HDFSCatalogCache.html">HDFS Catalog Cache</a>
-* <a class="subnav" href="./ManagementTools.html">Management Tools</a>
-* <a class="subnav" href="./RedundancyFailover.html">High Availability, Redundancy, and Fault Tolerance</a>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/overview/HDFSCatalogCache.html.md.erb
----------------------------------------------------------------------
diff --git a/overview/HDFSCatalogCache.html.md.erb b/overview/HDFSCatalogCache.html.md.erb
deleted file mode 100755
index 8803dc4..0000000
--- a/overview/HDFSCatalogCache.html.md.erb
+++ /dev/null
@@ -1,7 +0,0 @@
----
-title: HDFS Catalog Cache
----
-
-The HDFS catalog cache is a caching service used by the HAWQ master to determine the distribution information of table data on HDFS.
-
-HDFS is slow at RPC handling, especially when the number of concurrent requests is high. In order to decide which segments handle which part of the data, HAWQ needs data location information from the HDFS NameNodes. The HDFS catalog cache caches this data location information and accelerates HDFS RPCs.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/overview/ManagementTools.html.md.erb
----------------------------------------------------------------------
diff --git a/overview/ManagementTools.html.md.erb b/overview/ManagementTools.html.md.erb
deleted file mode 100755
index 0c7439d..0000000
--- a/overview/ManagementTools.html.md.erb
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: HAWQ Management Tools
----
-
-HAWQ management tools are consolidated into one `hawq` command.
-
-The `hawq` command can initialize, start, and stop the cluster as a whole or each segment separately, and it supports dynamic expansion of the cluster.
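-
-For example, typical cluster-level operations look like the following \(a minimal sketch; `cluster` can be replaced with `master` or `segment` to act on a single component\):
-
-```shell
-hawq init cluster     # initialize a new HAWQ cluster
-hawq start cluster    # start the master and all segments
-hawq stop cluster     # stop the master and all segments
-hawq state            # report the current state of the cluster
-```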
-
-See [HAWQ Management Tools Reference](../reference/cli/management_tools.html) for a list of all tools available in HAWQ.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/overview/RedundancyFailover.html.md.erb
----------------------------------------------------------------------
diff --git a/overview/RedundancyFailover.html.md.erb b/overview/RedundancyFailover.html.md.erb
deleted file mode 100755
index 90eec63..0000000
--- a/overview/RedundancyFailover.html.md.erb
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: High Availability, Redundancy and Fault Tolerance
----
-
-HAWQ ensures high availability for its clusters through system redundancy. HAWQ deployments utilize platform hardware redundancy, such as RAID for the master catalog, JBOD for segments and network redundancy for its interconnect layer. On the software level, HAWQ provides redundancy via master mirroring and dual cluster maintenance. In addition, HAWQ supports high availability NameNode configuration within HDFS.
-
-To maintain cluster health, HAWQ uses a fault tolerance service based on heartbeats and on-demand probe protocols. It can identify newly added nodes dynamically and remove nodes from the cluster when they become unusable.
-
-## <a id="abouthighavailability"></a>About High Availability 
-
-HAWQ employs several mechanisms for ensuring high availability. The foremost mechanisms are specific to HAWQ and include the following:
-
-* Master mirroring. Clusters have a standby master that can take over in the event of a failure of the primary master \(a setup sketch follows below\).
-* Dual clusters. Administrators can create a secondary cluster and synchronize its data with the primary cluster through either dual ETL or backup and restore mechanisms.
-
-In addition to high availability managed on the HAWQ level, you can enable high availability in HDFS for HAWQ by implementing the high availability feature for NameNodes. See [HAWQ Filespaces and High Availability Enabled HDFS](../admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html).
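-
-As a sketch, master mirroring is typically managed with the `hawq` utility; the exact options depend on your release, so treat the following as illustrative:
-
-```shell
-# With the standby master host already configured for the cluster:
-hawq init standby        # initialize the standby master
-hawq activate standby    # promote the standby if the primary master fails
-```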
-
-
-## <a id="aboutsegmentfailover"></a>About Segment Fault Tolerance 
-
-In HAWQ, the segments are stateless. This ensures faster recovery and better availability.
-
-When a segment fails, the segment is removed from the resource pool. Queries are no longer dispatched to the segment. When the segment is operational again, the Fault Tolerance Service verifies its state and adds the segment back to the resource pool.
-
-## <a id="aboutinterconnectredundancy"></a>About Interconnect Redundancy 
-
-The *interconnect* refers to the inter-process communication between the segments and the network infrastructure on which this communication relies. You can achieve a highly available interconnect by deploying dual Gigabit Ethernet switches on your network and deploying redundant Gigabit connections to the HAWQ host \(master and segment\) servers.
-
-In order to use multiple NICs in HAWQ, NIC bonding is required.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/overview/ResourceManagement.html.md.erb
----------------------------------------------------------------------
diff --git a/overview/ResourceManagement.html.md.erb b/overview/ResourceManagement.html.md.erb
deleted file mode 100755
index 8f7e2fd..0000000
--- a/overview/ResourceManagement.html.md.erb
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Resource Management
----
-
-HAWQ provides several approaches to resource management, with user-configurable options that include integration with YARN's resource management.
-
-HAWQ has the ability to manage resources by using the following mechanisms:
-
--   Global resource management. You can integrate HAWQ with the YARN resource manager to request or return resources as needed. If you do not integrate HAWQ with YARN, HAWQ exclusively consumes cluster resources and manages its own resources. If you integrate HAWQ with YARN, then HAWQ automatically fetches resources from YARN and manages those obtained resources through its internally defined resource queues. Resources are returned to YARN automatically when they are no longer in use.
--   User-defined hierarchical resource queues. HAWQ administrators or superusers design and define the resource queues used to organize the distribution of resources to queries \(see the example following this list\).
--   Dynamic resource allocation at query runtime. HAWQ dynamically allocates resources based on resource queue definitions. HAWQ automatically distributes resources based on running \(or queued\) queries and resource queue capacities.
--   Resource limitations on virtual segments and queries. You can configure HAWQ to enforce limits on CPU and memory usage both for virtual segments and the resource queues used by queries.
-
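-For example, resource queues are defined in SQL. The following minimal sketch creates a child queue of `pg_root`; the queue name and percentages are illustrative:
-
-```shell
-# Create a resource queue that can hold 20 active statements and 20% of cluster memory/cores
-psql -d postgres -c "CREATE RESOURCE QUEUE report_q WITH (PARENT='pg_root', ACTIVE_STATEMENTS=20, MEMORY_LIMIT_CLUSTER=20%, CORE_LIMIT_CLUSTER=20%);"
-```
-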
-For more details on resource management in HAWQ and how it works, see [Managing Resources](../resourcemgmt/HAWQResourceManagement.html).


[32/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/svg/hawq_resource_queues.svg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/svg/hawq_resource_queues.svg b/markdown/mdimages/svg/hawq_resource_queues.svg
new file mode 100644
index 0000000..4fdf655
--- /dev/null
+++ b/markdown/mdimages/svg/hawq_resource_queues.svg
@@ -0,0 +1,340 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<svg
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
+   xmlns:cc="http://creativecommons.org/ns#"
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   version="1.1"
+   viewBox="0 0 1033.1752 549.67151"
+   stroke-miterlimit="10"
+   id="svg2"
+   inkscape:version="0.91 r13725"
+   sodipodi:docname="hawq_resource_queue.svg"
+   width="1033.1752"
+   height="549.67151"
+   style="fill:none;stroke:none;stroke-linecap:square;stroke-miterlimit:10">
+  <metadata
+     id="metadata121">
+    <rdf:RDF>
+      <cc:Work
+         rdf:about="">
+        <dc:format>image/svg+xml</dc:format>
+        <dc:type
+           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+        <dc:title></dc:title>
+      </cc:Work>
+    </rdf:RDF>
+  </metadata>
+  <defs
+     id="defs119" />
+  <sodipodi:namedview
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1"
+     objecttolerance="10"
+     gridtolerance="10"
+     guidetolerance="10"
+     inkscape:pageopacity="0"
+     inkscape:pageshadow="2"
+     inkscape:window-width="1033"
+     inkscape:window-height="564"
+     id="namedview117"
+     showgrid="false"
+     fit-margin-top="0"
+     fit-margin-left="0"
+     fit-margin-right="0"
+     fit-margin-bottom="0"
+     inkscape:zoom="0.47569444"
+     inkscape:cx="446.16423"
+     inkscape:cy="394.66056"
+     inkscape:window-x="0"
+     inkscape:window-y="0"
+     inkscape:window-maximized="0"
+     inkscape:current-layer="svg2" />
+  <clipPath
+     id="p.0">
+    <path
+       d="m 0,0 1152,0 0,864 L 0,864 0,0 Z"
+       id="path5"
+       inkscape:connector-curvature="0"
+       style="clip-rule:nonzero" />
+  </clipPath>
+  <g
+     clip-path="url(#p.0)"
+     id="g7"
+     transform="translate(-62.565693,-24.726276)">
+    <path
+       d="m 0,0 1152,0 0,864 -1152,0 z"
+       id="path9"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="m 550.86255,58.181103 125.60632,0 0,76.566927 -125.60632,0 z"
+       id="path11"
+       inkscape:connector-curvature="0"
+       style="fill:#cccccc;fill-rule:nonzero" />
+    <path
+       d="m 550.86255,58.181103 125.60632,0 0,76.566927 -125.60632,0 z"
+       id="path13"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round;stroke-dasharray:8, 6" />
+    <path
+       d="m 587.5954,105.51206 0,-11.484375 1.28125,0 0,1.078125 q 0.45313,-0.640625 1.01563,-0.953125 0.57812,-0.3125 1.39062,-0.3125 1.0625,0 1.875,0.546875 0.8125,0.546875 1.21875,1.546875 0.42188,0.984375 0.42188,2.171875 0,1.28125 -0.46875,2.29688 -0.45313,1.01562 -1.32813,1.5625 -0.85937,0.54687 -1.82812,0.54687 -0.70313,0 -1.26563,-0.29687 -0.54687,-0.29688 -0.90625,-0.75 l 0,4.04687 -1.40625,0 z m 1.26563,-7.296875 q 0,1.609375 0.64062,2.375005 0.65625,0.76562 1.57813,0.76562 0.9375,0 1.60937,-0.79687 0.67188,-0.79688 0.67188,-2.45313 0,-1.59375 -0.65625,-2.375 -0.65625,-0.796875 -1.5625,-0.796875 -0.89063,0 -1.59375,0.84375 -0.6875,0.84375 -0.6875,2.4375 z m 7.36718,4.796875 1.375,0.20313 q 0.0781,0.64062 0.46875,0.92187 0.53125,0.39063 1.4375,0.39063 0.96875,0 1.5,-0.39063 0.53125,-0.39062 0.71875,-1.09375 0.10938,-0.42187 0.10938,-1.8125 -0.92188,1.09375 -2.29688,1.09375 -1.71875,0 -2.65625,-1.23437 -0.9375,-1.23438 -0.9375,-2.968755 0,-1.1875 0.42188,-2.1875 0.4375,-1 1.
 25,-1.546875 0.82812,-0.546875 1.92187,-0.546875 1.46875,0 2.42188,1.1875 l 0,-1 1.29687,0 0,7.171875 q 0,1.9375 -0.39062,2.75 -0.39063,0.8125 -1.25,1.28125 -0.85938,0.46875 -2.10938,0.46875 -1.48437,0 -2.40625,-0.67187 -0.90625,-0.67188 -0.875,-2.01563 z m 1.17188,-4.984375 q 0,1.625 0.64062,2.375005 0.65625,0.75 1.625,0.75 0.96875,0 1.625,-0.73438 0.65625,-0.75 0.65625,-2.34375 0,-1.53125 -0.67187,-2.296875 -0.67188,-0.78125 -1.625,-0.78125 -0.9375,0 -1.59375,0.765625 -0.65625,0.765625 -0.65625,2.265625 z m 6.67969,7.484375 0,-1.01562 9.32812,0 0,1.01562 -9.32812,0 z m 10.19531,-3.1875 0,-8.296875 1.26562,0 0,1.25 q 0.48438,-0.875 0.89063,-1.15625 0.40625,-0.28125 0.90625,-0.28125 0.70312,0 1.4375,0.453125 l -0.48438,1.296875 q -0.51562,-0.296875 -1.03125,-0.296875 -0.45312,0 -0.82812,0.28125 -0.35938,0.265625 -0.51563,0.765625 -0.23437,0.75 -0.23437,1.640625 l 0,4.34375 -1.40625,0 z m 4.8125,-4.15625 q 0,-2.296875 1.28125,-3.40625 1.07812,-0.921875 2.60937,-0.921875 1.71875,0 2.7
 9688,1.125 1.09375,1.109375 1.09375,3.09375 0,1.59375 -0.48438,2.515625 -0.48437,0.92188 -1.40625,1.4375 -0.90625,0.5 -2,0.5 -1.73437,0 -2.8125,-1.10937 -1.07812,-1.125 -1.07812,-3.23438 z m 1.45312,0 q 0,1.59375 0.6875,2.39063 0.70313,0.79687 1.75,0.79687 1.04688,0 1.73438,-0.79687 0.70312,-0.79688 0.70312,-2.437505 0,-1.53125 -0.70312,-2.328125 -0.6875,-0.796875 -1.73438,-0.796875 -1.04687,0 -1.75,0.796875 -0.6875,0.78125 -0.6875,2.375 z m 7.44532,0 q 0,-2.296875 1.28125,-3.40625 1.07812,-0.921875 2.60937,-0.921875 1.71875,0 2.79688,1.125 1.09375,1.109375 1.09375,3.09375 0,1.59375 -0.48438,2.515625 -0.48437,0.92188 -1.40625,1.4375 -0.90625,0.5 -2,0.5 -1.73437,0 -2.8125,-1.10937 -1.07812,-1.125 -1.07812,-3.23438 z m 1.45312,0 q 0,1.59375 0.6875,2.39063 0.70313,0.79687 1.75,0.79687 1.04688,0 1.73438,-0.79687 0.70312,-0.79688 0.70312,-2.437505 0,-1.53125 -0.70312,-2.328125 -0.6875,-0.796875 -1.73438,-0.796875 -1.04687,0 -1.75,0.796875 -0.6875,0.78125 -0.6875,2.375 z m 11.03906,2.8906
 3 0.20313,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76563,0 -1.1875,-0.23438 -0.42188,-0.25 -0.59375,-0.64062 -0.17188,-0.40625 -0.17188,-1.67188 l 0,-4.765625 -1.03125,0 0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.609375 0.0625,0.781255 0.0781,0.17187 0.25,0.28125 0.17188,0.0937 0.48438,0.0937 0.23437,0 0.60937,-0.0625 z"
+       id="path15"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 265.409,249.52692 125.60629,0 0,76.56692 -125.60629,0 z"
+       id="path17"
+       inkscape:connector-curvature="0"
+       style="fill:#ffffff;fill-rule:nonzero" />
+    <path
+       d="m 265.409,249.52692 125.60629,0 0,76.56692 -125.60629,0 z"
+       id="path19"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round;stroke-dasharray:8, 6" />
+    <path
+       d="m 289.7317,293.67038 0,-1.04687 q -0.78125,1.23437 -2.3125,1.23437 -1,0 -1.82813,-0.54687 -0.82812,-0.54688 -1.29687,-1.53125 -0.45313,-0.98438 -0.45313,-2.25 0,-1.25 0.40625,-2.25 0.42188,-1.01563 1.25,-1.54688 0.82813,-0.54687 1.85938,-0.54687 0.75,0 1.32812,0.3125 0.59375,0.3125 0.95313,0.82812 l 0,-4.10937 1.40625,0 0,11.45312 -1.3125,0 z m -4.4375,-4.14062 q 0,1.59375 0.67187,2.39062 0.67188,0.78125 1.57813,0.78125 0.92187,0 1.5625,-0.75 0.65625,-0.76562 0.65625,-2.3125 0,-1.70312 -0.65625,-2.5 -0.65625,-0.79687 -1.625,-0.79687 -0.9375,0 -1.5625,0.76562 -0.625,0.76563 -0.625,2.42188 z m 13.63281,1.46875 1.45313,0.17187 q -0.34375,1.28125 -1.28125,1.98438 -0.92188,0.70312 -2.35938,0.70312 -1.82812,0 -2.89062,-1.125 -1.0625,-1.125 -1.0625,-3.14062 0,-2.09375 1.07812,-3.25 1.07813,-1.15625 2.79688,-1.15625 1.65625,0 2.70312,1.14062 1.0625,1.125 1.0625,3.17188 0,0.125 0,0.375 l -6.1875,0 q 0.0781,1.375 0.76563,2.10937 0.70312,0.71875 1.73437,0.71875 0.78125,0 1.32813,-0.4
 0625 0.54687,-0.40625 0.85937,-1.29687 z m -4.60937,-2.28125 4.625,0 q -0.0937,-1.04688 -0.53125,-1.5625 -0.67188,-0.8125 -1.73438,-0.8125 -0.96875,0 -1.64062,0.65625 -0.65625,0.64062 -0.71875,1.71875 z m 7.83593,8.14062 0,-11.48437 1.28125,0 0,1.07812 q 0.45313,-0.64062 1.01563,-0.95312 0.57812,-0.3125 1.39062,-0.3125 1.0625,0 1.875,0.54687 0.8125,0.54688 1.21875,1.54688 0.42188,0.98437 0.42188,2.17187 0,1.28125 -0.46875,2.29688 -0.45313,1.01562 -1.32813,1.5625 -0.85937,0.54687 -1.82812,0.54687 -0.70313,0 -1.26563,-0.29687 -0.54687,-0.29688 -0.90625,-0.75 l 0,4.04687 -1.40625,0 z m 1.26563,-7.29687 q 0,1.60937 0.64062,2.375 0.65625,0.76562 1.57813,0.76562 0.9375,0 1.60937,-0.79687 0.67188,-0.79688 0.67188,-2.45313 0,-1.59375 -0.65625,-2.375 -0.65625,-0.79687 -1.5625,-0.79687 -0.89063,0 -1.59375,0.84375 -0.6875,0.84375 -0.6875,2.4375 z m 13.03906,3.07812 q -0.78125,0.67188 -1.5,0.95313 -0.71875,0.26562 -1.54687,0.26562 -1.375,0 -2.10938,-0.67187 -0.73437,-0.67188 -0.73437,-1.70313 0
 ,-0.60937 0.28125,-1.10937 0.28125,-0.51563 0.71875,-0.8125 0.45312,-0.3125 1.01562,-0.46875 0.42188,-0.10938 1.25,-0.20313 1.70313,-0.20312 2.51563,-0.48437 0,-0.29688 0,-0.375 0,-0.85938 -0.39063,-1.20313 -0.54687,-0.48437 -1.60937,-0.48437 -0.98438,0 -1.46875,0.35937 -0.46875,0.34375 -0.6875,1.21875 l -1.375,-0.1875 q 0.1875,-0.875 0.60937,-1.42187 0.4375,-0.54688 1.25,-0.82813 0.8125,-0.29687 1.875,-0.29687 1.0625,0 1.71875,0.25 0.67188,0.25 0.98438,0.625 0.3125,0.375 0.4375,0.95312 0.0781,0.35938 0.0781,1.29688 l 0,1.875 q 0,1.96875 0.0781,2.48437 0.0937,0.51563 0.35937,1 l -1.46875,0 q -0.21875,-0.4375 -0.28125,-1.03125 z m -0.10937,-3.14062 q -0.76563,0.3125 -2.29688,0.53125 -0.875,0.125 -1.23437,0.28125 -0.35938,0.15625 -0.5625,0.46875 -0.1875,0.29687 -0.1875,0.65625 0,0.5625 0.42187,0.9375 0.4375,0.375 1.25,0.375 0.8125,0 1.4375,-0.34375 0.64063,-0.35938 0.9375,-0.98438 0.23438,-0.46875 0.23438,-1.40625 l 0,-0.51562 z m 3.58593,4.17187 0,-8.29687 1.26563,0 0,1.25 q 0.48437,
 -0.875 0.89062,-1.15625 0.40625,-0.28125 0.90625,-0.28125 0.70313,0 1.4375,0.45312 l -0.48437,1.29688 q -0.51563,-0.29688 -1.03125,-0.29688 -0.45313,0 -0.82813,0.28125 -0.35937,0.26563 -0.51562,0.76563 -0.23438,0.75 -0.23438,1.64062 l 0,4.34375 -1.40625,0 z m 8.40625,-1.26562 0.20313,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76563,0 -1.1875,-0.23438 -0.42188,-0.25 -0.59375,-0.64062 -0.17188,-0.40625 -0.17188,-1.67188 l 0,-4.76562 -1.03125,0 0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.60937 0.0625,0.78125 0.0781,0.17187 0.25,0.28125 0.17188,0.0937 0.48438,0.0937 0.23437,0 0.60937,-0.0625 z m 1.38282,1.26562 0,-8.29687 1.25,0 0,1.15625 q 0.39062,-0.60938 1.03125,-0.96875 0.65625,-0.375 1.48437,-0.375 0.92188,0 1.51563,0.39062 0.59375,0.375 0.82812,1.0625 0.98438,-1.45312 2.5625,-1.45312 1.23438,0 1.89063,0.6875 0.67187,0.67187 0.67187,2.09375 l 0,5.70312 -1.39062,0 0,-5.23437 q 0,-0.84375 -0.14063,-1.20313 -0.14062,-0.375 -0.5,
 -0.59375 -0.35937,-0.23437 -0.84375,-0.23437 -0.875,0 -1.45312,0.57812 -0.57813,0.57813 -0.57813,1.85938 l 0,4.82812 -1.40625,0 0,-5.39062 q 0,-0.9375 -0.34375,-1.40625 -0.34375,-0.46875 -1.125,-0.46875 -0.59375,0 -1.09375,0.3125 -0.5,0.3125 -0.73437,0.92187 -0.21875,0.59375 -0.21875,1.71875 l 0,4.3125 -1.40625,0 z m 19,-2.67187 1.45312,0.17187 q -0.34375,1.28125 -1.28125,1.98438 -0.92187,0.70312 -2.35937,0.70312 -1.82813,0 -2.89063,-1.125 -1.0625,-1.125 -1.0625,-3.14062 0,-2.09375 1.07813,-3.25 1.07812,-1.15625 2.79687,-1.15625 1.65625,0 2.70313,1.14062 1.0625,1.125 1.0625,3.17188 0,0.125 0,0.375 l -6.1875,0 q 0.0781,1.375 0.76562,2.10937 0.70313,0.71875 1.73438,0.71875 0.78125,0 1.32812,-0.40625 0.54688,-0.40625 0.85938,-1.29687 z m -4.60938,-2.28125 4.625,0 q -0.0937,-1.04688 -0.53125,-1.5625 -0.67187,-0.8125 -1.73437,-0.8125 -0.96875,0 -1.64063,0.65625 -0.65625,0.64062 -0.71875,1.71875 z m 7.83594,4.95312 0,-8.29687 1.26562,0 0,1.17187 q 0.90625,-1.35937 2.64063,-1.35937 0.75,0 
 1.375,0.26562 0.625,0.26563 0.9375,0.70313 0.3125,0.4375 0.4375,1.04687 0.0781,0.39063 0.0781,1.35938 l 0,5.10937 -1.40625,0 0,-5.04687 q 0,-0.85938 -0.17187,-1.28125 -0.15625,-0.4375 -0.57813,-0.6875 -0.40625,-0.25 -0.96875,-0.25 -0.90625,0 -1.5625,0.57812 -0.64062,0.5625 -0.64062,2.15625 l 0,4.53125 -1.40625,0 z m 11.96094,-1.26562 0.20312,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76562,0 -1.1875,-0.23438 -0.42187,-0.25 -0.59375,-0.64062 -0.17187,-0.40625 -0.17187,-1.67188 l 0,-4.76562 -1.03125,0 0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.60937 0.0625,0.78125 0.0781,0.17187 0.25,0.28125 0.17187,0.0937 0.48437,0.0937 0.23438,0 0.60938,-0.0625 z m 6.28906,1.26562 -1.40625,0 0,-8.96875 q -0.51563,0.48438 -1.34375,0.96875 -0.8125,0.48438 -1.46875,0.73438 l 0,-1.35938 q 1.17187,-0.5625 2.04687,-1.34375 0.89063,-0.79687 1.26563,-1.53125 l 0.90625,0 0,11.5 z"
+       id="path21"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 510.44476,249.52692 125.60626,0 0,76.56692 -125.60626,0 z"
+       id="path23"
+       inkscape:connector-curvature="0"
+       style="fill:#ffffff;fill-rule:nonzero" />
+    <path
+       d="m 510.44476,249.52692 125.60626,0 0,76.56692 -125.60626,0 z"
+       id="path25"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round;stroke-dasharray:8, 6" />
+    <path
+       d="m 534.76746,293.67038 0,-1.04687 q -0.78125,1.23437 -2.3125,1.23437 -1,0 -1.82812,-0.54687 -0.82813,-0.54688 -1.29688,-1.53125 -0.45312,-0.98438 -0.45312,-2.25 0,-1.25 0.40625,-2.25 0.42187,-1.01563 1.25,-1.54688 0.82812,-0.54687 1.85937,-0.54687 0.75,0 1.32813,0.3125 0.59375,0.3125 0.95312,0.82812 l 0,-4.10937 1.40625,0 0,11.45312 -1.3125,0 z m -4.4375,-4.14062 q 0,1.59375 0.67188,2.39062 0.67187,0.78125 1.57812,0.78125 0.92188,0 1.5625,-0.75 0.65625,-0.76562 0.65625,-2.3125 0,-1.70312 -0.65625,-2.5 -0.65625,-0.79687 -1.625,-0.79687 -0.9375,0 -1.5625,0.76562 -0.625,0.76563 -0.625,2.42188 z m 13.63281,1.46875 1.45313,0.17187 q -0.34375,1.28125 -1.28125,1.98438 -0.92188,0.70312 -2.35938,0.70312 -1.82812,0 -2.89062,-1.125 -1.0625,-1.125 -1.0625,-3.14062 0,-2.09375 1.07812,-3.25 1.07813,-1.15625 2.79688,-1.15625 1.65625,0 2.70312,1.14062 1.0625,1.125 1.0625,3.17188 0,0.125 0,0.375 l -6.1875,0 q 0.0781,1.375 0.76563,2.10937 0.70312,0.71875 1.73437,0.71875 0.78125,0 1.32813,-0.
 40625 0.54687,-0.40625 0.85937,-1.29687 z m -4.60937,-2.28125 4.625,0 q -0.0937,-1.04688 -0.53125,-1.5625 -0.67188,-0.8125 -1.73438,-0.8125 -0.96875,0 -1.64062,0.65625 -0.65625,0.64062 -0.71875,1.71875 z m 7.83594,8.14062 0,-11.48437 1.28125,0 0,1.07812 q 0.45312,-0.64062 1.01562,-0.95312 0.57813,-0.3125 1.39063,-0.3125 1.0625,0 1.875,0.54687 0.8125,0.54688 1.21875,1.54688 0.42187,0.98437 0.42187,2.17187 0,1.28125 -0.46875,2.29688 -0.45312,1.01562 -1.32812,1.5625 -0.85938,0.54687 -1.82813,0.54687 -0.70312,0 -1.26562,-0.29687 -0.54688,-0.29688 -0.90625,-0.75 l 0,4.04687 -1.40625,0 z m 1.26562,-7.29687 q 0,1.60937 0.64063,2.375 0.65625,0.76562 1.57812,0.76562 0.9375,0 1.60938,-0.79687 0.67187,-0.79688 0.67187,-2.45313 0,-1.59375 -0.65625,-2.375 -0.65625,-0.79687 -1.5625,-0.79687 -0.89062,0 -1.59375,0.84375 -0.6875,0.84375 -0.6875,2.4375 z m 13.03906,3.07812 q -0.78125,0.67188 -1.5,0.95313 -0.71875,0.26562 -1.54687,0.26562 -1.375,0 -2.10938,-0.67187 -0.73437,-0.67188 -0.73437,-1.70313 
 0,-0.60937 0.28125,-1.10937 0.28125,-0.51563 0.71875,-0.8125 0.45312,-0.3125 1.01562,-0.46875 0.42188,-0.10938 1.25,-0.20313 1.70313,-0.20312 2.51563,-0.48437 0,-0.29688 0,-0.375 0,-0.85938 -0.39063,-1.20313 -0.54687,-0.48437 -1.60937,-0.48437 -0.98438,0 -1.46875,0.35937 -0.46875,0.34375 -0.6875,1.21875 l -1.375,-0.1875 q 0.1875,-0.875 0.60937,-1.42187 0.4375,-0.54688 1.25,-0.82813 0.8125,-0.29687 1.875,-0.29687 1.0625,0 1.71875,0.25 0.67188,0.25 0.98438,0.625 0.3125,0.375 0.4375,0.95312 0.0781,0.35938 0.0781,1.29688 l 0,1.875 q 0,1.96875 0.0781,2.48437 0.0937,0.51563 0.35937,1 l -1.46875,0 q -0.21875,-0.4375 -0.28125,-1.03125 z m -0.10937,-3.14062 q -0.76563,0.3125 -2.29688,0.53125 -0.875,0.125 -1.23437,0.28125 -0.35938,0.15625 -0.5625,0.46875 -0.1875,0.29687 -0.1875,0.65625 0,0.5625 0.42187,0.9375 0.4375,0.375 1.25,0.375 0.8125,0 1.4375,-0.34375 0.64063,-0.35938 0.9375,-0.98438 0.23438,-0.46875 0.23438,-1.40625 l 0,-0.51562 z m 3.58594,4.17187 0,-8.29687 1.26562,0 0,1.25 q 0.48438
 ,-0.875 0.89063,-1.15625 0.40625,-0.28125 0.90625,-0.28125 0.70312,0 1.4375,0.45312 l -0.48438,1.29688 q -0.51562,-0.29688 -1.03125,-0.29688 -0.45312,0 -0.82812,0.28125 -0.35938,0.26563 -0.51563,0.76563 -0.23437,0.75 -0.23437,1.64062 l 0,4.34375 -1.40625,0 z m 8.40625,-1.26562 0.20312,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76562,0 -1.1875,-0.23438 -0.42187,-0.25 -0.59375,-0.64062 -0.17187,-0.40625 -0.17187,-1.67188 l 0,-4.76562 -1.03125,0 0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.60937 0.0625,0.78125 0.0781,0.17187 0.25,0.28125 0.17187,0.0937 0.48437,0.0937 0.23438,0 0.60938,-0.0625 z m 1.38281,1.26562 0,-8.29687 1.25,0 0,1.15625 q 0.39062,-0.60938 1.03125,-0.96875 0.65625,-0.375 1.48437,-0.375 0.92188,0 1.51563,0.39062 0.59375,0.375 0.82812,1.0625 0.98438,-1.45312 2.5625,-1.45312 1.23438,0 1.89063,0.6875 0.67187,0.67187 0.67187,2.09375 l 0,5.70312 -1.39062,0 0,-5.23437 q 0,-0.84375 -0.14063,-1.20313 -0.14062,-0.375 -0.5
 ,-0.59375 -0.35937,-0.23437 -0.84375,-0.23437 -0.875,0 -1.45312,0.57812 -0.57813,0.57813 -0.57813,1.85938 l 0,4.82812 -1.40625,0 0,-5.39062 q 0,-0.9375 -0.34375,-1.40625 -0.34375,-0.46875 -1.125,-0.46875 -0.59375,0 -1.09375,0.3125 -0.5,0.3125 -0.73437,0.92187 -0.21875,0.59375 -0.21875,1.71875 l 0,4.3125 -1.40625,0 z m 19,-2.67187 1.45312,0.17187 q -0.34375,1.28125 -1.28125,1.98438 -0.92187,0.70312 -2.35937,0.70312 -1.82813,0 -2.89063,-1.125 -1.0625,-1.125 -1.0625,-3.14062 0,-2.09375 1.07813,-3.25 1.07812,-1.15625 2.79687,-1.15625 1.65625,0 2.70313,1.14062 1.0625,1.125 1.0625,3.17188 0,0.125 0,0.375 l -6.1875,0 q 0.0781,1.375 0.76562,2.10937 0.70313,0.71875 1.73438,0.71875 0.78125,0 1.32812,-0.40625 0.54688,-0.40625 0.85938,-1.29687 z m -4.60938,-2.28125 4.625,0 q -0.0937,-1.04688 -0.53125,-1.5625 -0.67187,-0.8125 -1.73437,-0.8125 -0.96875,0 -1.64063,0.65625 -0.65625,0.64062 -0.71875,1.71875 z m 7.83594,4.95312 0,-8.29687 1.26563,0 0,1.17187 q 0.90625,-1.35937 2.64062,-1.35937 0.75,0
  1.375,0.26562 0.625,0.26563 0.9375,0.70313 0.3125,0.4375 0.4375,1.04687 0.0781,0.39063 0.0781,1.35938 l 0,5.10937 -1.40625,0 0,-5.04687 q 0,-0.85938 -0.17188,-1.28125 -0.15625,-0.4375 -0.57812,-0.6875 -0.40625,-0.25 -0.96875,-0.25 -0.90625,0 -1.5625,0.57812 -0.64063,0.5625 -0.64063,2.15625 l 0,4.53125 -1.40625,0 z m 11.96094,-1.26562 0.20312,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76562,0 -1.1875,-0.23438 -0.42187,-0.25 -0.59375,-0.64062 -0.17187,-0.40625 -0.17187,-1.67188 l 0,-4.76562 -1.03125,0 0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.60937 0.0625,0.78125 0.0781,0.17187 0.25,0.28125 0.17187,0.0937 0.48437,0.0937 0.23438,0 0.60938,-0.0625 z m 8.38281,-0.0937 0,1.35937 -7.57812,0 q -0.0156,-0.51562 0.17187,-0.98437 0.28125,-0.76563 0.92188,-1.51563 0.64062,-0.75 1.84375,-1.73437 1.85937,-1.53125 2.51562,-2.42188 0.65625,-0.90625 0.65625,-1.70312 0,-0.82813 -0.59375,-1.40625 -0.59375,-0.57813 -1.5625,-0.57813 -1.01562,0 
 -1.625,0.60938 -0.60937,0.60937 -0.60937,1.6875 l -1.45313,-0.14063 q 0.15625,-1.625 1.125,-2.46875 0.96875,-0.84375 2.59375,-0.84375 1.65625,0 2.60938,0.92188 0.96875,0.90625 0.96875,2.25 0,0.6875 -0.28125,1.35937 -0.28125,0.65625 -0.9375,1.39063 -0.65625,0.73437 -2.17188,2.01562 -1.26562,1.0625 -1.625,1.45313 -0.35937,0.375 -0.59375,0.75 l 5.625,0 z"
+       id="path27"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 189,433.2189 125.60629,0 0,76.56696 -125.60629,0 z"
+       id="path29"
+       inkscape:connector-curvature="0"
+       style="fill:#ffffff;fill-rule:nonzero" />
+    <path
+       d="m 189,433.2189 125.60629,0 0,76.56696 -125.60629,0 z"
+       id="path31"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 232.0258,476.33112 q -0.78125,0.67187 -1.5,0.95312 -0.71875,0.26563 -1.54688,0.26563 -1.375,0 -2.10937,-0.67188 -0.73438,-0.67187 -0.73438,-1.70312 0,-0.60938 0.28125,-1.10938 0.28125,-0.51562 0.71875,-0.8125 0.45313,-0.3125 1.01563,-0.46875 0.42187,-0.10937 1.25,-0.20312 1.70312,-0.20313 2.51562,-0.48438 0,-0.29687 0,-0.375 0,-0.85937 -0.39062,-1.20312 -0.54688,-0.48438 -1.60938,-0.48438 -0.98437,0 -1.46875,0.35938 -0.46875,0.34375 -0.6875,1.21875 l -1.375,-0.1875 q 0.1875,-0.875 0.60938,-1.42188 0.4375,-0.54687 1.25,-0.82812 0.8125,-0.29688 1.875,-0.29688 1.0625,0 1.71875,0.25 0.67187,0.25 0.98437,0.625 0.3125,0.375 0.4375,0.95313 0.0781,0.35937 0.0781,1.29687 l 0,1.875 q 0,1.96875 0.0781,2.48438 0.0937,0.51562 0.35938,1 l -1.46875,0 q -0.21875,-0.4375 -0.28125,-1.03125 z m -0.10938,-3.14063 q -0.76562,0.3125 -2.29687,0.53125 -0.875,0.125 -1.23438,0.28125 -0.35937,0.15625 -0.5625,0.46875 -0.1875,0.29688 -0.1875,0.65625 0,0.5625 0.42188,0.9375 0.4375,0.375 1.25,0.375 0.
 8125,0 1.4375,-0.34375 0.64062,-0.35937 0.9375,-0.98437 0.23437,-0.46875 0.23437,-1.40625 l 0,-0.51563 z m 8.97657,4.17188 0,-1.04688 q -0.78125,1.23438 -2.3125,1.23438 -1,0 -1.82813,-0.54688 -0.82812,-0.54687 -1.29687,-1.53125 -0.45313,-0.98437 -0.45313,-2.25 0,-1.25 0.40625,-2.25 0.42188,-1.01562 1.25,-1.54687 0.82813,-0.54688 1.85938,-0.54688 0.75,0 1.32812,0.3125 0.59375,0.3125 0.95313,0.82813 l 0,-4.10938 1.40625,0 0,11.45313 -1.3125,0 z m -4.4375,-4.14063 q 0,1.59375 0.67187,2.39063 0.67188,0.78125 1.57813,0.78125 0.92187,0 1.5625,-0.75 0.65625,-0.76563 0.65625,-2.3125 0,-1.70313 -0.65625,-2.5 -0.65625,-0.79688 -1.625,-0.79688 -0.9375,0 -1.5625,0.76563 -0.625,0.76562 -0.625,2.42187 z m 7.96093,4.14063 0,-11.45313 1.40625,0 0,4.10938 q 0.98438,-1.14063 2.48438,-1.14063 0.92187,0 1.59375,0.35938 0.6875,0.35937 0.96875,1 0.29687,0.64062 0.29687,1.85937 l 0,5.26563 -1.40625,0 0,-5.26563 q 0,-1.04687 -0.45312,-1.53125 -0.45313,-0.48437 -1.29688,-0.48437 -0.625,0 -1.17187,0.32812 -0
 .54688,0.32813 -0.78125,0.89063 -0.23438,0.54687 -0.23438,1.51562 l 0,4.54688 -1.40625,0 z m 8.36719,-4.15625 q 0,-2.29688 1.28125,-3.40625 1.07813,-0.92188 2.60939,-0.92188 1.71875,0 2.79688,1.125 1.09375,1.10938 1.09375,3.09375 0,1.59375 -0.48438,2.51563 -0.48437,0.92187 -1.40625,1.4375 -0.90625,0.5 -2,0.5 -1.73439,0 -2.81251,-1.10938 -1.07813,-1.125 -1.07813,-3.23437 z m 1.45313,0 q 0,1.59375 0.6875,2.39062 0.70312,0.79688 1.75001,0.79688 1.04688,0 1.73438,-0.79688 0.70312,-0.79687 0.70312,-2.4375 0,-1.53125 -0.70312,-2.32812 -0.6875,-0.79688 -1.73438,-0.79688 -1.04689,0 -1.75001,0.79688 -0.6875,0.78125 -0.6875,2.375 z m 13.38283,1.10937 1.39062,0.1875 q -0.23437,1.42188 -1.17187,2.23438 -0.92188,0.8125 -2.28125,0.8125 -1.70313,0 -2.75,-1.10938 -1.03125,-1.125 -1.03125,-3.20312 0,-1.34375 0.4375,-2.34375 0.45312,-1.01563 1.35937,-1.51563 0.92188,-0.5 1.98438,-0.5 1.35937,0 2.21875,0.6875 0.85937,0.67188 1.09375,1.9375 l -1.35938,0.20313 q -0.20312,-0.82813 -0.70312,-1.25 -0.48438
 ,-0.42188 -1.1875,-0.42188 -1.0625,0 -1.73438,0.76563 -0.65625,0.75 -0.65625,2.40625 0,1.67187 0.64063,2.4375 0.64062,0.75 1.67187,0.75 0.82813,0 1.375,-0.5 0.5625,-0.51563 0.70313,-1.57813 z m 7.5,3.04688 -1.40625,0 0,-8.96875 q -0.51563,0.48437 -1.34375,0.96875 -0.8125,0.48437 -1.46875,0.73437 l 0,-1.35937 q 1.17187,-0.5625 2.04687,-1.34375 0.89063,-0.79688 1.26563,-1.53125 l 0.90625,0 0,11.5 z"
+       id="path33"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 341.38232,433.2189 125.60629,0 0,76.56696 -125.60629,0 z"
+       id="path35"
+       inkscape:connector-curvature="0"
+       style="fill:#ffffff;fill-rule:nonzero" />
+    <path
+       d="m 341.38232,433.2189 125.60629,0 0,76.56696 -125.60629,0 z"
+       id="path37"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 370.1503,477.36237 0,-1.04688 q -0.78125,1.23438 -2.3125,1.23438 -1,0 -1.82812,-0.54688 -0.82813,-0.54687 -1.29688,-1.53125 -0.45312,-0.98437 -0.45312,-2.25 0,-1.25 0.40625,-2.25 0.42187,-1.01562 1.25,-1.54687 0.82812,-0.54688 1.85937,-0.54688 0.75,0 1.32813,0.3125 0.59375,0.3125 0.95312,0.82813 l 0,-4.10938 1.40625,0 0,11.45313 -1.3125,0 z m -4.4375,-4.14063 q 0,1.59375 0.67188,2.39063 0.67187,0.78125 1.57812,0.78125 0.92188,0 1.5625,-0.75 0.65625,-0.76563 0.65625,-2.3125 0,-1.70313 -0.65625,-2.5 -0.65625,-0.79688 -1.625,-0.79688 -0.9375,0 -1.5625,0.76563 -0.625,0.76562 -0.625,2.42187 z m 13.36719,3.10938 q -0.78125,0.67187 -1.5,0.95312 -0.71875,0.26563 -1.54688,0.26563 -1.375,0 -2.10937,-0.67188 -0.73438,-0.67187 -0.73438,-1.70312 0,-0.60938 0.28125,-1.10938 0.28125,-0.51562 0.71875,-0.8125 0.45313,-0.3125 1.01563,-0.46875 0.42187,-0.10937 1.25,-0.20312 1.70312,-0.20313 2.51562,-0.48438 0,-0.29687 0,-0.375 0,-0.85937 -0.39062,-1.20312 -0.54688,-0.48438 -1.60938,-0.4843
 8 -0.98437,0 -1.46875,0.35938 -0.46875,0.34375 -0.6875,1.21875 l -1.375,-0.1875 q 0.1875,-0.875 0.60938,-1.42188 0.4375,-0.54687 1.25,-0.82812 0.8125,-0.29688 1.875,-0.29688 1.0625,0 1.71875,0.25 0.67187,0.25 0.98437,0.625 0.3125,0.375 0.4375,0.95313 0.0781,0.35937 0.0781,1.29687 l 0,1.875 q 0,1.96875 0.0781,2.48438 0.0937,0.51562 0.35938,1 l -1.46875,0 q -0.21875,-0.4375 -0.28125,-1.03125 z m -0.10938,-3.14063 q -0.76562,0.3125 -2.29687,0.53125 -0.875,0.125 -1.23438,0.28125 -0.35937,0.15625 -0.5625,0.46875 -0.1875,0.29688 -0.1875,0.65625 0,0.5625 0.42188,0.9375 0.4375,0.375 1.25,0.375 0.8125,0 1.4375,-0.34375 0.64062,-0.35937 0.9375,-0.98437 0.23437,-0.46875 0.23437,-1.40625 l 0,-0.51563 z m 3.60157,-5.67187 0,-1.60938 1.40625,0 0,1.60938 -1.40625,0 z m 0,9.84375 0,-8.29688 1.40625,0 0,8.29688 -1.40625,0 z m 3.52343,0 0,-11.45313 1.40625,0 0,11.45313 -1.40625,0 z m 3.52344,3.20312 -0.15625,-1.32812 q 0.45313,0.125 0.79688,0.125 0.46875,0 0.75,-0.15625 0.28125,-0.15625 0.46875,-0.43
 75 0.125,-0.20313 0.42187,-1.04688 0.0469,-0.10937 0.125,-0.34375 l -3.14062,-8.3125 1.51562,0 1.71875,4.79688 q 0.34375,0.92187 0.60938,1.92187 0.23437,-0.96875 0.57812,-1.89062 l 1.76563,-4.82813 1.40625,0 -3.15625,8.4375 q -0.5,1.375 -0.78125,1.89063 -0.375,0.6875 -0.85938,1.01562 -0.48437,0.32813 -1.15625,0.32813 -0.40625,0 -0.90625,-0.17188 z m 6.75,-0.0156 0,-1.01563 9.32813,0 0,1.01563 -9.32813,0 z m 11.50781,-3.1875 -1.3125,0 0,-11.45313 1.40625,0 0,4.07813 q 0.89063,-1.10938 2.28125,-1.10938 0.76563,0 1.4375,0.3125 0.6875,0.29688 1.125,0.85938 0.45313,0.5625 0.70313,1.35937 0.25,0.78125 0.25,1.67188 0,2.14062 -1.0625,3.3125 -1.04688,1.15625 -2.53125,1.15625 -1.46875,0 -2.29688,-1.23438 l 0,1.04688 z m -0.0156,-4.21875 q 0,1.5 0.40625,2.15625 0.65625,1.09375 1.79687,1.09375 0.92188,0 1.59375,-0.79688 0.67188,-0.8125 0.67188,-2.39062 0,-1.625 -0.65625,-2.39063 -0.64063,-0.78125 -1.54688,-0.78125 -0.92187,0 -1.59375,0.79688 -0.67187,0.79687 -0.67187,2.3125 z m 13.02344,3.1875 
 q -0.78125,0.67187 -1.5,0.95312 -0.71875,0.26563 -1.54688,0.26563 -1.375,0 -2.10937,-0.67188 -0.73438,-0.67187 -0.73438,-1.70312 0,-0.60938 0.28125,-1.10938 0.28125,-0.51562 0.71875,-0.8125 0.45313,-0.3125 1.01563,-0.46875 0.42187,-0.10937 1.25,-0.20312 1.70312,-0.20313 2.51562,-0.48438 0,-0.29687 0,-0.375 0,-0.85937 -0.39062,-1.20312 -0.54688,-0.48438 -1.60938,-0.48438 -0.98437,0 -1.46875,0.35938 -0.46875,0.34375 -0.6875,1.21875 l -1.375,-0.1875 q 0.1875,-0.875 0.60938,-1.42188 0.4375,-0.54687 1.25,-0.82812 0.8125,-0.29688 1.875,-0.29688 1.0625,0 1.71875,0.25 0.67187,0.25 0.98437,0.625 0.3125,0.375 0.4375,0.95313 0.0781,0.35937 0.0781,1.29687 l 0,1.875 q 0,1.96875 0.0781,2.48438 0.0937,0.51562 0.35938,1 l -1.46875,0 q -0.21875,-0.4375 -0.28125,-1.03125 z m -0.10938,-3.14063 q -0.76562,0.3125 -2.29687,0.53125 -0.875,0.125 -1.23438,0.28125 -0.35937,0.15625 -0.5625,0.46875 -0.1875,0.29688 -0.1875,0.65625 0,0.5625 0.42188,0.9375 0.4375,0.375 1.25,0.375 0.8125,0 1.4375,-0.34375 0.64062,
 -0.35937 0.9375,-0.98437 0.23437,-0.46875 0.23437,-1.40625 l 0,-0.51563 z m 6.66406,2.90625 0.20313,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76563,0 -1.1875,-0.23437 -0.42188,-0.25 -0.59375,-0.64063 -0.17188,-0.40625 -0.17188,-1.67187 l 0,-4.76563 -1.03125,0 0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.60938 0.0625,0.78125 0.0781,0.17188 0.25,0.28125 0.17188,0.0937 0.48438,0.0937 0.23437,0 0.60937,-0.0625 z m 6.78907,-1.78125 1.39062,0.1875 q -0.23437,1.42188 -1.17187,2.23438 -0.92188,0.8125 -2.28125,0.8125 -1.70313,0 -2.75,-1.10938 -1.03125,-1.125 -1.03125,-3.20312 0,-1.34375 0.4375,-2.34375 0.45312,-1.01563 1.35937,-1.51563 0.92188,-0.5 1.98438,-0.5 1.35937,0 2.21875,0.6875 0.85937,0.67188 1.09375,1.9375 l -1.35938,0.20313 q -0.20312,-0.82813 -0.70312,-1.25 -0.48438,-0.42188 -1.1875,-0.42188 -1.0625,0 -1.73438,0.76563 -0.65625,0.75 -0.65625,2.40625 0,1.67187 0.64063,2.4375 0.64062,0.75 1.67187,0.75 0.82813,0 1.375,-0.5 0.56
 25,-0.51563 0.70313,-1.57813 z m 2.59375,3.04688 0,-11.45313 1.40625,0 0,4.10938 q 0.98437,-1.14063 2.48437,-1.14063 0.92188,0 1.59375,0.35938 0.6875,0.35937 0.96875,1 0.29688,0.64062 0.29688,1.85937 l 0,5.26563 -1.40625,0 0,-5.26563 q 0,-1.04687 -0.45313,-1.53125 -0.45312,-0.48437 -1.29687,-0.48437 -0.625,0 -1.17188,0.32812 -0.54687,0.32813 -0.78125,0.89063 -0.23437,0.54687 -0.23437,1.51562 l 0,4.54688 -1.40625,0 z"
+       id="path39"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="m 510.44476,440.87274 125.60626,0 0,76.56696 -125.60626,0 z"
+       id="path41"
+       inkscape:connector-curvature="0"
+       style="fill:#ffffff;fill-rule:nonzero" />
+    <path
+       d="m 510.44476,440.87274 125.60626,0 0,76.56696 -125.60626,0 z"
+       id="path43"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 520.95105,485.0162 0,-8.29687 1.25,0 0,1.15625 q 0.39062,-0.60938 1.03125,-0.96875 0.65625,-0.375 1.48437,-0.375 0.92188,0 1.51563,0.39062 0.59375,0.375 0.82812,1.0625 0.98438,-1.45312 2.5625,-1.45312 1.23438,0 1.89063,0.6875 0.67187,0.67187 0.67187,2.09375 l 0,5.70312 -1.39062,0 0,-5.23437 q 0,-0.84375 -0.14063,-1.20313 -0.14062,-0.375 -0.5,-0.59375 -0.35937,-0.23437 -0.84375,-0.23437 -0.875,0 -1.45312,0.57812 -0.57813,0.57813 -0.57813,1.85938 l 0,4.82812 -1.40625,0 0,-5.39062 q 0,-0.9375 -0.34375,-1.40625 -0.34375,-0.46875 -1.125,-0.46875 -0.59375,0 -1.09375,0.3125 -0.5,0.3125 -0.73437,0.92187 -0.21875,0.59375 -0.21875,1.71875 l 0,4.3125 -1.40625,0 z m 12.79687,-4.15625 q 0,-2.29687 1.28125,-3.40625 1.07813,-0.92187 2.60938,-0.92187 1.71875,0 2.79687,1.125 1.09375,1.10937 1.09375,3.09375 0,1.59375 -0.48437,2.51562 -0.48438,0.92188 -1.40625,1.4375 -0.90625,0.5 -2,0.5 -1.73438,0 -2.8125,-1.10937 -1.07813,-1.125 -1.07813,-3.23438 z m 1.45313,0 q 0,1.59375 0.6875,2.39063 0
 .70312,0.79687 1.75,0.79687 1.04687,0 1.73437,-0.79687 0.70313,-0.79688 0.70313,-2.4375 0,-1.53125 -0.70313,-2.32813 -0.6875,-0.79687 -1.73437,-0.79687 -1.04688,0 -1.75,0.79687 -0.6875,0.78125 -0.6875,2.375 z m 7.97656,4.15625 0,-8.29687 1.26563,0 0,1.17187 q 0.90625,-1.35937 2.64062,-1.35937 0.75,0 1.375,0.26562 0.625,0.26563 0.9375,0.70313 0.3125,0.4375 0.4375,1.04687 0.0781,0.39063 0.0781,1.35938 l 0,5.10937 -1.40625,0 0,-5.04687 q 0,-0.85938 -0.17188,-1.28125 -0.15625,-0.4375 -0.57812,-0.6875 -0.40625,-0.25 -0.96875,-0.25 -0.90625,0 -1.5625,0.57812 -0.64063,0.5625 -0.64063,2.15625 l 0,4.53125 -1.40625,0 z m 11.96094,-1.26562 0.20312,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76562,0 -1.1875,-0.23438 -0.42187,-0.25 -0.59375,-0.64062 -0.17187,-0.40625 -0.17187,-1.67188 l 0,-4.76562 -1.03125,0 0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.60937 0.0625,0.78125 0.0781,0.17187 0.25,0.28125 0.17187,0.0937 0.48437,0.0937 0.23438,0 0
 .60938,-0.0625 z m 1.38281,1.26562 0,-11.45312 1.40625,0 0,4.10937 q 0.98438,-1.14062 2.48438,-1.14062 0.92187,0 1.59375,0.35937 0.6875,0.35938 0.96875,1 0.29687,0.64063 0.29687,1.85938 l 0,5.26562 -1.40625,0 0,-5.26562 q 0,-1.04688 -0.45312,-1.53125 -0.45313,-0.48438 -1.29688,-0.48438 -0.625,0 -1.17187,0.32813 -0.54688,0.32812 -0.78125,0.89062 -0.23438,0.54688 -0.23438,1.51563 l 0,4.54687 -1.40625,0 z m 8.86719,0 0,-11.45312 1.40625,0 0,11.45312 -1.40625,0 z m 3.52344,3.20313 -0.15625,-1.32813 q 0.45312,0.125 0.79687,0.125 0.46875,0 0.75,-0.15625 0.28125,-0.15625 0.46875,-0.4375 0.125,-0.20312 0.42188,-1.04687 0.0469,-0.10938 0.125,-0.34375 l -3.14063,-8.3125 1.51563,0 1.71875,4.79687 q 0.34375,0.92188 0.60937,1.92188 0.23438,-0.96875 0.57813,-1.89063 l 1.76562,-4.82812 1.40625,0 -3.15625,8.4375 q -0.5,1.375 -0.78125,1.89062 -0.375,0.6875 -0.85937,1.01563 -0.48438,0.32812 -1.15625,0.32812 -0.40625,0 -0.90625,-0.17187 z m 6.75,-0.0156 0,-1.01562 9.32812,0 0,1.01562 -9.32812,0 z m 10
 .19531,-3.1875 0,-8.29687 1.26562,0 0,1.25 q 0.48438,-0.875 0.89063,-1.15625 0.40625,-0.28125 0.90625,-0.28125 0.70312,0 1.4375,0.45312 l -0.48438,1.29688 q -0.51562,-0.29688 -1.03125,-0.29688 -0.45312,0 -0.82812,0.28125 -0.35938,0.26563 -0.51563,0.76563 -0.23437,0.75 -0.23437,1.64062 l 0,4.34375 -1.40625,0 z m 11.01562,-2.67187 1.45313,0.17187 q -0.34375,1.28125 -1.28125,1.98438 -0.92188,0.70312 -2.35938,0.70312 -1.82812,0 -2.89062,-1.125 -1.0625,-1.125 -1.0625,-3.14062 0,-2.09375 1.07812,-3.25 1.07813,-1.15625 2.79688,-1.15625 1.65625,0 2.70312,1.14062 1.0625,1.125 1.0625,3.17188 0,0.125 0,0.375 l -6.1875,0 q 0.0781,1.375 0.76563,2.10937 0.70312,0.71875 1.73437,0.71875 0.78125,0 1.32813,-0.40625 0.54687,-0.40625 0.85937,-1.29687 z m -4.60937,-2.28125 4.625,0 q -0.0937,-1.04688 -0.53125,-1.5625 -0.67188,-0.8125 -1.73438,-0.8125 -0.96875,0 -1.64062,0.65625 -0.65625,0.64062 -0.71875,1.71875 z m 7.83594,8.14062 0,-11.48437 1.28125,0 0,1.07812 q 0.45312,-0.64062 1.01562,-0.95312 0.5781
 3,-0.3125 1.39063,-0.3125 1.0625,0 1.875,0.54687 0.8125,0.54688 1.21875,1.54688 0.42187,0.98437 0.42187,2.17187 0,1.28125 -0.46875,2.29688 -0.45312,1.01562 -1.32812,1.5625 -0.85938,0.54687 -1.82813,0.54687 -0.70312,0 -1.26562,-0.29687 -0.54688,-0.29688 -0.90625,-0.75 l 0,4.04687 -1.40625,0 z m 1.26562,-7.29687 q 0,1.60937 0.64063,2.375 0.65625,0.76562 1.57812,0.76562 0.9375,0 1.60938,-0.79687 0.67187,-0.79688 0.67187,-2.45313 0,-1.59375 -0.65625,-2.375 -0.65625,-0.79687 -1.5625,-0.79687 -0.89062,0 -1.59375,0.84375 -0.6875,0.84375 -0.6875,2.4375 z m 7.10156,-0.0469 q 0,-2.29687 1.28125,-3.40625 1.07813,-0.92187 2.60938,-0.92187 1.71875,0 2.79687,1.125 1.09375,1.10937 1.09375,3.09375 0,1.59375 -0.48437,2.51562 -0.48438,0.92188 -1.40625,1.4375 -0.90625,0.5 -2,0.5 -1.73438,0 -2.8125,-1.10937 -1.07813,-1.125 -1.07813,-3.23438 z m 1.45313,0 q 0,1.59375 0.6875,2.39063 0.70312,0.79687 1.75,0.79687 1.04687,0 1.73437,-0.79687 0.70313,-0.79688 0.70313,-2.4375 0,-1.53125 -0.70313,-2.32813 -0.68
 75,-0.79687 -1.73437,-0.79687 -1.04688,0 -1.75,0.79687 -0.6875,0.78125 -0.6875,2.375 z m 7.96094,4.15625 0,-8.29687 1.26562,0 0,1.25 q 0.48438,-0.875 0.89063,-1.15625 0.40625,-0.28125 0.90625,-0.28125 0.70312,0 1.4375,0.45312 l -0.48438,1.29688 q -0.51562,-0.29688 -1.03125,-0.29688 -0.45312,0 -0.82812,0.28125 -0.35938,0.26563 -0.51563,0.76563 -0.23437,0.75 -0.23437,1.64062 l 0,4.34375 -1.40625,0 z m 8.40625,-1.26562 0.20312,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76562,0 -1.1875,-0.23438 -0.42187,-0.25 -0.59375,-0.64062 -0.17187,-0.40625 -0.17187,-1.67188 l 0,-4.76562 -1.03125,0 0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.60937 0.0625,0.78125 0.0781,0.17187 0.25,0.28125 0.17187,0.0937 0.48437,0.0937 0.23438,0 0.60938,-0.0625 z"
+       id="path45"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="M 328.21216,326.09384 251.80271,433.21197"
+       id="path47"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="M 328.21216,326.09384 251.80271,433.21197"
+       id="path49"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 328.21216,326.09384 75.96851,107.11813"
+       id="path51"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="m 328.21216,326.09384 75.96851,107.11813"
+       id="path53"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 573.2479,326.09384 0,114.77167"
+       id="path55"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="m 573.2479,326.09384 0,114.77167"
+       id="path57"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="M 613.6657,134.74803 328.21688,249.51968"
+       id="path59"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="M 613.6657,134.74803 328.21688,249.51968"
+       id="path61"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="M 613.6657,134.74803 573.25628,249.51968"
+       id="path63"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="M 613.6657,134.74803 573.25628,249.51968"
+       id="path65"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 728.2543,249.52692 125.60626,0 0,76.56692 -125.60626,0 z"
+       id="path67"
+       inkscape:connector-curvature="0"
+       style="fill:#ffffff;fill-rule:nonzero" />
+    <path
+       d="m 728.2543,249.52692 125.60626,0 0,76.56692 -125.60626,0 z"
+       id="path69"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round;stroke-dasharray:8, 6" />
+    <path
+       d="m 752.57697,293.67038 0,-1.04687 q -0.78125,1.23437 -2.3125,1.23437 -1,0 -1.82813,-0.54687 -0.82812,-0.54688 -1.29687,-1.53125 -0.45313,-0.98438 -0.45313,-2.25 0,-1.25 0.40625,-2.25 0.42188,-1.01563 1.25,-1.54688 0.82813,-0.54687 1.85938,-0.54687 0.75,0 1.32812,0.3125 0.59375,0.3125 0.95313,0.82812 l 0,-4.10937 1.40625,0 0,11.45312 -1.3125,0 z m -4.4375,-4.14062 q 0,1.59375 0.67187,2.39062 0.67188,0.78125 1.57813,0.78125 0.92187,0 1.5625,-0.75 0.65625,-0.76562 0.65625,-2.3125 0,-1.70312 -0.65625,-2.5 -0.65625,-0.79687 -1.625,-0.79687 -0.9375,0 -1.5625,0.76562 -0.625,0.76563 -0.625,2.42188 z m 13.63281,1.46875 1.45313,0.17187 q -0.34375,1.28125 -1.28125,1.98438 -0.92188,0.70312 -2.35938,0.70312 -1.82812,0 -2.89062,-1.125 -1.0625,-1.125 -1.0625,-3.14062 0,-2.09375 1.07812,-3.25 1.07813,-1.15625 2.79688,-1.15625 1.65625,0 2.70312,1.14062 1.0625,1.125 1.0625,3.17188 0,0.125 0,0.375 l -6.1875,0 q 0.0781,1.375 0.76563,2.10937 0.70312,0.71875 1.73437,0.71875 0.78125,0 1.32813,-0.
 40625 0.54687,-0.40625 0.85937,-1.29687 z m -4.60937,-2.28125 4.625,0 q -0.0937,-1.04688 -0.53125,-1.5625 -0.67188,-0.8125 -1.73438,-0.8125 -0.96875,0 -1.64062,0.65625 -0.65625,0.64062 -0.71875,1.71875 z m 7.83593,8.14062 0,-11.48437 1.28125,0 0,1.07812 q 0.45313,-0.64062 1.01563,-0.95312 0.57812,-0.3125 1.39062,-0.3125 1.0625,0 1.875,0.54687 0.8125,0.54688 1.21875,1.54688 0.42188,0.98437 0.42188,2.17187 0,1.28125 -0.46875,2.29688 -0.45313,1.01562 -1.32813,1.5625 -0.85937,0.54687 -1.82812,0.54687 -0.70313,0 -1.26563,-0.29687 -0.54687,-0.29688 -0.90625,-0.75 l 0,4.04687 -1.40625,0 z m 1.26563,-7.29687 q 0,1.60937 0.64062,2.375 0.65625,0.76562 1.57813,0.76562 0.9375,0 1.60937,-0.79687 0.67188,-0.79688 0.67188,-2.45313 0,-1.59375 -0.65625,-2.375 -0.65625,-0.79687 -1.5625,-0.79687 -0.89063,0 -1.59375,0.84375 -0.6875,0.84375 -0.6875,2.4375 z m 13.03906,3.07812 q -0.78125,0.67188 -1.5,0.95313 -0.71875,0.26562 -1.54687,0.26562 -1.375,0 -2.10938,-0.67187 -0.73437,-0.67188 -0.73437,-1.70313 
 0,-0.60937 0.28125,-1.10937 0.28125,-0.51563 0.71875,-0.8125 0.45312,-0.3125 1.01562,-0.46875 0.42188,-0.10938 1.25,-0.20313 1.70313,-0.20312 2.51563,-0.48437 0,-0.29688 0,-0.375 0,-0.85938 -0.39063,-1.20313 -0.54687,-0.48437 -1.60937,-0.48437 -0.98438,0 -1.46875,0.35937 -0.46875,0.34375 -0.6875,1.21875 l -1.375,-0.1875 q 0.1875,-0.875 0.60937,-1.42187 0.4375,-0.54688 1.25,-0.82813 0.8125,-0.29687 1.875,-0.29687 1.0625,0 1.71875,0.25 0.67188,0.25 0.98438,0.625 0.3125,0.375 0.4375,0.95312 0.0781,0.35938 0.0781,1.29688 l 0,1.875 q 0,1.96875 0.0781,2.48437 0.0937,0.51563 0.35937,1 l -1.46875,0 q -0.21875,-0.4375 -0.28125,-1.03125 z m -0.10937,-3.14062 q -0.76563,0.3125 -2.29688,0.53125 -0.875,0.125 -1.23437,0.28125 -0.35938,0.15625 -0.5625,0.46875 -0.1875,0.29687 -0.1875,0.65625 0,0.5625 0.42187,0.9375 0.4375,0.375 1.25,0.375 0.8125,0 1.4375,-0.34375 0.64063,-0.35938 0.9375,-0.98438 0.23438,-0.46875 0.23438,-1.40625 l 0,-0.51562 z m 3.58593,4.17187 0,-8.29687 1.26563,0 0,1.25 q 0.48437
 ,-0.875 0.89062,-1.15625 0.40625,-0.28125 0.90625,-0.28125 0.70313,0 1.4375,0.45312 l -0.48437,1.29688 q -0.51563,-0.29688 -1.03125,-0.29688 -0.45313,0 -0.82813,0.28125 -0.35937,0.26563 -0.51562,0.76563 -0.23438,0.75 -0.23438,1.64062 l 0,4.34375 -1.40625,0 z m 8.40625,-1.26562 0.20313,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76563,0 -1.1875,-0.23438 -0.42188,-0.25 -0.59375,-0.64062 -0.17188,-0.40625 -0.17188,-1.67188 l 0,-4.76562 -1.03125,0 0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.60937 0.0625,0.78125 0.0781,0.17187 0.25,0.28125 0.17188,0.0937 0.48438,0.0937 0.23437,0 0.60937,-0.0625 z m 1.38282,1.26562 0,-8.29687 1.25,0 0,1.15625 q 0.39062,-0.60938 1.03125,-0.96875 0.65625,-0.375 1.48437,-0.375 0.92188,0 1.51563,0.39062 0.59375,0.375 0.82812,1.0625 0.98438,-1.45312 2.5625,-1.45312 1.23438,0 1.89063,0.6875 0.67187,0.67187 0.67187,2.09375 l 0,5.70312 -1.39062,0 0,-5.23437 q 0,-0.84375 -0.14063,-1.20313 -0.14062,-0.375 -0.5
 ,-0.59375 -0.35937,-0.23437 -0.84375,-0.23437 -0.875,0 -1.45312,0.57812 -0.57813,0.57813 -0.57813,1.85938 l 0,4.82812 -1.40625,0 0,-5.39062 q 0,-0.9375 -0.34375,-1.40625 -0.34375,-0.46875 -1.125,-0.46875 -0.59375,0 -1.09375,0.3125 -0.5,0.3125 -0.73437,0.92187 -0.21875,0.59375 -0.21875,1.71875 l 0,4.3125 -1.40625,0 z m 19,-2.67187 1.45312,0.17187 q -0.34375,1.28125 -1.28125,1.98438 -0.92187,0.70312 -2.35937,0.70312 -1.82813,0 -2.89063,-1.125 -1.0625,-1.125 -1.0625,-3.14062 0,-2.09375 1.07813,-3.25 1.07812,-1.15625 2.79687,-1.15625 1.65625,0 2.70313,1.14062 1.0625,1.125 1.0625,3.17188 0,0.125 0,0.375 l -6.1875,0 q 0.0781,1.375 0.76562,2.10937 0.70313,0.71875 1.73438,0.71875 0.78125,0 1.32812,-0.40625 0.54688,-0.40625 0.85938,-1.29687 z m -4.60938,-2.28125 4.625,0 q -0.0937,-1.04688 -0.53125,-1.5625 -0.67187,-0.8125 -1.73437,-0.8125 -0.96875,0 -1.64063,0.65625 -0.65625,0.64062 -0.71875,1.71875 z m 7.83594,4.95312 0,-8.29687 1.26562,0 0,1.17187 q 0.90625,-1.35937 2.64063,-1.35937 0.75,0
  1.375,0.26562 0.625,0.26563 0.9375,0.70313 0.3125,0.4375 0.4375,1.04687 0.0781,0.39063 0.0781,1.35938 l 0,5.10937 -1.40625,0 0,-5.04687 q 0,-0.85938 -0.17187,-1.28125 -0.15625,-0.4375 -0.57813,-0.6875 -0.40625,-0.25 -0.96875,-0.25 -0.90625,0 -1.5625,0.57812 -0.64062,0.5625 -0.64062,2.15625 l 0,4.53125 -1.40625,0 z m 11.96094,-1.26562 0.20312,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76562,0 -1.1875,-0.23438 -0.42187,-0.25 -0.59375,-0.64062 -0.17187,-0.40625 -0.17187,-1.67188 l 0,-4.76562 -1.03125,0 0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.60937 0.0625,0.78125 0.0781,0.17187 0.25,0.28125 0.17187,0.0937 0.48437,0.0937 0.23438,0 0.60938,-0.0625 z m 0.99218,-1.76563 1.40625,-0.1875 q 0.25,1.20313 0.82813,1.73438 0.57812,0.51562 1.42187,0.51562 0.98438,0 1.67188,-0.6875 0.6875,-0.6875 0.6875,-1.70312 0,-0.96875 -0.64063,-1.59375 -0.625,-0.625 -1.60937,-0.625 -0.39063,0 -0.98438,0.15625 l 0.15625,-1.23438 q 0.14063,0.0156 0.218
 75,0.0156 0.90625,0 1.625,-0.46875 0.71875,-0.46875 0.71875,-1.45313 0,-0.76562 -0.53125,-1.26562 -0.51562,-0.51563 -1.34375,-0.51563 -0.82812,0 -1.375,0.51563 -0.54687,0.51562 -0.70312,1.54687 l -1.40625,-0.25 q 0.26562,-1.42187 1.17187,-2.1875 0.92188,-0.78125 2.28125,-0.78125 0.9375,0 1.71875,0.40625 0.79688,0.39063 1.20313,1.09375 0.42187,0.6875 0.42187,1.46875 0,0.75 -0.40625,1.35938 -0.39062,0.60937 -1.17187,0.96875 1.01562,0.23437 1.57812,0.96875 0.5625,0.73437 0.5625,1.84375 0,1.5 -1.09375,2.54687 -1.09375,1.04688 -2.76562,1.04688 -1.5,0 -2.5,-0.89063 -1,-0.90625 -1.14063,-2.34375 z"
+       id="path71"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="M 613.6657,134.74803 791.05156,249.51968"
+       id="path73"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="M 613.6657,134.74803 791.05156,249.51968"
+       id="path75"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 657.1149,440.87274 125.60626,0 0,76.56696 -125.60626,0 z"
+       id="path77"
+       inkscape:connector-curvature="0"
+       style="fill:#ffffff;fill-rule:nonzero" />
+    <path
+       d="m 657.1149,440.87274 125.60626,0 0,76.56696 -125.60626,0 z"
+       id="path79"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 700.1407,483.98495 q -0.78125,0.67188 -1.5,0.95313 -0.71875,0.26562 -1.54687,0.26562 -1.375,0 -2.10938,-0.67187 -0.73437,-0.67188 -0.73437,-1.70313 0,-0.60937 0.28125,-1.10937 0.28125,-0.51563 0.71875,-0.8125 0.45312,-0.3125 1.01562,-0.46875 0.42188,-0.10938 1.25,-0.20313 1.70313,-0.20312 2.51563,-0.48437 0,-0.29688 0,-0.375 0,-0.85938 -0.39063,-1.20313 -0.54687,-0.48437 -1.60937,-0.48437 -0.98438,0 -1.46875,0.35937 -0.46875,0.34375 -0.6875,1.21875 l -1.375,-0.1875 q 0.1875,-0.875 0.60937,-1.42187 0.4375,-0.54688 1.25,-0.82813 0.8125,-0.29687 1.875,-0.29687 1.0625,0 1.71875,0.25 0.67188,0.25 0.98438,0.625 0.3125,0.375 0.4375,0.95312 0.0781,0.35938 0.0781,1.29688 l 0,1.875 q 0,1.96875 0.0781,2.48437 0.0937,0.51563 0.35937,1 l -1.46875,0 q -0.21875,-0.4375 -0.28125,-1.03125 z m -0.10937,-3.14062 q -0.76563,0.3125 -2.29688,0.53125 -0.875,0.125 -1.23437,0.28125 -0.35938,0.15625 -0.5625,0.46875 -0.1875,0.29687 -0.1875,0.65625 0,0.5625 0.42187,0.9375 0.4375,0.375 1.25,0.375 0.
 8125,0 1.4375,-0.34375 0.64063,-0.35938 0.9375,-0.98438 0.23438,-0.46875 0.23438,-1.40625 l 0,-0.51562 z m 8.97656,4.17187 0,-1.04687 q -0.78125,1.23437 -2.3125,1.23437 -1,0 -1.82813,-0.54687 -0.82812,-0.54688 -1.29687,-1.53125 -0.45313,-0.98438 -0.45313,-2.25 0,-1.25 0.40625,-2.25 0.42188,-1.01563 1.25,-1.54688 0.82813,-0.54687 1.85938,-0.54687 0.75,0 1.32812,0.3125 0.59375,0.3125 0.95313,0.82812 l 0,-4.10937 1.40625,0 0,11.45312 -1.3125,0 z m -4.4375,-4.14062 q 0,1.59375 0.67187,2.39062 0.67188,0.78125 1.57813,0.78125 0.92187,0 1.5625,-0.75 0.65625,-0.76562 0.65625,-2.3125 0,-1.70312 -0.65625,-2.5 -0.65625,-0.79687 -1.625,-0.79687 -0.9375,0 -1.5625,0.76562 -0.625,0.76563 -0.625,2.42188 z m 7.96094,4.14062 0,-11.45312 1.40625,0 0,4.10937 q 0.98437,-1.14062 2.48437,-1.14062 0.92188,0 1.59375,0.35937 0.6875,0.35938 0.96875,1 0.29688,0.64063 0.29688,1.85938 l 0,5.26562 -1.40625,0 0,-5.26562 q 0,-1.04688 -0.45313,-1.53125 -0.45312,-0.48438 -1.29687,-0.48438 -0.625,0 -1.17188,0.32813 -0
 .54687,0.32812 -0.78125,0.89062 -0.23437,0.54688 -0.23437,1.51563 l 0,4.54687 -1.40625,0 z m 8.36718,-4.15625 q 0,-2.29687 1.28125,-3.40625 1.07813,-0.92187 2.60938,-0.92187 1.71875,0 2.79687,1.125 1.09375,1.10937 1.09375,3.09375 0,1.59375 -0.48437,2.51562 -0.48438,0.92188 -1.40625,1.4375 -0.90625,0.5 -2,0.5 -1.73438,0 -2.8125,-1.10937 -1.07813,-1.125 -1.07813,-3.23438 z m 1.45313,0 q 0,1.59375 0.6875,2.39063 0.70312,0.79687 1.75,0.79687 1.04687,0 1.73437,-0.79687 0.70313,-0.79688 0.70313,-2.4375 0,-1.53125 -0.70313,-2.32813 -0.6875,-0.79687 -1.73437,-0.79687 -1.04688,0 -1.75,0.79687 -0.6875,0.78125 -0.6875,2.375 z m 13.38281,1.10938 1.39063,0.1875 q -0.23438,1.42187 -1.17188,2.23437 -0.92187,0.8125 -2.28125,0.8125 -1.70312,0 -2.75,-1.10937 -1.03125,-1.125 -1.03125,-3.20313 0,-1.34375 0.4375,-2.34375 0.45313,-1.01562 1.35938,-1.51562 0.92187,-0.5 1.98437,-0.5 1.35938,0 2.21875,0.6875 0.85938,0.67187 1.09375,1.9375 l -1.35937,0.20312 q -0.20313,-0.82812 -0.70313,-1.25 -0.48437,-0.421
 87 -1.1875,-0.42187 -1.0625,0 -1.73437,0.76562 -0.65625,0.75 -0.65625,2.40625 0,1.67188 0.64062,2.4375 0.64063,0.75 1.67188,0.75 0.82812,0 1.375,-0.5 0.5625,-0.51562 0.70312,-1.57812 z m 9.59375,1.6875 0,1.35937 -7.57812,0 q -0.0156,-0.51562 0.17187,-0.98437 0.28125,-0.76563 0.92188,-1.51563 0.64062,-0.75 1.84375,-1.73437 1.85937,-1.53125 2.51562,-2.42188 0.65625,-0.90625 0.65625,-1.70312 0,-0.82813 -0.59375,-1.40625 -0.59375,-0.57813 -1.5625,-0.57813 -1.01562,0 -1.625,0.60938 -0.60937,0.60937 -0.60937,1.6875 l -1.45313,-0.14063 q 0.15625,-1.625 1.125,-2.46875 0.96875,-0.84375 2.59375,-0.84375 1.65625,0 2.60938,0.92188 0.96875,0.90625 0.96875,2.25 0,0.6875 -0.28125,1.35937 -0.28125,0.65625 -0.9375,1.39063 -0.65625,0.73437 -2.17188,2.01562 -1.26562,1.0625 -1.625,1.45313 -0.35937,0.375 -0.59375,0.75 l 5.625,0 z"
+       id="path81"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="M 791.05743,326.09384 719.90777,440.86551"
+       id="path83"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="M 791.05743,326.09384 719.90777,440.86551"
+       id="path85"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 803.78503,440.88477 125.60626,0 0,76.5669 -125.60626,0 z"
+       id="path87"
+       inkscape:connector-curvature="0"
+       style="fill:#ffffff;fill-rule:nonzero" />
+    <path
+       d="m 803.78503,440.88477 125.60626,0 0,76.5669 -125.60626,0 z"
+       id="path89"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 831.2249,485.02823 0,-1.04688 q -0.78125,1.23438 -2.3125,1.23438 -1,0 -1.82812,-0.54688 -0.82813,-0.54687 -1.29688,-1.53125 -0.45312,-0.98437 -0.45312,-2.25 0,-1.25 0.40625,-2.25 0.42187,-1.01562 1.25,-1.54687 0.82812,-0.54688 1.85937,-0.54688 0.75,0 1.32813,0.3125 0.59375,0.3125 0.95312,0.82813 l 0,-4.10938 1.40625,0 0,11.45313 -1.3125,0 z m -4.4375,-4.14063 q 0,1.59375 0.67188,2.39063 0.67187,0.78125 1.57812,0.78125 0.92188,0 1.5625,-0.75 0.65625,-0.76563 0.65625,-2.3125 0,-1.70313 -0.65625,-2.5 -0.65625,-0.79688 -1.625,-0.79688 -0.9375,0 -1.5625,0.76563 -0.625,0.76562 -0.625,2.42187 z m 13.36719,3.10938 q -0.78125,0.67187 -1.5,0.95312 -0.71875,0.26563 -1.54688,0.26563 -1.375,0 -2.10937,-0.67188 -0.73438,-0.67187 -0.73438,-1.70312 0,-0.60938 0.28125,-1.10938 0.28125,-0.51562 0.71875,-0.8125 0.45313,-0.3125 1.01563,-0.46875 0.42187,-0.10937 1.25,-0.20312 1.70312,-0.20313 2.51562,-0.48438 0,-0.29687 0,-0.375 0,-0.85937 -0.39062,-1.20312 -0.54688,-0.48438 -1.60938,-0.4843
 8 -0.98437,0 -1.46875,0.35938 -0.46875,0.34375 -0.6875,1.21875 l -1.375,-0.1875 q 0.1875,-0.875 0.60938,-1.42188 0.4375,-0.54687 1.25,-0.82812 0.8125,-0.29688 1.875,-0.29688 1.0625,0 1.71875,0.25 0.67187,0.25 0.98437,0.625 0.3125,0.375 0.4375,0.95313 0.0781,0.35937 0.0781,1.29687 l 0,1.875 q 0,1.96875 0.0781,2.48438 0.0937,0.51562 0.35938,1 l -1.46875,0 q -0.21875,-0.4375 -0.28125,-1.03125 z m -0.10938,-3.14063 q -0.76562,0.3125 -2.29687,0.53125 -0.875,0.125 -1.23438,0.28125 -0.35937,0.15625 -0.5625,0.46875 -0.1875,0.29688 -0.1875,0.65625 0,0.5625 0.42188,0.9375 0.4375,0.375 1.25,0.375 0.8125,0 1.4375,-0.34375 0.64062,-0.35937 0.9375,-0.98437 0.23437,-0.46875 0.23437,-1.40625 l 0,-0.51563 z m 3.60157,-5.67187 0,-1.60938 1.40625,0 0,1.60938 -1.40625,0 z m 0,9.84375 0,-8.29688 1.40625,0 0,8.29688 -1.40625,0 z m 3.52343,0 0,-11.45313 1.40625,0 0,11.45313 -1.40625,0 z m 3.52344,3.20312 -0.15625,-1.32812 q 0.45313,0.125 0.79688,0.125 0.46875,0 0.75,-0.15625 0.28125,-0.15625 0.46875,-0.43
 75 0.125,-0.20313 0.42187,-1.04688 0.0469,-0.10937 0.125,-0.34375 l -3.14062,-8.3125 1.51562,0 1.71875,4.79688 q 0.34375,0.92187 0.60938,1.92187 0.23437,-0.96875 0.57812,-1.89062 l 1.76563,-4.82813 1.40625,0 -3.15625,8.4375 q -0.5,1.375 -0.78125,1.89063 -0.375,0.6875 -0.85938,1.01562 -0.48437,0.32813 -1.15625,0.32813 -0.40625,0 -0.90625,-0.17188 z m 6.75,-0.0156 0,-1.01563 9.32813,0 0,1.01563 -9.32813,0 z m 10.19531,-3.1875 0,-8.29688 1.26563,0 0,1.25 q 0.48437,-0.875 0.89062,-1.15625 0.40625,-0.28125 0.90625,-0.28125 0.70313,0 1.4375,0.45313 l -0.48437,1.29687 q -0.51563,-0.29687 -1.03125,-0.29687 -0.45313,0 -0.82813,0.28125 -0.35937,0.26562 -0.51562,0.76562 -0.23438,0.75 -0.23438,1.64063 l 0,4.34375 -1.40625,0 z m 11.01563,-2.67188 1.45312,0.17188 q -0.34375,1.28125 -1.28125,1.98437 -0.92187,0.70313 -2.35937,0.70313 -1.82813,0 -2.89063,-1.125 -1.0625,-1.125 -1.0625,-3.14063 0,-2.09375 1.07813,-3.25 1.07812,-1.15625 2.79687,-1.15625 1.65625,0 2.70313,1.14063 1.0625,1.125 1.0625,3.1
 7187 0,0.125 0,0.375 l -6.1875,0 q 0.0781,1.375 0.76562,2.10938 0.70313,0.71875 1.73438,0.71875 0.78125,0 1.32812,-0.40625 0.54688,-0.40625 0.85938,-1.29688 z m -4.60938,-2.28125 4.625,0 q -0.0937,-1.04687 -0.53125,-1.5625 -0.67187,-0.8125 -1.73437,-0.8125 -0.96875,0 -1.64063,0.65625 -0.65625,0.64063 -0.71875,1.71875 z m 7.83594,8.14063 0,-11.48438 1.28125,0 0,1.07813 q 0.45313,-0.64063 1.01563,-0.95313 0.57812,-0.3125 1.39062,-0.3125 1.0625,0 1.875,0.54688 0.8125,0.54687 1.21875,1.54687 0.42188,0.98438 0.42188,2.17188 0,1.28125 -0.46875,2.29687 -0.45313,1.01563 -1.32813,1.5625 -0.85937,0.54688 -1.82812,0.54688 -0.70313,0 -1.26563,-0.29688 -0.54687,-0.29687 -0.90625,-0.75 l 0,4.04688 -1.40625,0 z m 1.26563,-7.29688 q 0,1.60938 0.64062,2.375 0.65625,0.76563 1.57813,0.76563 0.9375,0 1.60937,-0.79688 0.67188,-0.79687 0.67188,-2.45312 0,-1.59375 -0.65625,-2.375 -0.65625,-0.79688 -1.5625,-0.79688 -0.89063,0 -1.59375,0.84375 -0.6875,0.84375 -0.6875,2.4375 z m 7.10156,-0.0469 q 0,-2.29688 
 1.28125,-3.40625 1.07812,-0.92188 2.60937,-0.92188 1.71875,0 2.79688,1.125 1.09375,1.10938 1.09375,3.09375 0,1.59375 -0.48438,2.51563 -0.48437,0.92187 -1.40625,1.4375 -0.90625,0.5 -2,0.5 -1.73437,0 -2.8125,-1.10938 -1.07812,-1.125 -1.07812,-3.23437 z m 1.45312,0 q 0,1.59375 0.6875,2.39062 0.70313,0.79688 1.75,0.79688 1.04688,0 1.73438,-0.79688 0.70312,-0.79687 0.70312,-2.4375 0,-1.53125 -0.70312,-2.32812 -0.6875,-0.79688 -1.73438,-0.79688 -1.04687,0 -1.75,0.79688 -0.6875,0.78125 -0.6875,2.375 z m 7.96094,4.15625 0,-8.29688 1.26563,0 0,1.25 q 0.48437,-0.875 0.89062,-1.15625 0.40625,-0.28125 0.90625,-0.28125 0.70313,0 1.4375,0.45313 l -0.48437,1.29687 q -0.51563,-0.29687 -1.03125,-0.29687 -0.45313,0 -0.82813,0.28125 -0.35937,0.26562 -0.51562,0.76562 -0.23438,0.75 -0.23438,1.64063 l 0,4.34375 -1.40625,0 z m 8.40625,-1.26563 0.20313,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76563,0 -1.1875,-0.23437 -0.42188,-0.25 -0.59375,-0.64063 -0.17188,-0.40625 -0.17188,-1.67187 l 0,-4.76563 -1.03125,0
  0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.60938 0.0625,0.78125 0.0781,0.17188 0.25,0.28125 0.17188,0.0937 0.48438,0.0937 0.23437,0 0.60937,-0.0625 z"
+       id="path91"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="M 791.05743,326.09384 866.58496,440.897"
+       id="path93"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="M 791.05743,326.09384 866.58496,440.897"
+       id="path95"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 896.80316,249.52692 125.60624,0 0,76.56692 -125.60624,0 z"
+       id="path97"
+       inkscape:connector-curvature="0"
+       style="fill:#ffffff;fill-rule:nonzero" />
+    <path
+       d="m 896.80316,249.52692 125.60624,0 0,76.56692 -125.60624,0 z"
+       id="path99"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 927.829,290.6235 1.39062,0.1875 q -0.23437,1.42187 -1.17187,2.23437 -0.92188,0.8125 -2.28125,0.8125 -1.70313,0 -2.75,-1.10937 -1.03125,-1.125 -1.03125,-3.20313 0,-1.34375 0.4375,-2.34375 0.45312,-1.01562 1.35937,-1.51562 0.92188,-0.5 1.98438,-0.5 1.35937,0 2.21875,0.6875 0.85937,0.67187 1.09375,1.9375 l -1.35938,0.20312 q -0.20312,-0.82812 -0.70312,-1.25 -0.48438,-0.42187 -1.1875,-0.42187 -1.0625,0 -1.73438,0.76562 -0.65625,0.75 -0.65625,2.40625 0,1.67188 0.64063,2.4375 0.64062,0.75 1.67187,0.75 0.82813,0 1.375,-0.5 0.5625,-0.51562 0.70313,-1.57812 z m 8.26562,0.375 1.45313,0.17187 q -0.34375,1.28125 -1.28125,1.98438 -0.92188,0.70312 -2.35938,0.70312 -1.82812,0 -2.89062,-1.125 -1.0625,-1.125 -1.0625,-3.14062 0,-2.09375 1.07812,-3.25 1.07813,-1.15625 2.79688,-1.15625 1.65625,0 2.70312,1.14062 1.0625,1.125 1.0625,3.17188 0,0.125 0,0.375 l -6.1875,0 q 0.0781,1.375 0.76563,2.10937 0.70312,0.71875 1.73437,0.71875 0.78125,0 1.32813,-0.40625 0.54687,-0.40625 0.85937,-1.29687 z 
 m -4.60937,-2.28125 4.625,0 q -0.0937,-1.04688 -0.53125,-1.5625 -0.67188,-0.8125 -1.73438,-0.8125 -0.96875,0 -1.64062,0.65625 -0.65625,0.64062 -0.71875,1.71875 z m 7.30469,0.79687 q 0,-2.29687 1.28125,-3.40625 1.07812,-0.92187 2.60937,-0.92187 1.71875,0 2.79688,1.125 1.09375,1.10937 1.09375,3.09375 0,1.59375 -0.48438,2.51562 -0.48437,0.92188 -1.40625,1.4375 -0.90625,0.5 -2,0.5 -1.73437,0 -2.8125,-1.10937 -1.07812,-1.125 -1.07812,-3.23438 z m 1.45312,0 q 0,1.59375 0.6875,2.39063 0.70313,0.79687 1.75,0.79687 1.04688,0 1.73438,-0.79687 0.70312,-0.79688 0.70312,-2.4375 0,-1.53125 -0.70312,-2.32813 -0.6875,-0.79687 -1.73438,-0.79687 -1.04687,0 -1.75,0.79687 -0.6875,0.78125 -0.6875,2.375 z m 6.66406,7.34375 0,-1.01562 9.32813,0 0,1.01562 -9.32813,0 z m 10.19532,-3.1875 0,-8.29687 1.26562,0 0,1.25 q 0.48438,-0.875 0.89063,-1.15625 0.40625,-0.28125 0.90625,-0.28125 0.70312,0 1.4375,0.45312 l -0.48438,1.29688 q -0.51562,-0.29688 -1.03125,-0.29688 -0.45312,0 -0.82812,0.28125 -0.35938,0.26563 
 -0.51563,0.76563 -0.23437,0.75 -0.23437,1.64062 l 0,4.34375 -1.40625,0 z m 11.01562,-2.67187 1.45313,0.17187 q -0.34375,1.28125 -1.28125,1.98438 -0.92188,0.70312 -2.35938,0.70312 -1.82812,0 -2.89062,-1.125 -1.0625,-1.125 -1.0625,-3.14062 0,-2.09375 1.07812,-3.25 1.07813,-1.15625 2.79688,-1.15625 1.65625,0 2.70312,1.14062 1.0625,1.125 1.0625,3.17188 0,0.125 0,0.375 l -6.1875,0 q 0.0781,1.375 0.76563,2.10937 0.70312,0.71875 1.73437,0.71875 0.78125,0 1.32813,-0.40625 0.54687,-0.40625 0.85937,-1.29687 z m -4.60937,-2.28125 4.625,0 q -0.0937,-1.04688 -0.53125,-1.5625 -0.67188,-0.8125 -1.73438,-0.8125 -0.96875,0 -1.64062,0.65625 -0.65625,0.64062 -0.71875,1.71875 z m 7.83593,8.14062 0,-11.48437 1.28125,0 0,1.07812 q 0.45313,-0.64062 1.01563,-0.95312 0.57812,-0.3125 1.39062,-0.3125 1.0625,0 1.875,0.54687 0.8125,0.54688 1.21875,1.54688 0.42188,0.98437 0.42188,2.17187 0,1.28125 -0.46875,2.29688 -0.45313,1.01562 -1.32813,1.5625 -0.85937,0.54687 -1.82812,0.54687 -0.70313,0 -1.26563,-0.29687 -0.
 54687,-0.29688 -0.90625,-0.75 l 0,4.04687 -1.40625,0 z m 1.26563,-7.29687 q 0,1.60937 0.64062,2.375 0.65625,0.76562 1.57813,0.76562 0.9375,0 1.60937,-0.79687 0.67188,-0.79688 0.67188,-2.45313 0,-1.59375 -0.65625,-2.375 -0.65625,-0.79687 -1.5625,-0.79687 -0.89063,0 -1.59375,0.84375 -0.6875,0.84375 -0.6875,2.4375 z m 7.10156,-0.0469 q 0,-2.29687 1.28125,-3.40625 1.07813,-0.92187 2.60938,-0.92187 1.71875,0 2.79687,1.125 1.09375,1.10937 1.09375,3.09375 0,1.59375 -0.48437,2.51562 -0.48438,0.92188 -1.40625,1.4375 -0.90625,0.5 -2,0.5 -1.73438,0 -2.8125,-1.10937 -1.07813,-1.125 -1.07813,-3.23438 z m 1.45313,0 q 0,1.59375 0.6875,2.39063 0.70312,0.79687 1.75,0.79687 1.04687,0 1.73437,-0.79687 0.70313,-0.79688 0.70313,-2.4375 0,-1.53125 -0.70313,-2.32813 -0.6875,-0.79687 -1.73437,-0.79687 -1.04688,0 -1.75,0.79687 -0.6875,0.78125 -0.6875,2.375 z m 7.96093,4.15625 0,-8.29687 1.26563,0 0,1.25 q 0.48437,-0.875 0.89062,-1.15625 0.40625,-0.28125 0.90625,-0.28125 0.70313,0 1.4375,0.45312 l -0.48437,1
 .29688 q -0.51563,-0.29688 -1.03125,-0.29688 -0.45313,0 -0.82813,0.28125 -0.35937,0.26563 -0.51562,0.76563 -0.23438,0.75 -0.23438,1.64062 l 0,4.34375 -1.40625,0 z m 8.40625,-1.26562 0.20313,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76563,0 -1.1875,-0.23438 -0.42188,-0.25 -0.59375,-0.64062 -0.17188,-0.40625 -0.17188,-1.67188 l 0,-4.76562 -1.03125,0 0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.60937 0.0625,0.78125 0.0781,0.17187 0.25,0.28125 0.17188,0.0937 0.48438,0.0937 0.23437,0 0.60937,-0.0625 z"
+       id="path101"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="M 613.6657,134.74803 959.61846,249.51968"
+       id="path103"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="M 613.6657,134.74803 959.61846,249.51968"
+       id="path105"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 96.80315,249.52692 125.60629,0 0,76.56692 -125.60629,0 z"
+       id="path107"
+       inkscape:connector-curvature="0"
+       style="fill:#cccccc;fill-rule:nonzero" />
+    <path
+       d="m 96.80315,249.52692 125.60629,0 0,76.56692 -125.60629,0 z"
+       id="path109"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+    <path
+       d="m 123.3016,296.85788 0,-11.48437 1.28125,0 0,1.07812 q 0.45313,-0.64062 1.01563,-0.95312 0.57812,-0.3125 1.39062,-0.3125 1.0625,0 1.875,0.54687 0.8125,0.54688 1.21875,1.54688 0.42188,0.98437 0.42188,2.17187 0,1.28125 -0.46875,2.29688 -0.45313,1.01562 -1.32813,1.5625 -0.85937,0.54687 -1.82812,0.54687 -0.70313,0 -1.26563,-0.29687 -0.54687,-0.29688 -0.90625,-0.75 l 0,4.04687 -1.40625,0 z m 1.26563,-7.29687 q 0,1.60937 0.64062,2.375 0.65625,0.76562 1.57813,0.76562 0.9375,0 1.60937,-0.79687 0.67188,-0.79688 0.67188,-2.45313 0,-1.59375 -0.65625,-2.375 -0.65625,-0.79687 -1.5625,-0.79687 -0.89063,0 -1.59375,0.84375 -0.6875,0.84375 -0.6875,2.4375 z m 7.36719,4.79687 1.375,0.20313 q 0.0781,0.64062 0.46875,0.92187 0.53125,0.39063 1.4375,0.39063 0.96875,0 1.5,-0.39063 0.53125,-0.39062 0.71875,-1.09375 0.10937,-0.42187 0.10937,-1.8125 -0.92187,1.09375 -2.29687,1.09375 -1.71875,0 -2.65625,-1.23437 -0.9375,-1.23438 -0.9375,-2.96875 0,-1.1875 0.42187,-2.1875 0.4375,-1 1.25,-1.54688 0.8281
 3,-0.54687 1.92188,-0.54687 1.46875,0 2.42187,1.1875 l 0,-1 1.29688,0 0,7.17187 q 0,1.9375 -0.39063,2.75 -0.39062,0.8125 -1.25,1.28125 -0.85937,0.46875 -2.10937,0.46875 -1.48438,0 -2.40625,-0.67187 -0.90625,-0.67188 -0.875,-2.01563 z m 1.17187,-4.98437 q 0,1.625 0.64063,2.375 0.65625,0.75 1.625,0.75 0.96875,0 1.625,-0.73438 0.65625,-0.75 0.65625,-2.34375 0,-1.53125 -0.67188,-2.29687 -0.67187,-0.78125 -1.625,-0.78125 -0.9375,0 -1.59375,0.76562 -0.65625,0.76563 -0.65625,2.26563 z m 6.67969,7.48437 0,-1.01562 9.32812,0 0,1.01562 -9.32812,0 z m 15.58594,-3.1875 0,-1.04687 q -0.78125,1.23437 -2.3125,1.23437 -1,0 -1.82813,-0.54687 -0.82812,-0.54688 -1.29687,-1.53125 -0.45313,-0.98438 -0.45313,-2.25 0,-1.25 0.40625,-2.25 0.42188,-1.01563 1.25,-1.54688 0.82813,-0.54687 1.85938,-0.54687 0.75,0 1.32812,0.3125 0.59375,0.3125 0.95313,0.82812 l 0,-4.10937 1.40625,0 0,11.45312 -1.3125,0 z m -4.4375,-4.14062 q 0,1.59375 0.67187,2.39062 0.67188,0.78125 1.57813,0.78125 0.92187,0 1.5625,-0.75 0.65625
 ,-0.76562 0.65625,-2.3125 0,-1.70312 -0.65625,-2.5 -0.65625,-0.79687 -1.625,-0.79687 -0.9375,0 -1.5625,0.76562 -0.625,0.76563 -0.625,2.42188 z m 13.63281,1.46875 1.45312,0.17187 q -0.34375,1.28125 -1.28125,1.98438 -0.92187,0.70312 -2.35937,0.70312 -1.82813,0 -2.89063,-1.125 -1.0625,-1.125 -1.0625,-3.14062 0,-2.09375 1.07813,-3.25 1.07812,-1.15625 2.79687,-1.15625 1.65625,0 2.70313,1.14062 1.0625,1.125 1.0625,3.17188 0,0.125 0,0.375 l -6.1875,0 q 0.0781,1.375 0.76562,2.10937 0.70313,0.71875 1.73438,0.71875 0.78125,0 1.32812,-0.40625 0.54688,-0.40625 0.85938,-1.29687 z m -4.60938,-2.28125 4.625,0 q -0.0937,-1.04688 -0.53125,-1.5625 -0.67187,-0.8125 -1.73437,-0.8125 -0.96875,0 -1.64063,0.65625 -0.65625,0.64062 -0.71875,1.71875 z m 8.16407,4.95312 0,-7.20312 -1.23438,0 0,-1.09375 1.23438,0 0,-0.89063 q 0,-0.82812 0.15625,-1.23437 0.20312,-0.54688 0.70312,-0.89063 0.51563,-0.34375 1.4375,-0.34375 0.59375,0 1.3125,0.14063 l -0.20312,1.23437 q -0.4375,-0.0781 -0.82813,-0.0781 -0.64062,0 -0
 .90625,0.28125 -0.26562,0.26562 -0.26562,1.01562 l 0,0.76563 1.60937,0 0,1.09375 -1.60937,0 0,7.20312 -1.40625,0 z m 9.52343,-1.03125 q -0.78125,0.67188 -1.5,0.95313 -0.71875,0.26562 -1.54687,0.26562 -1.375,0 -2.10938,-0.67187 -0.73437,-0.67188 -0.73437,-1.70313 0,-0.60937 0.28125,-1.10937 0.28125,-0.51563 0.71875,-0.8125 0.45312,-0.3125 1.01562,-0.46875 0.42188,-0.10938 1.25,-0.20313 1.70313,-0.20312 2.51563,-0.48437 0,-0.29688 0,-0.375 0,-0.85938 -0.39063,-1.20313 -0.54687,-0.48437 -1.60937,-0.48437 -0.98438,0 -1.46875,0.35937 -0.46875,0.34375 -0.6875,1.21875 l -1.375,-0.1875 q 0.1875,-0.875 0.60937,-1.42187 0.4375,-0.54688 1.25,-0.82813 0.8125,-0.29687 1.875,-0.29687 1.0625,0 1.71875,0.25 0.67188,0.25 0.98438,0.625 0.3125,0.375 0.4375,0.95312 0.0781,0.35938 0.0781,1.29688 l 0,1.875 q 0,1.96875 0.0781,2.48437 0.0937,0.51563 0.35937,1 l -1.46875,0 q -0.21875,-0.4375 -0.28125,-1.03125 z m -0.10937,-3.14062 q -0.76563,0.3125 -2.29688,0.53125 -0.875,0.125 -1.23437,0.28125 -0.35938,0.1
 5625 -0.5625,0.46875 -0.1875,0.29687 -0.1875,0.65625 0,0.5625 0.42187,0.9375 0.4375,0.375 1.25,0.375 0.8125,0 1.4375,-0.34375 0.64063,-0.35938 0.9375,-0.98438 0.23438,-0.46875 0.23438,-1.40625 l 0,-0.51562 z m 9.03906,4.17187 0,-1.21875 q -0.96875,1.40625 -2.64062,1.40625 -0.73438,0 -1.375,-0.28125 -0.625,-0.28125 -0.9375,-0.70312 -0.3125,-0.4375 -0.4375,-1.04688 -0.0781,-0.42187 -0.0781,-1.3125 l 0,-5.14062 1.40625,0 0,4.59375 q 0,1.10937 0.0781,1.48437 0.14062,0.5625 0.5625,0.875 0.4375,0.3125 1.0625,0.3125 0.64062,0 1.1875,-0.3125 0.5625,-0.32812 0.78125,-0.89062 0.23437,-0.5625 0.23437,-1.625 l 0,-4.4375 1.40625,0 0,8.29687 -1.25,0 z m 3.42969,0 0,-11.45312 1.40625,0 0,11.45312 -1.40625,0 z m 6.64844,-1.26562 0.20312,1.25 q -0.59375,0.125 -1.0625,0.125 -0.76562,0 -1.1875,-0.23438 -0.42187,-0.25 -0.59375,-0.64062 -0.17187,-0.40625 -0.17187,-1.67188 l 0,-4.76562 -1.03125,0 0,-1.09375 1.03125,0 0,-2.0625 1.40625,-0.84375 0,2.90625 1.40625,0 0,1.09375 -1.40625,0 0,4.84375 q 0,0.6093
 7 0.0625,0.78125 0.0781,0.17187 0.25,0.28125 0.17187,0.0937 0.48437,0.0937 0.23438,0 0.60938,-0.0625 z"
+       id="path111"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-rule:nonzero" />
+    <path
+       d="M 613.6657,134.74803 159.61846,249.51968"
+       id="path113"
+       inkscape:connector-curvature="0"
+       style="fill:#000000;fill-opacity:0;fill-rule:nonzero" />
+    <path
+       d="M 613.6657,134.74803 159.61846,249.51968"
+       id="path115"
+       inkscape:connector-curvature="0"
+       style="fill-rule:nonzero;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round" />
+  </g>
+</svg>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/overview/ElasticSegments.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/overview/ElasticSegments.html.md.erb b/markdown/overview/ElasticSegments.html.md.erb
new file mode 100755
index 0000000..383eab5
--- /dev/null
+++ b/markdown/overview/ElasticSegments.html.md.erb
@@ -0,0 +1,31 @@
+---
+title: Elastic Query Execution Runtime
+---
+
+HAWQ uses dynamically allocated virtual segments to provide resources for query execution.
+
+In HAWQ 1.x, the number of segments \(the compute resource carriers\) used to run a query was fixed, regardless of whether the query was a big query requiring many resources or a small query requiring few resources. This architecture is simple, but it uses resources inefficiently.
+
+To address this issue, HAWQ now uses an elastic query execution runtime based on virtual segments. HAWQ allocates virtual segments on demand, based on the cost of each query: for big queries, HAWQ starts a large number of virtual segments, while for small queries it starts fewer.
+
+## Storage
+
+In HAWQ, the number of virtual segments invoked varies with the cost of the query. To simplify table data management, all data for one relation is saved under one HDFS folder.
+
+For both HAWQ table storage formats, AO \(Append-Only\) and Parquet, the data files are splittable, so HAWQ can assign multiple virtual segments to consume one data file concurrently and thereby increase the parallelism of a query.
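+
+The storage format and compression method are chosen per table through storage options at creation time. A minimal sketch is shown below; the table and column names are hypothetical, and the exact set of supported `WITH` options is described in the CREATE TABLE reference for your HAWQ release.
+
+```sql
+-- Hypothetical append-only, Parquet-formatted table compressed with snappy.
+-- Its data files are stored under a single HDFS folder for the relation, and
+-- each splittable file can be consumed by several virtual segments in parallel.
+CREATE TABLE page_views (
+    view_time timestamp,
+    user_id   int,
+    url       text
+)
+WITH (appendonly=true, orientation=parquet, compresstype=snappy)
+DISTRIBUTED RANDOMLY;
+```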
+
+## Physical Segments and Virtual Segments
+
+In HAWQ, only one physical segment needs to be installed on each host, and multiple virtual segments can be started within it to run queries. To run one query, HAWQ allocates virtual segments on demand, distributed across different hosts. Virtual segments are carriers \(containers\) for resources such as memory and CPU; queries are executed by query executors running in virtual segments.
+
+**Note:** In this documentation, when we refer to a segment by itself, we mean a *physical segment*.
+
+## Virtual Segment Allocation Policy
+
+The number of virtual segments allocated is governed by the virtual segment allocation policy. The following factors determine the number of virtual segments that are used for a query:
+
+-   Resources available at the query running time
+-   The cost of the query
+-   The distribution of the table; that is, whether the table is randomly distributed or hash distributed
+-   Whether the query involves UDFs or external tables
+-   Specific server configuration parameters, such as `default_hash_table_bucket_number` for hash table queries and `hawq_rm_nvseg_perquery_limit`, which caps the number of virtual segments a single query can use \(see the example below\)
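+
+A minimal sketch of inspecting these parameters from a SQL session follows. `SHOW` works for any server configuration parameter; whether `hawq_rm_nvseg_perquery_limit` can also be changed with `SET` at the session level depends on your HAWQ release, so treat the `SET` line as an assumption to verify against the server configuration parameter reference.
+
+```sql
+-- Inspect the current values that influence virtual segment allocation.
+SHOW default_hash_table_bucket_number;
+SHOW hawq_rm_nvseg_perquery_limit;
+
+-- Hypothetical session-level override; confirm the parameter's set level
+-- (session vs. cluster) for your release before relying on it.
+SET hawq_rm_nvseg_perquery_limit = 256;
+```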

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/overview/HAWQArchitecture.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/overview/HAWQArchitecture.html.md.erb b/markdown/overview/HAWQArchitecture.html.md.erb
new file mode 100755
index 0000000..d42d241
--- /dev/null
+++ b/markdown/overview/HAWQArchitecture.html.md.erb
@@ -0,0 +1,69 @@
+---
+title: HAWQ Architecture
+---
+
+This topic presents HAWQ architecture and its main components.
+
+In a typical HAWQ deployment, each slave node has one physical HAWQ segment, an HDFS DataNode, and a YARN NodeManager installed. The masters for HAWQ, HDFS, and YARN are hosted on separate nodes.
+
+The following diagram provides a high-level architectural view of a typical HAWQ deployment.
+
+![](../mdimages/hawq_high_level_architecture.png)
+
+HAWQ is tightly integrated with YARN, the Hadoop resource management framework, for query resource management. HAWQ caches containers from YARN in a resource pool and then manages those resources locally by leveraging HAWQ's own finer-grained resource management for users and groups. To execute a query, HAWQ allocates a set of virtual segments according to the cost of the query, resource queue definitions, data locality, and the current resource usage in the system. The query is then dispatched to the corresponding physical hosts, which can be a subset of nodes or the whole cluster. The HAWQ resource enforcer on each node monitors and controls, in real time, the resources used by the query to avoid resource usage violations.
+
+The following diagram provides another view of the software components that constitute HAWQ.
+
+![](../mdimages/hawq_architecture_components.png)
+
+## <a id="hawqmaster"></a>HAWQ Master 
+
+The HAWQ *master* is the entry point to the system. It is the database process that accepts client connections and processes the SQL commands issued. The HAWQ master parses queries, optimizes queries, dispatches queries to segments and coordinates the query execution.
+
+End-users interact with HAWQ through the master and can connect to the database using client programs such as psql or application programming interfaces \(APIs\) such as JDBC or ODBC.
+
+The master is where the *global system catalog* resides. The global system catalog is the set of system tables that contain metadata about the HAWQ system itself. The master does not contain any user data; data resides only on *HDFS*. The master authenticates client connections, processes incoming SQL commands, distributes workload among segments, coordinates the results returned by each segment, and presents the final results to the client program.
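+
+Because the global system catalog resides on the master, metadata-only queries are answered without reading user data from HDFS. The statements below are a small sketch using standard PostgreSQL-style catalog tables that HAWQ inherits; they assume nothing beyond a client session connected to the master.
+
+```sql
+-- List the databases defined in the cluster (served from the global system catalog).
+SELECT datname FROM pg_database;
+
+-- Inspect relation metadata; no table data on HDFS is read to answer this.
+SELECT relname, relkind FROM pg_class LIMIT 5;
+```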
+
+## <a id="hawqsegment"></a>HAWQ Segment 
+
+In HAWQ, the *segments* are the units that process data simultaneously.
+
+There is only one *physical segment* on each host. Each segment can start many Query Executors \(QEs\) for each query slice. This makes a single segment act like multiple virtual segments, which enables HAWQ to better utilize all available resources.
+
+**Note:** In this documentation, when we refer to a segment by itself, we mean a *physical segment*.
+
+A *virtual segment* behaves like a container for QEs. Each virtual segment has one QE for each slice of a query. The number of virtual segments used determines the degree of parallelism \(DOP\) of a query.
+
+A segment differs from a master because it:
+
+-   Is stateless.
+-   Does not store the metadata for each database and table.
+-   Does not store data on the local file system.
+
+The master dispatches the SQL request to the segments, along with the related metadata needed to process it. The metadata contains the HDFS URL of the required table; the segment accesses the corresponding data using this URL.
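+
+To see how a query is divided into slices that virtual segments execute, you can inspect its plan with `EXPLAIN`. The sketch below uses hypothetical table names, and the exact plan wording varies by HAWQ version.
+
+```sql
+-- The plan built on the master is divided into slices; each slice is executed
+-- by query executors inside virtual segments on the physical segments.
+EXPLAIN
+SELECT s.region, count(*)
+FROM   sales s
+JOIN   customers c ON s.cust_id = c.cust_id
+GROUP  BY s.region;
+```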
+
+## <a id="hawqinterconnect"></a>HAWQ Interconnect 
+
+The *interconnect* is the networking layer of HAWQ. When a user connects to a database and issues a query, processes are created on each segment to handle the query. The *interconnect* refers to the inter-process communication between the segments, as well as the network infrastructure on which this communication relies. The interconnect uses standard Ethernet switching fabric.
+
+By default, the interconnect uses UDP \(User Datagram Protocol\) to send messages over the network. The HAWQ software performs additional packet verification beyond what UDP provides. This means the reliability is equivalent to Transmission Control Protocol \(TCP\), while the performance and scalability exceed that of TCP. If the interconnect used TCP, HAWQ would have a scalability limit of 1000 segment instances. With UDP as the default interconnect protocol, this limit does not apply.
+
+## <a id="topic_jjf_11m_g5"></a>HAWQ Resource Manager 
+
+The HAWQ resource manager obtains resources from YARN and responds to resource requests. Resources are buffered by the HAWQ resource manager to support low-latency queries. The HAWQ resource manager can also run in standalone mode; in such deployments, HAWQ manages resources by itself, without YARN.
+
+See [How HAWQ Manages Resources](../resourcemgmt/HAWQResourceManagement.html) for more details on HAWQ resource management.
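+
+A quick way to confirm which mode a running cluster uses is sketched below, assuming the `hawq_global_rm_type` server configuration parameter is present in your release; consult the server configuration parameter reference for the authoritative name and values.
+
+```sql
+-- Reports the resource manager mode, typically 'none' (standalone) or 'yarn'.
+SHOW hawq_global_rm_type;
+```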
+
+## <a id="topic_mrl_psq_f5"></a>HAWQ Catalog Service 
+
+The HAWQ catalog service stores all metadata, such as UDF/UDT information, relation information, security information and data file locations.
+
+## <a id="topic_dcs_rjm_g5"></a>HAWQ Fault Tolerance Service 
+
+The HAWQ fault tolerance service \(FTS\) is responsible for detecting segment failures and accepting heartbeats from segments.
+
+See [Understanding the Fault Tolerance Service](../admin/FaultTolerance.html) for more information on this service.
+
+## <a id="topic_jtc_nkm_g5"></a>HAWQ Dispatcher 
+
+The HAWQ dispatcher dispatches query plans to a selected subset of segments and coordinates the execution of the query. The dispatcher and the HAWQ resource manager are the main components responsible for the dynamic scheduling of queries and the resources required to execute them.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/overview/HAWQOverview.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/overview/HAWQOverview.html.md.erb b/markdown/overview/HAWQOverview.html.md.erb
new file mode 100755
index 0000000..c41f3d9
--- /dev/null
+++ b/markdown/overview/HAWQOverview.html.md.erb
@@ -0,0 +1,43 @@
+---
+title: What is HAWQ?
+---
+
+HAWQ is a Hadoop native SQL query engine that combines the key technological advantages of an MPP database with the scalability and convenience of Hadoop. HAWQ reads data from and writes data to HDFS natively.
+
+HAWQ delivers industry-leading performance and linear scalability. It provides users with the tools to confidently and successfully interact with petabyte-range data sets. HAWQ provides a complete, standards-compliant SQL interface. More specifically, HAWQ has the following features:
+
+-   On-premise or cloud deployment
+-   Robust ANSI SQL compliance: SQL-92, SQL-99, SQL-2003, OLAP extension
+-   Extremely high performance: many times faster than other Hadoop SQL engines
+-   World-class parallel optimizer
+-   Full transaction capability and consistency guarantee: ACID
+-   Dynamic data flow engine through a high-speed UDP-based interconnect
+-   Elastic execution engine based on on-demand virtual segments and data locality
+-   Support for multi-level partitioning and List/Range partitioned tables \(see the example following this list\)
+-   Multiple compression method support: snappy, gzip
+-   Multi-language user-defined function support: Python, Perl, Java, C/C++, R
+-   Advanced machine learning and data mining functionality through MADlib
+-   Dynamic node expansion: in seconds
+-   Advanced three-level resource management: integration with YARN and hierarchical resource queues
+-   Easy access to all HDFS data and external system data \(for example, HBase\)
+-   Hadoop native: from storage \(HDFS\) and resource management \(YARN\) to deployment \(Ambari\)
+-   Authentication and granular authorization: Kerberos, SSL, and role-based access
+-   Advanced C/C++ access libraries for HDFS and YARN: libhdfs3 and libYARN
+-   Support for most third-party tools, such as Tableau and SAS
+-   Standard connectivity: JDBC/ODBC
+
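+The statement below sketches how the distribution, partitioning, and storage options listed above fit together. The table and column names are hypothetical; the full set of supported options is described in the CREATE TABLE reference for your HAWQ release.
+
+```sql
+-- Hypothetical fact table: hash distributed on user_id and range partitioned by month.
+CREATE TABLE web_events (
+    event_id   bigint,
+    event_date date,
+    user_id    int
+)
+WITH (appendonly=true)
+DISTRIBUTED BY (user_id)
+PARTITION BY RANGE (event_date)
+(
+    START (date '2016-01-01') INCLUSIVE
+    END   (date '2017-01-01') EXCLUSIVE
+    EVERY (INTERVAL '1 month')
+);
+```
+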
+HAWQ breaks complex queries into small tasks and distributes them to MPP query processing units for execution.
+
+HAWQ's basic unit of parallelism is the segment instance. Multiple segment instances on commodity servers work together to form a single parallel query processing system. A query submitted to HAWQ is optimized, broken into smaller components, and dispatched to segments that work together to deliver a single result set. All relational operations, such as table scans, joins, aggregations, and sorts, execute in parallel across the segments. Data from upstream components in the dynamic pipeline is transmitted to downstream components through the scalable User Datagram Protocol \(UDP\) interconnect.
+
+Because it is based on Hadoop's distributed storage, HAWQ has no single point of failure and supports fully automatic online recovery. System states are continuously monitored; if a segment fails, it is automatically removed from the cluster. During this process, the system continues serving customer queries, and the segments can be added back to the system when necessary.
+
+These topics provide more information about HAWQ and its main components:
+
+* <a class="subnav" href="./HAWQArchitecture.html">HAWQ Architecture</a>
+* <a class="subnav" href="./TableDistributionStorage.html">Table Distribution and Storage</a>
+* <a class="subnav" href="./ElasticSegments.html">Elastic Segments</a>
+* <a class="subnav" href="./ResourceManagement.html">Resource Management</a>
+* <a class="subnav" href="./HDFSCatalogCache.html">HDFS Catalog Cache</a>
+* <a class="subnav" href="./ManagementTools.html">Management Tools</a>
+* <a class="subnav" href="./RedundancyFailover.html">High Availability, Redundancy, and Fault Tolerance</a>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/overview/HDFSCatalogCache.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/overview/HDFSCatalogCache.html.md.erb b/markdown/overview/HDFSCatalogCache.html.md.erb
new file mode 100755
index 0000000..8803dc4
--- /dev/null
+++ b/markdown/overview/HDFSCatalogCache.html.md.erb
@@ -0,0 +1,7 @@
+---
+title: HDFS Catalog Cache
+---
+
+The HDFS catalog cache is a caching service used by the HAWQ master to determine how table data is distributed on HDFS.
+
+HDFS is slow at RPC handling, especially when the number of concurrent requests is high. To decide which segments handle which part of the data, HAWQ needs data location information from the HDFS NameNodes. The HDFS catalog cache caches this data location information and accelerates HDFS RPC handling.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/overview/ManagementTools.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/overview/ManagementTools.html.md.erb b/markdown/overview/ManagementTools.html.md.erb
new file mode 100755
index 0000000..0c7439d
--- /dev/null
+++ b/markdown/overview/ManagementTools.html.md.erb
@@ -0,0 +1,9 @@
+---
+title: HAWQ Management Tools
+---
+
+HAWQ management tools are consolidated into one `hawq` command.
+
+The `hawq` command can init, start, and stop each segment separately, and it supports dynamic expansion of the cluster.
+
+See [HAWQ Management Tools Reference](../reference/cli/management_tools.html) for a list of all tools available in HAWQ.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/overview/RedundancyFailover.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/overview/RedundancyFailover.html.md.erb b/markdown/overview/RedundancyFailover.html.md.erb
new file mode 100755
index 0000000..90eec63
--- /dev/null
+++ b/markdown/overview/RedundancyFailover.html.md.erb
@@ -0,0 +1,29 @@
+---
+title: High Availability, Redundancy and Fault Tolerance
+---
+
+HAWQ ensures high availability for its clusters through system redundancy. HAWQ deployments utilize platform hardware redundancy, such as RAID for the master catalog, JBOD for segments, and network redundancy for the interconnect layer. On the software level, HAWQ provides redundancy via master mirroring and dual cluster maintenance. In addition, HAWQ supports high availability NameNode configuration within HDFS.
+
+To maintain cluster health, HAWQ uses a fault tolerance service based on heartbeats and on-demand probe protocols. It can identify newly added nodes dynamically and remove nodes from the cluster when they become unusable.
+
+## <a id="abouthighavailability"></a>About High Availability 
+
+HAWQ employs several mechanisms for ensuring high availability. The foremost mechanisms are specific to HAWQ and include the following:
+
+* Master mirroring. Clusters have a standby master in the event of failure of the primary master.
+* Dual clusters. Administrators can create a secondary cluster and synchronize its data with the primary cluster through either dual ETL or backup and restore mechanisms.
+
+In addition to high availability managed on the HAWQ level, you can enable high availability in HDFS for HAWQ by implementing the high availability feature for NameNodes. See [HAWQ Filespaces and High Availability Enabled HDFS](../admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html).
+
+
+## <a id="aboutsegmentfailover"></a>About Segment Fault Tolerance 
+
+In HAWQ, the segments are stateless. This ensures faster recovery and better availability.
+
+When a segment fails, the segment is removed from the resource pool. Queries are no longer dispatched to the segment. When the segment is operational again, the Fault Tolerance Service verifies its state and adds the segment back to the resource pool.
+
+## <a id="aboutinterconnectredundancy"></a>About Interconnect Redundancy 
+
+The *interconnect* refers to the inter-process communication between the segments and the network infrastructure on which this communication relies. You can achieve a highly available interconnect by deploying dual Gigabit Ethernet switches on your network and deploying redundant Gigabit connections to the HAWQ host \(master and segment\) servers.
+
+To use multiple NICs in HAWQ, NIC bonding is required.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/overview/ResourceManagement.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/overview/ResourceManagement.html.md.erb b/markdown/overview/ResourceManagement.html.md.erb
new file mode 100755
index 0000000..8f7e2fd
--- /dev/null
+++ b/markdown/overview/ResourceManagement.html.md.erb
@@ -0,0 +1,14 @@
+---
+title: Resource Management
+---
+
+HAWQ provides several approaches to resource management, including user-configurable options and integration with YARN's resource manager.
+
+HAWQ has the ability to manage resources by using the following mechanisms:
+
+-   Global resource management. You can integrate HAWQ with the YARN resource manager to request or return resources as needed. If you do not integrate HAWQ with YARN, HAWQ exclusively consumes cluster resources and manages its own resources. If you integrate HAWQ with YARN, then HAWQ automatically fetches resources from YARN and manages those obtained resources through its internally defined resource queues. Resources are returned to YARN automatically when they are no longer in use.
+-   User defined hierarchical resource queues. HAWQ administrators or superusers design and define the resource queues used to organize the distribution of resources to queries.
+-   Dynamic resource allocation at query runtime. HAWQ dynamically allocates resources based on resource queue definitions. HAWQ automatically distributes resources based on running \(or queued\) queries and resource queue capacities.
+-   Resource limitations on virtual segments and queries. You can configure HAWQ to enforce limits on CPU and memory usage both for virtual segments and the resource queues used by queries.
+
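+Resource queues are defined and assigned with SQL. The following is a minimal, illustrative sketch: the queue name `report_queue` and role `report_user` are hypothetical, and the attribute values are examples only; see [CREATE RESOURCE QUEUE](../reference/sql/CREATE-RESOURCE-QUEUE.html) for the complete syntax.
+
+```sql
+-- Sketch: create a child queue of pg_root that allows 5 concurrent
+-- statements and is limited to 20% of cluster memory and cores
+-- (illustrative values only).
+CREATE RESOURCE QUEUE report_queue WITH (
+    PARENT='pg_root',
+    ACTIVE_STATEMENTS=5,
+    MEMORY_LIMIT_CLUSTER=20%,
+    CORE_LIMIT_CLUSTER=20%);
+
+-- Assign a role to the queue, then inspect runtime queue status.
+ALTER ROLE report_user RESOURCE QUEUE report_queue;
+SELECT * FROM pg_resqueue_status;
+```
+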
+For more details on resource management in HAWQ and how it works, see [Managing Resources](../resourcemgmt/HAWQResourceManagement.html).


[26/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/HDFSConfigurationParameterReference.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/HDFSConfigurationParameterReference.html.md.erb b/markdown/reference/HDFSConfigurationParameterReference.html.md.erb
new file mode 100644
index 0000000..aef4ed2
--- /dev/null
+++ b/markdown/reference/HDFSConfigurationParameterReference.html.md.erb
@@ -0,0 +1,257 @@
+---
+title: HDFS Configuration Reference
+---
+
+This reference page describes HDFS configuration values that are configured for HAWQ either within `hdfs-site.xml`, `core-site.xml`, or `hdfs-client.xml`.
+
+## <a id="topic_ixj_xw1_1w"></a>HDFS Site Configuration (hdfs-site.xml and core-site.xml)
+
+This topic provides a reference of the HDFS site configuration values recommended for HAWQ installations. These parameters are located in either `hdfs-site.xml` or `core-site.xml` of your HDFS deployment.
+
+This table describes the configuration parameters and values that are recommended for HAWQ installations. Only HDFS parameters that need to be modified or customized for HAWQ are listed.
+
+| Parameter                                 | Description                                                                                                                                                                                                        | Recommended Value for HAWQ Installs                                   | Comments                                                                                                                                                               |
+|-------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `dfs.allow.truncate`                      | Allows truncate.                                                                                                                                                                                                   | true                                                                  | HAWQ requires that you enable `dfs.allow.truncate`. The HAWQ service will fail to start if `dfs.allow.truncate` is not set to `true`.                                  |
+| `dfs.block.access.token.enable`           | If `true`, access tokens are used as capabilities for accessing DataNodes. If `false`, no access tokens are checked on accessing DataNodes.                                                                        | *false* for an unsecured HDFS cluster, or *true* for a secure cluster | �                                                                                                                                                                      |
+| `dfs.block.local-path-access.user`        | Comma separated list of the users allowed to open block files on legacy short-circuit local read.                                                                                                                  | gpadmin                                                               | �                                                                                                                                                                      |
+| `dfs.client.read.shortcircuit`            | This configuration parameter turns on short-circuit local reads.                                                                                                                                                   | true                                                                  | In Ambari, this parameter corresponds to **HDFS Short-circuit read**. The value for this parameter should be the same in `hdfs-site.xml` and HAWQ's `hdfs-client.xml`. |
+| `dfs.client.socket-timeout`               | The amount of time before a client connection times out when establishing a connection or reading. The value is expressed in milliseconds.                                                                         | 300000000                                                             | �                                                                                                                                                                      |
+| `dfs.client.use.legacy.blockreader.local` | Setting this value to false specifies that the new version of the short-circuit reader is used. Setting this value to true means that the legacy short-circuit reader would be used.                               | false                                                                 | �                                                                                                                                                                      |
+| `dfs.datanode.data.dir.perm`              | Permissions for the directories on the local filesystem where the DFS DataNode stores its blocks. The permissions can either be octal or symbolic.                                                              | 750                                                                   | In Ambari, this parameter corresponds to **DataNode directories permission**                                                                                           |
+| `dfs.datanode.handler.count`              | The number of server threads for the DataNode.                                                                                                                                                                     | 60                                                                    | �                                                                                                                                                                      |
+| `dfs.datanode.max.transfer.threads`       | Specifies the maximum number of threads to use for transferring data in and out of the DataNode.                                                                                                                   | 40960                                                                 | In Ambari, this parameter corresponds to **DataNode max data transfer threads**                                                                                        |
+| `dfs.datanode.socket.write.timeout`       | The amount of time before a write operation times out, expressed in milliseconds.                                                                                                                                  | 7200000                                                               | �                                                                                                                                                                      |
+| `dfs.domain.socket.path`                  | (Optional.) The path to a UNIX domain socket to use for communication between the DataNode and local HDFS clients. If the string "\_PORT" is present in this path, it is replaced by the TCP port of the DataNode. | �                                                                     | If set, the value for this parameter should be the same in `hdfs-site.xml` and HAWQ's `hdfs-client.xml`.                                                               |
+| `dfs.namenode.accesstime.precision`       | The access time for HDFS file is precise up to this value. Setting a value of 0 disables access times for HDFS.                                                                                                    | 0                                                                     | In Ambari, this parameter corresponds to **Access time precision**                                                                                                     |
+| `dfs.namenode.handler.count`              | The number of server threads for the NameNode.                                                                                                                                                                     | 600                                                                   | �                                                                                                                                                                      |
+| `dfs.support.append`                      | Whether HDFS is allowed to append to files.                                                                                                                                                                        | true                                                                  | �                                                                                                                                                                      |
+| `ipc.client.connection.maxidletime`       | The maximum time in milliseconds after which a client will bring down the connection to the server.                                                                                                                | 3600000                                                               | In core-site.xml                                                                                                                                                       |
+| `ipc.client.connect.timeout`              | Indicates the number of milliseconds a client will wait for the socket to establish a server connection.                                                                                                           | 300000                                                                | In core-site.xml                                                                                                                                                       |
+| `ipc.server.listen.queue.size`            | Indicates the length of the listen queue for servers accepting client connections.                                                                                                                                 | 3300                                                                  | In core-site.xml                                                                                                                                                       |
+
+## <a id="topic_l1c_zw1_1w"></a>HDFS Client Configuration (hdfs-client.xml)
+
+This topic provides a reference of the HAWQ configuration values located in `$GPHOME/etc/hdfs-client.xml`.
+
+This table describes the configuration parameters and their default values:
+
+<table>
+<colgroup>
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Parameter</th>
+<th>Description</th>
+<th>Default Value</th>
+<th>Comments</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><code class="ph codeph">dfs.client.failover.max.attempts</code></td>
+<td>The maximum number of times that the DFS client retries issuing a RPC call when multiple NameNodes are configured.</td>
+<td>15</td>
+<td>�</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">dfs.client.log.severity</code></td>
+<td>The minimal log severity level. Valid values include: FATAL, ERROR, INFO, DEBUG1, DEBUG2, and DEBUG3.</td>
+<td>INFO</td>
+<td>�</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">dfs.client.read.shortcircuit</code></td>
+<td>Determines whether the DataNode is bypassed when reading file blocks, if the block and client are on the same node. The default value, true, bypasses the DataNode.</td>
+<td>true</td>
+<td>The value for this parameter should be the same in <code class="ph codeph">hdfs-site.xml</code> and HAWQ's <code class="ph codeph">hdfs-client.xml</code>.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">dfs.client.use.legacy.blockreader.local</code></td>
+<td>Determines whether the legacy short-circuit reader implementation, based on HDFS-2246, is used. Set this property to true on non-Linux platforms that do not have the new implementation based on HDFS-347.</td>
+<td>false</td>
+<td>�</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">dfs.default.blocksize</code></td>
+<td>Default block size, in bytes.</td>
+<td>134217728</td>
+<td>Default is equivalent to 128 MB.�</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">dfs.default.replica</code></td>
+<td>The default number of replicas.</td>
+<td>3</td>
+<td>�</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">dfs.domain.socket.path</code></td>
+<td>(Optional.) The path to a UNIX domain socket to use for communication between the DataNode and local HDFS clients. If the string &quot;_PORT&quot; is present in this path, it is replaced by the TCP port of the DataNode.</td>
+<td>�</td>
+<td>If set, the value for this parameter should be the same in <code class="ph codeph">hdfs-site.xml</code> and HAWQ's <code class="ph codeph">hdfs-client.xml</code>.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">dfs.prefetchsize</code></td>
+<td>The number of blocks for which information is pre-fetched.</td>
+<td>10</td>
+<td>�</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">hadoop.security.authentication</code></td>
+<td>Specifies the type of RPC authentication to use. A value of <code class="ph codeph">simple</code> indicates no authentication. A value of <code class="ph codeph">kerberos</code> enables authentication by Kerberos.</td>
+<td>simple</td>
+<td>�</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">input.connect.timeout</code></td>
+<td>The timeout interval, in milliseconds, for when the input stream is setting up a connection to a DataNode.</td>
+<td>600000</td>
+<td>Default is equal to 10 minutes.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">input.localread.blockinfo.cachesize</code></td>
+<td>The size of the file block path information cache, in bytes.</td>
+<td>1000</td>
+<td>�</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">input.localread.default.buffersize</code></td>
+<td>The size of the buffer, in bytes, used to hold data from the file block and verify the checksum. This value is used only when <code class="ph codeph">dfs.client.read.shortcircuit</code> is set to true.</td>
+<td>1048576</td>
+<td>Default is equal to 1 MB. Only used when <code class="ph codeph">dfs.client.read.shortcircuit</code> is set to true.
+<p>If an older version of <code class="ph codeph">hdfs-client.xml</code> is retained during upgrade, to avoid performance degradation, set <code class="ph codeph">input.localread.default.buffersize</code> to 2097152.</p></td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">input.read.getblockinfo.retry</code></td>
+<td>The maximum number of times the client should retry getting block information from the NameNode.</td>
+<td>3</td>
+<td></td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">input.read.timeout</code></td>
+<td>The timeout interval, in milliseconds, for when the input stream is reading from a DataNode.</td>
+<td>3600000</td>
+<td>Default is equal to 1 hour.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">input.write.timeout</code></td>
+<td>The timeout interval, in milliseconds, for when the input stream is writing to a DataNode.</td>
+<td>3600000</td>
+<td>�</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">output.close.timeout</code></td>
+<td>The timeout interval for closing an output stream, in milliseconds.</td>
+<td>900000</td>
+<td>Default is equal to 15 minutes.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">output.connect.timeout</code></td>
+<td>The timeout interval, in milliseconds, for when the output stream is setting up a connection to a DataNode.</td>
+<td>600000</td>
+<td>Default is equal to 10 minutes.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">output.default.chunksize</code></td>
+<td>The chunk size of the pipeline, in bytes.</td>
+<td>512</td>
+<td>�</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">output.default.packetsize</code></td>
+<td>The packet size of the pipeline, in bytes.</td>
+<td>65536</td>
+<td>Default is equal to 64KB.�</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">output.default.write.retry</code></td>
+<td>The maximum number of times that the client should reattempt to set up a failed pipeline.</td>
+<td>10</td>
+<td>�</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">output.packetpool.size</code></td>
+<td>The maximum number of packets in a file's packet pool.</td>
+<td>1024</td>
+<td>�</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">output.read.timeout</code></td>
+<td>The timeout interval, in milliseconds, for when the output stream is reading from a DataNode.</td>
+<td>3600000</td>
+<td>Default is equal to 1 hour.�</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">output.replace-datanode-on-failure</code></td>
+<td>Determines whether the client adds a new DataNode to the pipeline if the number of nodes in the pipeline is less than the specified number of replicas.</td>
+<td>false (if # of nodes less than or equal to 4), otherwise true</td>
+<td>When you deploy a HAWQ cluster, the <code class="ph codeph">hawq init</code> utility detects the number of nodes in the cluster and updates this configuration parameter accordingly. However, when expanding an existing cluster to 4 or more nodes, you must manually set this value to true. Set to false if you remove existing nodes and fall under 4 nodes.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">output.write.timeout</code></td>
+<td>The timeout interval, in milliseconds, for when the output stream is writing to a DataNode.</td>
+<td>3600000</td>
+<td>Default is equal to 1 hour.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">rpc.client.connect.retry</code></td>
+<td>The maximum number of times to retry a connection if the RPC client fails to connect to the server.</td>
+<td>10</td>
+<td>�</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">rpc.client.connect.tcpnodelay</code></td>
+<td>Determines whether TCP_NODELAY is used when connecting to the RPC server.</td>
+<td>true</td>
+<td>�</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">rpc.client.connect.timeout</code></td>
+<td>The timeout interval for establishing the RPC client connection, in milliseconds.</td>
+<td>600000</td>
+<td>Default equals 10 minutes.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">rpc.client.max.idle</code></td>
+<td>The maximum idle time for an RPC connection, in milliseconds.</td>
+<td>10000</td>
+<td>Default equals 10 seconds.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">rpc.client.ping.interval</code></td>
+<td>The interval at which the RPC client sends a heartbeat to the server. A value of 0 disables the heartbeat.</td>
+<td>10000</td>
+<td>�</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">rpc.client.read.timeout</code></td>
+<td>The timeout interval, in milliseconds, for when the RPC client is reading from the server.</td>
+<td>3600000</td>
+<td>Default equals 1 hour.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">rpc.client.socket.linger.timeout</code></td>
+<td>The value to set for the SO_LINGER socket option when connecting to the RPC server.</td>
+<td>-1</td>
+<td>�</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">rpc.client.timeout</code></td>
+<td>The timeout interval of an RPC invocation, in milliseconds.</td>
+<td>3600000</td>
+<td>Default equals 1 hour.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">rpc.client.write.timeout</code></td>
+<td>The timeout interval, in milliseconds, for when the RPC client is writing to the server.</td>
+<td>3600000</td>
+<td>Default equals 1 hour.</td>
+</tr>
+</tbody>
+</table>
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/SQLCommandReference.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/SQLCommandReference.html.md.erb b/markdown/reference/SQLCommandReference.html.md.erb
new file mode 100644
index 0000000..7bbe625
--- /dev/null
+++ b/markdown/reference/SQLCommandReference.html.md.erb
@@ -0,0 +1,163 @@
+---
+title: SQL Commands
+---
+
+This section contains a description and the syntax of the SQL commands supported by HAWQ.
+
+-   **[ABORT](../reference/sql/ABORT.html)**
+
+-   **[ALTER AGGREGATE](../reference/sql/ALTER-AGGREGATE.html)**
+
+-   **[ALTER DATABASE](../reference/sql/ALTER-DATABASE.html)**
+
+-   **[ALTER FUNCTION](../reference/sql/ALTER-FUNCTION.html)**
+
+-   **[ALTER OPERATOR](../reference/sql/ALTER-OPERATOR.html)**
+
+-   **[ALTER OPERATOR CLASS](../reference/sql/ALTER-OPERATOR-CLASS.html)**
+
+-   **[ALTER RESOURCE QUEUE](../reference/sql/ALTER-RESOURCE-QUEUE.html)**
+
+-   **[ALTER ROLE](../reference/sql/ALTER-ROLE.html)**
+
+-   **[ALTER TABLE](../reference/sql/ALTER-TABLE.html)**
+
+-   **[ALTER TABLESPACE](../reference/sql/ALTER-TABLESPACE.html)**
+
+-   **[ALTER TYPE](../reference/sql/ALTER-TYPE.html)**
+
+-   **[ALTER USER](../reference/sql/ALTER-USER.html)**
+
+-   **[ANALYZE](../reference/sql/ANALYZE.html)**
+
+-   **[BEGIN](../reference/sql/BEGIN.html)**
+
+-   **[CHECKPOINT](../reference/sql/CHECKPOINT.html)**
+
+-   **[CLOSE](../reference/sql/CLOSE.html)**
+
+-   **[COMMIT](../reference/sql/COMMIT.html)**
+
+-   **[COPY](../reference/sql/COPY.html)**
+
+-   **[CREATE AGGREGATE](../reference/sql/CREATE-AGGREGATE.html)**
+
+-   **[CREATE DATABASE](../reference/sql/CREATE-DATABASE.html)**
+
+-   **[CREATE EXTERNAL TABLE](../reference/sql/CREATE-EXTERNAL-TABLE.html)**
+
+-   **[CREATE FUNCTION](../reference/sql/CREATE-FUNCTION.html)**
+
+-   **[CREATE GROUP](../reference/sql/CREATE-GROUP.html)**
+
+-   **[CREATE LANGUAGE](../reference/sql/CREATE-LANGUAGE.html)**
+
+-   **[CREATE OPERATOR](../reference/sql/CREATE-OPERATOR.html)**
+
+-   **[CREATE OPERATOR CLASS](../reference/sql/CREATE-OPERATOR-CLASS.html)**
+
+-   **[CREATE RESOURCE QUEUE](../reference/sql/CREATE-RESOURCE-QUEUE.html)**
+
+-   **[CREATE ROLE](../reference/sql/CREATE-ROLE.html)**
+
+-   **[CREATE SCHEMA](../reference/sql/CREATE-SCHEMA.html)**
+
+-   **[CREATE SEQUENCE](../reference/sql/CREATE-SEQUENCE.html)**
+
+-   **[CREATE TABLE](../reference/sql/CREATE-TABLE.html)**
+
+-   **[CREATE TABLE AS](../reference/sql/CREATE-TABLE-AS.html)**
+
+-   **[CREATE TABLESPACE](../reference/sql/CREATE-TABLESPACE.html)**
+
+-   **[CREATE TYPE](../reference/sql/CREATE-TYPE.html)**
+
+-   **[CREATE USER](../reference/sql/CREATE-USER.html)**
+
+-   **[CREATE VIEW](../reference/sql/CREATE-VIEW.html)**
+
+-   **[DEALLOCATE](../reference/sql/DEALLOCATE.html)**
+
+-   **[DECLARE](../reference/sql/DECLARE.html)**
+
+-   **[DROP AGGREGATE](../reference/sql/DROP-AGGREGATE.html)**
+
+-   **[DROP DATABASE](../reference/sql/DROP-DATABASE.html)**
+
+-   **[DROP EXTERNAL TABLE](../reference/sql/DROP-EXTERNAL-TABLE.html)**
+
+-   **[DROP FILESPACE](../reference/sql/DROP-FILESPACE.html)**
+
+-   **[DROP FUNCTION](../reference/sql/DROP-FUNCTION.html)**
+
+-   **[DROP GROUP](../reference/sql/DROP-GROUP.html)**
+
+-   **[DROP OPERATOR](../reference/sql/DROP-OPERATOR.html)**
+
+-   **[DROP OPERATOR CLASS](../reference/sql/DROP-OPERATOR-CLASS.html)**
+
+-   **[DROP OWNED](../reference/sql/DROP-OWNED.html)**
+
+-   **[DROP RESOURCE QUEUE](../reference/sql/DROP-RESOURCE-QUEUE.html)**
+
+-   **[DROP ROLE](../reference/sql/DROP-ROLE.html)**
+
+-   **[DROP SCHEMA](../reference/sql/DROP-SCHEMA.html)**
+
+-   **[DROP SEQUENCE](../reference/sql/DROP-SEQUENCE.html)**
+
+-   **[DROP TABLE](../reference/sql/DROP-TABLE.html)**
+
+-   **[DROP TABLESPACE](../reference/sql/DROP-TABLESPACE.html)**
+
+-   **[DROP TYPE](../reference/sql/DROP-TYPE.html)**
+
+-   **[DROP USER](../reference/sql/DROP-USER.html)**
+
+-   **[DROP VIEW](../reference/sql/DROP-VIEW.html)**
+
+-   **[END](../reference/sql/END.html)**
+
+-   **[EXECUTE](../reference/sql/EXECUTE.html)**
+
+-   **[EXPLAIN](../reference/sql/EXPLAIN.html)**
+
+-   **[FETCH](../reference/sql/FETCH.html)**
+
+-   **[GRANT](../reference/sql/GRANT.html)**
+
+-   **[INSERT](../reference/sql/INSERT.html)**
+
+-   **[PREPARE](../reference/sql/PREPARE.html)**
+
+-   **[REASSIGN OWNED](../reference/sql/REASSIGN-OWNED.html)**
+
+-   **[RELEASE SAVEPOINT](../reference/sql/RELEASE-SAVEPOINT.html)**
+
+-   **[RESET](../reference/sql/RESET.html)**
+
+-   **[REVOKE](../reference/sql/REVOKE.html)**
+
+-   **[ROLLBACK](../reference/sql/ROLLBACK.html)**
+
+-   **[ROLLBACK TO SAVEPOINT](../reference/sql/ROLLBACK-TO-SAVEPOINT.html)**
+
+-   **[SAVEPOINT](../reference/sql/SAVEPOINT.html)**
+
+-   **[SELECT](../reference/sql/SELECT.html)**
+
+-   **[SELECT INTO](../reference/sql/SELECT-INTO.html)**
+
+-   **[SET](../reference/sql/SET.html)**
+
+-   **[SET ROLE](../reference/sql/SET-ROLE.html)**
+
+-   **[SET SESSION AUTHORIZATION](../reference/sql/SET-SESSION-AUTHORIZATION.html)**
+
+-   **[SHOW](../reference/sql/SHOW.html)**
+
+-   **[TRUNCATE](../reference/sql/TRUNCATE.html)**
+
+-   **[VACUUM](../reference/sql/VACUUM.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/catalog_ref-html.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/catalog_ref-html.html.md.erb b/markdown/reference/catalog/catalog_ref-html.html.md.erb
new file mode 100644
index 0000000..864f106
--- /dev/null
+++ b/markdown/reference/catalog/catalog_ref-html.html.md.erb
@@ -0,0 +1,143 @@
+---
+title: System Catalogs Definitions
+---
+
+System catalog table and view definitions in alphabetical order.
+
+-   **[gp\_configuration\_history](../../reference/catalog/gp_configuration_history.html)**
+
+-   **[gp\_distribution\_policy](../../reference/catalog/gp_distribution_policy.html)**
+
+-   **[gp\_global\_sequence](../../reference/catalog/gp_global_sequence.html)**
+
+-   **[gp\_master\_mirroring](../../reference/catalog/gp_master_mirroring.html)**
+
+-   **[gp\_persistent\_database\_node](../../reference/catalog/gp_persistent_database_node.html)**
+
+-   **[gp\_persistent\_filespace\_node](../../reference/catalog/gp_persistent_filespace_node.html)**
+
+-   **[gp\_persistent\_relation\_node](../../reference/catalog/gp_persistent_relation_node.html)**
+
+-   **[gp\_persistent\_relfile\_node](../../reference/catalog/gp_persistent_relfile_node.html)**
+
+-   **[gp\_persistent\_tablespace\_node](../../reference/catalog/gp_persistent_tablespace_node.html)**
+
+-   **[gp\_relfile\_node](../../reference/catalog/gp_relfile_node.html)**
+
+-   **[gp\_segment\_configuration](../../reference/catalog/gp_segment_configuration.html)**
+
+-   **[gp\_version\_at\_initdb](../../reference/catalog/gp_version_at_initdb.html)**
+
+-   **[pg\_aggregate](../../reference/catalog/pg_aggregate.html)**
+
+-   **[pg\_am](../../reference/catalog/pg_am.html)**
+
+-   **[pg\_amop](../../reference/catalog/pg_amop.html)**
+
+-   **[pg\_amproc](../../reference/catalog/pg_amproc.html)**
+
+-   **[pg\_appendonly](../../reference/catalog/pg_appendonly.html)**
+
+-   **[pg\_attrdef](../../reference/catalog/pg_attrdef.html)**
+
+-   **[pg\_attribute](../../reference/catalog/pg_attribute.html)**
+
+-   **[pg\_attribute\_encoding](../../reference/catalog/pg_attribute_encoding.html)**
+
+-   **[pg\_auth\_members](../../reference/catalog/pg_auth_members.html)**
+
+-   **[pg\_authid](../../reference/catalog/pg_authid.html)**
+
+-   **[pg\_cast](../../reference/catalog/pg_cast.html)**
+
+-   **[pg\_class](../../reference/catalog/pg_class.html)**
+
+-   **[pg\_compression](../../reference/catalog/pg_compression.html)**
+
+-   **[pg\_constraint](../../reference/catalog/pg_constraint.html)**
+
+-   **[pg\_conversion](../../reference/catalog/pg_conversion.html)**
+
+-   **[pg\_database](../../reference/catalog/pg_database.html)**
+
+-   **[pg\_depend](../../reference/catalog/pg_depend.html)**
+
+-   **[pg\_description](../../reference/catalog/pg_description.html)**
+
+-   **[pg\_exttable](../../reference/catalog/pg_exttable.html)**
+
+-   **[pg\_filespace](../../reference/catalog/pg_filespace.html)**
+
+-   **[pg\_filespace\_entry](../../reference/catalog/pg_filespace_entry.html)**
+
+-   **[pg\_index](../../reference/catalog/pg_index.html)**
+
+-   **[pg\_inherits](../../reference/catalog/pg_inherits.html)**
+
+-   **[pg\_language](../../reference/catalog/pg_language.html)**
+
+-   **[pg\_largeobject](../../reference/catalog/pg_largeobject.html)**
+
+-   **[pg\_listener](../../reference/catalog/pg_listener.html)**
+
+-   **[pg\_locks](../../reference/catalog/pg_locks.html)**
+
+-   **[pg\_namespace](../../reference/catalog/pg_namespace.html)**
+
+-   **[pg\_opclass](../../reference/catalog/pg_opclass.html)**
+
+-   **[pg\_operator](../../reference/catalog/pg_operator.html)**
+
+-   **[pg\_partition](../../reference/catalog/pg_partition.html)**
+
+-   **[pg\_partition\_columns](../../reference/catalog/pg_partition_columns.html)**
+
+-   **[pg\_partition\_encoding](../../reference/catalog/pg_partition_encoding.html)**
+
+-   **[pg\_partition\_rule](../../reference/catalog/pg_partition_rule.html)**
+
+-   **[pg\_partition\_templates](../../reference/catalog/pg_partition_templates.html)**
+
+-   **[pg\_partitions](../../reference/catalog/pg_partitions.html)**
+
+-   **[pg\_pltemplate](../../reference/catalog/pg_pltemplate.html)**
+
+-   **[pg\_proc](../../reference/catalog/pg_proc.html)**
+
+-   **[pg\_resqueue](../../reference/catalog/pg_resqueue.html)**
+
+-   **[pg\_resqueue\_status](../../reference/catalog/pg_resqueue_status.html)**
+
+-   **[pg\_rewrite](../../reference/catalog/pg_rewrite.html)**
+
+-   **[pg\_roles](../../reference/catalog/pg_roles.html)**
+
+-   **[pg\_shdepend](../../reference/catalog/pg_shdepend.html)**
+
+-   **[pg\_shdescription](../../reference/catalog/pg_shdescription.html)**
+
+-   **[pg\_stat\_activity](../../reference/catalog/pg_stat_activity.html)**
+
+-   **[pg\_stat\_last\_operation](../../reference/catalog/pg_stat_last_operation.html)**
+
+-   **[pg\_stat\_last\_shoperation](../../reference/catalog/pg_stat_last_shoperation.html)**
+
+-   **[pg\_stat\_operations](../../reference/catalog/pg_stat_operations.html)**
+
+-   **[pg\_stat\_partition\_operations](../../reference/catalog/pg_stat_partition_operations.html)**
+
+-   **[pg\_statistic](../../reference/catalog/pg_statistic.html)**
+
+-   **[pg\_stats](../../reference/catalog/pg_stats.html)**
+
+-   **[pg\_tablespace](../../reference/catalog/pg_tablespace.html)**
+
+-   **[pg\_trigger](../../reference/catalog/pg_trigger.html)**
+
+-   **[pg\_type](../../reference/catalog/pg_type.html)**
+
+-   **[pg\_type\_encoding](../../reference/catalog/pg_type_encoding.html)**
+
+-   **[pg\_window](../../reference/catalog/pg_window.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/catalog_ref-tables.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/catalog_ref-tables.html.md.erb b/markdown/reference/catalog/catalog_ref-tables.html.md.erb
new file mode 100644
index 0000000..61aa936
--- /dev/null
+++ b/markdown/reference/catalog/catalog_ref-tables.html.md.erb
@@ -0,0 +1,68 @@
+---
+title: System Tables
+---
+
+This topic lists the system tables included in HAWQ.
+
+-   gp\_configuration (Deprecated. See [gp\_segment\_configuration](gp_segment_configuration.html#topic1).)
+-   [gp\_configuration\_history](gp_configuration_history.html#topic1)
+-   [gp\_distribution\_policy](gp_distribution_policy.html#topic1)
+-   [gp\_global\_sequence](gp_global_sequence.html#topic1)
+-   [gp\_master\_mirroring](gp_master_mirroring.html#topic1)
+-   [gp\_persistent\_database\_node](gp_persistent_database_node.html#topic1)
+-   [gp\_persistent\_filespace\_node](gp_persistent_filespace_node.html#topic1)
+-   [gp\_persistent\_relation\_node](gp_persistent_relation_node.html#topic1)
+-   [gp\_persistent\_tablespace\_node](gp_persistent_tablespace_node.html#topic1)
+-   gp\_relation\_node (Deprecated. See [gp\_relfile\_node](gp_relfile_node.html).)
+-   [gp\_segment\_configuration](gp_segment_configuration.html#topic1)
+-   [gp\_version\_at\_initdb](gp_version_at_initdb.html#topic1)
+-   [pg\_aggregate](pg_aggregate.html#topic1)
+-   [pg\_am](pg_am.html#topic1)
+-   [pg\_amop](pg_amop.html#topic1)
+-   [pg\_amproc](pg_amproc.html#topic1)
+-   [pg\_appendonly](pg_appendonly.html#topic1)
+-   [pg\_attrdef](pg_attrdef.html#topic1)
+-   [pg\_attribute](pg_attribute.html#topic1)
+-   [pg\_auth\_members](pg_auth_members.html#topic1)
+-   [pg\_authid](pg_authid.html#topic1)
+-   pg\_autovacuum (not supported)
+-   [pg\_cast](pg_cast.html#topic1)
+-   [pg\_class](pg_class.html#topic1)
+-   [pg\_constraint](pg_constraint.html#topic1)
+-   [pg\_conversion](pg_conversion.html#topic1)
+-   [pg\_database](pg_database.html#topic1)
+-   [pg\_depend](pg_depend.html#topic1)
+-   [pg\_description](pg_description.html#topic1)
+-   [pg\_exttable](pg_exttable.html#topic1)
+-   [pg\_filespace](pg_filespace.html#topic1)
+-   [pg\_filespace\_entry](pg_filespace_entry.html#topic1)
+-   pg\_foreign\_data\_wrapper (not supported)
+-   pg\_foreign\_server (not supported)
+-   pg\_foreign\_table (not supported)
+-   [pg\_index](pg_index.html#topic1)
+-   [pg\_inherits](pg_inherits.html#topic1)
+-   [pg\_language](pg_language.html#topic1)
+-   [pg\_largeobject](pg_largeobject.html#topic1)
+-   [pg\_listener](pg_listener.html#topic1)
+-   [pg\_namespace](pg_namespace.html#topic1)
+-   [pg\_opclass](pg_opclass.html#topic1)
+-   [pg\_operator](pg_operator.html#topic1)
+-   [pg\_partition](pg_partition.html#topic1)
+-   [pg\_partition\_rule](pg_partition_rule.html#topic1)
+-   [pg\_pltemplate](pg_pltemplate.html#topic1)
+-   [pg\_proc](pg_proc.html#topic1)
+-   [pg\_resqueue](pg_resqueue.html#topic1)
+-   [pg\_rewrite](pg_rewrite.html#topic1)
+-   [pg\_shdepend](pg_shdepend.html#topic1)
+-   [pg\_shdescription](pg_shdescription.html#topic1)
+-   [pg\_stat\_last\_operation](pg_stat_last_operation.html#topic1)
+-   [pg\_stat\_last\_shoperation](pg_stat_last_shoperation.html#topic1)
+-   [pg\_statistic](pg_statistic.html#topic1)
+-   [pg\_stats](pg_stats.html#topic1)
+-   [pg\_tablespace](pg_tablespace.html#topic1)
+-   [pg\_trigger](pg_trigger.html#topic1)
+-   [pg\_type](pg_type.html#topic1)
+-   pg\_user\_mapping (not supported)
+-   [pg\_window](pg_window.html#topic1)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/catalog_ref-views.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/catalog_ref-views.html.md.erb b/markdown/reference/catalog/catalog_ref-views.html.md.erb
new file mode 100644
index 0000000..4a7ad74
--- /dev/null
+++ b/markdown/reference/catalog/catalog_ref-views.html.md.erb
@@ -0,0 +1,21 @@
+---
+title: System Views
+---
+
+HAWQ provides the following system views not available in PostgreSQL.
+
+-   pg\_max\_external\_files (shows number of external table files allowed per segment host when using the file:// protocol)
+-   [pg\_partition\_columns](pg_partition_columns.html#topic1)
+-   [pg\_partition\_templates](pg_partition_templates.html#topic1)
+-   [pg\_partitions](pg_partitions.html#topic1)
+-   [pg\_stat\_activity](pg_stat_activity.html#topic1)
+-   [pg\_resqueue\_status](pg_resqueue_status.html#topic1)
+-   [pg\_stats](pg_stats.html#topic1)
+
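+These views can be queried like ordinary tables. For example, the following illustrative queries list the current sessions known to the master and the child partitions of partitioned tables:
+
+```sql
+-- Show current sessions known to the master.
+SELECT procpid, datname, usename, current_query
+FROM pg_stat_activity;
+
+-- List the child partitions of all partitioned tables.
+SELECT schemaname, tablename, partitionname
+FROM pg_partitions;
+```
+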
+For more information about the standard system views supported in PostgreSQL and HAWQ, see the following sections of the PostgreSQL documentation:
+
+-   [System Views](http://www.postgresql.org/docs/8.2/static/views-overview.html)
+-   [Statistics Collector Views](http://www.postgresql.org/docs/8.2/static/monitoring-stats.html#MONITORING-STATS-VIEWS-TABLE)
+-   [The Information Schema](http://www.postgresql.org/docs/8.2/static/information-schema.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/catalog_ref.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/catalog_ref.html.md.erb b/markdown/reference/catalog/catalog_ref.html.md.erb
new file mode 100644
index 0000000..143091f
--- /dev/null
+++ b/markdown/reference/catalog/catalog_ref.html.md.erb
@@ -0,0 +1,21 @@
+---
+title: System Catalog Reference
+---
+
+This reference describes the HAWQ system catalog tables and views.
+
+System tables prefixed with '`gp_`' relate to the parallel features of HAWQ. Tables prefixed with '`pg_`' are either standard PostgreSQL system catalog tables supported in HAWQ, or are related to features that HAWQ provides to enhance PostgreSQL for data warehousing workloads. Note that the global system catalog for HAWQ resides on the master instance.
+
+-   **[System Tables](../../reference/catalog/catalog_ref-tables.html)**
+
+    This topic lists the system tables included in HAWQ.
+
+-   **[System Views](../../reference/catalog/catalog_ref-views.html)**
+
+    HAWQ provides the following system views not available in PostgreSQL.
+
+-   **[System Catalogs Definitions](../../reference/catalog/catalog_ref-html.html)**
+
+    System catalog table and view definitions in alphabetical order.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/gp_configuration_history.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/gp_configuration_history.html.md.erb b/markdown/reference/catalog/gp_configuration_history.html.md.erb
new file mode 100644
index 0000000..e501d55
--- /dev/null
+++ b/markdown/reference/catalog/gp_configuration_history.html.md.erb
@@ -0,0 +1,23 @@
+---
+title: gp_configuration_history
+---
+
+The `gp_configuration_history` table contains information about system changes related to fault detection and recovery operations. The HAWQ [fault tolerance service](../../admin/FaultTolerance.html) logs data to this table, as do certain related management utilities such as `hawq init`. For example, when you add a new segment to the system, records for these events are logged to `gp_configuration_history`. If a segment is marked as down by the fault tolerance service in the [gp\_segment\_configuration](gp_segment_configuration.html) catalog table, then the reason for being marked as down is recorded in this table.
+
+The event descriptions stored in this table may be helpful for troubleshooting serious system issues in collaboration with HAWQ support technicians.
+
+This table is populated only on the master. This table is defined in the `pg_global` tablespace, meaning it is globally shared across all databases in the system.
+
+<a id="topic1__ev138428"></a>
+<span class="tablecap">Table 1. pg\_catalog.gp\_configuration\_history</span>
+
+| column              | type                     | references | description                                                                      |
+|---------------------|--------------------------|------------|----------------------------------------------------------------------------------|
+| `time`              | timestamp with time zone | �          | Timestamp for the event recorded.                                                |
+| `desc`              | text                     | �          | Text description of the event.                                                   |
+| `registration_order` | integer                  | �          | The registration order of a segment. May be changed after restarting the master. |
+| `hostname`           | text                     | �          | The hostname of the segment.                                                     |
+
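+For example, the following illustrative query lists the most recent events recorded by the fault tolerance service:
+
+```sql
+-- Show the ten most recent configuration-change events.
+-- "time" and "desc" are quoted because desc is a reserved word.
+SELECT "time", "desc", hostname
+FROM gp_configuration_history
+ORDER BY "time" DESC
+LIMIT 10;
+```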
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/gp_distribution_policy.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/gp_distribution_policy.html.md.erb b/markdown/reference/catalog/gp_distribution_policy.html.md.erb
new file mode 100644
index 0000000..6b227e6
--- /dev/null
+++ b/markdown/reference/catalog/gp_distribution_policy.html.md.erb
@@ -0,0 +1,18 @@
+---
+title: gp_distribution_policy
+---
+
+The `gp_distribution_policy` table contains information about HAWQ tables and their policy for distributing table data across the segments. This table is populated only on the master. This table is not globally shared, meaning each database has its own copy of this table.
+
+<a id="topic1__ey142842"></a>
+<span class="tablecap">Table 1. pg\_catalog.gp\_distribution\_policy</span>
+
+| column      | type         | references           | description                                                                                                                                                                                                   |
+|-------------|--------------|----------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `localoid`  | oid          | pg\_class.oid        | The table object identifier (OID).                                                                                                                                                                            |
+| `attrnums`  | smallint\[\] | pg\_attribute.attnum | The column number(s) of the distribution column(s).                                                                                                                                                           |
+| `bucketnum` | integer      | �                    | Number of hash buckets used in creating a hash-distributed table or for external table intermediate processing. The number of buckets also affects how many virtual segments are created when processing data. |
+
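+For example, the following illustrative query joins `gp_distribution_policy` with `pg_class` to show each table's distribution columns and bucket number:
+
+```sql
+-- Resolve localoid to a table name and show its distribution policy.
+SELECT c.relname, d.attrnums, d.bucketnum
+FROM gp_distribution_policy d
+JOIN pg_class c ON c.oid = d.localoid;
+```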
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/gp_global_sequence.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/gp_global_sequence.html.md.erb b/markdown/reference/catalog/gp_global_sequence.html.md.erb
new file mode 100644
index 0000000..ac2ff2f
--- /dev/null
+++ b/markdown/reference/catalog/gp_global_sequence.html.md.erb
@@ -0,0 +1,16 @@
+---
+title: gp_global_sequence
+---
+
+The `gp_global_sequence` table contains the log sequence number position in the transaction log. This table is used by persistent tables.
+
+<a id="topic1__fe138428"></a>
+<span class="tablecap">Table 1. pg\_catalog.gp\_global\_sequence</span>
+
+| column         | type   | references | description                                         |
+|----------------|--------|------------|-----------------------------------------------------|
+| `sequence_num` | bigint | �          | Log sequence number position in the transaction log |
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/gp_master_mirroring.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/gp_master_mirroring.html.md.erb b/markdown/reference/catalog/gp_master_mirroring.html.md.erb
new file mode 100644
index 0000000..fa7ea18
--- /dev/null
+++ b/markdown/reference/catalog/gp_master_mirroring.html.md.erb
@@ -0,0 +1,19 @@
+---
+title: gp_master_mirroring
+---
+
+The `gp_master_mirroring` table contains state information about the standby master host and its associated write-ahead log (WAL) replication process. If this synchronization process (`gpsyncagent`) fails on the standby master, it may not always be noticeable to users of the system. This catalog is a place where HAWQ administrators can check to see if the standby master is current and fully synchronized.
+
+<a id="topic1__op164584"></a>
+<span class="tablecap">Table 1. pg\_catalog.gp\_master\_mirroring</span>
+
+| column          | type                     | references | description                                                                                                                                                                                                                                                                                                   |
+|-----------------|--------------------------|------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `summary_state` | text                     | �          | The current state of the log replication process between the master and standby master. Logs are either 'Synchronized' or 'Not Synchronized'.                                                                                                                 |
+| `detail_state`  | text                     | �          | If not synchronized, this column will have information about the cause of the error.                                                                                                                                                                                                                          |
+| `log_time`      | timestamp with time zone | �          | This contains the timestamp of the last time a master mirroring change occurred. For example, the timestamp when the value of `summary_state` changed from "Synchronized" to "Not Synchronized". If no changes occur with regards to the standby master (it stays synchronized), the timestamp is not updated. |
+| `error_message` | text                     | �          | If not synchronized, this column will have the error message from the failed synchronization attempt.                                                                                                                                                                                                         |
+
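+For example, an administrator can check the standby master's synchronization state with a simple query such as:
+
+```sql
+-- Check whether the standby master is synchronized with the master.
+SELECT summary_state, detail_state, log_time, error_message
+FROM gp_master_mirroring;
+```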
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/gp_persistent_database_node.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/gp_persistent_database_node.html.md.erb b/markdown/reference/catalog/gp_persistent_database_node.html.md.erb
new file mode 100644
index 0000000..2a3537a
--- /dev/null
+++ b/markdown/reference/catalog/gp_persistent_database_node.html.md.erb
@@ -0,0 +1,71 @@
+---
+title: gp_persistent_database_node
+---
+
+The `gp_persistent_database_node` table keeps track of the status of file system objects in relation to the transaction status of database objects. This information is used to make sure the state of the system catalogs and the file system files persisted to disk are synchronized.
+
+<a id="topic1__fi138428"></a>
+<span class="tablecap">Table 1. pg\_catalog.gp\_persistent\_database\_node</span>
+
+<table>
+<colgroup>
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>column</th>
+<th>type</th>
+<th>references</th>
+<th>description</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><code class="ph codeph">tablespace_oid</code></td>
+<td>oid</td>
+<td>pg_tablespace.oid</td>
+<td>Table space object id.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">database_oid</code></td>
+<td>oid</td>
+<td>pg_database.oid</td>
+<td>Database object id.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">persistent_state</code></td>
+<td>smallint</td>
+<td>�</td>
+<td>0 - free
+<p>1 - create pending</p>
+<p>2 - created</p>
+<p>3 - drop pending</p>
+<p>4 - aborting create</p>
+<p>5 - &quot;Just in Time&quot; create pending</p>
+<p>6 - bulk load create pending</p></td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">parent_xid</code></td>
+<td>integer</td>
+<td>�</td>
+<td>Global transaction id.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">persistent_serial_num</code></td>
+<td>bigint</td>
+<td>�</td>
+<td>Log sequence number position in the transaction log for a file block.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">previous_free_tid</code></td>
+<td>tid</td>
+<td>�</td>
+<td>Used by HAWQ to internally manage persistent representations of file system objects.</td>
+</tr>
+</tbody>
+</table>
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/gp_persistent_filespace_node.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/gp_persistent_filespace_node.html.md.erb b/markdown/reference/catalog/gp_persistent_filespace_node.html.md.erb
new file mode 100644
index 0000000..b682a88
--- /dev/null
+++ b/markdown/reference/catalog/gp_persistent_filespace_node.html.md.erb
@@ -0,0 +1,83 @@
+---
+title: gp_persistent_filespace_node
+---
+
+The `gp_persistent_filespace_node` table keeps track of the status of file system objects in relation to the transaction status of filespace objects. This information is used to make sure the state of the system catalogs and the file system files persisted to disk are synchronized.
+
+<a id="topic1__fj138428"></a>
+<span class="tablecap">Table 1. pg\_catalog.gp\_persistent\_filespace\_node</span>
+
+<table>
+<colgroup>
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>column</th>
+<th>type</th>
+<th>references</th>
+<th>description</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><code class="ph codeph">filespace_oid</code></td>
+<td>oid</td>
+<td>pg_filespace.oid</td>
+<td>object id of the filespace</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">db_id</code></td>
+<td>smallint</td>
+<td>�</td>
+<td>primary segment id</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">location</code></td>
+<td>text</td>
+<td>�</td>
+<td>primary filesystem location</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">persistent_state</code></td>
+<td>smallint</td>
+<td>�</td>
+<td>0 - free
+<p>1 - create pending</p>
+<p>2 - created</p>
+<p>3 - drop pending</p>
+<p>4 - aborting create</p>
+<p>5 - &quot;Just in Time&quot; create pending</p>
+<p>6 - bulk load create pending</p></td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">reserved</code></td>
+<td>integer</td>
+<td>�</td>
+<td>Not used. Reserved for future use.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">parent_xid</code></td>
+<td>integer</td>
+<td>�</td>
+<td>Global transaction id.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">persistent_serial_num</code></td>
+<td>bigint</td>
+<td>�</td>
+<td>Log sequence number position in the transaction log for a file block.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">previous_free_tid</code></td>
+<td>tid</td>
+<td>�</td>
+<td>Used by HAWQ to internally manage persistent representations of file system objects.</td>
+</tr>
+</tbody>
+</table>
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/gp_persistent_relation_node.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/gp_persistent_relation_node.html.md.erb b/markdown/reference/catalog/gp_persistent_relation_node.html.md.erb
new file mode 100644
index 0000000..f141cf7
--- /dev/null
+++ b/markdown/reference/catalog/gp_persistent_relation_node.html.md.erb
@@ -0,0 +1,85 @@
+---
+title: gp_persistent_relation_node
+---
+
+The `gp_persistent_relation_node` table keeps track of the status of file system objects in relation to the transaction status of relation objects (tables, views, indexes, and so on). This information is used to make sure the state of the system catalogs and the file system files persisted to disk are synchronized.
+
+<a id="topic1__fk138428"></a>
+<span class="tablecap">Table 1. pg\_catalog.gp\_persistent\_relation\_node</span>
+
+<table>
+<colgroup>
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>column</th>
+<th>type</th>
+<th>references</th>
+<th>description</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><code class="ph codeph">tablespace_oid</code></td>
+<td>oid</td>
+<td>pg_tablespace.oid</td>
+<td>Tablespace object id</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">database_oid</code></td>
+<td>oid</td>
+<td>pg_database.oid</td>
+<td>Database object id</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">relfilenode_oid</code></td>
+<td>oid</td>
+<td>pg_class.relfilenode</td>
+<td>The object id of the relation file node.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">persistent_state</code></td>
+<td>smallint</td>
+<td>�</td>
+<td>0 - free
+<p>1 - create pending</p>
+<p>2 - created</p>
+<p>3 - drop pending</p>
+<p>4 - aborting create</p>
+<p>5 - &quot;Just in Time&quot; create pending</p>
+<p>6 - bulk load create pending</p></td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">reserved</code></td>
+<td>integer</td>
+<td>�</td>
+<td>Unused. Reserved for future use.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">parent_xid</code></td>
+<td>integer</td>
+<td>�</td>
+<td>Global transaction id.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">persistent_serial_num</code></td>
+<td>bigint</td>
+<td>�</td>
+<td>Log sequence number position in the transaction log for a file block.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">previous_free_tid</code></td>
+<td>tid</td>
+<td>�</td>
+<td>Used by HAWQ to internally manage persistent representations of file system objects.</td>
+</tr>
+</tbody>
+</table>
+
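+A join against `pg_class` is a convenient way to read this table by relation name. The query below is an illustrative sketch only; the table name `sales` is a hypothetical example.
+
+``` pre
+-- Show the persistent state recorded for a hypothetical table named 'sales'.
+SELECT c.relname, p.persistent_state, p.persistent_serial_num
+FROM gp_persistent_relation_node p, pg_class c
+WHERE p.relfilenode_oid = c.relfilenode
+  AND c.relname = 'sales';
+```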
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/gp_persistent_relfile_node.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/gp_persistent_relfile_node.html.md.erb b/markdown/reference/catalog/gp_persistent_relfile_node.html.md.erb
new file mode 100644
index 0000000..6d24a41
--- /dev/null
+++ b/markdown/reference/catalog/gp_persistent_relfile_node.html.md.erb
@@ -0,0 +1,96 @@
+---
+title: gp_persistent_relfile_node
+---
+
+The `gp_persistent_relfile_node` table keeps track of the status of file system objects in relation to the transaction status of database objects. This information is used to make sure the state of the system catalogs and the file system files persisted to disk are synchronized.
+
+<a id="topic1__fk138428"></a>
+<span class="tablecap">Table 1. pg\_catalog.gp\_persistent\_relfile\_node</span>
+<table>
+<colgroup>
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>column</th>
+<th>type</th>
+<th>references</th>
+<th>description</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><code class="ph codeph">tablespace_oid</code></td>
+<td>oid</td>
+<td>pg_tablespace.oid</td>
+<td>Tablespace object id</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">database_oid</code></td>
+<td>oid</td>
+<td>pg_database.oid</td>
+<td>Database object id</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">relfilenode_oid</code></td>
+<td>oid</td>
+<td>pg_class.relfilenode</td>
+<td>The object id of the relation file node.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">segment_file_num</code></td>
+<td>integer</td>
+<td>�</td>
+<td>�</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">relation_storage_manager</code></td>
+<td>smallint</td>
+<td>�</td>
+<td>�</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">persistent_state</code></td>
+<td>smallint</td>
+<td>�</td>
+<td>0 - free
+<p>1 - create pending</p>
+<p>2 - created</p>
+<p>3 - drop pending</p>
+<p>4 - aborting create</p>
+<p>5 - &quot;Just in Time&quot; create pending</p>
+<p>6 - bulk load create pending</p></td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">relation_bufpool_kind</code></td>
+<td>integer</td>
+<td>�</td>
+<td>�</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">parent_xid</code></td>
+<td>integer</td>
+<td>�</td>
+<td>Global transaction id.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">persistent_serial_num</code></td>
+<td>bigint</td>
+<td>�</td>
+<td>Log sequence number position in the transaction log for a file block.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">previous_free_tid</code></td>
+<td>tid</td>
+<td>�</td>
+<td>Used by HAWQ to internally manage persistent representations of file system objects.</td>
+</tr>
+</tbody>
+</table>
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/gp_persistent_tablespace_node.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/gp_persistent_tablespace_node.html.md.erb b/markdown/reference/catalog/gp_persistent_tablespace_node.html.md.erb
new file mode 100644
index 0000000..55c853e
--- /dev/null
+++ b/markdown/reference/catalog/gp_persistent_tablespace_node.html.md.erb
@@ -0,0 +1,72 @@
+---
+title: gp_persistent_tablespace_node
+---
+
+The `gp_persistent_tablespace_node` table keeps track of the status of file system objects in relation to the transaction status of tablespace objects. This information is used to make sure the state of the system catalogs and the file system files persisted to disk are synchronized.
+
+<a id="topic1__fm138428"></a>
+<span class="tablecap">Table 1. pg\_catalog.gp\_persistent\_tablespace\_node</span>
+<table>
+<colgroup>
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>column</th>
+<th>type</th>
+<th>references</th>
+<th>description</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><code class="ph codeph">filespace_oid</code></td>
+<td>oid</td>
+<td>pg_filespace.oid</td>
+<td>Filespace object id</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">tablespace_oid</code></td>
+<td>oid</td>
+<td>pg_tablespace.oid</td>
+<td>Tablespace object id</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">persistent_state</code></td>
+<td>smallint</td>
+<td>�</td>
+<td>0 - free
+<p>1 - create pending</p>
+<p>2 - created</p>
+<p>3 - drop pending</p>
+<p>4 - aborting create</p>
+<p>5 - &quot;Just in Time&quot; create pending</p>
+<p>6 - bulk load create pending</p></td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">parent_xid</code></td>
+<td>integer</td>
+<td>�</td>
+<td>Global transaction id.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">persistent_serial_num</code></td>
+<td>bigint</td>
+<td>�</td>
+<td>Log sequence number position in the transaction log for a file block.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">previous_free_tid</code></td>
+<td>tid</td>
+<td>�</td>
+<td>Used by HAWQ to internally manage persistent representations of file system objects.</td>
+</tr>
+</tbody>
+</table>
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/gp_relfile_node.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/gp_relfile_node.html.md.erb b/markdown/reference/catalog/gp_relfile_node.html.md.erb
new file mode 100644
index 0000000..971ff16
--- /dev/null
+++ b/markdown/reference/catalog/gp_relfile_node.html.md.erb
@@ -0,0 +1,19 @@
+---
+title: gp_relfile_node
+---
+
+The `gp_relfile_node` table contains information about the file system objects for a relation (table, view, index, and so on).
+
+<a id="topic1__fo138428"></a>
+<span class="tablecap">Table 1. pg\_catalog.gp\_relfile\_node</span>
+
+| column                  | type    | references            | description                                                                          |
+|-------------------------|---------|-----------------------|--------------------------------------------------------------------------------------|
+| `relfilenode_oid`       | oid     | pg\_class.relfilenode | The object id of the relation file node.                                             |
+| `segment_file_num`      | integer | �                     | For append-only tables, the append-only segment file number.                         |
+| `persistent_tid`        | tid     | �                     | Used by HAWQ to internally manage persistent representations of file system objects. |
+| `persistent_serial_num` | bigint  | �                     | Log sequence number position in the transaction log for a file block.                |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/gp_segment_configuration.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/gp_segment_configuration.html.md.erb b/markdown/reference/catalog/gp_segment_configuration.html.md.erb
new file mode 100644
index 0000000..9fcf0cb
--- /dev/null
+++ b/markdown/reference/catalog/gp_segment_configuration.html.md.erb
@@ -0,0 +1,25 @@
+---
+title: gp_segment_configuration
+---
+
+The `gp_segment_configuration` table contains information about master, standby and segment configuration.
+
+The HAWQ fault tolerance service (FTS) automatically detects the status of individual segments and marks the status of each segment in this table. If a segment is marked as DOWN, the corresponding reason is recorded in the [gp\_configuration\_history](gp_configuration_history.html) table. See [Understanding the Fault Tolerance Service](../../admin/FaultTolerance.html) for a description of the fault tolerance service.
+
+<a id="topic1__fr163962"></a>
+<span class="tablecap">Table 1. pg\_catalog.gp\_segment\_configuration</span>
+
+| column               | type    | references | description                                                                                                                                                                                                                                                                                  |
+|----------------------|---------|------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `registration_order` | integer | �          | When HAWQ starts, the master and each segment start independently. This column indicates the order in which a segment node registers itself with the master node. The `registration_order` for segments starts from 1. The master's registration\_order is 0; the standby's registration\_order is -1. |
+| `role`               | char    | �          | The role that a node is currently running as. Values are `p` (segment), `m` (master), or `s` (standby).                                                                                          |
+| `status`             | char    | �          | The fault status of a segment. Values are `u` (up) or `d` (down).                                                                                                                                                                                                                            |
+| `port`               | integer | �          | The TCP port the database server listener process is using.                                                                                                                                                                                                                                  |
+| `hostname`           | text    | �          | The hostname of a segment host.                                                                                                                                                                                                                                                              |
+| `address`            | text    | �          | The hostname used to access a particular segment on a segment host.                                                                                  |
+| `failed_tmpdir_num`  | integer | �          | The number of failed temporary directories on the segment. User-configured temporary directories may fail on segments due to disk errors. This information is reported to the master.                                                                                                       |
+| `failed_tmpdir`      | text    | �          | A list of failed temporary directories on the segment. Multiple failed temporary directories are separated by commas.                                                                                                                                                                        |
+
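+The status columns make this table the first place to look when checking cluster health. The following query is an illustrative sketch.
+
+``` pre
+-- List registered nodes with their role and fault status;
+-- a status of 'd' indicates a segment marked down by the fault tolerance service.
+SELECT registration_order, role, status, hostname, port
+FROM gp_segment_configuration
+ORDER BY registration_order;
+```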
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/gp_version_at_initdb.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/gp_version_at_initdb.html.md.erb b/markdown/reference/catalog/gp_version_at_initdb.html.md.erb
new file mode 100644
index 0000000..9e922e1
--- /dev/null
+++ b/markdown/reference/catalog/gp_version_at_initdb.html.md.erb
@@ -0,0 +1,17 @@
+---
+title: gp_version_at_initdb
+---
+
+The `gp_version_at_initdb` table is populated on the master and each segment in the HAWQ system. It identifies the version of HAWQ used when the system was first initialized. This table is defined in the `pg_global` tablespace, meaning it is globally shared across all databases in the system.
+
+<a id="topic1__ft142845"></a>
+<span class="tablecap">Table 1. pg\_catalog.gp\_version\_at\_initdb</span>
+
+| column           | type    | references | description             |
+|------------------|---------|------------|-------------------------|
+| `schemaversion`  | integer | �          | Schema version number.  |
+| `productversion` | text    | �          | Product version number. |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_aggregate.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_aggregate.html.md.erb b/markdown/reference/catalog/pg_aggregate.html.md.erb
new file mode 100644
index 0000000..1a02e89
--- /dev/null
+++ b/markdown/reference/catalog/pg_aggregate.html.md.erb
@@ -0,0 +1,25 @@
+---
+title: pg_aggregate
+---
+
+The `pg_aggregate` table stores information about aggregate functions. An aggregate function is a function that operates on a set of values (typically one column from each row that matches a query condition) and returns a single value computed from all these values. Typical aggregate functions are `sum`, `count`, and `max`. Each entry in `pg_aggregate` is an extension of an entry in `pg_proc`. The `pg_proc` entry carries the aggregate's name, input and output data types, and other information similar to that of ordinary functions.
+
+<a id="topic1__fu141982"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_aggregate</span>
+
+| column           | type    | references       | description                                                                                                                                                                                           |
+|------------------|---------|------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `aggfnoid`       | regproc | pg\_proc.oid     | Aggregate function OID                                                                                                                                                                                |
+| `aggtransfn`     | regproc | pg\_proc.oid     | Transition function OID                                                                                                                                                                               |
+| `aggprelimfn`    | regproc | �                | Preliminary function OID (zero if none)                                                                                                                                                               |
+| `aggfinalfn`     | regproc | pg\_proc.oid     | Final function OID (zero if none)                                                                                                                                                                     |
+| `agginitval`     | text    | �                | The initial value of the transition state. This is a text field containing the initial value in its external string representation. If this field is NULL, the transition state value starts out NULL |
+| `agginvtransfn ` | regproc | pg\_proc.oid     | The OID in pg\_proc of the inverse function of *aggtransfn*                                                                                                                                           |
+| `agginvprelimfn` | regproc | pg\_proc.oid     | The OID in pg\_proc of the inverse function of aggprelimfn                                                                                                                                            |
+| `aggordered`     | boolean | �                | If `true`, the aggregate is defined as `ORDERED`.                                                                                                                                                     |
+| `aggsortop`      | oid     | pg\_operator.oid | Associated sort operator OID (zero if none)                                                                                                                                                           |
+| `aggtranstype`   | oid     | pg\_type.oid     | Data type of the aggregate function's internal transition (state) data                                                                                                                                |
+
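+Because each row extends a `pg_proc` entry, joining the two tables by OID is the usual way to inspect an aggregate. The query below is an illustrative sketch using the built-in `sum` aggregates.
+
+``` pre
+-- Show the transition and final functions for each variant of sum().
+SELECT p.proname, a.aggtransfn, a.aggfinalfn, a.aggtranstype::regtype
+FROM pg_aggregate a JOIN pg_proc p ON a.aggfnoid = p.oid
+WHERE p.proname = 'sum';
+```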
+ 
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_am.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_am.html.md.erb b/markdown/reference/catalog/pg_am.html.md.erb
new file mode 100644
index 0000000..96ba56b
--- /dev/null
+++ b/markdown/reference/catalog/pg_am.html.md.erb
@@ -0,0 +1,38 @@
+---
+title: pg_am
+---
+
+The `pg_am` table stores information about index access methods. There is one row for each index access method supported by the system.
+
+<a id="topic1__fv141982"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_am</span>
+
+| column            | type     | references   | description                                                                                                                  |
+|-------------------|----------|--------------|------------------------------------------------------------------------------------------------------------------------------|
+| `amname`          | name     | �            | Name of the access method                                                                                                    |
+| `amstrategies`    | smallint | �            | Number of operator strategies for this access method                                                                         |
+| `amsupport`       | smallint | �            | Number of support routines for this access method                                                                            |
+| `amorderstrategy` | smallint | �            | Zero if the index offers no sort order, otherwise the strategy number of the strategy operator that describes the sort order |
+| `amcanunique`     | boolean  | �            | Does the access method support unique indexes?                                                                               |
+| `amcanmulticol`   | boolean  | �            | Does the access method support multicolumn indexes?                                                                          |
+| `amoptionalkey`   | boolean  | �            | Does the access method support a scan without any constraint for the first index column?                                     |
+| `amindexnulls`    | boolean  | �            | Does the access method support null index entries?                                                                           |
+| `amstorage`       | boolean  | �            | Can index storage data type differ from column data type?                                                                    |
+| `amclusterable`   | boolean  | �            | Can an index of this type be clustered on?                                                                                   |
+| `aminsert`        | regproc  | pg\_proc.oid | "Insert this tuple" function                                                                                                 |
+| `ambeginscan`     | regproc  | pg\_proc.oid | "Start new scan" function                                                                                                    |
+| `amgettuple`      | regproc  | pg\_proc.oid | "Next valid tuple" function                                                                                                  |
+| `amgetmulti`      | regproc  | pg\_proc.oid | "Fetch multiple tuples" function                                                                                             |
+| `amrescan`        | regproc  | pg\_proc.oid | "Restart this scan" function                                                                                                 |
+| `amendscan`       | regproc  | pg\_proc.oid | "End this scan" function                                                                                                     |
+| `ammarkpos`       | regproc  | pg\_proc.oid | "Mark current scan position" function                                                                                        |
+| `amrestrpos`      | regproc  | pg\_proc.oid | "Restore marked scan position" function                                                                                      |
+| `ambuild`         | regproc  | pg\_proc.oid | "Build new index" function                                                                                                   |
+| `ambulkdelete`    | regproc  | pg\_proc.oid | Bulk-delete function                                                                                                         |
+| `amvacuumcleanup` | regproc  | pg\_proc.oid | Post-`VACUUM` cleanup function                                                                                               |
+| `amcostestimate`  | regproc  | pg\_proc.oid | Function to estimate cost of an index scan                                                                                   |
+| `amoptions`       | regproc  | pg\_proc.oid | Function to parse and validate reloptions for an index                                                                       |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_amop.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_amop.html.md.erb b/markdown/reference/catalog/pg_amop.html.md.erb
new file mode 100644
index 0000000..c07bcd1
--- /dev/null
+++ b/markdown/reference/catalog/pg_amop.html.md.erb
@@ -0,0 +1,20 @@
+---
+title: pg_amop
+---
+
+The `pg_amop` table stores information about operators associated with index access method operator classes. There is one row for each operator that is a member of an operator class.
+
+<a id="topic1__fw143542"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_amop</span>
+
+| column         | type     | references       | description                                                                |
+|----------------|----------|------------------|----------------------------------------------------------------------------|
+| `amopclaid`    | oid      | pg\_opclass.oid  | The index operator class this entry is for                                 |
+| `amopsubtype`  | oid      | pg\_type.oid     | Subtype to distinguish multiple entries for one strategy; zero for default |
+| `amopstrategy` | smallint | �                | Operator strategy number                                                   |
+| `amopreqcheck` | boolean  | �                | Index hit must be rechecked                                                |
+| `amopopr`      | oid      | pg\_operator.oid | OID of the operator                                                        |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_amproc.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_amproc.html.md.erb b/markdown/reference/catalog/pg_amproc.html.md.erb
new file mode 100644
index 0000000..7668bac
--- /dev/null
+++ b/markdown/reference/catalog/pg_amproc.html.md.erb
@@ -0,0 +1,19 @@
+---
+title: pg_amproc
+---
+
+The `pg_amproc` table stores information about support procedures associated with index access method operator classes. There is one row for each support procedure belonging to an operator class.
+
+<a id="topic1__fx143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_amproc</span>
+
+| column          | type    | references      | description                                |
+|-----------------|---------|-----------------|--------------------------------------------|
+| `amopclaid`     | oid     | pg\_opclass.oid | The index operator class this entry is for |
+| `amprocsubtype` | oid     | pg\_type.oid    | Subtype, if cross-type routine, else zero  |
+| `amprocnum`     | int2    | �               | Support procedure number                   |
+| `amproc`        | regproc | pg\_proc.oid    | OID of the procedure                       |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_appendonly.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_appendonly.html.md.erb b/markdown/reference/catalog/pg_appendonly.html.md.erb
new file mode 100644
index 0000000..b0a56d6
--- /dev/null
+++ b/markdown/reference/catalog/pg_appendonly.html.md.erb
@@ -0,0 +1,29 @@
+---
+title: pg_appendonly
+---
+
+The `pg_appendonly` table contains information about the storage options and other characteristics of append-only tables.
+
+<a id="topic1__fy138428"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_appendonly</span>
+
+| column          | type     | references | description                                                                                                                                        |
+|-----------------|----------|------------|----------------------------------------------------------------------------------------------------------------------------------------------------|
+| `relid`         | oid      | �          | The table object identifier (OID) of the compressed table.                                                                                         |
+| `compresslevel` | smallint | �          | The compression level; higher values give a greater compression ratio. If the `gzip` or `zlib` compression type is specified, valid values are 1-9.                           |
+| `majorversion`  | smallint | �          | The major version number of the `pg_appendonly` table.                                                                                              |
+| `minorversion`  | smallint | �          | The minor version number of the `pg_appendonly` table.                                                                                              |
+| `checksum`      | boolean  | �          | A checksum value that is stored to compare the state of a block of data at compression time and at scan time to ensure data integrity.             |
+| `compresstype`  | text     | �          | Type of compression used on append-only and parquet tables. `zlib`, `snappy`, and `gzip` compression types are supported. |
+| `columnstore`   | boolean  | �          | `0` for row-oriented storage.                                                                                                                      |
+| `segrelid`      | oid      | �          | Table on-disk segment file id.                                                                                                                     |
+| `segidxid`      | oid      | �          | Index on-disk segment file id.                                                                                                                     |
+| `blkdirrelid`   | oid      | �          | Block used for on-disk column-oriented table file.                                                                                                 |
+| `blkdiridxid`   | oid      | �          | Block used for on-disk column-oriented index file.                                                                                                 |
+| `version`       | integer  | �          | Version of MemTuples and block layout for this table.                                                                                              |
+| `pagesize`      | integer  | �          | The max page size of this relation. Only valid for Parquet tables; otherwise, the value is 0.                                                      |
+| `splitsize`     | integer  | �          | Size of a split. Default value is 64M, which is controlled by server configuration parameter `appendonly_split_write_size_mb`.                     |
+
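+Joining `pg_appendonly` to `pg_class` shows the storage options by table name. The query below is an illustrative sketch.
+
+``` pre
+-- Show compression settings and page size for append-only and parquet tables.
+SELECT c.relname, a.compresstype, a.compresslevel, a.pagesize
+FROM pg_appendonly a JOIN pg_class c ON a.relid = c.oid;
+```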
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_attrdef.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_attrdef.html.md.erb b/markdown/reference/catalog/pg_attrdef.html.md.erb
new file mode 100644
index 0000000..f4af77d
--- /dev/null
+++ b/markdown/reference/catalog/pg_attrdef.html.md.erb
@@ -0,0 +1,19 @@
+---
+title: pg_attrdef
+---
+
+The `pg_attrdef` table stores column default values. The main information about columns is stored in [pg\_attribute](pg_attribute.html#topic1). Only columns that explicitly specify a default value (when the table is created or the column is added) will have an entry here.
+
+<a id="topic1__fz143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_attrdef</span>
+
+| column    | type     | references           | description                                                                                           |
+|-----------|----------|----------------------|-------------------------------------------------------------------------------------------------------|
+| `adrelid` | oid      | pg\_class.oid        | The table this column belongs to                                                                      |
+| `adnum`   | smallint | pg\_attribute.attnum | The number of the column                                                                              |
+| `adbin `  | text     | �                    | The internal representation of the column default value                                               |
+| `adsrc`   | text     | �                    | A human-readable representation of the default value. This field is historical, and is best not used. |
+
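+Default expressions are stored in an internal form, so they are normally read back through `pg_get_expr`. The query below is an illustrative sketch; the table name `orders` is a hypothetical example.
+
+``` pre
+-- Show the default expression for each column of a hypothetical 'orders' table.
+SELECT a.attname, pg_get_expr(d.adbin, d.adrelid) AS default_value
+FROM pg_attrdef d
+  JOIN pg_class c ON d.adrelid = c.oid
+  JOIN pg_attribute a ON a.attrelid = d.adrelid AND a.attnum = d.adnum
+WHERE c.relname = 'orders';
+```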
+
+
+


[12/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/SELECT.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/SELECT.html.md.erb b/markdown/reference/sql/SELECT.html.md.erb
new file mode 100644
index 0000000..4649bad
--- /dev/null
+++ b/markdown/reference/sql/SELECT.html.md.erb
@@ -0,0 +1,507 @@
+---
+title: SELECT
+---
+
+Retrieves rows from a table or view.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+SELECT [ALL | DISTINCT [ON (<expression> [, ...])]]
+  * | <expression> [[AS] <output_name>] [, ...]
+  [FROM <from_item> [, ...]]
+  [WHERE <condition>]
+  [GROUP BY <grouping_element> [, ...]]
+  [HAVING <condition> [, ...]]
+  [WINDOW <window_name> AS (<window_specification>)]
+  [{UNION | INTERSECT | EXCEPT} [ALL] <select>]
+  [ORDER BY <expression> [ASC | DESC | USING <operator>] [, ...]]
+  [LIMIT {<count> | ALL}]
+  [OFFSET <start>]
+```
+
+where \<grouping\_element\> can be one of:
+
+``` pre
+  ()
+  <expression>
+  ROLLUP (<expression> [,...])
+  CUBE (<expression> [,...])
+  GROUPING SETS ((<grouping_element> [, ...]))
+```
+
+where \<window\_specification\> can be:
+
+``` pre
+  [<window_name>]
+  [PARTITION BY <expression> [, ...]]
+  [ORDER BY <expression> [ASC | DESC | USING <operator>] [, ...]
+     [{RANGE | ROWS}
+        { UNBOUNDED PRECEDING
+        | <expression> PRECEDING
+        | CURRENT ROW
+        | BETWEEN <window_frame_bound> AND <window_frame_bound> }]]
+        where <window_frame_bound> can be one of:
+           UNBOUNDED PRECEDING
+           <expression> PRECEDING
+           CURRENT ROW
+           <expression> FOLLOWING
+           UNBOUNDED FOLLOWING
+```
+
+where \<from\_item\> can be one of:
+
+``` pre
+[ONLY] <table_name> [[AS] <alias> [( <column_alias> [, ...] )]]
+(select) [AS] <alias> [( <column_alias> [, ...] )]
+<function_name> ( [<argument> [, ...]] ) [AS] <alias>
+             [( <column_alias> [, ...]
+                | <column_definition> [, ...] )]
+<function_name> ( [<argument> [, ...]] ) AS
+              ( <column_definition> [, ...] )
+<from_item> [NATURAL] <join_type> <from_item>
+          [ON <join_condition> | USING ( <join_column> [, ...] )]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`SELECT` retrieves rows from zero or more tables. The general processing of `SELECT` is as follows:
+
+1.  All elements in the `FROM` list are computed. (Each element in the `FROM` list is a real or virtual table.) If more than one element is specified in the `FROM` list, they are cross-joined together.
+2.  If the `WHERE` clause is specified, all rows that do not satisfy the condition are eliminated from the output.
+3.  If the `GROUP BY` clause is specified, the output is divided into groups of rows that match on one or more of the defined grouping elements. If the `HAVING` clause is present, it eliminates groups that do not satisfy the given condition.
+4.  If a window expression is specified (and optional `WINDOW` clause), the output is organized according to the positional (row) or value-based (range) window frame.
+5.  `DISTINCT` eliminates duplicate rows from the result. `DISTINCT ON` eliminates rows that match on all the specified expressions. `ALL` (the default) will return all candidate rows, including duplicates.
+6.  The actual output rows are computed using the `SELECT` output expressions for each selected row.
+7.  Using the operators `UNION`, `INTERSECT`, and `EXCEPT`, the output of more than one `SELECT` statement can be combined to form a single result set. The `UNION` operator returns all rows that are in one or both of the result sets. The `INTERSECT` operator returns all rows that are strictly in both result sets. The `EXCEPT` operator returns the rows that are in the first result set but not in the second. In all three cases, duplicate rows are eliminated unless `ALL` is specified.
+8.  If the `ORDER BY` clause is specified, the returned rows are sorted in the specified order. If `ORDER BY` is not given, the rows are returned in whatever order the system finds fastest to produce.
+9.  If the `LIMIT` or `OFFSET` clause is specified, the `SELECT` statement only returns a subset of the result rows.
+
+You must have `SELECT` privilege on a table to read its values.
+
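+As a compact illustration of these steps, the following query (a sketch that reuses the `sale` table appearing in the window examples later on this page) filters rows, groups them, filters the groups, sorts the result, and limits the output:
+
+``` pre
+SELECT vendor, sum(prc*qty) AS total
+FROM sale
+WHERE qty > 0
+GROUP BY vendor
+HAVING count(*) > 10
+ORDER BY total DESC
+LIMIT 5;
+```
+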
+## <a id="topic1__section4"></a>Parameters
+
+**The SELECT List**
+
+The `SELECT` list (between the key words `SELECT` and `FROM`) specifies expressions that form the output rows of the `SELECT` statement. The expressions can (and usually do) refer to columns computed in the `FROM` clause.
+
+Using the clause `[AS]` \<output\_name\>, another name can be specified for an output column. This name is primarily used to label the column for display. It can also be used to refer to the column's value in `ORDER BY` and `GROUP BY` clauses, but not in the `WHERE` or `HAVING` clauses; there you must write out the expression instead. The `AS` keyword is optional in most cases (such as when declaring an alias for column names, constants, function calls, and simple unary operator expressions). In cases where the declared alias is a reserved SQL keyword, the \<output\_name\> must be enclosed in double quotes to avoid ambiguity.
+
+An \<expression\> in the `SELECT` list can be a constant value, a column reference, an operator invocation, a function call, an aggregate expression, a window expression, a scalar subquery, and so on. There are a number of constructs that can be classified as an expression but do not follow any general syntax rules.
+
+Instead of an expression, `*` can be written in the output list as a shorthand for all the columns of the selected rows. Also, you can write `table_name.*` as a shorthand for the columns coming from just that table.
+
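+For example, an alias that happens to be a reserved keyword must be double-quoted (a sketch using the `sale` table from later examples):
+
+``` pre
+-- 'supplier' labels the column; "select" must be quoted because it is a keyword.
+SELECT vendor AS supplier, prc*qty AS "select"
+FROM sale
+ORDER BY supplier;
+```
+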
+**The FROM Clause**
+
+The `FROM` clause specifies one or more source tables for the `SELECT`. If multiple sources are specified, the result is the Cartesian product (cross join) of all the sources. But usually qualification conditions are added to restrict the returned rows to a small subset of the Cartesian product. The `FROM` clause can contain the following elements:
+
+<dt> \<table\_name\>  </dt>
+<dd>The name (optionally schema-qualified) of an existing table or view. If `ONLY` is specified, only that table is scanned. If `ONLY` is not specified, the table and all its descendant tables (if any) are scanned.</dd>
+
+<dt> \<alias\>  </dt>
+<dd>A substitute name for the `FROM` item containing the alias. An alias is used for brevity or to eliminate ambiguity for self-joins (where the same table is scanned multiple times). When an alias is provided, it completely hides the actual name of the table or function; for example given `FROM foo AS f`, the remainder of the `SELECT` must refer to this `FROM` item as `f` not `foo`. If an alias is written, a column alias list can also be written to provide substitute names for one or more columns of the table.</dd>
+
+<dt> \<select\>  </dt>
+<dd>A sub-`SELECT` can appear in the `FROM` clause. This acts as though its output were created as a temporary table for the duration of this single `SELECT` command. Note that the sub-`SELECT` must be surrounded by parentheses, and an alias must be provided for it. A `VALUES` command can also be used here. See "Non-standard Clauses" in the [Compatibility](#topic1__section19) section for limitations of using correlated sub-selects in HAWQ.</dd>
+
+<dt> \<function\_name\>  </dt>
+<dd>Function calls can appear in the `FROM` clause. (This is especially useful for functions that return result sets, but any function can be used.) This acts as though its output were created as a temporary table for the duration of this single `SELECT` command. An alias may also be used. If an alias is written, a column alias list can also be written to provide substitute names for one or more attributes of the function's composite return type. If the function has been defined as returning the record data type, then an alias or the key word `AS` must be present, followed by a column definition list in the form `(<column_name> <data_type> [, ... ] )`. The column definition list must match the actual number and types of columns returned by the function.</dd>
+
+<dt> \<join\_type\>  </dt>
+<dd>One of:
+
+-   **\[INNER\] JOIN**
+-   **LEFT \[OUTER\] JOIN**
+-   **RIGHT \[OUTER\] JOIN**
+-   **FULL \[OUTER\] JOIN**
+-   **CROSS JOIN**
+
+For the `INNER` and `OUTER` join types, a join condition must be specified, namely exactly one of `NATURAL`, `ON <join_condition>`, or `USING (<join_column> [, ...])`. See below for the meaning. For `CROSS JOIN`, none of these clauses may appear.
+
+A `JOIN` clause combines two `FROM` items. Use parentheses if necessary to determine the order of nesting. In the absence of parentheses, `JOIN`s nest left-to-right. In any case `JOIN` binds more tightly than the commas separating `FROM` items.
+
+`CROSS JOIN` and `INNER JOIN` produce a simple Cartesian product, the same result as you get from listing the two items at the top level of `FROM`, but restricted by the join condition (if any). `CROSS JOIN` is equivalent to `INNER JOIN ON (TRUE)`, that is, no rows are removed by qualification. These join types are just a notational convenience, since they do nothing you could not do with plain `FROM` and `WHERE` (see the join example after this list).
+
+`LEFT OUTER JOIN` returns all rows in the qualified Cartesian product (i.e., all combined rows that pass its join condition), plus one copy of each row in the left-hand table for which there was no right-hand row that passed the join condition. This left-hand row is extended to the full width of the joined table by inserting null values for the right-hand columns. Note that only the `JOIN` clause's own condition is considered while deciding which rows have matches. Outer conditions are applied afterwards.
+
+Conversely, `RIGHT OUTER JOIN` returns all the joined rows, plus one row for each unmatched right-hand row (extended with nulls on the left). This is just a notational convenience, since you could convert it to a `LEFT OUTER JOIN` by switching the left and right inputs.
+
+`FULL OUTER JOIN` returns all the joined rows, plus one row for each unmatched left-hand row (extended with nulls on the right), plus one row for each unmatched right-hand row (extended with nulls on the left).</dd>
+
+<dt>ON \<join\_condition\>  </dt>
+<dd>\<join\_condition\> is an expression resulting in a value of type `boolean` (similar to a `WHERE` clause) that specifies which rows in a join are considered to match.</dd>
+
+<dt>USING (\<join\_column\> \[, ...\])  </dt>
+<dd>A clause of the form `USING ( a, b, ... )` is shorthand for `ON left_table.a = right_table.a AND left_table.b = right_table.b ...`. Also, `USING` implies that only one of each pair of equivalent columns will be included in the join output, not both.</dd>
+
+<dt>NATURAL  </dt>
+<dd>`NATURAL` is shorthand for a `USING` list that mentions all columns in the two tables that have the same names.</dd>
+
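+The following sketch combines several of these elements; the `vendor` table and its columns are hypothetical, invented here only to illustrate an outer join with aliases:
+
+``` pre
+-- Every vendor, with sale totals where present; vendors with no
+-- matching sales appear with a NULL total.
+SELECT v.name, sum(s.prc*s.qty) AS total
+FROM vendor v LEFT OUTER JOIN sale s ON v.id = s.vendor_id
+GROUP BY v.name;
+```
+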
+**The WHERE Clause**
+
+The optional `WHERE` clause has the general form:
+
+``` pre
+WHERE <condition>
+```
+
+where \<condition\> is any expression that evaluates to a result of type `boolean`. Any row that does not satisfy this condition will be eliminated from the output. A row satisfies the condition if it returns true when the actual row values are substituted for any variable references.
+
+**The GROUP BY Clause**
+
+The optional `GROUP BY` clause has the general form:
+
+``` pre
+GROUP BY <grouping_element> [, ...]
+```
+
+where \<grouping\_element\> can be one of:
+
+``` pre
+()
+<expression>
+ROLLUP (<expression> [,...])
+CUBE (<expression> [,...])
+GROUPING SETS ((<grouping_element> [, ...]))
+```
+
+`GROUP BY` will condense into a single row all selected rows that share the same values for the grouped expressions. \<expression\> can be an input column name, or the name or ordinal number of an output column (`SELECT` list item), or an arbitrary expression formed from input-column values. In case of ambiguity, a `GROUP BY` name will be interpreted as an input-column name rather than an output column name.
+
+Aggregate functions, if any are used, are computed across all rows making up each group, producing a separate value for each group (whereas without `GROUP BY`, an aggregate produces a single value computed across all the selected rows). When `GROUP BY` is present, it is not valid for the `SELECT` list expressions to refer to ungrouped columns except within aggregate functions, since there would be more than one possible value to return for an ungrouped column.
+
+HAWQ has the following additional OLAP grouping extensions (often referred to as *supergroups*):
+
+<dt>ROLLUP  </dt>
+<dd>A `ROLLUP` grouping is an extension to the `GROUP BY` clause that creates aggregate subtotals that roll up from the most detailed level to a grand total, following a list of grouping columns (or expressions). `ROLLUP` takes an ordered list of grouping columns, calculates the standard aggregate values specified in the `GROUP BY` clause, then creates progressively higher-level subtotals, moving from right to left through the list. Finally, it creates a grand total. A `ROLLUP` grouping can be thought of as a series of grouping sets. For example:
+
+``` pre
+GROUP BY ROLLUP (a,b,c)
+```
+
+is equivalent to:
+
+``` pre
+GROUP BY GROUPING SETS( (a,b,c), (a,b), (a), () )
+```
+
+Notice that the *n* elements of a `ROLLUP` translate to *n*+1 grouping sets. Also, the order in which the grouping expressions are specified is significant in a `ROLLUP`.</dd>
+
+<dt>CUBE  </dt>
+<dd>A `CUBE` grouping is an extension to the `GROUP BY` clause that creates subtotals for all of the possible combinations of the given list of grouping columns (or expressions). In terms of multidimensional analysis, `CUBE` generates all the subtotals that could be calculated for a data cube with the specified dimensions. For example:
+
+``` pre
+GROUP BY CUBE (a,b,c)
+```
+
+is equivalent to:
+
+``` pre
+GROUP BY GROUPING SETS( (a,b,c), (a,b), (a,c), (b,c), (a),
+(b), (c), () )
+```
+
+Notice that *n* elements of a `CUBE` translate to 2<sup>n</sup> grouping sets. Consider using `CUBE` in any situation requiring cross-tabular reports. `CUBE` is typically most suitable in queries that use columns from multiple dimensions rather than columns representing different levels of a single dimension. For instance, a commonly requested cross-tabulation might need subtotals for all the combinations of month, state, and product.</dd>
+
+<dt>GROUPING SETS  </dt>
+<dd>You can selectively specify the set of groups that you want to create using a `GROUPING SETS` expression within a `GROUP BY` clause. This allows precise specification across multiple dimensions without computing a whole `ROLLUP` or `CUBE`. For example:
+
+``` pre
+GROUP BY GROUPING SETS( (a,c), (a,b) )
+```
+
+If you use the grouping extension clauses `ROLLUP`, `CUBE`, or `GROUPING SETS`, two challenges arise. First, how do you determine which result rows are subtotals, and what is the exact level of aggregation for a given subtotal? How do you differentiate between result rows that contain stored `NULL` values and the "NULL" placeholder values created by a `ROLLUP` or `CUBE`? Second, when duplicate grouping sets are specified in the `GROUP BY` clause, how do you determine which result rows are duplicates? Two additional grouping functions in the `SELECT` list help with this (an example follows this list):
+
+-   **grouping(\<column\> \[, ...\])** - The `grouping` function can be applied to one or more grouping attributes to distinguish super-aggregated rows from regular grouped rows. This can be helpful in distinguishing a "NULL" representing the set of all values in a super-aggregated row from a `NULL` value in a regular row. Each argument in this function produces a bit, either `1` or `0`, where `1` means the result row is super-aggregated, and `0` means the result row is from a regular grouping. The `grouping` function returns an integer by treating these bits as a binary number and then converting it to a base-10 integer.
+-   **group\_id()** - For grouping extension queries that contain duplicate grouping sets, the `group_id` function is used to identify duplicate rows in the output. All *unique* grouping set output rows will have a group\_id value of 0. For each duplicate grouping set detected, the `group_id` function assigns a group\_id number greater than 0. All output rows in a particular duplicate grouping set are identified by the same group\_id number.</dd>
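+
+The following sketch (using the `sale` table from the window examples) shows `grouping` distinguishing the `ROLLUP` grand-total row from regular grouped rows:
+
+``` pre
+-- grouping(vendor) is 1 for the super-aggregated (grand total) row, 0 otherwise.
+SELECT vendor, sum(prc*qty) AS total, grouping(vendor) AS is_rollup
+FROM sale
+GROUP BY ROLLUP (vendor);
+```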
+
+**The WINDOW Clause**
+
+The `WINDOW` clause is used to define a window that can be used in the `OVER()` expression of a window function such as `rank` or `avg`. For example:
+
+``` pre
+SELECT vendor, rank() OVER (mywindow) FROM sale
+GROUP BY vendor
+WINDOW mywindow AS (ORDER BY sum(prc*qty));
+```
+
+A `WINDOW` clause has this general form:
+
+``` pre
+WINDOW <window_name> AS (<window_specification>)
+```
+
+where \<window\_specification\> can be:
+
+``` pre
+[<window_name>]
+[PARTITION BY <expression> [, ...]]
+[ORDER BY <expression> [ASC | DESC | USING <operator>] [, ...]
+    [{RANGE | ROWS}
+       { UNBOUNDED PRECEDING
+       | <expression> PRECEDING
+       | CURRENT ROW
+       | BETWEEN <window_frame_bound> AND <window_frame_bound> }]]
+       where <window_frame_bound> can be one of:
+          UNBOUNDED PRECEDING
+          <expression> PRECEDING
+          CURRENT ROW
+          <expression> FOLLOWING
+          UNBOUNDED FOLLOWING
+```
+
+<dt> \<window\_name\>  </dt>
+<dd>Gives a name to the window specification.</dd>
+
+<dt>PARTITION BY  </dt>
+<dd>The `PARTITION BY` clause organizes the result set into logical groups based on the unique values of the specified expression. When used with window functions, the functions are applied to each partition independently. For example, if you follow `PARTITION BY` with a column name, the result set is partitioned by the distinct values of that column. If omitted, the entire result set is considered one partition.</dd>
+
+<dt>ORDER BY  </dt>
+<dd>The `ORDER BY` clause defines how to sort the rows in each partition of the result set. If omitted, rows are returned in whatever order is most efficient and may vary.
+
+**Note:** Columns of data types that lack a coherent ordering, such as `time`, are not good candidates for use in the `ORDER BY` clause of a window specification. Time, with or without time zone, lacks a coherent ordering because addition and subtraction do not have the expected effects. For example, the following is not generally true: `x::time < x::time + '2 hour'::interval`</dd>
+
+<dt>ROWS | RANGE  </dt>
+<dd>Use either a `ROWS` or `RANGE` clause to express the bounds of the window. The window bound can be one, many, or all rows of a partition. You can express the bound of the window either in terms of a range of data values offset from the value in the current row (`RANGE`), or in terms of the number of rows offset from the current row (`ROWS`). When using the `RANGE` clause, you must also use an `ORDER BY` clause, because the calculation performed to produce the window requires that the values be sorted. Additionally, the `ORDER BY` clause cannot contain more than one expression, and the expression must result in either a date or a numeric value. When using the `ROWS` or `RANGE` clauses, if you specify only a starting row, the current row is used as the last row in the window. (A moving-window example follows this list.)
+
+**PRECEDING** - The `PRECEDING` clause defines the first row of the window using the current row as a reference point. The starting row is expressed in terms of the number of rows preceding the current row. For example, in the case of `ROWS` framing, `5 PRECEDING` sets the window to start with the fifth row preceding the current row. In the case of `RANGE` framing, it sets the window to start with the first row whose ordering column value precedes that of the current row by 5 in the given order. If the specified order is ascending by date, this will be the first row within 5 days before the current row. `UNBOUNDED PRECEDING` sets the first row in the window to be the first row in the partition.
+
+**BETWEEN** - The `BETWEEN` clause defines the first and last row of the window, using the current row as a reference point. First and last rows are expressed in terms of the number of rows preceding and following the current row, respectively. For example, `BETWEEN 3 PRECEDING AND 5 FOLLOWING` sets the window to start with the third row preceding the current row, and end with the fifth row following the current row. Use `BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING` to set the first and last rows in the window to be the first and last row in the partition, respectively. This is equivalent to the default behavior if no `ROWS` or `RANGE` clause is specified.
+
+**FOLLOWING** - The `FOLLOWING` clause defines the last row of the window using the current row as a reference point. The last row is expressed in terms of the number of rows following the current row. For example, in the case of `ROWS` framing, `5 FOLLOWING` sets the window to end with the fifth row following the current row. In the case of `RANGE` framing, it sets the window to end with the last row whose ordering column value follows that of the current row by 5 in the given order. If the specified order is ascending by date, this will be the last row within 5 days after the current row. Use `UNBOUNDED FOLLOWING` to set the last row in the window to be the last row in the partition.
+
+If you do not specify a `ROWS` or a `RANGE` clause, the window bound starts with the first row in the partition (`UNBOUNDED PRECEDING`) and ends with the current row (`CURRENT ROW`) if `ORDER BY` is used. If an `ORDER BY` is not specified, the window starts with the first row in the partition (`UNBOUNDED PRECEDING`) and ends with the last row in the partition (`UNBOUNDED FOLLOWING`).</dd>
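+
+The following sketch (again using the `sale` table) computes a three-row moving sum with a `ROWS BETWEEN` frame:
+
+``` pre
+-- Each output row sums the current row's qty with the row before
+-- and the row after it, within each vendor partition.
+SELECT vendor, qty,
+       sum(qty) OVER (PARTITION BY vendor ORDER BY qty
+                      ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS moving_sum
+FROM sale;
+```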
+
+**The HAVING Clause**
+
+The optional `HAVING` clause has the general form:
+
+``` pre
+HAVING <condition>
+```
+
+where \<condition\> is the same as specified for the `WHERE` clause. `HAVING` eliminates group rows that do not satisfy the condition. `HAVING` is different from `WHERE`: `WHERE` filters individual rows before the application of `GROUP BY`, while `HAVING` filters group rows created by `GROUP BY`. Each column referenced in \<condition\> must unambiguously reference a grouping column, unless the reference appears within an aggregate function.
+
+The presence of `HAVING` turns a query into a grouped query even if there is no `GROUP BY` clause. This is the same as what happens when the query contains aggregate functions but no `GROUP BY` clause. All the selected rows are considered to form a single group, and the `SELECT` list and `HAVING` clause can only reference table columns from within aggregate functions. Such a query will emit a single row if the `HAVING` condition is true, zero rows if it is not true.
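+
+A short sketch of the distinction, using the `sale` table: the `WHERE` clause discards individual rows before grouping, and the `HAVING` clause then discards whole groups.
+
+``` pre
+SELECT vendor, count(*) AS num_sales
+FROM sale
+WHERE qty > 0
+GROUP BY vendor
+HAVING count(*) >= 100;
+```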
+
+**The UNION Clause**
+
+The `UNION` clause has this general form:
+
+``` pre
+<select_statement> UNION [ALL] <select_statement>
+```
+
+where \<select\_statement\> is any `SELECT` statement without an `ORDER BY`, `LIMIT`, `FOR UPDATE`, or `FOR SHARE` clause. (`ORDER BY` and `LIMIT` can be attached to a subquery expression if it is enclosed in parentheses. Without parentheses, these clauses will be taken to apply to the result of the `UNION`, not to its right-hand input expression.)
+
+The `UNION` operator computes the set union of the rows returned by the involved `SELECT` statements. A row is in the set union of two result sets if it appears in at least one of the result sets. The two `SELECT` statements that represent the direct operands of the `UNION` must produce the same number of columns, and corresponding columns must be of compatible data types.
+
+The result of `UNION` does not contain any duplicate rows unless the `ALL` option is specified. `ALL` prevents elimination of duplicates. (Therefore, `UNION ALL` is usually significantly quicker than `UNION`; use `ALL` when you can.)
+
+Multiple `UNION` operators in the same `SELECT` statement are evaluated left to right, unless otherwise indicated by parentheses.
+
+Currently, `FOR UPDATE` and `FOR SHARE` may not be specified either for a `UNION` result or for any input of a `UNION`.
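+
+For example, the following sketch (using the `distributors` and `actors` tables from the examples below) shows that an unparenthesized `ORDER BY` sorts the combined result, while parentheses confine `ORDER BY` and `LIMIT` to a single input:
+
+``` sql
+-- Sorts the result of the whole UNION:
+SELECT name FROM distributors UNION SELECT name FROM actors ORDER BY name;
+
+-- ORDER BY and LIMIT apply only to the parenthesized input:
+(SELECT name FROM distributors ORDER BY name LIMIT 5)
+UNION
+SELECT name FROM actors;
+```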
+
+**The INTERSECT Clause**
+
+The `INTERSECT` clause has this general form:
+
+``` pre
+<select_statement> INTERSECT [ALL] <select_statement>
+```
+
+where \<select\_statement\> is any SELECT statement without an `ORDER BY`, `LIMIT`, `FOR UPDATE`, or `FOR SHARE` clause.
+
+The `INTERSECT` operator computes the set intersection of the rows returned by the involved `SELECT` statements. A row is in the intersection of two result sets if it appears in both result sets.
+
+The result of `INTERSECT` does not contain any duplicate rows unless the `ALL` option is specified. With `ALL`, a row that has *m* duplicates in the left table and *n* duplicates in the right table will appear min(*m*, *n*) times in the result set.
+
+Multiple `INTERSECT` operators in the same `SELECT` statement are evaluated left to right, unless parentheses dictate otherwise. `INTERSECT` binds more tightly than `UNION`. That is, `A UNION B INTERSECT C` will be read as `A UNION (B INTERSECT C)`.
+
+Currently, `FOR UPDATE` and `FOR SHARE` may not be specified either for an `INTERSECT` result or for any input of an `INTERSECT`.
+
+**The EXCEPT Clause**
+
+The `EXCEPT` clause has this general form:
+
+``` pre
+<select_statement> EXCEPT [ALL] <select_statement>
+```
+
+where \<select\_statement\> is any `SELECT` statement without an `ORDER BY`, `LIMIT`, `FOR UPDATE`, or `FOR SHARE` clause.
+
+The `EXCEPT` operator computes the set of rows that are in the result of the left `SELECT` statement but not in the result of the right one.
+
+The result of `EXCEPT` does not contain any duplicate rows unless the `ALL` option is specified. With `ALL`, a row that has *m* duplicates in the left table and *n* duplicates in the right table will appear max(*m-n*,0) times in the result set.
+
+Multiple `EXCEPT` operators in the same `SELECT` statement are evaluated left to right unless parentheses dictate otherwise. `EXCEPT` binds at the same level as `UNION`.
+
+Currently, `FOR UPDATE` and `FOR SHARE` may not be specified either for an `EXCEPT` result or for any input of an `EXCEPT`.
+
+**The ORDER BY Clause**
+
+The optional `ORDER BY` clause has this general form:
+
+``` pre
+ORDER BY <expression> [ASC | DESC | USING <operator>] [, ...]
+```
+
+where \<expression\> can be the name or ordinal number of an output column (`SELECT` list item), or it can be an arbitrary expression formed from input-column values.
+
+The `ORDER BY` clause causes the result rows to be sorted according to the specified expressions. If two rows are equal according to the left-most expression, they are compared according to the next expression and so on. If they are equal according to all specified expressions, they are returned in an implementation-dependent order.
+
+The ordinal number refers to the ordinal (left-to-right) position of the result column. This feature makes it possible to define an ordering on the basis of a column that does not have a unique name. This is never absolutely necessary because it is always possible to assign a name to a result column using the `AS` clause.
+
+It is also possible to use arbitrary expressions in the `ORDER BY` clause, including columns that do not appear in the `SELECT` result list. Thus the following statement is valid:
+
+``` pre
+SELECT name FROM distributors ORDER BY code;
+```
+
+A limitation of this feature is that an `ORDER BY` clause applying to the result of a `UNION`, `INTERSECT`, or `EXCEPT` clause may only specify an output column name or number, not an expression.
+
+If an `ORDER BY` expression is a simple name that matches both a result column name and an input column name, `ORDER BY` will interpret it as the result column name. This is the opposite of the choice that `GROUP BY` will make in the same situation. This inconsistency is made to be compatible with the SQL standard.
+
+Optionally one may add the key word `ASC` (ascending) or `DESC` (descending) after any expression in the `ORDER BY` clause. If not specified, `ASC` is assumed by default. Alternatively, a specific ordering operator name may be specified in the `USING` clause. `ASC` is usually equivalent to `USING <` and `DESC` is usually equivalent to `USING >`. (But the creator of a user-defined data type can define exactly what the default sort ordering is, and it might correspond to operators with other names.)
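+
+For example, the following two queries (a sketch using the `distributors` table from the examples below) typically produce the same descending ordering:
+
+``` sql
+SELECT name FROM distributors ORDER BY code DESC;
+SELECT name FROM distributors ORDER BY code USING >;
+```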
+
+The null value sorts higher than any other value. In other words, with ascending sort order, null values sort at the end, and with descending sort order, null values sort at the beginning.
+
+Character-string data is sorted according to the locale-specific collation order that was established when the HAWQ system was initialized.
+
+**The DISTINCT Clause**
+
+If `DISTINCT` is specified, all duplicate rows are removed from the result set (one row is kept from each group of duplicates). `ALL` specifies the opposite: all rows are kept. `ALL` is the default.
+
+`DISTINCT ON ( <expression> [, ...] )` keeps only the first row of each set of rows where the given expressions evaluate to equal. The `DISTINCT ON` expressions are interpreted using the same rules as for `ORDER BY`. Note that the 'first row' of each set is unpredictable unless `ORDER BY` is used to ensure that the desired row appears first. For example:
+
+``` pre
+SELECT DISTINCT ON (location) location, time, report FROM
+weather_reports ORDER BY location, time DESC;
+```
+
+retrieves the most recent weather report for each location. But if we had not used `ORDER BY` to force descending order of time values for each location, we would have gotten a report from an unpredictable time for each location.
+
+The `DISTINCT ON` expression(s) must match the left-most `ORDER BY` expression(s). The `ORDER BY` clause will normally contain additional expression(s) that determine the desired precedence of rows within each `DISTINCT ON` group.
+
+**The LIMIT Clause**
+
+The `LIMIT` clause consists of two independent sub-clauses:
+
+``` pre
+LIMIT {<count> | ALL}
+OFFSET <start>
+```
+
+where \<count\> specifies the maximum number of rows to return, while \<start\> specifies the number of rows to skip before starting to return rows. When both are specified, \<start\> rows are skipped before starting to count the \<count\> rows to be returned.
+
+When using `LIMIT`, it is a good idea to use an `ORDER BY` clause that constrains the result rows into a unique order. Otherwise you will get an unpredictable subset of the query's rows. You may be asking for the tenth through twentieth rows, but tenth through twentieth in what ordering? You don't know what ordering unless you specify `ORDER BY`.
+
+The query planner takes `LIMIT` into account when generating a query plan, so you are very likely to get different plans (yielding different row orders) depending on what you use for `LIMIT` and `OFFSET`. Thus, using different `LIMIT/OFFSET` values to select different subsets of a query result will give inconsistent results unless you enforce a predictable result ordering with `ORDER BY`. This is not a defect; it is an inherent consequence of the fact that SQL does not promise to deliver the results of a query in any particular order unless `ORDER BY` is used to constrain the order.
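+
+For example, the following sketch (using the `distributors` table from the examples below) returns rows 11 through 20 of a deterministic ordering:
+
+``` sql
+SELECT name FROM distributors ORDER BY name LIMIT 10 OFFSET 10;
+```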
+
+## <a id="topic1__section18"></a>Examples
+
+To join the table `films` with the table `distributors`:
+
+``` sql
+SELECT f.title, f.did, d.name, f.date_prod, f.kind FROM
+distributors d, films f WHERE f.did = d.did
+```
+
+To sum the column `length` of all films and group the results by `kind`:
+
+``` sql
+SELECT kind, sum(length) AS total FROM films GROUP BY kind;
+```
+
+To sum the column `length` of all films, group the results by `kind` and show those group totals that are less than 5 hours:
+
+``` sql
+SELECT kind, sum(length) AS total FROM films GROUP BY kind
+HAVING sum(length) < interval '5 hours';
+```
+
+Calculate the subtotals and grand totals of all sales for movie `kind` and `distributor`.
+
+``` sql
+SELECT kind, distributor, sum(prc*qty) FROM sales
+GROUP BY ROLLUP(kind, distributor)
+ORDER BY 1,2,3;
+```
+
+Calculate the rank of movie distributors based on total sales:
+
+``` sql
+SELECT distributor, sum(prc*qty),
+       rank() OVER (ORDER BY sum(prc*qty) DESC)
+FROM sale
+GROUP BY distributor ORDER BY 2 DESC;
+```
+
+The following two examples are identical ways of sorting the individual results according to the contents of the second column (`name`):
+
+``` sql
+SELECT * FROM distributors ORDER BY name;
+SELECT * FROM distributors ORDER BY 2;
+```
+
+The next example shows how to obtain the union of the tables `distributors` and `actors`, restricting the results to those that begin with the letter `W` in each table. Only distinct rows are wanted, so the key word `ALL` is omitted:
+
+``` sql
+SELECT distributors.name FROM distributors WHERE
+distributors.name LIKE 'W%' UNION SELECT actors.name FROM
+actors WHERE actors.name LIKE 'W%';
+```
+
+This example shows how to use a function in the `FROM` clause, both with and without a column definition list:
+
+``` pre
+CREATE FUNCTION distributors(int) RETURNS SETOF distributors
+AS $$ SELECT * FROM distributors WHERE did = $1; $$ LANGUAGE
+SQL;
+SELECT * FROM distributors(111);
+
+CREATE FUNCTION distributors_2(int) RETURNS SETOF record AS
+$$ SELECT * FROM distributors WHERE did = $1; $$ LANGUAGE
+SQL;
+SELECT * FROM distributors_2(111) AS (dist_id int, dist_name
+text);
+```
+
+## <a id="topic1__section19"></a>Compatibility
+
+The `SELECT` statement is compatible with the SQL standard, but there are some extensions and some missing features.
+
+**Omitted FROM Clauses**
+
+HAWQ allows you to omit the `FROM` clause. It has a straightforward use to compute the results of simple expressions. For example:
+
+``` sql
+SELECT 2+2;
+```
+
+Some other SQL databases cannot do this except by introducing a dummy one-row table from which to do the `SELECT`.
+
+Note that if a `FROM` clause is not specified, the query cannot reference any database tables. For compatibility with applications that rely on this behavior, the *add\_missing\_from* configuration parameter can be enabled.
+
+**The AS Key Word**
+
+In the SQL standard, the optional key word `AS` is just noise and can be omitted without affecting the meaning. The HAWQ parser requires this key word when renaming output columns because the type extensibility features lead to parsing ambiguities without it. `AS` is optional in `FROM` items, however.
+
+**Namespace Available to GROUP BY and ORDER BY**
+
+In the SQL-92 standard, an `ORDER BY` clause may only use result column names or numbers, while a `GROUP BY` clause may only use expressions based on input column names. HAWQ extends each of these clauses to allow the other choice as well (but it uses the standard's interpretation if there is ambiguity). HAWQ also allows both clauses to specify arbitrary expressions. Note that names appearing in an expression will always be taken as input-column names, not as result-column names.
+
+SQL:1999 and later use a slightly different definition which is not entirely upward compatible with SQL-92. In most cases, however, HAWQ will interpret an `ORDER BY` or `GROUP BY` expression the same way SQL:1999 does.
+
+**Nonstandard Clauses**
+
+The clauses `DISTINCT ON`, `LIMIT`, and `OFFSET` are not defined in the SQL standard.
+
+**Limited Use of STABLE and VOLATILE Functions**
+
+To prevent data from becoming out-of-sync across the segments in HAWQ, any function classified as `STABLE` or `VOLATILE` cannot be executed at the segment database level if it contains SQL or modifies the database in any way.
+
+## <a id="topic1__section25"></a>See Also
+
+[EXPLAIN](EXPLAIN.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/SET-ROLE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/SET-ROLE.html.md.erb b/markdown/reference/sql/SET-ROLE.html.md.erb
new file mode 100644
index 0000000..63a03f6
--- /dev/null
+++ b/markdown/reference/sql/SET-ROLE.html.md.erb
@@ -0,0 +1,72 @@
+---
+title: SET ROLE
+---
+
+Sets the current role identifier of the current session.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+SET [SESSION | LOCAL] ROLE <rolename>
+SET [SESSION | LOCAL] ROLE NONE
+RESET ROLE
+```
+
+## <a id="topic1__section3"></a>Description
+
+This command sets the current role identifier of the current SQL-session context to be \<rolename\>. The role name may be written as either an identifier or a string literal. After `SET ROLE`, permissions checking for SQL commands is carried out as though the named role were the one that had logged in originally.
+
+The specified \<rolename\> must be a role that the current session user is a member of. If the session user is a superuser, any role can be selected.
+
+The `NONE` and `RESET` forms reset the current role identifier to be the current session role identifier. These forms may be executed by any user.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>SESSION  </dt>
+<dd>Specifies that the command takes effect for the current session. This is the default.</dd>
+
+<dt>LOCAL  </dt>
+<dd>Specifies that the command takes effect for only the current transaction. After `COMMIT` or `ROLLBACK`, the session-level setting takes effect again. Note that `SET LOCAL` will appear to have no effect if it is executed outside of a transaction.</dd>
+
+<dt> \<rolename\>   </dt>
+<dd>The name of a role to use for permissions checking in this session.</dd>
+
+<dt>NONE  
+RESET  </dt>
+<dd>Reset the current role identifier to be the current session role identifier (that of the role used to log in).</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Using this command, it is possible to either add privileges or restrict privileges. If the session user role has the `INHERIT` attribute, then it automatically has all the privileges of every role that it could `SET ROLE` to; in this case `SET ROLE` effectively drops all the privileges assigned directly to the session user and to the other roles it is a member of, leaving only the privileges available to the named role. On the other hand, if the session user role has the `NOINHERIT` attribute, `SET ROLE` drops the privileges assigned directly to the session user and instead acquires the privileges available to the named role.
+
+In particular, when a superuser chooses to `SET ROLE` to a non-superuser role, she loses her superuser privileges.
+
+`SET ROLE` has effects comparable to `SET SESSION AUTHORIZATION`, but the privilege checks involved are quite different. Also, `SET SESSION AUTHORIZATION` determines which roles are allowable for later `SET ROLE` commands, whereas changing roles with `SET ROLE` does not change the set of roles allowed to a later `SET ROLE`.
+
+## <a id="topic1__section6"></a>Examples
+
+``` sql
+SELECT SESSION_USER, CURRENT_USER;
+```
+``` pre
+ session_user | current_user 
+--------------+--------------
+ peter        | peter
+```
+``` sql
+SET ROLE 'paul';
+SELECT SESSION_USER, CURRENT_USER;
+```
+``` pre
+ session_user | current_user 
+--------------+--------------
+ peter        | paul
+```
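+
+Either reset form restores the original session identity; for example, continuing the session above:
+
+``` sql
+RESET ROLE;
+SELECT SESSION_USER, CURRENT_USER;
+```
+
+After the reset, both `session_user` and `current_user` report `peter` again.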
+
+## <a id="topic1__section7"></a>Compatibility
+
+HAWQ allows identifier syntax (\<rolename\>), while the SQL standard requires the role name to be written as a string literal. SQL does not allow this command during a transaction; HAWQ does not make this restriction. The `SESSION` and `LOCAL` modifiers are a HAWQ extension, as is the `RESET` syntax.
+
+## <a id="topic1__section8"></a>See Also
+
+[SET SESSION AUTHORIZATION](SET-SESSION-AUTHORIZATION.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/SET-SESSION-AUTHORIZATION.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/SET-SESSION-AUTHORIZATION.html.md.erb b/markdown/reference/sql/SET-SESSION-AUTHORIZATION.html.md.erb
new file mode 100644
index 0000000..adea314
--- /dev/null
+++ b/markdown/reference/sql/SET-SESSION-AUTHORIZATION.html.md.erb
@@ -0,0 +1,66 @@
+---
+title: SET SESSION AUTHORIZATION
+---
+
+Sets the session role identifier and the current role identifier of the current session.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+SET [SESSION | LOCAL] SESSION AUTHORIZATION <rolename>
+SET [SESSION | LOCAL] SESSION AUTHORIZATION DEFAULT
+RESET SESSION AUTHORIZATION
+```
+
+## <a id="topic1__section3"></a>Description
+
+This command sets the session role identifier and the current role identifier of the current SQL-session context to \<rolename\>. The role name may be written as either an identifier or a string literal. Using this command, it is possible, for example, to temporarily become an unprivileged user and later switch back to being a superuser.
+
+The session role identifier is initially set to be the (possibly authenticated) role name provided by the client. The current role identifier is normally equal to the session user identifier, but may change temporarily in the context of setuid functions and similar mechanisms; it can also be changed by [SET ROLE](SET-ROLE.html). The current user identifier is relevant for permission checking.
+
+The session user identifier may be changed only if the initial session user (the authenticated user) had the superuser privilege. Otherwise, the command is accepted only if it specifies the authenticated user name.
+
+The `DEFAULT` and `RESET` forms reset the session and current user identifiers to be the originally authenticated user name. These forms may be executed by any user.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>SESSION  </dt>
+<dd>Specifies that the command takes effect for the current session. This is the default.</dd>
+
+<dt>LOCAL  </dt>
+<dd>Specifies that the command takes effect for only the current transaction. After `COMMIT` or `ROLLBACK`, the session-level setting takes effect again. Note that `SET LOCAL` will appear to have no effect if it is executed outside of a transaction.</dd>
+
+<dt> \<rolename\>   </dt>
+<dd>The name of the role to assume.</dd>
+
+<dt>DEFAULT  
+RESET  </dt>
+<dd>Reset the session and current role identifiers to be that of the role used to log in.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+``` sql
+SELECT SESSION_USER, CURRENT_USER;
+```
+``` pre
+ session_user | current_user 
+--------------+--------------
+ peter        | peter
+```
+``` sql
+SET SESSION AUTHORIZATION 'paul';
+SELECT SESSION_USER, CURRENT_USER;
+```
+``` pre
+ session_user | current_user 
+--------------+--------------
+ paul         | paul
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+The SQL standard allows some other expressions to appear in place of the literal \<rolename\>, but these options are not important in practice. HAWQ allows identifier syntax (\<rolename\>), while SQL does not. SQL does not allow this command during a transaction; HAWQ does not make this restriction. The `SESSION` and `LOCAL` modifiers are a HAWQ extension, as is the `RESET` syntax.
+
+## <a id="topic1__section7"></a>See Also
+
+[SET ROLE](SET-ROLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/SET.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/SET.html.md.erb b/markdown/reference/sql/SET.html.md.erb
new file mode 100644
index 0000000..4f4ad24
--- /dev/null
+++ b/markdown/reference/sql/SET.html.md.erb
@@ -0,0 +1,87 @@
+---
+title: SET
+---
+
+Changes the value of a HAWQ configuration parameter.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+SET [SESSION | LOCAL] <configuration_parameter> {TO | =} {<value> | '<value>' | DEFAULT}
+SET [SESSION | LOCAL] TIME ZONE {<timezone> | LOCAL | DEFAULT}
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `SET` command changes server configuration parameters. Any configuration parameter classified as a *session* parameter can be changed on-the-fly with `SET`. See [About Server Configuration Parameters](../guc/guc_config.html#topic1). `SET` only affects the value used by the current session.
+
+If `SET` or `SET SESSION` is issued within a transaction that is later aborted, the effects of the `SET` command disappear when the transaction is rolled back. Once the surrounding transaction is committed, the effects will persist until the end of the session, unless overridden by another `SET`.
+
+The effects of `SET LOCAL` only last till the end of the current transaction, whether committed or not. A special case is `SET` followed by `SET LOCAL` within a single transaction: the `SET LOCAL` value will be seen until the end of the transaction, but afterwards (if the transaction is committed) the `SET` value will take effect.
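+
+For example, a minimal sketch (assuming a schema named `my_schema` exists) that contrasts the two scopes within one transaction:
+
+``` sql
+BEGIN;
+SET datestyle TO postgres, dmy;      -- session-level; persists after COMMIT
+SET LOCAL search_path TO my_schema;  -- transaction-level; reverts after COMMIT
+COMMIT;
+```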
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>SESSION  </dt>
+<dd>Specifies that the command takes effect for the current session. This is the default.</dd>
+
+<dt>LOCAL  </dt>
+<dd>Specifies that the command takes effect for only the current transaction. After `COMMIT` or `ROLLBACK`, the session-level setting takes effect again. Note that `SET LOCAL` will appear to have no effect if it is executed outside of a transaction.</dd>
+
+<dt> \<configuration\_parameter\>  </dt>
+<dd>The name of a HAWQ configuration parameter. Only parameters classified as *session* can be changed with `SET`. See [About Server Configuration Parameters](../guc/guc_config.html#topic1).</dd>
+
+<dt> \<value\>  </dt>
+<dd>New value of parameter. Values can be specified as string constants, identifiers, numbers, or comma-separated lists of these. `DEFAULT` can be used to specify resetting the parameter to its default value. If specifying memory sizing or time units, enclose the value in single quotes.</dd>
+
+<dt>TIME ZONE  </dt>
+<dd>`SET TIME ZONE` \<value\> is an alias for `SET timezone TO` \<value\>.
+
+<dt>LOCAL  
+DEFAULT  </dt>
+<dd>Set the time zone to your local time zone (the one that the server's operating system defaults to).</dd>
+
+<dt> \<timezone\>  </dt>
+<dd>The \<timezone\> specification. Examples of syntactically valid values:
+
+`'PST8PDT'`
+
+`'Europe/Rome'`
+
+`-7` (time zone 7 hours west from UTC)
+
+`INTERVAL '-08:00' HOUR TO MINUTE` (time zone 8 hours west from UTC).</dd>
+</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Set the schema search path:
+
+``` sql
+SET search_path TO my_schema, public;
+```
+
+Set the style of date to traditional POSTGRES with "day before month" input convention:
+
+``` sql
+SET datestyle TO postgres, dmy;
+```
+
+Set the time zone for San Mateo, California (Pacific Time):
+
+``` sql
+SET TIME ZONE 'PST8PDT';
+```
+
+Set the time zone for Italy:
+
+``` sql
+SET TIME ZONE 'Europe/Rome';
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+`SET TIME ZONE` extends the syntax defined in the SQL standard. The standard allows only numeric time zone offsets while HAWQ allows more flexible time-zone specifications. All other `SET` features are HAWQ extensions.
+
+## <a id="topic1__section7"></a>See Also
+
+[RESET](RESET.html), [SHOW](SHOW.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/SHOW.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/SHOW.html.md.erb b/markdown/reference/sql/SHOW.html.md.erb
new file mode 100644
index 0000000..802761b
--- /dev/null
+++ b/markdown/reference/sql/SHOW.html.md.erb
@@ -0,0 +1,47 @@
+---
+title: SHOW
+---
+
+Shows the value of a system configuration parameter.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+SHOW <configuration_parameter>
+
+SHOW ALL
+```
+
+## <a id="topic1__section3"></a>Description
+
+`SHOW` displays the current settings of HAWQ system configuration parameters. These parameters can be set using the `SET` statement, or by editing the `hawq-site.xml` configuration file of the HAWQ master. Note that some parameters viewable by `SHOW` are read-only — their values can be viewed but not set. See [About Server Configuration Parameters](../guc/guc_config.html#topic1).
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<configuration\_parameter\>   </dt>
+<dd>The name of a system configuration parameter.</dd>
+
+<dt>ALL  </dt>
+<dd>Shows the current value of all configuration parameters.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Show the current setting of the parameter `search_path`:
+
+``` sql
+SHOW search_path;
+```
+
+Show the current setting of all parameters:
+
+``` sql
+SHOW ALL;
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+`SHOW` is a HAWQ extension.
+
+## <a id="topic1__section7"></a>See Also
+
+[SET](SET.html), [RESET](RESET.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/TRUNCATE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/TRUNCATE.html.md.erb b/markdown/reference/sql/TRUNCATE.html.md.erb
new file mode 100644
index 0000000..c91ae84
--- /dev/null
+++ b/markdown/reference/sql/TRUNCATE.html.md.erb
@@ -0,0 +1,52 @@
+---
+title: TRUNCATE
+---
+
+Empties a table of all rows.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+TRUNCATE [TABLE] <name> [, ...] [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`TRUNCATE` quickly removes all rows from a table or set of tables. This is most useful on large tables.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>Required. The name (optionally schema-qualified) of a table to be truncated.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Since this key word applies to foreign key references (which are not supported in HAWQ), it has no effect.</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Since this key word applies to foreign key references (which are not supported in HAWQ), it has no effect.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Only the owner of a table may `TRUNCATE` it. `TRUNCATE` will not perform the following:
+
+-   Run any user-defined `ON DELETE` triggers that might exist for the tables.
+
+    **Note:** HAWQ does not support user-defined triggers.
+
+-   Truncate any tables that inherit from the named table. Only the named table is truncated, not its child tables.
+
+## <a id="topic1__section6"></a>Examples
+
+Empty the table `films`:
+
+``` sql
+TRUNCATE films;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+There is no `TRUNCATE` command in the SQL standard.
+
+## <a id="topic1__section8"></a>See Also
+
+[DROP TABLE](DROP-TABLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/VACUUM.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/VACUUM.html.md.erb b/markdown/reference/sql/VACUUM.html.md.erb
new file mode 100644
index 0000000..57ccc0e
--- /dev/null
+++ b/markdown/reference/sql/VACUUM.html.md.erb
@@ -0,0 +1,96 @@
+---
+title: VACUUM
+---
+
+Garbage-collects and optionally analyzes a database. 
+
+**Note**: HAWQ `VACUUM` support is provided only for system catalog tables.  `VACUUM`ing a HAWQ user table has no effect.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+VACUUM [FULL] [FREEZE] [VERBOSE] <table>
+VACUUM [FULL] [FREEZE] [VERBOSE] ANALYZE
+              [<table> [(<column> [, ...] )]]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`VACUUM` reclaims storage occupied by deleted tuples. In normal HAWQ operation, tuples that are deleted or obsoleted by an update are not physically removed from their table; they remain present on disk until a `VACUUM` is done. Therefore it is necessary to do `VACUUM` periodically, especially on frequently-updated catalog tables. (`VACUUM` has no effect on a normal HAWQ user table, since delete and update operations are not supported on these tables.)
+
+With no parameter, `VACUUM` processes every table in the current database. With a parameter, `VACUUM` processes only that table. `VACUUM ANALYZE` performs a `VACUUM` and then an `ANALYZE` for each selected table. This is a handy combination form for routine maintenance scripts. See [ANALYZE](ANALYZE.html) for more details about its processing.
+
+Plain `VACUUM` (without `FULL`) simply reclaims space and makes it available for re-use. This form of the command can operate in parallel with normal reading and writing of the table, as an exclusive lock is not obtained. `VACUUM FULL` does more extensive processing, including moving of tuples across blocks to try to compact the table to the minimum number of disk blocks. This form is much slower and requires an exclusive lock on each table while it is being processed.  
+
+**Note:** `VACUUM FULL` is not recommended in HAWQ.
+
+**Outputs**
+
+When `VERBOSE` is specified, `VACUUM` emits progress messages to indicate which table is currently being processed. Various statistics about the tables are printed as well.
+
+## <a id="topic1__section5"></a>Parameters
+
+<dt>FULL  </dt>
+<dd>Selects a full vacuum, which may reclaim more space but takes much longer and exclusively locks the table.
+
+**Note:** A VACUUM FULL is not recommended in HAWQ. See [Notes](#topic1__section6).</dd>
+
+<dt>FREEZE  </dt>
+<dd>Specifying `FREEZE` is equivalent to performing `VACUUM` with the `vacuum_freeze_min_age` server configuration parameter set to zero. The `FREEZE` option is deprecated and will be removed in a future release.</dd>
+
+<dt>VERBOSE  </dt>
+<dd>Prints a detailed vacuum activity report for each table.</dd>
+
+<dt>ANALYZE  </dt>
+<dd>Updates statistics used by the planner to determine the most efficient way to execute a query.</dd>
+
+<dt> \<table\>   </dt>
+<dd>The name (optionally schema-qualified) of a specific table to vacuum. Defaults to all tables in the current database.</dd>
+
+<dt> \<column\>   </dt>
+<dd>The name of a specific column to analyze. Defaults to all columns.</dd>
+
+## <a id="topic1__section6"></a>Notes
+
+`VACUUM` cannot be executed inside a transaction block.
+
+A recommended practice is to vacuum active production databases frequently (at least nightly), in order to remove expired rows. After adding or deleting a large number of rows, it may be a good idea to issue a `VACUUM ANALYZE` command for the affected table. This will update the system catalogs with the results of all recent changes, and allow the HAWQ query planner to make better choices in planning queries.
+
+`VACUUM` causes a substantial increase in I/O traffic, which can cause poor performance for other active sessions. Therefore, it is advisable to vacuum the database at low usage times. The `autovacuum` daemon, which automates the execution of `VACUUM` and `ANALYZE` commands, is currently disabled in HAWQ.
+
+Expired rows are held in what is called the *free space map*. The free space map must be sized large enough to cover the dead rows of all tables in your database. If not sized large enough, space occupied by dead rows that overflow the free space map cannot be reclaimed by a regular `VACUUM` command.
+
+`VACUUM FULL` will reclaim all expired row space, but is a very expensive operation and may take an unacceptably long time to finish on large, distributed HAWQ tables. If you do get into a situation where the free space map has overflowed, it may be more timely to recreate the table with a `CREATE TABLE AS` statement and drop the old table.
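+
+For example, a sketch of that approach for a hypothetical user table named `sales` (note that constraints and privileges are not carried over by `CREATE TABLE AS` and must be re-created separately):
+
+``` sql
+CREATE TABLE sales_new AS SELECT * FROM sales;
+DROP TABLE sales;
+ALTER TABLE sales_new RENAME TO sales;
+```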
+
+`VACUUM FULL` is not recommended in HAWQ. It is best to size the free space map appropriately. The free space map is configured with the following server configuration parameters:
+
+-   `max_fsm_pages`
+-   `max_fsm_relations`
+
+## <a id="topic1__section7"></a>Examples
+
+Vacuum all tables in the current database:
+
+``` sql
+VACUUM;
+```
+
+Vacuum a specific table only:
+
+``` sql
+VACUUM mytable;
+```
+
+Vacuum all tables in the current database and collect statistics for the query planner:
+
+``` sql
+VACUUM ANALYZE;
+```
+
+## <a id="topic1__section8"></a>Compatibility
+
+There is no `VACUUM` statement in the SQL standard.
+
+## <a id="topic1__section9"></a>See Also
+
+[ANALYZE](ANALYZE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/toolkit/hawq_toolkit.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/toolkit/hawq_toolkit.html.md.erb b/markdown/reference/toolkit/hawq_toolkit.html.md.erb
new file mode 100644
index 0000000..ac5db66
--- /dev/null
+++ b/markdown/reference/toolkit/hawq_toolkit.html.md.erb
@@ -0,0 +1,263 @@
+---
+title: The hawq_toolkit Administrative Schema
+---
+
+This section provides a reference on the `hawq_toolkit` administrative schema.
+
+HAWQ provides an administrative schema called `hawq_toolkit` that you can use to query the system catalogs, log files, and operating environment for system status information. The `hawq_toolkit` schema contains a number of views that you can access using SQL commands. The `hawq_toolkit` schema is accessible to all database users, although some objects may require superuser permissions.
+
+This documentation describes the most useful views in `hawq_toolkit`. You may notice other objects (views, functions, and external tables) within the `hawq_toolkit` schema that are not described in this documentation (these are supporting objects to the views described in this section).
+
+**Warning:** Do not change database objects in the `hawq_toolkit` schema. Do not create database objects in the schema. Changes to objects in the schema might affect the accuracy of administrative information returned by schema objects.
+
+## <a id="topic2"></a>Checking for Tables that Need Routine Maintenance
+
+The following views can help identify tables that need routine table maintenance (`VACUUM` and/or `ANALYZE`).
+
+-   [hawq\_stats\_missing](#topic4)
+
+The `VACUUM` command is applicable only to system catalog tables. The `VACUUM` command reclaims disk space occupied by deleted or obsolete rows. Because of the MVCC transaction concurrency model used in HAWQ, data rows that are deleted or updated still occupy physical space on disk even though they are not visible to any new transactions. Expired rows increase table size on disk and eventually slow down scans of the table.
+
+**Note:** VACUUM FULL is not recommended in HAWQ. See [VACUUM](../sql/VACUUM.html#topic1).
+
+The `ANALYZE` command collects column-level statistics needed by the query optimizer. HAWQ uses a cost-based query optimizer that relies on database statistics. Accurate statistics allow the query optimizer to better estimate selectivity and the number of rows retrieved by a query operation in order to choose the most efficient query plan.
+
+### <a id="topic4"></a>hawq\_stats\_missing
+
+This view shows tables that do not have statistics and therefore may require an `ANALYZE` be run on the table.
+
+<a id="topic4__ie194266"></a>
+
+<span class="tablecap">Table 1. hawq\_stats\_missing view</span>
+
+| Column    | Description                                                                                                                                                                                                                                                                                                                                                                |
+|-----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| smischema | Schema name.                                                                                                                                                                                                                                                                                                                                                               |
+| smitable  | Table name.                                                                                                                                                                                                                                                                                                                                                                |
+| smisize   | Does this table have statistics? False if the table does not have row count and row sizing statistics recorded in the system catalog, which may indicate that the table needs to be analyzed. This will also be false if the table does not contain any rows. For example, the parent tables of partitioned tables are always empty and will always return a false result. |
+| smicols   | Number of columns in the table.                                                                                                                                                                                                                                                                                                                                            |
+| smirecs   | Number of rows in the table.                                                                                                                                                                                                                                                                                                                                               |
+
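+For example, the following query lists the tables that currently have no statistics recorded (assuming `smisize` is a boolean column, as its description suggests):
+
+``` sql
+SELECT smischema, smitable
+FROM hawq_toolkit.hawq_stats_missing
+WHERE NOT smisize;
+```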
+
+## <a id="topic16"></a>Viewing HAWQ Server Log Files
+
+Each component of a HAWQ system (master, standby master, and segments) keeps its own server log files. The `hawq_log_*` family of views allows you to issue SQL queries against the server log files to find particular entries of interest. The use of these views requires superuser permissions.
+
+-   [hawq\_log\_command\_timings](#topic17)
+-   [hawq\_log\_master\_concise](#topic19)
+
+### <a id="topic17"></a>hawq\_log\_command\_timings
+
+This view uses an external table to read the log files on the master and report the execution time of SQL commands executed in a database session. The use of this view requires superuser permissions.
+
+<a id="topic17__ie176169"></a>
+
+<span class="tablecap">Table 2. hawq\_log\_command\_timings view</span>
+
+| Column      | Description                                                |
+|-------------|------------------------------------------------------------|
+| logsession  | The session identifier (prefixed with "con").              |
+| logcmdcount | The command number within a session (prefixed with "cmd"). |
+| logdatabase | The name of the database.                                  |
+| loguser     | The name of the database user.                             |
+| logpid      | The process id (prefixed with "p").                        |
+| logtimemin  | The time of the first log message for this command.        |
+| logtimemax  | The time of the last log message for this command.         |
+| logduration | Statement duration from start to end time.                 |
+
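+For example, the following query lists the ten longest-running commands recorded in the master log:
+
+``` sql
+SELECT logsession, logcmdcount, logdatabase, logduration
+FROM hawq_toolkit.hawq_log_command_timings
+ORDER BY logduration DESC
+LIMIT 10;
+```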
+
+### <a id="topic19"></a>hawq\_log\_master\_concise
+
+This view uses an external table to read a subset of the log fields from the master log file. The use of this view requires superuser permissions.
+
+<a id="topic19__ie177543"></a>
+
+<span class="tablecap">Table 3. hawq\_log\_master\_concise view</span>
+
+| Column      | Description                                                |
+|-------------|------------------------------------------------------------|
+| logtime     | The timestamp of the log message.                          |
+| logdatabase | The name of the database.                                  |
+| logsession  | The session identifier (prefixed with "con").              |
+| logcmdcount | The command number within a session (prefixed with "cmd"). |
+| logseverity | The severity level for the record.                         |
+| logmessage  | Log or error message text.                                 |
+
+
+## <a id="topic38"></a>Checking Database Object Sizes and Disk Space
+
+The `hawq_size_*` family of views can be used to determine the disk space usage for a distributed HAWQ, schema, table, or index. The following views calculate the total size of an object across all segments.
+
+-   [hawq\_size\_of\_all\_table\_indexes](#topic39)
+-   [hawq\_size\_of\_database](#topic40)
+-   [hawq\_size\_of\_index](#topic41)
+-   [hawq\_size\_of\_partition\_and\_indexes\_disk](#topic42)
+-   [hawq\_size\_of\_schema\_disk](#topic43)
+-   [hawq\_size\_of\_table\_and\_indexes\_disk](#topic44)
+-   [hawq\_size\_of\_table\_and\_indexes\_licensing](#topic45)
+-   [hawq\_size\_of\_table\_disk](#topic46)
+-   [hawq\_size\_of\_table\_uncompressed](#topic47)
+
+The table and index sizing views list the relation by object ID (not by name). To check the size of a table or index by name, you must look up the relation name (`relname`) in the `pg_class` table. For example:
+
+``` pre
+SELECT relname as name, sotdsize as size, sotdtoastsize as 
+toast, sotdadditionalsize as other 
+FROM hawq_size_of_table_disk as sotd, pg_class 
+WHERE sotd.sotdoid=pg_class.oid ORDER BY relname;
+```
+
+### <a id="topic39"></a>hawq\_size\_of\_all\_table\_indexes
+
+This view shows the total size of all indexes for a table. This view is accessible to all users, however non-superusers will only be able to see relations that they have permission to access.
+
+<a id="topic39__ie181657"></a>
+
+<span class="tablecap">Table 4. hawq\_size\_of\_all\_table\_indexes view</span>
+
+| Column          | Description                                  |
+|-----------------|----------------------------------------------|
+| soatioid        | The object ID of the table                   |
+| soatisize       | The total size of all table indexes in bytes |
+| soatischemaname | The schema name                              |
+| soatitablename  | The table name                               |
+
+
+### <a id="topic40"></a>hawq\_size\_of\_database
+
+This view shows the total size of a database. This view is accessible to all users, however non-superusers will only be able to see databases that they have permission to access.
+
+<a id="topic40__ie181758"></a>
+
+<span class="tablecap">Table 5. hawq\_size\_of\_database view</span>
+
+| Column      | Description                       |
+|-------------|-----------------------------------|
+| sodddatname | The name of the database          |
+| sodddatsize | The size of the database in bytes |
+
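+For example, the following query lists databases by size, largest first:
+
+``` sql
+SELECT sodddatname, sodddatsize
+FROM hawq_toolkit.hawq_size_of_database
+ORDER BY sodddatsize DESC;
+```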
+
+### <a id="topic41"></a>hawq\_size\_of\_index
+
+This view shows the total size of an index. This view is accessible to all users, however non-superusers will only be able to see relations that they have permission to access.
+
+<a id="topic41__ie181709"></a>
+
+<span class="tablecap">Table 6. hawq\_size\_of\_index view</span>
+
+| Column             | Description                                           |
+|--------------------|-------------------------------------------------------|
+| soioid             | The object ID of the index                            |
+| soitableoid        | The object ID of the table to which the index belongs |
+| soisize            | The size of the index in bytes                        |
+| soiindexschemaname | The name of the index schema                          |
+| soiindexname       | The name of the index                                 |
+| soitableschemaname | The name of the table schema                          |
+| soitablename       | The name of the table                                 |
+
+
+### <a id="topic42"></a>hawq\_size\_of\_partition\_and\_indexes\_disk
+
+This view shows the size on disk of partitioned child tables and their indexes. This view is accessible to all users, however non-superusers will only be able to see relations that they have permission to access.
+
+<a id="topic42__ie181803"></a>
+
+<span class="tablecap">Table 7. hawq\_size\_of\_partition\_and\_indexes\_disk view</span>
+
+| Column                     | Description                                     |
+|----------------------------|-------------------------------------------------|
+| sopaidparentoid            | The object ID of the parent table               |
+| sopaidpartitionoid         | The object ID of the partition table            |
+| sopaidpartitiontablesize   | The partition table size in bytes               |
+| sopaidpartitionindexessize | The total size of all indexes on this partition |
+| Sopaidparentschemaname     | The name of the parent schema                   |
+| Sopaidparenttablename      | The name of the parent table                    |
+| Sopaidpartitionschemaname  | The name of the partition schema                |
+| sopaidpartitiontablename   | The name of the partition table                 |
+
+
+### <a id="topic43"></a>hawq\_size\_of\_schema\_disk
+
+This view shows schema sizes for the public schema and the user-created schemas in the current database. This view is accessible to all users, however non-superusers will be able to see only the schemas that they have permission to access.
+
+<a id="topic43__ie183105"></a>
+
+<span class="tablecap">Table 8. hawq\_size\_of\_schema\_disk view</span>
+
+| Column              | Description                                      |
+|---------------------|--------------------------------------------------|
+| sosdnsp             | The name of the schema                           |
+| sosdschematablesize | The total size of tables in the schema in bytes  |
+| sosdschemaidxsize   | The total size of indexes in the schema in bytes |
+
+
+### <a id="topic44"></a>hawq\_size\_of\_table\_and\_indexes\_disk
+
+This view shows the size on disk of tables and their indexes. This view is accessible to all users, however non-superusers will only be able to see relations that they have permission to access.
+
+<a id="topic44__ie183128"></a>
+
+<span class="tablecap">Table 9. hawq\_size\_of\_table\_and\_indexes\_disk view</span>
+
+| Column           | Description                                |
+|------------------|--------------------------------------------|
+| sotaidoid        | The object ID of the parent table          |
+| sotaidtablesize  | The disk size of the table                 |
+| sotaididxsize    | The total size of all indexes on the table |
+| sotaidschemaname | The name of the schema                     |
+| sotaidtablename  | The name of the table                      |
+
+
+### <a id="topic45"></a>hawq\_size\_of\_table\_and\_indexes\_licensing
+
+This view shows the total size of tables and their indexes for licensing purposes. The use of this view requires superuser permissions.
+
+<a id="topic45__ie181949"></a>
+
+<span class="tablecap">Table 10. hawq\_size\_of\_table\_and\_indexes\_licensing view</span>
+
+| Column                      | Description                                                                                 |
+|-----------------------------|---------------------------------------------------------------------------------------------|
+| sotailoid                   | The object ID of the table                                                                  |
+| sotailtablesizedisk         | The total disk size of the table                                                            |
+| sotailtablesizeuncompressed | If the table is a compressed append-only table, shows the uncompressed table size in bytes. |
+| sotailindexessize           | The total size of all indexes in the table                                                  |
+| sotailschemaname            | The schema name                                                                             |
+| sotailtablename             | The table name                                                                              |
+
+
+### <a id="topic46"></a>hawq\_size\_of\_table\_disk
+
+This view shows the size of a table on disk. This view is accessible to all users, however non-superusers will only be able to see tables that they have permission to access.
+
+<a id="topic46__ie183408"></a>
+
+<span class="tablecap">Table 11. hawq\_size\_of\_table\_disk view</span>
+
+| Column             | Description                                                                                                                                                                                          |
+|--------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| sotdoid            | The object ID of the table                                                                                                                                                                           |
+| sotdsize           | The size of the table in bytes. The size is only the main table size. The size does not include auxiliary objects such as oversized (toast) attributes, or additional storage objects for AO tables. |
+| sotdtoastsize      | The size of the TOAST table (oversized attribute storage), if there is one.                                                                                                                          |
+| sotdadditionalsize | Reflects the segment and block directory table sizes for append-only (AO) tables.                                                                                                                    |
+| sotdschemaname     | The schema name                                                                                                                                                                                      |
+| sotdtablename      | The table name                                                                                                                                                                                       |
+
+
+### <a id="topic47"></a>hawq\_size\_of\_table\_uncompressed
+
+This view shows the uncompressed table size for append-only (AO) tables. Otherwise, the table size on disk is shown. The use of this view requires superuser permissions.
+
+<a id="topic47__ie183582"></a>
+
+<span class="tablecap">Table 12. hawq\_size\_of\_table\_uncompressed view</span>
+
+| Column         | Description                                                                                                   |
+|----------------|---------------------------------------------------------------------------------------------------------------|
+| sotuoid        | The object ID of the table                                                                                    |
+| sotusize       | The uncompressed size of the table in bytes if it is a compressed AO table. Otherwise, the table size on disk. |
+| sotuschemaname | The schema name                                                                                               |
+| sotutablename  | The table name                                                                                                |
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/requirements/system-requirements.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/requirements/system-requirements.html.md.erb b/markdown/requirements/system-requirements.html.md.erb
new file mode 100644
index 0000000..7f117dc
--- /dev/null
+++ b/markdown/requirements/system-requirements.html.md.erb
@@ -0,0 +1,239 @@
+---
+title: Apache HAWQ System Requirements
+---
+
+Follow these guidelines to configure each host machine that will run an Apache HAWQ or PXF service.
+
+
+## <a id="topic_d3f_vlz_g5"></a>Host Memory Configuration
+
+In order to prevent data loss or corruption in an Apache HAWQ cluster, you must configure the memory on each host machine so that the Linux Out-of-Memory \(OOM\) killer process never kills a HAWQ process due to OOM conditions. \(HAWQ applies its own rules to enforce memory restrictions.\)
+
+**For mission critical deployments of HAWQ, perform these steps on each host machine to configure memory:**
+
+1.  Set the operating system `vm.overcommit_memory` parameter to 2. With this setting, the OOM killer process reports an error instead of killing running processes. To set this parameter:
+    1.  Open the `/etc/sysctl.conf` file with a text editor.
+    2.  Add or change the parameter definition so that the file includes these lines:
+
+        ```
+        kernel.threads-max=798720
+        vm.overcommit_memory=2
+        ```
+
+    3.  Save and close the file, then execute this command to apply your change:
+
+        ``` shell
+        $ sysctl -p
+        ```
+
+    4.  To view the current `vm.overcommit_memory` setting, execute the command:
+
+        ``` shell
+        $ sysctl -a | grep overcommit_memory
+        ```
+
+    5.  To view the runtime overcommit settings, execute the command:
+
+        ``` shell
+        $ cat /proc/meminfo | grep Commit
+        ```
+
+2.  Set the Linux swap space size and `vm.overcommit_ratio` parameter according to the available memory on each host. For hosts having 2GB-8GB of memory, set swap space = physical RAM and set `vm.overcommit_ratio=50`. For hosts having more than 8GB up to 64GB of memory, set swap space = 0.5 \* physical RAM and set `vm.overcommit_ratio=50`. For hosts having more than 64GB of memory, set swap space = 4GB and set `vm.overcommit_ratio=100`.
+
+    To set the `vm.overcommit_ratio` parameter:
+
+    1.  Open the `/etc/sysctl.conf` file with a text editor.
+    2.  Add or change the parameter definition so that the file includes the line:
+
+        ```
+        vm.overcommit_ratio=50
+        ```
+
+        \(Use `vm.overcommit_ratio=100` for hosts with more than 64GB RAM.\)
+    3.  Save and close the file, then execute this command to apply your change:
+
+        ``` shell
+        $ sysctl -p
+        ```
+
+    4.  To view the current `vm.overcommit_ratio` setting, execute the command:
+
+        ``` shell
+        $ sysctl -a | grep overcommit_ratio
+        ```
+        You can choose to use a dedicated swap partition, a swap file, or a combination of both. View the current swap settings using the command:
+
+        ``` shell
+        $ cat /proc/meminfo | grep Swap
+        ```
+3.  Ensure that all Java services that run on the machine use the `-Xmx` switch to allocate only their required heap.
+4.  Ensure that no other services \(such as Puppet\) or automated processes attempt to reset the overcommit settings on cluster hosts.
+5.  During the installation process, configure HAWQ memory by setting YARN or HAWQ configuration parameters, as described in [HAWQ Memory Configuration](#topic_uzf_flz_g5).
+
+## <a id="topic_uzf_flz_g5"></a>HAWQ Memory Configuration
+
+You must configure the memory used by HAWQ according to whether you plan to use YARN or HAWQ to manage system resources.
+
+After you configure the `vm.overcommit_ratio` and swap space according to [Host Memory Configuration](#topic_d3f_vlz_g5), the total memory available to a Linux host machine can be represented by the equation:
+
+```
+TOTAL_MEMORY = RAM * overcommit_ratio_percentage + SWAP
+```
+
+`TOTAL_MEMORY` comprises both HAWQ memory and `NON_HAWQ_MEMORY`, which is the memory used by components such as:
+
+-   Operating system
+-   DataNode
+-   NodeManager
+-   PXF
+-   All other software you run on the host machine.
+
+To configure the HAWQ memory for a given host, first determine the amount of `NON_HAWQ_MEMORY` that is used on the machine. Then configure HAWQ memory by setting the correct parameter according to whether you use the HAWQ default resource manager or YARN to manage resources:
+
+-   If you are using YARN for resource management, set `yarn.nodemanager.resource.memory-mb` to the smaller of `TOTAL_MEMORY - NON_HAWQ_MEMORY` or `RAM`.
+-   If you are using the HAWQ default resource manager, set `hawq_rm_memory_limit_perseg = RAM - NON_HAWQ_MEMORY`.
+
+You can set either parameter in Ambari, when configuring YARN or when installing HAWQ.
+
+### Example 1 - Large Host Machine
+
+An example large host machine uses the memory configuration:
+
+>RAM: 256GB
+>
+>SWAP: 4GB
+
+>NON\_HAWQ\_MEMORY:
+
+>> 2GB for Operating System
+
+>> 2GB for DataNode
+
+>> 2GB for NodeManager
+
+>> 1GB for PXF
+
+>overcommit\_ratio\_percentage: 1 \(`vm.overcommit_ratio` = 100\)
+
+For this machine, `TOTAL_MEMORY = 256GB * 1 + 4GB = 260GB`.
+
+If this system uses YARN for resource management, you would set `yarn.nodemanager.resource.memory-mb` to `TOTAL_MEMORY - NON_HAWQ_MEMORY` = 260GB - 7GB = 253 \(because 253GB is smaller than the available amount of RAM\).
+
+If this system uses the default HAWQ resource manager, you would set `hawq_rm_memory_limit_perseg` = `RAM - NON_HAWQ_MEMORY` = 256 GB - 7GB = 249.
+
+### Example 2 - Medium Host Machine
+
+An example medium host machine uses the memory configuration:
+
+>RAM: 64GB
+
+>SWAP: 32GB
+
+>NON\_HAWQ\_MEMORY:
+
+>>2GB for Operating System
+
+>>2GB for DataNode
+
+>>2GB for NodeManager
+
+>>1GB for PXF
+
+>overcommit\_ratio\_percentage: .5 \(`vm.overcommit_ratio` = 50\)
+
+For this machine, `TOTAL_MEMORY = 64GB * .5 + 32GB = 64GB`.
+
+If this system uses YARN for resource management, you would set `yarn.nodemanager.resource.memory-mb` to `TOTAL_MEMORY - NON_HAWQ_MEMORY` = 64GB - 7GB = 57 \(because 57GB is smaller than the available amount of RAM\).
+
+If this system uses the default HAWQ resource manager, you would set `hawq_rm_memory_limit_perseg` = `RAM - NON_HAWQ_MEMORY` = 64 GB - 7GB = 57.
+
+### Example 3 - Small Host Machine \(Not recommended for production use\)
+
+An example small machine uses the memory configuration:
+
+>RAM: 8GB
+
+>SWAP: 8GB
+
+>NON\_HAWQ\_MEMORY:
+
+>>2GB for Operating System
+
+>>2GB for DataNode
+
+>>2GB for NodeManager
+
+>>1GB for PXF
+
+>overcommit\_ratio\_percentage:  .5 \(`vm.overcommit_ratio` = 50\)
+
+For this machine, `TOTAL_MEMORY = 8GB * .5 + 8GB = 12GB`.
+
+If this system uses YARN for resource management, you would set `yarn.nodemanager.resource.memory-mb` to `TOTAL_MEMORY - NON_HAWQ_MEMORY` = 12GB - 7GB = 5 \(because 5GB is smaller than the available amount of RAM\).
+
+If this system uses the default HAWQ resource manager, you would set `hawq_rm_memory_limit_perseg` = `RAM - NON_HAWQ_MEMORY` = 8 GB - 7GB = 1.
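+
+The three examples above reduce to the same arithmetic, captured in the sketch below. The script name and the requirement to pass all values in GB are assumptions for illustration only:
+
+``` shell
+#!/bin/bash
+# Usage: ./hawq_mem_calc.sh RAM_GB SWAP_GB OVERCOMMIT_RATIO NON_HAWQ_GB
+# Prints suggested values (in GB) for yarn.nodemanager.resource.memory-mb
+# and hawq_rm_memory_limit_perseg, following the formulas in this section.
+ram=$1; swap=$2; ratio=$3; non_hawq=$4
+total=$(( ram * ratio / 100 + swap ))
+yarn=$(( total - non_hawq ))
+# The YARN value is capped at the amount of physical RAM.
+if [ "$yarn" -gt "$ram" ]; then yarn=$ram; fi
+echo "yarn.nodemanager.resource.memory-mb (GB): $yarn"
+echo "hawq_rm_memory_limit_perseg (GB):         $(( ram - non_hawq ))"
+```
+
+For instance, `./hawq_mem_calc.sh 64 32 50 7` reproduces the values from Example 2 (57 and 57).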
+
+## <a id="topic_pwdlessssh"></a>Passwordless SSH Configuration
+
+HAWQ hosts are configured to use passwordless SSH for intra-cluster communication during the installation process. Password-based authentication must be temporarily enabled on each HAWQ host in preparation for this configuration.
+
+1. Install the SSH server if it is not already present on the HAWQ host. The first command checks for an existing `openssh-server` installation; the second installs the package:
+
+    ``` shell
+    $ yum list installed | grep openssh-server
+    $ yum -y install openssh-server
+    ```
+
+2. Update the host's SSH configuration to allow password-based authentication. Edit the SSH config file and change the `PasswordAuthentication` configuration value from `no` to `yes`:
+
+    ``` shell
+    $ sudo vi /etc/ssh/sshd_config
+    ```
+
+    ```
+    PasswordAuthentication yes
+    ```
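+
+    If you prefer a non-interactive edit, something along these lines should work on most Linux systems (a sketch only; review the file before restarting SSH):
+
+    ``` shell
+    $ sudo sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication yes/' /etc/ssh/sshd_config
+    $ grep PasswordAuthentication /etc/ssh/sshd_config
+    ```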
+
+3. Restart SSH:
+
+    ``` shell
+    $ sudo service sshd restart
+    ```
+
+*After installation is complete*, you may choose to turn off the temporary password-based authentication configured in the previous steps:
+
+1. Open the SSH `/etc/ssh/sshd_config` file in a text editor and update the configuration option you enabled in step 2 above:
+    
+    ```
+    PasswordAuthentication no
+    ```
+
+2.  Restart SSH:
+    
+    ``` shell
+    $ sudo service sshd restart
+    ```
+
+
+## <a id="topic_bsm_hhv_2v"></a>Disk Requirements
+
+-   2GB per host for HAWQ installation.
+-   Approximately 300MB per segment instance for metadata.
+-   Multiple large (2TB or greater) disks are recommended for HAWQ master and segment temporary directories. For a given query, HAWQ uses a separate temp directory (if available) for each virtual segment to store spill files. Multiple HAWQ sessions also use separate temp directories where available to avoid disk contention. If you configure too few temp directories, or you place multiple temp directories on the same disk, you increase the risk of disk contention or of running out of disk space when multiple virtual segments target the same disk. By default, each HAWQ segment node can run up to 6 virtual segments. \(See the sketch after this list.\)
+-   Appropriate free space for data: disks should have at least 30% free space \(no more than 70% capacity\).
+-   High-speed, local storage
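+
+If you provision several data disks for temporary directories, they are typically registered as a comma-separated list of paths. The sketch below assumes the `hawq_master_temp_directory` and `hawq_segment_temp_directory` parameter names and the `hawq config` utility; verify both against your HAWQ release before using it:
+
+``` shell
+$ # Assumed parameter names and example paths -- adjust for your environment.
+$ hawq config -c hawq_segment_temp_directory -v /data1/tmp,/data2/tmp,/data3/tmp
+$ hawq config -c hawq_master_temp_directory -v /data1/master_tmp
+$ # Restart (or reload) the HAWQ cluster afterwards for the change to take effect.
+```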
+
+## <a id="topic_rdb_jhv_2v"></a>Network Requirements
+
+-   Gigabit Ethernet within the array. For a production cluster, 10 Gigabit Ethernet is recommended.
+-   Dedicated, non-blocking switch.
+-   Systems with multiple NICs require NIC bonding to utilize all available network bandwidth.
+-   Communication between the HAWQ master and segments requires reverse DNS lookup be configured in your cluster network.
+
+## <a id="port-req"></a>Port Requirements
+Individual PXF plug-ins, which you install after adding the HAWQ and PXF services, require that you install Tomcat on the host machine. Tomcat reserves ports 8005, 8080, and 8009.
+
+If you have configured Oozie JMX reporting on a host that will run a PXF plug-in, make sure that the reporting service uses a port other than 8005. This helps to prevent port conflict errors from occurring when you start the PXF service.
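+
+Before installing the PXF plug-ins, you can check whether anything is already listening on the Tomcat ports. A quick sketch (any equivalent tool works):
+
+``` shell
+$ sudo netstat -tlnp | grep -E ':(8005|8080|8009) '
+```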
+
+## <a id="umask"></a>Umask Requirement
+Set the OS file system umask to 022 on all cluster hosts. This ensures that users can read the HDFS block files.
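+
+For example, to check the current value and persist 022 for login shells (a sketch; the file name is illustrative, and where you persist the setting depends on your environment):
+
+``` shell
+$ umask                                   # show the current value, e.g. 0022
+$ echo "umask 022" | sudo tee /etc/profile.d/hawq-umask.sh
+```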


[25/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_attribute.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_attribute.html.md.erb b/markdown/reference/catalog/pg_attribute.html.md.erb
new file mode 100644
index 0000000..53db267
--- /dev/null
+++ b/markdown/reference/catalog/pg_attribute.html.md.erb
@@ -0,0 +1,32 @@
+---
+title: pg_attribute
+---
+
+The `pg_attribute` table stores information about table columns. There will be exactly one `pg_attribute` row for every column in every table in the database. (There will also be attribute entries for indexes, and all objects that have `pg_class` entries.) The term attribute is equivalent to column.
+
+<a id="topic1__ga143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_attribute</span>
+
+| column          | type     | references    | description                                                                                                                                                                                                                                                                                                                                                                                                                    |
+|-----------------|----------|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `attrelid`      | oid      | pg\_class.oid | The table this column belongs to                                                                                                                                                                                                                                                                                                                                                                                               |
+| `attname`       | name     | �             | The column name                                                                                                                                                                                                                                                                                                                                                                                                                |
+| `atttypid`      | oid      | pg\_type.oid  | The data type of this column                                                                                                                                                                                                                                                                                                                                                                                                   |
+| `attstattarget` | integer  | �             | Controls the level of detail of statistics accumulated for this column by `ANALYZE`. A zero value indicates that no statistics should be collected. A negative value says to use the system default statistics target. The exact meaning of positive values is data type-dependent. For scalar data types, it is both the target number of "most common values" to collect, and the target number of histogram bins to create. |
+| `attlen`        | smallint | �             | A copy of pg\_type.typlen of this column's type.                                                                                                                                                                                                                                                                                                                                                                               |
+| `attnum`        | smallint | �             | The number of the column. Ordinary columns are numbered from 1 up. System columns, such as oid, have (arbitrary) negative numbers.                                                                                                                                                                                                                                                                                             |
+| `attndims`      | integer  | �             | Number of dimensions, if the column is an array type; otherwise `0`. (Presently, the number of dimensions of an array is not enforced, so any nonzero value effectively means it is an array)                                                                                                                                                                                                                                  |
+| `attcacheoff`   | integer  | �             | Always `-1` in storage, but when loaded into a row descriptor in memory this may be updated to cache the offset of the attribute within the row                                                                                                                                                                                                                                                                                |
+| `atttypmod`     | integer  | �             | Records type-specific data supplied at table creation time (for example, the maximum length of a varchar column). It is passed to type-specific input functions and length coercion functions. The value will generally be `-1` for types that do not need it.                                                                                                                                                                 |
+| `attbyval`      | boolean  | �             | A copy of pg\_type.typbyval of this column's type                                                                                                                                                                                                                                                                                                                                                                              |
+| `attstorage`    | char     | �             | Normally a copy of `pg_type.typstorage` of this column's type. For TOAST-able data types, this can be altered after column creation to control storage policy.                                                                                                                                                                                                                                                                  |
+| `attalign`      | char     | �             | A copy of `pg_type.typalign` of this column's type                                                                                                                                                                                                                                                                                                                                                                              |
+| `attnotnull`    | boolean  | �             | This represents a not-null constraint. It is possible to change this column to enable or disable the constraint.                                                                                                                                                                                                                                                                                                               |
+| `atthasdef`     | boolean  | �             | This column has a default value, in which case there will be a corresponding entry in the `pg_attrdef` catalog that actually defines the value                                                                                                                                                                                                                                                                                  |
+| `attisdropped`  | boolean  | �             | This column has been dropped and is no longer valid. A dropped column is still physically present in the table, but is ignored by the parser and so cannot be accessed via SQL                                                                                                                                                                                                                                                 |
+| `attislocal`    | boolean  | �             | This column is defined locally in the relation. Note that a column may be locally defined and inherited simultaneously                                                                                                                                                                                                                                                                                                         |
+| `attinhcount`   | integer  | �             | The number of direct ancestors this column has. A column with a nonzero number of ancestors cannot be dropped nor renamed                                                                                                                                                                                                                                                                                                      |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_attribute_encoding.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_attribute_encoding.html.md.erb b/markdown/reference/catalog/pg_attribute_encoding.html.md.erb
new file mode 100644
index 0000000..3067a93
--- /dev/null
+++ b/markdown/reference/catalog/pg_attribute_encoding.html.md.erb
@@ -0,0 +1,18 @@
+---
+title: pg_attribute_encoding
+---
+
+The `pg_attribute_encoding` system catalog table contains column storage information.
+
+<a id="topic1__gb177839"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_attribute\_encoding</span>
+
+| column       | type       | modifiers | storage  | description                            |
+|--------------|------------|----------|----------|----------------------------------------|
+| `attrelid`   | oid        | not null | plain    | Foreign key to `pg_attribute.attrelid` |
+| `attnum`     | smallint   | not null | plain    | Foreign key to `pg_attribute.attnum`   |
+| `attoptions` | text \[ \] | �        | extended | The options                            |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_auth_members.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_auth_members.html.md.erb b/markdown/reference/catalog/pg_auth_members.html.md.erb
new file mode 100644
index 0000000..7e770e0
--- /dev/null
+++ b/markdown/reference/catalog/pg_auth_members.html.md.erb
@@ -0,0 +1,19 @@
+---
+title: pg_auth_members
+---
+
+The `pg_auth_members` system catalog table shows the membership relations between roles. Any non-circular set of relationships is allowed. Because roles are system-wide, `pg_auth_members` is shared across all databases of a HAWQ system.
+
+<a id="topic1__gc143898"></a>
+ <span class="tablecap">Table 1. pg\_catalog.pg\_auth\_members</span>
+
+| column         | type    | references     | description                                        |
+|----------------|---------|----------------|----------------------------------------------------|
+| `roleid`       | oid     | pg\_authid.oid | ID of the parent-level (group) role                |
+| `member`       | oid     | pg\_authid.oid | ID of a member role                                |
+| `grantor`      | oid     | pg\_authid.oid | ID of the role that granted this membership        |
+| `admin_option` | boolean | �              | True if role member may grant membership to others |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_authid.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_authid.html.md.erb b/markdown/reference/catalog/pg_authid.html.md.erb
new file mode 100644
index 0000000..ebae67c
--- /dev/null
+++ b/markdown/reference/catalog/pg_authid.html.md.erb
@@ -0,0 +1,36 @@
+---
+title: pg_authid
+---
+
+The `pg_authid` table contains information about database authorization identifiers (roles). A role subsumes the concepts of users and groups. A user is a role with the `rolcanlogin` flag set. Any role (with or without `rolcanlogin`) may have other roles as members. See [pg\_auth\_members](pg_auth_members.html#topic1).
+
+Since this catalog contains passwords, it must not be publicly readable. [pg\_roles](pg_roles.html#topic1) is a publicly readable view on `pg_authid` that blanks out the password field.
+
+Because user identities are system-wide, `pg_authid` is shared across all databases in a HAWQ system: there is only one copy of `pg_authid` per system, not one per database.
+
+<a id="topic1__gd143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_authid</span>
+
+| column              | type        | references | description                                                                                                           |
+|---------------------|-------------|------------|-----------------------------------------------------------------------------------------------------------------------|
+| `rolname`           | name        | �          | Role name                                                                                                             |
+| `rolsuper`          | boolean     | �          | Role has superuser privileges                                                                                         |
+| `rolinherit`        | boolean     | �          | Role automatically inherits privileges of roles it is a member of                                                     |
+| `rolcreaterole`     | boolean     | �          | Role may create more roles                                                                                            |
+| `rolcreatedb`       | boolean     | �          | Role may create databases                                                                                             |
+| `rolcatupdate`      | boolean     | �          | Role may update system catalogs directly. (Even a superuser may not do this unless this column is true)               |
+| `rolcanlogin`       | boolean     | �          | Role may log in. That is, this role can be given as the initial session authorization identifier                      |
+| `rolconnlimit`      | int4        | �          | For roles that can log in, this sets maximum number of concurrent connections this role can make. `-1` means no limit |
+| `rolpassword`       | text        | �          | Password (possibly encrypted); NULL if none                                                                           |
+| `rolvaliduntil`     | timestamptz | �          | Password expiry time (only used for password authentication); NULL if no expiration                                   |
+| `rolconfig`         | text\[\]    | �          | Session defaults for server configuration parameters                                                                  |
+| `rolresqueue`       | oid         | �          | Object ID of the associated resource queue in `pg_resqueue`                                                           |
+| `rolcreaterextgpfd` | boolean     | �          | Privilege to create read external tables with the `gpfdist` or `gpfdists` protocol                                    |
+| `rolcreaterexhttp`  | boolean     | �          | Privilege to create read external tables with the `http` protocol                                                     |
+| `rolcreatewextgpfd` | boolean     | �          | Privilege to create write external tables with the `gpfdist` or `gpfdists` protocol                                   |
+| `rolcreaterexthdfs` | boolean     | �          | Privilege to create read external tables with the `gphdfs` protocol. (`gphdfs` is deprecated.)                        |
+| `rolcreatewexthdfs` | boolean     | �          | Privilege to create write external tables with the `gphdfs` protocol. (`gphdfs` is deprecated.)                       |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_cast.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_cast.html.md.erb b/markdown/reference/catalog/pg_cast.html.md.erb
new file mode 100644
index 0000000..513c7a3
--- /dev/null
+++ b/markdown/reference/catalog/pg_cast.html.md.erb
@@ -0,0 +1,23 @@
+---
+title: pg_cast
+---
+
+The `pg_cast` table stores data type conversion paths, both built-in paths and those defined with `CREATE CAST`. The cast functions listed in `pg_cast` must always take the cast source type as their first argument type, and return the cast destination type as their result type. A cast function can have up to three arguments. The second argument, if present, must be type `integer`; it receives the type modifier associated with the destination type, or `-1` if there is none. The third argument, if present, must be type `boolean`; it receives `true` if the cast is an explicit cast, `false` otherwise.
+
+It is legitimate to create a `pg_cast` entry in which the source and target types are the same, if the associated function takes more than one argument. Such entries represent 'length coercion functions' that coerce values of the type to be legal for a particular type modifier value. Note however that at present there is no support for associating non-default type modifiers with user-created data types, and so this facility is only of use for the small number of built-in types that have type modifier syntax built into the grammar.
+
+When a `pg_cast` entry has different source and target types and a function that takes more than one argument, it represents converting from one type to another and applying a length coercion in a single step. When no such entry is available, coercion to a type that uses a type modifier involves two steps, one to convert between data types and a second to apply the modifier.
+
+<a id="topic1__ge143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_cast</span>
+
+| column        | type | references   | description                                                                                                                                                                                                                                                            |
+|---------------|------|--------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `castsource`  | oid  | pg\_type.oid | OID of the source data type.                                                                                                                                                                                                                                           |
+| `casttarget`  | oid  | pg\_type.oid | OID of the target data type.                                                                                                                                                                                                                                           |
+| `castfunc`    | oid  | pg\_proc.oid | The OID of the function to use to perform this cast. Zero is stored if the data types are binary compatible (that is, no run-time operation is needed to perform the cast).                                                                                            |
+| `castcontext` | char | �            | Indicates what contexts the cast may be invoked in. `e` means only as an explicit cast (using `CAST` or `::` syntax). `a` means implicitly in assignment to a target column, as well as explicitly. `i` means implicitly in expressions, as well as the other cases. |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_class.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_class.html.md.erb b/markdown/reference/catalog/pg_class.html.md.erb
new file mode 100644
index 0000000..112375e
--- /dev/null
+++ b/markdown/reference/catalog/pg_class.html.md.erb
@@ -0,0 +1,213 @@
+---
+title: pg_class
+---
+
+The system catalog table `pg_class` catalogs tables and most everything else that has columns or is otherwise similar to a table (also known as *relations*). This includes indexes (see also [pg\_index](pg_index.html#topic1)), sequences, views, composite types, and TOAST tables. Not all columns are meaningful for all relation types.
+
+<a id="topic1__gf143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_class</span>
+
+<table>
+<colgroup>
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>column</th>
+<th>type</th>
+<th>references</th>
+<th>description</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><code class="ph codeph">relname</code></td>
+<td>name</td>
+<td>�</td>
+<td>Name of the table, index, view, etc.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">relnamespace</code></td>
+<td>oid</td>
+<td>pg_namespace.oid</td>
+<td>The OID of the namespace (schema) that contains this relation</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">reltype</code></td>
+<td>oid</td>
+<td>pg_type.oid</td>
+<td>The OID of the data type that corresponds to this table's row type, if any (zero for indexes, which have no <code class="ph codeph">pg_type</code> entry)</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">relowner</code></td>
+<td>oid</td>
+<td>pg_authid.oid</td>
+<td>Owner of the relation</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">relam</code></td>
+<td>oid</td>
+<td>pg_am.oid</td>
+<td>If this is an index, the access method used (B-tree, Bitmap, hash, etc.)</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">relfilenode</code></td>
+<td>oid</td>
+<td>�</td>
+<td>Name of the on-disk file of this relation; <code class="ph codeph">0</code> if none.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">reltablespace</code></td>
+<td>oid</td>
+<td>pg_tablespace.oid</td>
+<td>The tablespace in which this relation is stored. If zero, the database's default tablespace is implied. (Not meaningful if the relation has no on-disk file.)</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">relpages</code></td>
+<td>integer</td>
+<td>�</td>
+<td>Size of the on-disk representation of this table in pages (of 32K each). This is only an estimate used by the planner. It is updated by <code class="ph codeph">ANALYZE</code> and a few DDL commands.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">reltuples</code></td>
+<td>real</td>
+<td>�</td>
+<td>Number of rows in the table. This is only an estimate used by the planner. It is updated by <code class="ph codeph">VACUUM</code>, <code class="ph codeph">ANALYZE</code>, and a few DDL commands.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">reltoastrelid</code></td>
+<td>oid</td>
+<td>pg_class.oid</td>
+<td>OID of the TOAST table associated with this table, <code class="ph codeph">0</code> if none. The TOAST table stores large attributes &quot;out of line&quot; in a secondary table.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">reltoastidxid</code></td>
+<td>oid</td>
+<td>pg_class.oid</td>
+<td>For a TOAST table, the OID of its index. <code class="ph codeph">0</code> if not a TOAST table.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">relaosegidxid</code></td>
+<td>oid</td>
+<td>�</td>
+<td>Deprecated.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">relaosegrelid</code></td>
+<td>oid</td>
+<td>�</td>
+<td>Deprecated.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">relhasindex </code></td>
+<td>boolean</td>
+<td>�</td>
+<td>True if this is a table and it has (or recently had) any indexes. This is set by <code class="ph codeph">CREATE INDEX</code>, but not cleared immediately by <code class="ph codeph">DROP INDEX</code>. <code class="ph codeph">VACUUM</code> will clear if it finds the table has no indexes.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">relisshared</code></td>
+<td>boolean</td>
+<td>�</td>
+<td>True if this table is shared across all databases in the system. Only certain system catalog tables are shared.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">relkind</code></td>
+<td>char</td>
+<td>�</td>
+<td>The type of object
+<p><code class="ph codeph">r</code> = heap table, <code class="ph codeph">i</code> = index, <code class="ph codeph">S</code> = sequence, <code class="ph codeph">v</code> = view, <code class="ph codeph">c</code> = composite type, <code class="ph codeph">t</code> = TOAST value, <code class="ph codeph">c</code> = composite type, <code class="ph codeph">u</code> = uncataloged temporary heap table</p></td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">relstorage</code></td>
+<td>char</td>
+<td>�</td>
+<td>The storage mode of a table
+<p><code class="ph codeph">a</code> = append-only, <code class="ph codeph">h</code> = heap, <code class="ph codeph">p</code> = append-only parquet, <code class="ph codeph">v</code> = virtual, <code class="ph codeph">x</code>= external table.</p></td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">relnatts</code></td>
+<td>smallint</td>
+<td>�</td>
+<td>Number of user columns in the relation (system columns not counted). There must be this many corresponding entries in pg_attribute.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">relchecks</code></td>
+<td>smallint</td>
+<td>�</td>
+<td>Number of check constraints on the table.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">reltriggers</code></td>
+<td>smallint</td>
+<td>�</td>
+<td>Number of triggers on the table.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">relukeys</code></td>
+<td>smallint</td>
+<td>�</td>
+<td>Unused</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">relfkeys</code></td>
+<td>smallint</td>
+<td>�</td>
+<td>Unused</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">relrefs</code></td>
+<td>smallint</td>
+<td>�</td>
+<td>Unused</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">relhasoids</code></td>
+<td>boolean</td>
+<td>�</td>
+<td>True if an OID is generated for each row of the relation.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">relhaspkey</code></td>
+<td>boolean</td>
+<td>�</td>
+<td>True if the table once had a primary key.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">relhasrules</code></td>
+<td>boolean</td>
+<td>�</td>
+<td>True if table has rules.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">relhassubclass</code></td>
+<td>boolean</td>
+<td>�</td>
+<td>True if table has (or once had) any inheritance children.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">relfrozenxid</code></td>
+<td>xid</td>
+<td>�</td>
+<td>All transaction IDs before this one have been replaced with a permanent (frozen) transaction ID in this table. This is used to track whether the table needs to be vacuumed in order to prevent transaction ID wraparound or to allow pg_clog to be shrunk. Zero (<code class="ph codeph">InvalidTransactionId</code>) if the relation is not a table.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">relacl</code></td>
+<td>aclitem[]</td>
+<td>�</td>
+<td>Access privileges assigned by <code class="ph codeph">GRANT</code> and <code class="ph codeph">REVOKE</code>.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">reloptions</code></td>
+<td>text[]</td>
+<td>�</td>
+<td>Access-method-specific options, as &quot;keyword=value&quot; strings.</td>
+</tr>
+</tbody>
+</table>
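+
+For example, to see the kind and storage mode of the relations in a given schema (an illustrative query only; `public` is a placeholder schema name):
+
+``` shell
+$ psql -c "SELECT c.relname, c.relkind, c.relstorage
+             FROM pg_class c
+             JOIN pg_namespace n ON n.oid = c.relnamespace
+            WHERE n.nspname = 'public'
+            ORDER BY c.relname;"
+```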
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_compression.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_compression.html.md.erb b/markdown/reference/catalog/pg_compression.html.md.erb
new file mode 100644
index 0000000..3524af0
--- /dev/null
+++ b/markdown/reference/catalog/pg_compression.html.md.erb
@@ -0,0 +1,22 @@
+---
+title: pg_compression
+---
+
+The `pg_compression` system catalog table describes the compression methods available.
+
+<a id="topic1__gg143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_compression</span>
+
+| column             | type    | modifiers | storage | description                       |
+|--------------------|---------|----------|---------|-----------------------------------|
+| `compname`         | name    | not null | plain   | Name of the compression           |
+| `compconstructor`  | regproc | not null | plain   | Name of compression constructor   |
+| `compdestructor`   | regproc | not null | plain   | Name of compression destructor    |
+| `compcompressor`   | regproc | not null | plain   | Name of the compressor            |
+| `compdecompressor` | regproc | not null | plain   | Name of the decompressor          |
+| `compvalidator`    | regproc | not null | plain   | Name of the compression validator |
+| `compowner`        | oid     | not null | plain   | oid from pg\_authid               |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_constraint.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_constraint.html.md.erb b/markdown/reference/catalog/pg_constraint.html.md.erb
new file mode 100644
index 0000000..0f591fd
--- /dev/null
+++ b/markdown/reference/catalog/pg_constraint.html.md.erb
@@ -0,0 +1,30 @@
+---
+title: pg_constraint
+---
+
+The `pg_constraint` system catalog table stores check and foreign key constraints on tables. Column constraints are not treated specially. Every column constraint is equivalent to some table constraint. Not-null constraints are represented in the [pg\_attribute](pg_attribute.html#topic1) catalog table. Check constraints on domains are stored here, too.
+
+<a id="topic1__gh143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_constraint</span>
+
+| column          | type         | references           | description                                                                                                                                                                                                                                                                                                   |
+|-----------------|--------------|----------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `conname`       | name         | �                    | Constraint name                                                                                                                                                                                                                                                                                               |
+| `connamespace`  | oid          | pg\_namespace.oid    | The OID of the namespace (schema) that contains this constraint.                                                                                                                                                                                                                                              |
+| `contype `      | char         | �                    | `c` = check constraint, `f` = foreign key constraint.                                                                                                                                                                                                                                                         |
+| `condeferrable` | boolean      | �                    | Is the constraint deferrable?                                                                                                                                                                                                                                                                                 |
+| `condeferred `  | boolean      | �                    | Is the constraint deferred by default?                                                                                                                                                                                                                                                                        |
+| `conrelid`      | oid          | pg\_class.oid        | The table this constraint is on; 0 if not a table constraint.                                                                                                                                                                                                                                                 |
+| `contypid `     | oid          | pg\_type.oid         | The domain this constraint is on; 0 if not a domain constraint.                                                                                                                                                                                                                                               |
+| `confrelid`     | oid          | pg\_class.oid        | If a foreign key, the referenced table; else 0.                                                                                                                                                                                                                                                               |
+| `confupdtype`   | char         | �                    | Foreign key update action code.                                                                                                                                                                                                                                                                               |
+| `confdeltype`   | char         | �                    | Foreign key deletion action code.                                                                                                                                                                                                                                                                             |
+| `confmatchtype` | char         | �                    | Foreign key match type.                                                                                                                                                                                                                                                                                       |
+| `conkey`        | smallint\[\] | pg\_attribute.attnum | If a table constraint, list of columns which the constraint constrains.                                                                                                                                                                                                                                       |
+| `confkey`       | smallint\[\] | pg\_attribute.attnum | If a foreign key, list of the referenced columns.                                                                                                                                                                                                                                                             |
+| `conbin`        | text         | �                    | If a check constraint, an internal representation of the expression.                                                                                                                                                                                                                                          |
+| `consrc`        | text         | �                    | If a check constraint, a human-readable representation of the expression. This is not updated when referenced objects change; for example, it won't track renaming of columns. Rather than relying on this field, it is best to use `pg_get_constraintdef()` to extract the definition of a check constraint. |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_conversion.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_conversion.html.md.erb b/markdown/reference/catalog/pg_conversion.html.md.erb
new file mode 100644
index 0000000..43763bc
--- /dev/null
+++ b/markdown/reference/catalog/pg_conversion.html.md.erb
@@ -0,0 +1,22 @@
+---
+title: pg_conversion
+---
+
+The `pg_conversion` system catalog table describes the available encoding conversion procedures as defined by `CREATE CONVERSION`.
+
+<a id="topic1__gi143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_conversion</span>
+
+| column           | type    | references        | description                                                      |
+|------------------|---------|-------------------|------------------------------------------------------------------|
+| `conname`        | name    | �                 | Conversion name (unique within a namespace).                     |
+| `connamespace`   | oid     | pg\_namespace.oid | The OID of the namespace (schema) that contains this conversion. |
+| `conowner`       | oid     | pg\_authid.oid    | Owner of the conversion.                                         |
+| `conforencoding` | integer | �                 | Source encoding ID.                                              |
+| `contoencoding`  | integer | �                 | Destination encoding ID.                                         |
+| `conproc`        | regproc | pg\_proc.oid      | Conversion procedure.                                            |
+| `condefault`     | boolean | �                 | True if this is the default conversion.                          |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_database.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_database.html.md.erb b/markdown/reference/catalog/pg_database.html.md.erb
new file mode 100644
index 0000000..b02a532
--- /dev/null
+++ b/markdown/reference/catalog/pg_database.html.md.erb
@@ -0,0 +1,26 @@
+---
+title: pg_database
+---
+
+The `pg_database` system catalog table stores information about the available databases. Databases are created with the `CREATE DATABASE` SQL command. Unlike most system catalogs, `pg_database` is shared across all databases in the system. There is only one copy of `pg_database` per system, not one per database.
+
+<a id="topic1__gj143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_database</span>
+
+| column          | type        | references         | description                                                                                                                                                                                                                                                                                                                            |
+|-----------------|-------------|--------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `datname`       | name        | �                  | Database name.                                                                                                                                                                                                                                                                                                                         |
+| `datdba`        | oid         | pg\_authid.oid     | Owner of the database, usually the user who created it.                                                                                                                                                                                                                                                                                |
+| `encoding`      | integer     | �                  | Character encoding for this database. `pg_encoding_to_char()` can translate this number to the encoding name.                                                                                                                                                                                                                          |
+| `datistemplate` | boolean     | �                  | If true then this database can be used in the `TEMPLATE` clause of `CREATE DATABASE` to create a new database as a clone of this one.                                                                                                                                                                                                  |
+| `datallowconn`  | boolean     | �                  | If false then no one can connect to this database. This is used to protect the `template0` database from being altered.                                                                                                                                                                                                                |
+| `datconnlimit`  | integer     | �                  | Sets the maximum number of concurrent connections that can be made to this database. `-1` means no limit.                                                                                                                                                                                                                              |
+| `datlastsysoid` | oid         | �                  | Last system OID in the database.                                                                                                                                                                                                                                                                                                       |
+| `datfrozenxid ` | xid         | �                  | All transaction IDs before this one have been replaced with a permanent (frozen) transaction ID in this database. This is used to track whether the database needs to be vacuumed in order to prevent transaction ID wraparound or to allow pg\_clog to be shrunk. It is the minimum of the per-table *pg\_class.relfrozenxid* values. |
+| `dattablespace` | oid         | pg\_tablespace.oid | The default tablespace for the database. Within this database, all tables for which *pg\_class.reltablespace* is zero will be stored in this tablespace. All non-shared system catalogs will also be there.                                                                                                                            |
+| `datconfig`     | text\[\]    | �                  | Session defaults for user-settable server configuration parameters.                                                                                                                                                                                                                                                                    |
+| `datacl`        | aclitem\[\] | �                  | Database access privileges as given by `GRANT` and `REVOKE`.                                                                                                                                                                                                                                                                           |
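+
+For example (illustrative only), to list the databases with their encodings and connection settings:
+
+``` shell
+$ psql -c "SELECT datname, pg_encoding_to_char(encoding) AS encoding,
+                  datallowconn, datconnlimit
+             FROM pg_database;"
+```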
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_depend.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_depend.html.md.erb b/markdown/reference/catalog/pg_depend.html.md.erb
new file mode 100644
index 0000000..85a1835
--- /dev/null
+++ b/markdown/reference/catalog/pg_depend.html.md.erb
@@ -0,0 +1,26 @@
+---
+title: pg_depend
+---
+
+The `pg_depend` system catalog table records the dependency relationships between database objects. This information allows `DROP` commands to find which other objects must be dropped by `DROP CASCADE` or prevent dropping in the `DROP RESTRICT` case. See also [pg\_shdepend](pg_shdepend.html#topic1), which performs a similar function for dependencies involving objects that are shared across a HAWQ system.
+
+In all cases, a `pg_depend` entry indicates that the referenced object may not be dropped without also dropping the dependent object. However, there are several subflavors identified by `deptype`:
+
+-   **DEPENDENCY\_NORMAL (n)**: A normal relationship between separately-created objects. The dependent object may be dropped without affecting the referenced object. The referenced object may only be dropped by specifying `CASCADE`, in which case the dependent object is dropped, too. Example: a table column has a normal dependency on its data type.
+-   **DEPENDENCY\_AUTO (a)**: The dependent object can be dropped separately from the referenced object, and should be automatically dropped (regardless of `RESTRICT` or `CASCADE` mode) if the referenced object is dropped. Example: a named constraint on a table is made autodependent on the table, so that it will go away if the table is dropped.
+-   **DEPENDENCY\_INTERNAL (i)**: The dependent object was created as part of creation of the referenced object, and is really just a part of its internal implementation. A `DROP` of the dependent object will be disallowed outright (we'll tell the user to issue a `DROP` against the referenced object, instead). A `DROP` of the referenced object will be propagated through to drop the dependent object whether `CASCADE` is specified or not.
+-   **DEPENDENCY\_PIN (p)**: There is no dependent object; this type of entry is a signal that the system itself depends on the referenced object, and so that object must never be deleted. Entries of this type are created only by system initialization. The columns for the dependent object contain zeroes.
+
+<a id="topic1__gk143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_depend</span>
+
+| column         | type    | references     | description                                                                                                 |
+|----------------|---------|----------------|-------------------------------------------------------------------------------------------------------------|
+| `classid`      | oid     | pg\_class.oid  | The OID of the system catalog the dependent object is in.                                                   |
+| `objid`        | oid     | any OID column | The OID of the specific dependent object.                                                                   |
+| `objsubid`     | integer | �              | For a table column, this is the column number. For all other object types, this column is zero.            |
+| `refclassid`   | oid     | pg\_class.oid  | The OID of the system catalog the referenced object is in.                                                  |
+| `refobjid`     | oid     | any OID column | The OID of the specific referenced object.                                                                  |
+| `refobjsubid`  | integer | �              | For a table column, this is the referenced column number. For all other object types, this column is zero. |
+| `deptype`      | char    | �              | A code defining the specific semantics of this dependency relationship.                                     |
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_description.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_description.html.md.erb b/markdown/reference/catalog/pg_description.html.md.erb
new file mode 100644
index 0000000..bad9627
--- /dev/null
+++ b/markdown/reference/catalog/pg_description.html.md.erb
@@ -0,0 +1,17 @@
+---
+title: pg_description
+---
+
+The `pg_description` system catalog table stores optional descriptions (comments) for each database object. Descriptions can be manipulated with the `COMMENT` command and viewed with `psql`'s `\d` meta-commands. Descriptions of many built-in system objects are provided in the initial contents of `pg_description`. See also [pg\_shdescription](pg_shdescription.html#topic1), which performs a similar function for descriptions involving objects that are shared across a HAWQ system.
+
+<a id="topic1__gm143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_description</span>
+
+| column         | type    | references     | description                                                                                                  |
+|----------------|---------|----------------|--------------------------------------------------------------------------------------------------------------|
+| `objoid`       | oid     | any OID column | The OID of the object this description pertains to.                                                          |
+| `classoid`     | oid     | pg\_class.oid  | The OID of the system catalog this object appears in                                                         |
+| `objsubid `    | integer | �              | For a comment on a table column, this is the column number. For all other object types, this column is zero. |
+| `description ` | text    | �              | Arbitrary text that serves as the description of this object.                                                |
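+
+For example, after commenting on a table you can read the comment back directly from the catalog (the table name `mytable` is hypothetical):
+
+``` sql
+-- Store a comment, then read it back from pg_description
+COMMENT ON TABLE mytable IS 'Example comment';
+
+SELECT description
+FROM pg_description
+WHERE objoid = 'mytable'::regclass
+  AND classoid = 'pg_class'::regclass
+  AND objsubid = 0;
+```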
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_exttable.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_exttable.html.md.erb b/markdown/reference/catalog/pg_exttable.html.md.erb
new file mode 100644
index 0000000..ca5fc88
--- /dev/null
+++ b/markdown/reference/catalog/pg_exttable.html.md.erb
@@ -0,0 +1,23 @@
+---
+title: pg_exttable
+---
+
+The `pg_exttable` system catalog table is used to track external tables and web tables created by the `CREATE EXTERNAL TABLE` command.
+
+<a id="topic1__gn143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_exttable</span>
+
+| column            | type     | references    | description                                                                                                      |
+|-------------------|----------|---------------|------------------------------------------------------------------------------------------------------------------|
+| `reloid`          | oid      | pg\_class.oid | The OID of this external table.                                                                                  |
+| `location`        | text\[\] | �             | The URI location(s) of the external table files.                                                                 |
+| `fmttype`         | char     | �             | Format of the external table files: `t` for text, or `c` for csv.                                                |
+| `fmtopts`         | text     | �             | Formatting options of the external table files, such as the field delimiter, null string, escape character, etc. |
+| `command`         | text     | �             | The OS command to execute when the external table is accessed.                                                   |
+| `rejectlimit`     | integer  | �             | The per segment reject limit for rows with errors, after which the load will fail.                               |
+| `rejectlimittype` | char     | �             | Type of reject limit threshold: `r` for number of rows.                                                          |
+| `fmterrtbl`       | oid      | pg\_class.oid | The object id of the error table where format errors will be logged.                                             |
+| `encoding`        | text     | �             | The client encoding.                                                                                             |
+| `writable`        | boolean  | �             | `0` for readable external tables, `1` for writable external tables.                                              |
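+
+For example, a query such as the following joins `pg_exttable` with `pg_class` to show the name, location, and format of each external table (a minimal illustration; it assumes at least one external table exists):
+
+``` sql
+-- External tables with their locations and formats
+SELECT c.relname, x.location, x.fmttype, x.rejectlimit
+FROM pg_exttable x
+     JOIN pg_class c ON c.oid = x.reloid;
+```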
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_filespace.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_filespace.html.md.erb b/markdown/reference/catalog/pg_filespace.html.md.erb
new file mode 100644
index 0000000..e0b810e
--- /dev/null
+++ b/markdown/reference/catalog/pg_filespace.html.md.erb
@@ -0,0 +1,19 @@
+---
+title: pg_filespace
+---
+
+The `pg_filespace` table contains information about the filespaces created in a HAWQ system. Every system contains a default filespace, `pg_system`, which is a collection of all the data directory locations created at system initialization time.
+
+A tablespace requires a file system location to store its database files. In HAWQ, the master and each segment needs its own distinct storage location. This collection of file system locations for all components in a HAWQ system is referred to as a filespace.
+
+<a id="topic1__go138428"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_filespace</span>
+
+| column    | type | references    | description                                           |
+|-----------|------|---------------|-------------------------------------------------------|
+| `fsname`  | name | �             | The name of the filespace.                            |
+| `fsowner` | oid  | pg\_roles.oid | The object id of the role that created the filespace. |
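+
+For example, the following query lists each filespace and the role that owns it (a minimal sketch that joins the `pg_roles` view on the owner OID):
+
+``` sql
+-- Filespaces and their owners
+SELECT f.fsname, r.rolname AS owner
+FROM pg_filespace f
+     JOIN pg_roles r ON r.oid = f.fsowner;
+```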
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_filespace_entry.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_filespace_entry.html.md.erb b/markdown/reference/catalog/pg_filespace_entry.html.md.erb
new file mode 100644
index 0000000..5a45113
--- /dev/null
+++ b/markdown/reference/catalog/pg_filespace_entry.html.md.erb
@@ -0,0 +1,18 @@
+---
+title: pg_filespace_entry
+---
+
+A tablespace requires a file system location to store its database files. In HAWQ, the master and each segment needs its own distinct storage location. This collection of file system locations for all components in a HAWQ system is referred to as a *filespace*. The `pg_filespace_entry` table contains information about the collection of file system locations across a HAWQ system that comprise a HAWQ filespace.
+
+<a id="topic1__gp138428"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_filespace\_entry</span>
+
+
+| column        | type    | references                       | description                               |
+|---------------|---------|----------------------------------|-------------------------------------------|
+| `fsefsoid`    | oid     | pg\_filespace.oid                | Object id of the filespace.               |
+| `fsedbid`     | integer | gp\_segment\_ configuration.dbid | Segment id.                               |
+| `fselocation` | text    | �                                | File system location for this segment id. |
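+
+For example, the following query shows the file system location registered for each segment in every filespace:
+
+``` sql
+-- Per-segment locations for each filespace
+SELECT f.fsname, e.fsedbid, e.fselocation
+FROM pg_filespace_entry e
+     JOIN pg_filespace f ON f.oid = e.fsefsoid
+ORDER BY f.fsname, e.fsedbid;
+```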
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_index.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_index.html.md.erb b/markdown/reference/catalog/pg_index.html.md.erb
new file mode 100644
index 0000000..e93bd86
--- /dev/null
+++ b/markdown/reference/catalog/pg_index.html.md.erb
@@ -0,0 +1,23 @@
+---
+title: pg_index
+---
+
+The `pg_index` system catalog table contains part of the information about indexes. The rest is mostly in [pg\_class](pg_class.html#topic1).
+
+<a id="topic1__gq143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_index</span>
+
+| column           | type       | references           | description                                                                                                                                                                                                                                                                                                                                             |
+|------------------|------------|----------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `indexrelid`     | oid        | pg\_class.oid        | The OID of the `pg_class` entry for this index.                                                                                                                                                                                                                                                                                                          |
+| `indrelid`       | oid        | pg\_class.oid        | The OID of the `pg_class` entry for the table this index is for.                                                                                                                                                                                                                                                                                         |
+| `indnatts`       | smallint   | �                    | The number of columns in the index (duplicates `pg_class.relnatts`).                                                                                                                                                                                                                                                                                     |
+| `indisunique`    | boolean    | �                    | If true, this is a unique index.                                                                                                                                                                                                                                                                                                                        |
+| `indisclustered` | boolean    | �                    | If true, the table was last clustered on this index via the `CLUSTER` command.                                                                                                                                                                                                                                                                          |
+| `indisvalid`     | boolean    | �                    | If true, the index is currently valid for queries. False means the index is possibly incomplete: it must still be modified by `INSERT` operations, but it cannot safely be used for queries.                                                                                                                                                   |
+| `indkey`         | int2vector | pg\_attribute.attnum | This is an array of indnatts values that indicate which table columns this index indexes. For example a value of 1 3 would mean that the first and the third table columns make up the index key. A zero in this array indicates that the corresponding index attribute is an expression over the table columns, rather than a simple column reference. |
+| `indclass`       | oidvector  | pg\_opclass.oid      | For each column in the index key, this contains the OID of the operator class to use.                                                                                                                                                                                                                                                                    |
+| `indexprs`       | text       | �                    | Expression trees (in `nodeToString()` representation) for index attributes that are not simple column references. This is a list with one element for each zero entry in indkey. NULL if all index attributes are simple references.                                                                                                                    |
+| `indpred`        | text       | �                    | Expression tree (in `nodeToString()` representation) for partial index predicate. NULL if not a partial index.                                                                                                                                                                                                                                          |
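+
+For example, a query like the following (the table name `mytable` is hypothetical) lists the indexes defined on a table along with their key columns:
+
+``` sql
+-- Indexes on the hypothetical table 'mytable'
+SELECT indexrelid::regclass AS index_name,
+       indisunique,
+       indkey
+FROM pg_index
+WHERE indrelid = 'mytable'::regclass;
+```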
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_inherits.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_inherits.html.md.erb b/markdown/reference/catalog/pg_inherits.html.md.erb
new file mode 100644
index 0000000..9868602
--- /dev/null
+++ b/markdown/reference/catalog/pg_inherits.html.md.erb
@@ -0,0 +1,16 @@
+---
+title: pg_inherits
+---
+
+The `pg_inherits` system catalog table records information about table inheritance hierarchies. There is one entry for each direct child table in the database. (Indirect inheritance can be determined by following chains of entries.) In HAWQ, inheritance relationships are created by both the `INHERITS` clause (standalone inheritance) and the `PARTITION BY` clause (partitioned child table inheritance) of `CREATE TABLE`.
+
+<a id="topic1__gr143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_inherits</span>
+
+| column      | type    | references    | description                                                                                                                                                                             |
+|-------------|---------|---------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `inhrelid`  | oid     | pg\_class.oid | The OID of the child table.                                                                                                                                                             |
+| `inhparent` | oid     | pg\_class.oid | The OID of the parent table.                                                                                                                                                            |
+| `inhseqno`  | integer | �             | If there is more than one direct parent for a child table (multiple inheritance), this number tells the order in which the inherited columns are to be arranged. The count starts at 1. |
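+
+For example, the following query (the parent table name `sales` is hypothetical) lists the direct child tables of a parent table, such as the partitions of a partitioned table:
+
+``` sql
+-- Direct children of the hypothetical parent table 'sales'
+SELECT inhrelid::regclass AS child_table, inhseqno
+FROM pg_inherits
+WHERE inhparent = 'sales'::regclass;
+```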
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_language.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_language.html.md.erb b/markdown/reference/catalog/pg_language.html.md.erb
new file mode 100644
index 0000000..9b626f9
--- /dev/null
+++ b/markdown/reference/catalog/pg_language.html.md.erb
@@ -0,0 +1,21 @@
+---
+title: pg_language
+---
+
+The `pg_language` system catalog table registers languages in which you can write functions or stored procedures. It is populated by `CREATE LANGUAGE`.
+
+<a id="topic1__gs143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_language</span>
+
+| column          | type        | references   | description                                                                                                                                                                                                                                   |
+|-----------------|-------------|--------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `lanname`       | name        | �            | Name of the language.                                                                                                                                                                                                                         |
+| `lanispl `      | boolean     | �            | This is false for internal languages (such as SQL) and true for user-defined languages. Currently, `pg_dump` still uses this to determine which languages need to be dumped, but this may be replaced by a different mechanism in the future. |
+| `lanpltrusted ` | boolean     | �            | True if this is a trusted language, which means that it is believed not to grant access to anything outside the normal SQL execution environment. Only superusers may create functions in untrusted languages.                                |
+| `lanplcallfoid` | oid         | pg\_proc.oid | For noninternal languages this references the language handler, which is a special function that is responsible for executing all functions that are written in the particular language.                                                      |
+| `lanvalidator`  | oid         | pg\_proc.oid | This references a language validator function that is responsible for checking the syntax and validity of new functions when they are created. Zero if no validator is provided.                                                              |
+| `lanacl `       | aclitem\[\] | �            | Access privileges for the language.                                                                                                                                                                                                           |
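+
+For example, the following query lists the registered languages and whether each one is trusted:
+
+``` sql
+-- Registered procedural languages
+SELECT lanname, lanispl, lanpltrusted
+FROM pg_language;
+```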
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_largeobject.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_largeobject.html.md.erb b/markdown/reference/catalog/pg_largeobject.html.md.erb
new file mode 100644
index 0000000..59d2c6d
--- /dev/null
+++ b/markdown/reference/catalog/pg_largeobject.html.md.erb
@@ -0,0 +1,19 @@
+---
+title: pg_largeobject
+---
+
+The `pg_largeobject` system catalog table holds the data making up 'large objects'. A large object is identified by an OID assigned when it is created. Each large object is broken into segments or 'pages' small enough to be conveniently stored as rows in `pg_largeobject`. The amount of data per page is defined to be `LOBLKSIZE` (which is currently `BLCKSZ`/4, or typically 8K).
+
+Each row of `pg_largeobject` holds data for one page of a large object, beginning at byte offset (*pageno*` * LOBLKSIZE`) within the object. The implementation allows sparse storage: pages may be missing, and may be shorter than `LOBLKSIZE` bytes even if they are not the last page of the object. Missing regions within a large object read as zeroes.
+
+<a id="topic1__gt143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_largeobject</span>
+
+| column   | type    | references | description                                                                                             |
+|----------|---------|------------|---------------------------------------------------------------------------------------------------------|
+| `loid`   | oid     | �          | Identifier of the large object that includes this page.                                                 |
+| `pageno` | integer | �          | Page number of this page within its large object (counting from zero).                                  |
+| `data`   | bytea   | �          | Actual data stored in the large object. This will never be more than `LOBLKSIZE` bytes and may be less. |
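+
+For example, a query such as the following approximates the stored size of each large object by summing the bytes of its pages:
+
+``` sql
+-- Approximate size of each large object
+SELECT loid,
+       count(*) AS pages,
+       sum(length(data)) AS bytes
+FROM pg_largeobject
+GROUP BY loid
+ORDER BY bytes DESC;
+```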
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_listener.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_listener.html.md.erb b/markdown/reference/catalog/pg_listener.html.md.erb
new file mode 100644
index 0000000..4df5fda
--- /dev/null
+++ b/markdown/reference/catalog/pg_listener.html.md.erb
@@ -0,0 +1,20 @@
+---
+title: pg_listener
+---
+
+The `pg_listener` system catalog table supports the `LISTEN` and `NOTIFY` commands. A listener creates an entry in `pg_listener` for each notification name it is listening for. A notifier scans and updates each matching entry to show that a notification has occurred. The notifier also sends a signal (using the PID recorded in the table) to awaken the listener from sleep.
+
+This table is not currently used in HAWQ.
+
+<a id="topic1__gu143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_listener</span>
+
+| column         | type    | references | description                                                                                                                      |
+|----------------|---------|------------|----------------------------------------------------------------------------------------------------------------------------------|
+| `relname`      | name    | �          | Notify condition name. (The name need not match any actual relation in the database.)                                             |
+| `listenerpid`  | integer | �          | PID of the server process that created this entry.                                                                               |
+| `notification` | integer | �          | Zero if no event is pending for this listener. If an event is pending, the PID of the server process that sent the notification. |
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_locks.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_locks.html.md.erb b/markdown/reference/catalog/pg_locks.html.md.erb
new file mode 100644
index 0000000..a20a4d7
--- /dev/null
+++ b/markdown/reference/catalog/pg_locks.html.md.erb
@@ -0,0 +1,35 @@
+---
+title: pg_locks
+---
+
+The `pg_locks` view provides access to information about the locks held by open transactions within HAWQ.
+
+`pg_locks` contains one row per active lockable object, requested lock mode, and relevant transaction. Thus, the same lockable object may appear many times, if multiple transactions are holding or waiting for locks on it. However, an object that currently has no locks on it will not appear at all.
+
+There are several distinct types of lockable objects: whole relations (such as tables), individual pages of relations, individual tuples of relations, transaction IDs, and general database objects. Also, the right to extend a relation is represented as a separate lockable object.
+
+<a id="topic1__gv141982"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_locks</span>
+
+| column           | type     | references       | description                                                                                                                                                                                           |
+|------------------|----------|------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `locktype`       | text     | �                | Type of the lockable object: `relation`, `extend`, `page`, `tuple`, `transactionid`, `object`, `userlock`, `resource queue`, or `advisory`                                                            |
+| `database`       | oid      | pg\_database.oid | OID of the database in which the object exists, zero if the object is a shared object, or NULL if the object is a transaction ID                                                                      |
+| `relation`       | oid      | pg\_class.oid    | OID of the relation, or NULL if the object is not a relation or part of a relation                                                                                                                    |
+| `page `          | integer  | �                | Page number within the relation, or NULL if the object is not a tuple or relation page                                                                                                                |
+| `tuple `         | smallint | �                | Tuple number within the page, or NULL if the object is not a tuple                                                                                                                                    |
+| `transactionid ` | xid      | �                | ID of a transaction, or NULL if the object is not a transaction ID                                                                                                                                    |
+| `classid`        | oid      | pg\_class.oid    | OID of the system catalog containing the object, or NULL if the object is not a general database object                                                                                               |
+| `objid `         | oid      | any OID column   | OID of the object within its system catalog, or NULL if the object is not a general database object                                                                                                   |
+| `objsubid `      | smallint | �                | For a table column, this is the column number (the `classid` and `objid` refer to the table itself). For all other object types, this column is zero. NULL if the object is not a general database object |
+| `transaction`    | xid      | �                | ID of the transaction that is holding or awaiting this lock                                                                                                                                           |
+| `pid`            | integer  | �                | Process ID of the server process holding or awaiting this lock. NULL if the lock is held by a prepared transaction                                                                                    |
+| `mode`           | text     | �                | Name of the lock mode held or desired by this process                                                                                                                                                 |
+| `granted`        | boolean  | �                | True if lock is held, false if lock is awaited.                                                                                                                                                       |
+| `mppsessionid`   | integer  | �                | The id of the client session associated with this lock.                                                                                                                                               |
+| `mppiswriter`    | boolean  | �                | Specifies whether the lock is held by a writer process.                                                                                                                                               |
+| `gp_segment_id`  | integer  | �                | The HAWQ segment id (`dbid`) where the lock is held.                                                                                                                                                  |
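+
+For example, a query such as the following shows lock requests that have not yet been granted, which can help diagnose blocking:
+
+``` sql
+-- Sessions waiting for locks
+SELECT locktype, relation::regclass AS relation, mode, pid, mppsessionid
+FROM pg_locks
+WHERE NOT granted;
+```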
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_namespace.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_namespace.html.md.erb b/markdown/reference/catalog/pg_namespace.html.md.erb
new file mode 100644
index 0000000..b307ecb
--- /dev/null
+++ b/markdown/reference/catalog/pg_namespace.html.md.erb
@@ -0,0 +1,18 @@
+---
+title: pg_namespace
+---
+
+The `pg_namespace` system catalog table stores namespaces. A namespace is the structure underlying SQL schemas: each namespace can have a separate collection of relations, types, etc. without name conflicts.
+
+<a id="topic1__gx143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_namespace</span>
+
+| column     | type        | references     | description                                         |
+|------------|-------------|----------------|-----------------------------------------------------|
+| `nspname`  | name        | �              | Name of the namespace                               |
+| `nspowner` | oid         | pg\_authid.oid | Owner of the namespace                              |
+| `nspacl `  | aclitem\[\] | �              | Access privileges as given by `GRANT` and `REVOKE`. |
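+
+For example, the following query lists namespaces and their owners, excluding the built-in `pg_` namespaces (it joins the `pg_roles` view, which exposes the same role OIDs as `pg_authid`):
+
+``` sql
+-- Namespaces and their owners, excluding system namespaces
+SELECT n.nspname, r.rolname AS owner
+FROM pg_namespace n
+     JOIN pg_roles r ON r.oid = n.nspowner
+WHERE n.nspname NOT LIKE 'pg_%';
+```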
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_opclass.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_opclass.html.md.erb b/markdown/reference/catalog/pg_opclass.html.md.erb
new file mode 100644
index 0000000..d03315c
--- /dev/null
+++ b/markdown/reference/catalog/pg_opclass.html.md.erb
@@ -0,0 +1,22 @@
+---
+title: pg_opclass
+---
+
+The `pg_opclass` system catalog table defines index access method operator classes. Each operator class defines semantics for index columns of a particular data type and a particular index access method. Note that there can be multiple operator classes for a given data type/access method combination, thus supporting multiple behaviors. The majority of the information defining an operator class is actually not in its `pg_opclass` row, but in the associated rows in [pg\_amop](pg_amop.html#topic1) and [pg\_amproc](pg_amproc.html#topic1). Those rows are considered to be part of the operator class definition; this is not unlike the way that a relation is defined by a single [pg\_class](pg_class.html#topic1) row plus associated rows in [pg\_attribute](pg_attribute.html#topic1) and other tables.
+
+<a id="topic1__gw141982"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_opclass</span>
+
+| column         | type    | references        | description                                                             |
+|----------------|---------|-------------------|-------------------------------------------------------------------------|
+| `opcamid`      | oid     | pg\_am.oid        | Index access method operator class is for.                              |
+| `opcname`      | name    | �                 | Name of this operator class                                             |
+| `opcnamespace` | oid     | pg\_namespace.oid | Namespace of this operator class                                        |
+| `opcowner`     | oid     | pg\_authid.oid    | Owner of the operator class                                             |
+| `opcintype`    | oid     | pg\_type.oid      | Data type that the operator class indexes.                              |
+| `opcdefault`   | boolean | �                 | True if this operator class is the default for the data type opcintype. |
+| `opckeytype`   | oid     | pg\_type.oid      | Type of data stored in index, or zero if same as opcintype.             |
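+
+For example, the following query shows the default operator class for each data type and index access method combination:
+
+``` sql
+-- Default operator classes per access method
+SELECT am.amname, oc.opcname, oc.opcintype::regtype AS indexed_type
+FROM pg_opclass oc
+     JOIN pg_am am ON am.oid = oc.opcamid
+WHERE oc.opcdefault;
+```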
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_operator.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_operator.html.md.erb b/markdown/reference/catalog/pg_operator.html.md.erb
new file mode 100644
index 0000000..71c632d
--- /dev/null
+++ b/markdown/reference/catalog/pg_operator.html.md.erb
@@ -0,0 +1,32 @@
+---
+title: pg_operator
+---
+
+The `pg_operator` system catalog table stores information about operators, both built-in and those defined by `CREATE OPERATOR`. Unused columns contain zeroes. For example, `oprleft` is zero for a prefix operator.
+
+<a id="topic1__gy150092"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_operator</span>
+
+| column         | type    | references        | description                                                                                                              |
+|----------------|---------|-------------------|--------------------------------------------------------------------------------------------------------------------------|
+| `oprname`      | name    | �                 | Name of the operator.                                                                                                    |
+| `oprnamespace` | oid     | pg\_namespace.oid | The OID of the namespace that contains this operator.                                                                    |
+| `oprowner`     | oid     | pg\_authid.oid    | Owner of the operator.                                                                                                   |
+| `oprkind`      | char    | �                 | `b` = infix (both), `l` = prefix (left), `r` = postfix (right)                                                           |
+| `oprcanhash`   | boolean | �                 | This operator supports hash joins.                                                                                       |
+| `oprleft`      | oid     | pg\_type.oid      | Type of the left operand.                                                                                                |
+| `oprright`     | oid     | pg\_type.oid      | Type of the right operand.                                                                                               |
+| `oprresult`    | oid     | pg\_type.oid      | Type of the result.                                                                                                      |
+| `oprcom`       | oid     | pg\_operator.oid  | Commutator of this operator, if any.                                                                                     |
+| `oprnegate`    | oid     | pg\_operator.oid  | Negator of this operator, if any.                                                                        |
+| `oprlsortop`   | oid     | pg\_operator.oid  | If this operator supports merge joins, the operator that sorts the type of the left-hand operand (`L<L`).                |
+| `oprrsortop`   | oid     | pg\_operator.oid  | If this operator supports merge joins, the operator that sorts the type of the right-hand operand (`R<R`).               |
+| `oprltcmpop`   | oid     | pg\_operator.oid  | If this operator supports merge joins, the less-than operator that compares the left and right operand types (`L<R`).    |
+| `oprgtcmpop`   | oid     | pg\_operator.oid  | If this operator supports merge joins, the greater-than operator that compares the left and right operand types (`L>R`). |
+| `oprcode`      | regproc | pg\_proc.oid      | Function that implements this operator.                                                                                  |
+| `oprrest `     | regproc | pg\_proc.oid      | Restriction selectivity estimation function for this operator.                                                           |
+| `oprjoin`      | regproc | pg\_proc.oid      | Join selectivity estimation function for this operator.                                                                  |
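+
+For example, a query like the following lists the operators whose left operand is `integer`, together with their right operand and result types:
+
+``` sql
+-- Operators taking an integer left operand
+SELECT oprname,
+       oprright::regtype AS right_type,
+       oprresult::regtype AS result_type
+FROM pg_operator
+WHERE oprleft = 'integer'::regtype;
+```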
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/catalog/pg_partition.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/catalog/pg_partition.html.md.erb b/markdown/reference/catalog/pg_partition.html.md.erb
new file mode 100644
index 0000000..6795930
--- /dev/null
+++ b/markdown/reference/catalog/pg_partition.html.md.erb
@@ -0,0 +1,20 @@
+---
+title: pg_partition
+---
+
+The `pg_partition` system catalog table is used to track partitioned tables and their inheritance level relationships. Each row of `pg_partition` represents either the level of a partitioned table in the partition hierarchy, or a subpartition template description. The value of the attribute `paristemplate` determines what a particular row represents.
+
+<a id="topic1__gz143898"></a>
+<span class="tablecap">Table 1. pg\_catalog.pg\_partition</span>
+
+| column          | type       | references      | description                                                                                                                                         |
+|-----------------|------------|-----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|
+| `parrelid`      | oid        | pg\_class.oid   | The object identifier of the table.                                                                                                                 |
+| `parkind`       | char       | �               | The partition type - `R` for range or `L` for list.                                                                                                 |
+| `parlevel`      | smallint   | �               | The partition level of this row: 0 for the top-level parent table, 1 for the first level under the parent table, 2 for the second level, and so on. |
+| `paristemplate` | boolean    | �               | Whether this row represents a subpartition template definition (true) or an actual partitioning level (false).                               |
+| `parnatts`      | smallint   | �               | The number of attributes that define this level.                                                                                                    |
+| `paratts`       | int2vector | �               | An array of the attribute numbers (as in `pg_attribute.attnum`) of the attributes that participate in defining this level.                          |
+| `parclass`      | oidvector  | pg\_opclass.oid | The operator class identifier(s) of the partition columns.                                                                                          |
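+
+For example, the following query (the table name `sales` is hypothetical) shows the partition levels and subpartition templates recorded for a partitioned table:
+
+``` sql
+-- Partition hierarchy rows for the hypothetical table 'sales'
+SELECT parrelid::regclass AS table_name,
+       parkind,
+       parlevel,
+       paristemplate
+FROM pg_partition
+WHERE parrelid = 'sales'::regclass;
+```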
+
+


[02/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/query/query-performance.html.md.erb
----------------------------------------------------------------------
diff --git a/query/query-performance.html.md.erb b/query/query-performance.html.md.erb
deleted file mode 100644
index 981d77b..0000000
--- a/query/query-performance.html.md.erb
+++ /dev/null
@@ -1,155 +0,0 @@
----
-title: Query Performance
----
-
-<span class="shortdesc">HAWQ dynamically allocates resources to queries. Query performance depends on several factors such as data locality, number of virtual segments used for the query and general cluster health.</span>
-
--   Dynamic Partition Elimination
-
-    In HAWQ, values available only when a query runs are used to dynamically prune partitions, which improves query processing speed. Enable or disable dynamic partition elimination by setting the server configuration parameter `gp_dynamic_partition_pruning` to `ON` or `OFF`; it is `ON` by default.
-
--   Memory Optimizations
-
-    HAWQ allocates memory optimally for different operators in a query and frees and re-allocates memory during the stages of processing a query.
-
--   Runaway Query Termination
-
-    HAWQ can automatically terminate the most memory-intensive queries based on a memory usage threshold. The threshold is set as a configurable percentage ([runaway\_detector\_activation\_percent](../reference/guc/parameter_definitions.html#runaway_detector_activation_percent)) of the resource quota for the segment, which is calculated by HAWQ's resource manager.
-
-    If the amount of virtual memory utilized by a physical segment exceeds the calculated threshold, then HAWQ begins terminating queries based on memory usage, starting with the query that is consuming the largest amount of memory. Queries are terminated until the percentage of utilized virtual memory is below the specified percentage.
-
-    To calculate the memory usage threshold for runaway queries, HAWQ uses the following formula:
-
-    *vmem threshold* = (*virtual memory quota calculated by resource manager* + [hawq\_re\_memory\_overcommit\_max](../reference/guc/parameter_definitions.html#hawq_re_memory_overcommit_max)) \* [runaway\_detector\_activation\_percent](../reference/guc/parameter_definitions.html#runaway_detector_activation_percent).
-
-    For example, if the HAWQ resource manager calculates a virtual memory quota of 9GB, `hawq_re_memory_overcommit_max` is set to 1GB, and the value of `runaway_detector_activation_percent` is 95 (95%), then HAWQ starts terminating queries when the utilized virtual memory exceeds 9.5 GB.
-
-    To disable automatic query detection and termination, set the value of `runaway_detector_activation_percent` to 100.
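-
-    For example, you can confirm the current setting from a `psql` session; `SHOW` works for any server configuration parameter and is shown here only as a quick check:
-
-    ``` sql
-    SHOW runaway_detector_activation_percent;
-    ```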
-
-## <a id="id_xkg_znj_f5"></a>How to Investigate Query Performance Issues
-
-A query is not executing as quickly as you would expect. Here is how to investigate possible causes of slowdown:
-
-1.  Check the health of the cluster.
-    1.  Are any DataNodes, segments or nodes down?
-    2.  Are there many failed disks?
-
-2.  Check table statistics. Have the tables involved in the query been analyzed?
-3.  Check the plan of the query and run `EXPLAIN ANALYZE` to determine the bottleneck.
-    Sometimes, there is not enough memory for some operators, such as Hash Join, or spill files are used. If an operator cannot perform all of its work in the memory allocated to it, it caches data on disk in *spill files*. A query that uses spill files runs much slower than one that can perform all of its work in memory.
-
-4.  Check data locality statistics using `EXPLAIN ANALYZE`. Alternatively, you can check the HAWQ log, which also records the data locality results for every query. See [Data Locality Statistics](query-performance.html#topic_amk_drc_d5) for information on the statistics.
-5.  Check resource queue status. You can query the `pg_resqueue_status` view to check whether the target queue has already dispatched some resources to the queries, or whether the target queue is short of resources; see the example query after this list. See [Checking Existing Resource Queues](../resourcemgmt/ResourceQueues.html#topic_lqy_gls_zt).
-6.  Analyze a dump of the resource manager's status to see more resource queue status. See [Analyzing Resource Manager Status](../resourcemgmt/ResourceQueues.html#topic_zrh_pkc_f5).
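-
-For step 5 above, a minimal check of resource queue status looks like this (the view is described in the linked resource management documentation):
-
-``` sql
-SELECT * FROM pg_resqueue_status;
-```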
-
-## <a id="topic_amk_drc_d5"></a>Data Locality Statistics
-
-For visibility into query performance, use `EXPLAIN ANALYZE` to obtain data locality statistics. For example:
-
-``` sql
-postgres=# CREATE TABLE test (i int);
-postgres=# INSERT INTO test VALUES(2);
-postgres=# EXPLAIN ANALYZE SELECT * FROM test;
-```
-```
-QUERY PLAN
-.......
-Data locality statistics:
-data locality ratio: 1.000; virtual segment number: 1; different host number: 1;
-virtual segment number per host(avg/min/max): (1/1/1);
-segment size(avg/min/max): (32.000 B/32 B/32 B);
-segment size with penalty(avg/min/max): (32.000 B/32 B/32 B);
-continuity(avg/min/max): (1.000/1.000/1.000); DFS metadatacache: 7.816 ms;
-resource allocation: 0.615 ms; datalocality calculation: 0.136 ms.
-```
-
-The following table describes the metrics related to data locality. Use these metrics to examine issues behind a query's performance.
-
-<a id="topic_amk_drc_d5__table_q4p_25c_d5"></a>
-
-<table>
-<caption><span class="tablecap">Table 1. Data Locality Statistics</span></caption>
-<colgroup>
-<col width="50%" />
-<col width="50%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th>Statistic</th>
-<th>Description</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td>data locality ratio</td>
-<td><p>Indicates the total local read ratio of a query. The lower the ratio, the more remote reads occur. Because remote reads on HDFS require network I/O, the execution time of a query may increase.</p>
-<p>For hash-distributed tables, all the blocks of a file are processed by one segment, so if data on HDFS is redistributed, such as by the HDFS Balancer, the data locality ratio decreases. In this case, you can redistribute the hash-distributed table manually by using <code class="ph codeph">CREATE TABLE AS SELECT</code>.</p></td>
-</tr>
-<tr class="even">
-<td>number of virtual segments</td>
-<td>Typically, the more virtual segments are used, the faster the query will be executed. If the virtual segment number is too small, you can check whether <code class="ph codeph">default_hash_table_bucket_number</code>, <code class="ph codeph">hawq_rm_nvseg_perquery_limit</code>, or the bucket number of a hash distributed table is small. See <a href="#topic_wv3_gzc_d5">Number of Virtual Segments</a>.</td>
-</tr>
-<tr class="odd">
-<td>different host number</td>
-<td>Indicates how many hosts are used to run this query. According to the resource allocation strategy of HAWQ, all hosts should be used when the virtual segment number is bigger than the total number of hosts. As a result, if this metric is smaller than the total number of hosts for a big query, it often indicates that some hosts are down. In this case, query <code class="ph codeph">gp_segment_configuration</code> to check the node states first.</td>
-</tr>
-<tr class="even">
-<td>segment size and segment size with penalty</td>
-<td>"segment size" indicates the (avg/min/max) data size which is processed by a virtual segment. "segment size with penalty" is the segment size when remote read is calculated as "net_disk_ratio" * block size. The virtual segment that contains remote read should process less data than the virtual segment that contains only local read. "net_disk_ratio" can be tuned to measure how much slower the remote read is than local read for different network environments, while considering the workload balance between the nodes. The default value of "net_disk_ratio" is 1.01.</td>
-</tr>
-<tr class="odd">
-<td>continuity</td>
-<td>Reading an HDFS file discontinuously introduces additional seeks, which slow the table scan of a query. A low value of continuity indicates that the blocks of a file are not continuously distributed on a DataNode.</td>
-</tr>
-<tr class="even">
-<td>DFS metadatacache</td>
-<td>Indicates the metadatacache time cost for a query. In HAWQ, HDFS block information is cached in a metadatacache process. If a cache miss occurs, the metadatacache time cost may increase.</td>
-</tr>
-<tr class="odd">
-<td>resource allocation</td>
-<td>Indicates the time cost of acquiring resources from the resource manager.</td>
-</tr>
-<tr class="even">
-<td>datalocality calculation</td>
-<td>Indicates the time to run the algorithm that assigns HDFS blocks to virtual segments and calculates the data locality ratio.</td>
-</tr>
-</tbody>
-</table>
-
-## <a id="topic_wv3_gzc_d5"></a>Number of Virtual Segments
-
-To obtain the best results when querying data in HAWQ, review the best practices described in this topic.
-
-### <a id="virtual_seg_performance"></a>Factors Impacting Query Performance
-
-The number of virtual segments used for a query directly impacts the query's performance. The following factors can impact the degree of parallelism of a query:
-
--   **Cost of the query**. Small queries use fewer segments and larger queries use more segments. Some techniques used in defining resource queues can influence the number of both virtual segments and general resources allocated to queries.
--   **Available resources at query time**. If more resources are available in the resource queue, those resources will be used.
--   **Hash table and bucket number**. If the query involves only hash-distributed tables, the query's parallelism is fixed (equal to the hash table bucket number) under the following conditions:
-
-   - The bucket number (`bucketnum`) configured for all of the hash tables is the same.
-   - The table size for random tables is no more than 1.5 times the size allotted for the hash tables.
-
-  Otherwise, the number of virtual segments depends on the query's cost: hash-distributed table queries behave like queries on randomly-distributed tables.
-
--   **Query Type**: It can be difficult to calculate resource costs for queries with some user-defined functions or for queries to external tables. With these queries, the number of virtual segments is controlled by the `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit` parameters, as well as by the `ON` clause and the location list of external tables. If the query has a hash result table (e.g. `INSERT into hash_table`), the number of virtual segments must be equal to the bucket number of the resulting hash table. If the query is performed in utility mode, such as for `COPY` and `ANALYZE` operations, the virtual segment number is calculated by different policies.
-
-### General Guidelines
-
-The following guidelines expand on the numbers of virtual segments to use, provided there are sufficient resources available.
-
--   **Random tables exist in the select list:** \#vseg (number of virtual segments) depends on the size of the table.
--   **Hash tables exist in the select list:** \#vseg depends on the bucket number of the table.
--   **Random and hash tables both exist in the select list:** \#vseg depends on the bucket number of the table, if the table size of random tables is no more than 1.5 times the size of hash tables. Otherwise, \#vseg depends on the size of the random table.
--   **User-defined functions exist:** \#vseg depends on the `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit` parameters.
--   **PXF external tables exist:** \#vseg depends on the `default_hash_table_bucket_number` parameter.
--   **gpfdist external tables exist:** \#vseg is at least the number of locations in the location list.
--   **The command for CREATE EXTERNAL TABLE is used:** \#vseg must reflect the value in the command and use the `ON` clause in the command.
--   **Hash tables are copied to or from files:** \#vseg depends on the bucket number of the hash table.
--   **Random tables are copied to files:** \#vseg depends on the size of the random table.
--   **Random tables are copied from files:** \#vseg is a fixed value (6) when there are sufficient resources.
--   **ANALYZE table:** Analyzing a nonpartitioned table will use more virtual segments than a partitioned table.
--   **Relationship between hash distribution results:** \#vseg must be the same as the bucket number for the hash table.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/query/query-profiling.html.md.erb
----------------------------------------------------------------------
diff --git a/query/query-profiling.html.md.erb b/query/query-profiling.html.md.erb
deleted file mode 100644
index ea20e0a..0000000
--- a/query/query-profiling.html.md.erb
+++ /dev/null
@@ -1,240 +0,0 @@
----
-title: Query Profiling
----
-
-<span class="shortdesc">Examine the query plans of poorly performing queries to identify possible performance tuning opportunities.</span>
-
-HAWQ devises a *query plan* for each query. Choosing the right query plan to match the query and data structure is necessary for good performance. A query plan defines how HAWQ will run the query in the parallel execution environment.
-
-The query optimizer uses data statistics maintained by the database to choose a query plan with the lowest possible cost. Cost is measured in disk I/O, shown as units of disk page fetches. The goal is to minimize the total execution cost for the plan.
-
-View the plan for a given query with the `EXPLAIN` command. `EXPLAIN` shows the query optimizer's estimated cost for the query plan. For example:
-
-``` sql
-EXPLAIN SELECT * FROM names WHERE id=22;
-```
-
-`EXPLAIN ANALYZE` runs the statement in addition to displaying its plan. This is useful for determining how close the optimizer's estimates are to reality. For example:
-
-``` sql
-EXPLAIN ANALYZE SELECT * FROM names WHERE id=22;
-```
-
-**Note:** The legacy and GPORCA query optimizers coexist in HAWQ. GPORCA is the default HAWQ optimizer. HAWQ uses GPORCA to generate an execution plan for a query when possible. The `EXPLAIN` output generated by GPORCA is different than the output generated by the legacy query optimizer.
-
-When the `EXPLAIN ANALYZE` command uses GPORCA, the `EXPLAIN` plan shows only the number of partitions that are being eliminated. The scanned partitions are not shown. To show the names of the scanned partitions in the segment logs, set the server configuration parameter `gp_log_dynamic_partition_pruning` to `on`. The following `SET` command enables the parameter.
-
-``` sql
-SET gp_log_dynamic_partition_pruning = on;
-```
-
-For information about GPORCA, see [Querying Data](query.html#topic1).
-
-## <a id="topic40"></a>Reading EXPLAIN Output
-
-A query plan is a tree of nodes. Each node in the plan represents a single operation, such as a table scan, join, aggregation, or sort.
-
-Read plans from the bottom to the top: each node feeds rows into the node directly above it. The bottom nodes of a plan are usually table scan operations. If the query requires joins, aggregations, sorts, or other operations on the rows, there are additional nodes above the scan nodes to perform these operations. The topmost plan nodes are usually HAWQ motion nodes: redistribute, broadcast, or gather motions. These operations move rows between segment instances during query processing.
-
-The output of `EXPLAIN` has one line for each node in the plan tree and shows the basic node type and the following execution cost estimates for that plan node:
-
--   **cost** - Measured in units of disk page fetches. 1.0 equals one sequential disk page read. The first estimate is the start-up cost of getting the first row and the second is the total cost of getting all rows. The total cost assumes all rows will be retrieved, which is not always true; for example, if the query uses `LIMIT`, not all rows are retrieved.
--   **rows** - The total number of rows output by this plan node. This number is usually less than the number of rows processed or scanned by the plan node, reflecting the estimated selectivity of any `WHERE` clause conditions. Ideally, the estimate for the topmost node approximates the number of rows that the query actually returns.
--   **width** - The total bytes of all the rows that this plan node outputs.
-
-Note the following:
-
--   The cost of a node includes the cost of its child nodes. The topmost plan node has the estimated total execution cost for the plan. This is the number the optimizer intends to minimize.
--   The cost reflects only the aspects of plan execution that the query optimizer takes into consideration. For example, the cost does not reflect time spent transmitting result rows to the client.
-
-### <a id="topic41"></a>EXPLAIN Example
-
-The following example describes how to read an `EXPLAIN` query plan for a query:
-
-``` sql
-EXPLAIN SELECT * FROM names WHERE name = 'Joelle';
-```
-
-```
-                                 QUERY PLAN
------------------------------------------------------------------------------
- Gather Motion 2:1  (slice1; segments: 2)  (cost=0.00..1.01 rows=1 width=11)
-   ->  Append-only Scan on names  (cost=0.00..1.01 rows=1 width=11)
-         Filter: name::text = 'Joelle'::text
-(3 rows)
-```
-
-Read the plan from the bottom to the top. To start, the query optimizer sequentially scans the *names* table. Notice the `WHERE` clause is applied as a *filter* condition. This means the scan operation checks the condition for each row it scans and outputs only the rows that satisfy the condition.
-
-The results of the scan operation are passed to a *gather motion* operation. In HAWQ, a gather motion is when segments send rows to the master. In this example, we have two segment instances that send to one master instance. This operation is working on `slice1` of the parallel query execution plan. A query plan is divided into *slices* so the segments can work on portions of the query plan in parallel.
-
-The estimated startup cost for this plan is `00.00` (no cost) and a total cost of `1.01` disk page fetches. The optimizer estimates this query will return one row.
-
-## <a id="topic42"></a>Reading EXPLAIN ANALYZE Output
-
-`EXPLAIN ANALYZE` plans and runs the statement. The `EXPLAIN ANALYZE` plan shows the actual execution cost along with the optimizer's estimates. This allows you to see if the optimizer's estimates are close to reality. `EXPLAIN ANALYZE` also shows the following:
-
--   The total runtime (in milliseconds) in which the query executed.
--   The memory used by each slice of the query plan, as well as the memory reserved for the whole query statement.
--   Statistics for the query dispatcher, including the number of executors used for the current query (total number/number of executors cached by previous queries/number of executors newly connected), dispatcher time (total dispatch time/connection establish time/dispatch data to executor time), and some time (max/min/avg) details for dispatching data, consuming executor data, and freeing executors.
--   Statistics about data locality. See [Data Locality Statistics](query-performance.html#topic_amk_drc_d5) for details about these statistics.
--   The number of *workers* (segments) involved in a plan node operation. Only segments that return rows are counted.
--   The Max/Last statistics are for the segment that output the maximum number of rows and the segment with the longest *&lt;time&gt; to end*.
--   The segment id of the segment that produced the most rows for an operation.
--   For relevant operations, the amount of memory (`work_mem`) used by the operation. If the `work_mem` was insufficient to perform the operation in memory, the plan shows the amount of data spilled to disk for the lowest-performing segment. For example:
-
-    ``` pre
-    Work_mem used: 64K bytes avg, 64K bytes max (seg0).
-    Work_mem wanted: 90K bytes avg, 90K bytes max (seg0) to lessen
-    workfile I/O affecting 2 workers.
-    ```
-
-    **Note:** The *work\_mem* property is not configurable. Use resource queues to manage memory use. For more information on resource queues, see [Configuring Resource Management](../resourcemgmt/ConfigureResourceManagement.html) and [Working with Hierarchical Resource Queues](../resourcemgmt/ResourceQueues.html).
-
--   The time (in milliseconds) in which the segment that produced the most rows retrieved the first row, and the time taken for that segment to retrieve all rows. The result may omit *&lt;time&gt; to first row* if it is the same as the *&lt;time&gt; to end*.
-
-### <a id="topic43"></a>EXPLAIN ANALYZE Example
-
-This example describes how to read an `EXPLAIN ANALYZE` query plan for the same query. The plan output adds actual timing and rows returned for each plan node, as well as memory and time statistics for the whole query statement.
-
-``` sql
-EXPLAIN ANALYZE SELECT * FROM names WHERE name = 'Joelle';
-```
-
-```
-                                 QUERY PLAN
-------------------------------------------------------------------------
- Gather Motion 1:1  (slice1; segments: 1)  (cost=0.00..1.01 rows=1 width=7)
-   Rows out:  Avg 1.0 rows x 1 workers at destination.  Max/Last(seg0:ip-10-0-1-16/seg0:ip-10-0-1-16) 1/1 rows with 8.713/8.713 ms to first row, 8.714/8.714 ms to end, start offset by 0.708/0.708 ms.
-   ->  Append-only Scan on names  (cost=0.00..1.01 rows=1 width=7)
-         Filter: name = 'Joelle'::text
-         Rows out:  Avg 1.0 rows x 1 workers.  Max/Last(seg0:ip-10-0-1-16/seg0:ip-10-0-1-16) 1/1 rows with 7.053/7.053 ms to first row, 7.089/7.089 ms to end, start offset by 2.162/2.162 ms.
- Slice statistics:
-   (slice0)    Executor memory: 159K bytes.
-   (slice1)    Executor memory: 247K bytes (seg0:ip-10-0-1-16).
- Statement statistics:
-   Memory used: 262144K bytes
- Dispatcher statistics:
-   executors used(total/cached/new connection): (1/1/0); dispatcher time(total/connection/dispatch data): (0.217 ms/0.000 ms/0.037 ms).
-   dispatch data time(max/min/avg): (0.037 ms/0.037 ms/0.037 ms); consume executor data time(max/min/avg): (0.015 ms/0.015 ms/0.015 ms); free executor time(max/min/avg): (0.000 ms/0.000 ms/0.000 ms).
- Data locality statistics:
-   data locality ratio: 1.000; virtual segment number: 1; different host number: 1; virtual segment number per host(avg/min/max): (1/1/1); segment size(avg/min/max): (48.000 B/48 B/48 B); segment size with penalty(avg/min/max): (48.000 B/48 B/48 B); continuity(avg/min/max): (1.000/1.000/1.000); DFS metadatacache: 9.343 ms; resource allocation: 0.638 ms; datalocality calculation: 0.144 ms.
- Total runtime: 19.690 ms
-(16 rows)
-```
-
-Read the plan from the bottom to the top. The total elapsed time to run this query was *19.690* milliseconds.
-
-The *Append-only scan* operation had only one segment (*seg0*) that returned rows, and it returned just *1 row*. The Max/Last statistics are identical in this example because only one segment returned rows. It took *7.053* milliseconds to find the first row and *7.089* milliseconds to scan all rows. This result is close to the optimizer's estimate: the query optimizer estimated that this query would return one row. The *gather motion* operation (segments sending data to the master) then received that single row; the total elapsed time for the query, including this operation, was *19.690* milliseconds.
-
-## <a id="topic44"></a>Examining Query Plans to Solve Problems
-
-If a query performs poorly, examine its query plan and ask the following questions:
-
--   **Do operations in the plan take an exceptionally long time?** Look for an operation that consumes the majority of query processing time. For example, if a scan on a hash table takes longer than expected, the data locality may be low; reloading the data can increase the data locality and speed up the query. Or, adjust `enable_<operator>` parameters to see if you can force the legacy query optimizer (planner) to choose a different plan by disabling a particular query plan operator for that query.
--   **Are the optimizer's estimates close to reality?** Run `EXPLAIN ANALYZE` and see if the number of rows the optimizer estimates is close to the number of rows the query operation actually returns. If there is a large discrepancy, collect more statistics on the relevant columns.
--   **Are selective predicates applied early in the plan?** Apply the most selective filters early in the plan so fewer rows move up the plan tree. If the query plan does not correctly estimate query predicate selectivity, collect more statistics on the relevant columns. You can also try reordering the `WHERE` clause of your SQL statement.
--   **Does the optimizer choose the best join order?** When you have a query that joins multiple tables, make sure that the optimizer chooses the most selective join order. Joins that eliminate the largest number of rows should be done earlier in the plan so fewer rows move up the plan tree.
-
-    If the plan is not choosing the optimal join order, set `join_collapse_limit=1` and use explicit `JOIN` syntax in your SQL statement to force the legacy query optimizer (planner) to the specified join order (see the sketch following this list). You can also collect more statistics on the relevant join columns.
-
--   **Does the optimizer selectively scan partitioned tables?** If you use table partitioning, is the optimizer selectively scanning only the child tables required to satisfy the query predicates? Scans of the parent tables should return 0 rows since the parent tables do not contain any data. See [Verifying Your Partition Strategy](../ddl/ddl-partition.html#topic74) for an example of a query plan that shows a selective partition scan.
--   **Does the optimizer choose hash aggregate and hash join operations where applicable?** Hash operations are typically much faster than other types of joins or aggregations. Row comparison and sorting is done in memory rather than reading/writing from disk. To enable the query optimizer to choose hash operations, there must be sufficient memory available to hold the estimated number of rows. Try increasing work memory to improve performance for a query. If possible, run an `EXPLAIN ANALYZE` for the query to show which plan operations spilled to disk, how much work memory they used, and how much memory was required to avoid spilling to disk. For example:
-
-    `Work_mem used: 23430K bytes avg, 23430K bytes max (seg0). Work_mem wanted: 33649K bytes avg, 33649K bytes max (seg0) to lessen workfile I/O affecting 2 workers.`
-
-    The "bytes wanted" message from `EXPLAIN ANALYZE` is based on the amount of data written to work files and is not exact. The minimum `work_mem` needed can differ from the suggested value.
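-
-The following sketch ties several of these checks together; the table, column, and predicate names are hypothetical and stand in for your own schema:
-
-``` sql
--- Refresh optimizer statistics on the joined tables, then pin the written join order.
-ANALYZE customers;
-ANALYZE sales;
-SET join_collapse_limit = 1;
-
-EXPLAIN ANALYZE
-SELECT c.name, sum(s.amount)
-FROM customers c
-JOIN sales s ON s.customer_id = c.id
-WHERE s.region = 'EMEA'
-GROUP BY c.name;
-```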
-
-## <a id="explainplan_plpgsql"></a>Generating EXPLAIN Plan from a PL/pgSQL Function
-
-User-defined PL/pgSQL functions often include dynamically created queries.  You may find it useful to generate the `EXPLAIN` plan for such queries for query performance optimization and tuning. 
-
-Perform the following steps to create and run a user-defined PL/pgSQL function.  This function displays the `EXPLAIN` plan for a simple query on a test database.
-
-1. Log in to the HAWQ master node as user `gpadmin` and set up the HAWQ environment:
-
-    ``` shell
-    $ ssh gpadmin@hawq_master
-    $ . /usr/local/hawq/greenplum_path.sh
-    ```
-
-2. Create a test database named `testdb`:
-
-    ``` shell
-    $ createdb testdb
-    ```
-   
-3. Start the PostgreSQL interactive utility, connecting to `testdb`:
-
-    ``` shell
-    $ psql -d testdb
-    ```
-
-4. Create the table `test_tbl` with a single column named `id` of type `integer`:
-
-    ``` sql
-    testdb=# CREATE TABLE test_tbl (id int);
-    ```
-   
-5. Add some data to the `test_tbl` table:
-
-    ``` sql
-    testdb=# INSERT INTO test_tbl SELECT generate_series(1,100);
-    ```
-   
-    This `INSERT` command adds 100 rows to `test_tbl`, incrementing the `id` for each row.
-   
-6. Create a PL/pgSQL function named `explain_plan_func()` by copying and pasting the following text at the `psql` prompt:
-
-    ``` sql
-    CREATE OR REPLACE FUNCTION explain_plan_func() RETURNS varchar AS $$
-    DECLARE
-      a varchar;
-      b varchar;
-    BEGIN
-      a = '';
-      FOR b IN EXECUTE 'explain select count(*) from test_tbl group by id' LOOP
-        a = a || E'\n' || b;
-      END LOOP;
-      RETURN a;
-    END;
-    $$
-    LANGUAGE plpgsql
-    VOLATILE;
-    ```
-
-7. Verify the `explain_plan_func()` user-defined function was created successfully:
-
-    ``` shell
-    testdb=# \df+
-    ```
-
-    The `\df+` command lists all user-defined functions.
-   
-8. Perform a query using the user-defined function you just created:
-
-    ``` sql
-    testdb=# SELECT explain_plan_func();
-    ```
-
-    The `EXPLAIN` plan results for the query are displayed:
-    
-    ``` pre
-                                             explain_plan_func                               
----------------------------------------------------------------------------------------------------------                                                                                             
- Gather Motion 1:1  (slice2; segments: 1)  (cost=0.00..431.04 rows=100 width=8)                          
-   ->  Result  (cost=0.00..431.03 rows=100 width=8)                         
-         ->  HashAggregate  (cost=0.00..431.03 rows=100 width=8)                
-               Group By: id                                                 
-               ->  Redistribute Motion 1:1  (slice1; segments: 1)  (cost=0.00..431.02 rows=100 width=12) 
-                     Hash Key: id                                              
-                     ->  Result  (cost=0.00..431.01 rows=100 width=12)      
-                           ->  HashAggregate  (cost=0.00..431.01 rows=100 width=12)                      
-                                 Group By: id                               
-                                 ->  Table Scan on test_tbl  (cost=0.00..431.00 rows=100 width=4) 
- Settings:  default_hash_table_bucket_number=6                              
- Optimizer status: PQO version 1.627
-(1 row)
-    ```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/query/query.html.md.erb
----------------------------------------------------------------------
diff --git a/query/query.html.md.erb b/query/query.html.md.erb
deleted file mode 100644
index 9c218c7..0000000
--- a/query/query.html.md.erb
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Querying Data
----
-
-This topic provides information about using SQL in HAWQ databases.
-
-You enter SQL statements called queries to view and analyze data in a database using the `psql` interactive SQL client and other client tools.
-
-**Note:** HAWQ queries time out after a period of 600 seconds. For this reason, long-running queries may appear to hang until results are processed or until the timeout period expires.
-
--   **[About HAWQ Query Processing](../query/HAWQQueryProcessing.html)**
-
-    This topic provides an overview of how HAWQ processes queries. Understanding this process can be useful when writing and tuning queries.
-
--   **[About GPORCA](../query/gporca/query-gporca-optimizer.html)**
-
-    In HAWQ, you can use GPORCA or the legacy query optimizer.
-
--   **[Defining Queries](../query/defining-queries.html)**
-
-    HAWQ is based on the PostgreSQL implementation of the SQL standard. SQL commands are typically entered using the standard PostgreSQL interactive terminal `psql`, but other programs that have similar functionality can be used as well.
-
--   **[Using Functions and Operators](../query/functions-operators.html)**
-
-    HAWQ evaluates functions and operators used in SQL expressions.
-
--   **[Query Performance](../query/query-performance.html)**
-
-    HAWQ dynamically allocates resources to queries. Query performance depends on several factors, such as data locality, the number of virtual segments used for the query, and general cluster health.
-
--   **[Query Profiling](../query/query-profiling.html)**
-
-    Examine the query plans of poorly performing queries to identify possible performance tuning opportunities.
-
-
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/reference/CharacterSetSupportReference.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/CharacterSetSupportReference.html.md.erb b/reference/CharacterSetSupportReference.html.md.erb
deleted file mode 100644
index 8a12471..0000000
--- a/reference/CharacterSetSupportReference.html.md.erb
+++ /dev/null
@@ -1,439 +0,0 @@
----
-title: Character Set Support Reference
----
-
-This topic provides a reference of the character sets supported in HAWQ.
-
-The character set support in HAWQ allows you to store text in a variety of character sets, including single-byte character sets such as the ISO 8859 series and multiple-byte character sets such as EUC (Extended Unix Code), UTF-8, and Mule internal code. All supported character sets can be used transparently by clients, but a few are not supported for use within the server (that is, as a server-side encoding). The default character set is selected while initializing HAWQ using `hawq init`. It can be overridden when you create a database, so you can have multiple databases, each with a different character set.
-
-| Name           | Description                  | Language                       | Server | Bytes/Char | Aliases                               |
-|----------------|------------------------------|--------------------------------|--------|------------|---------------------------------------|
-| BIG5           | Big Five                     | Traditional Chinese            | No     | 1-2        | WIN950, Windows950                    |
-| EUC\_CN        | Extended UNIX Code-CN        | Simplified Chinese             | Yes    | 1-3        |                                       |
-| EUC\_JP        | Extended UNIX Code-JP        | Japanese                       | Yes    | 1-3        |                                       |
-| EUC\_KR        | Extended UNIX Code-KR        | Korean                         | Yes    | 1-3        |                                       |
-| EUC\_TW        | Extended UNIX Code-TW        | Traditional Chinese, Taiwanese | Yes    | 1-3        |                                       |
-| GB18030        | National Standard            | Chinese                        | No     | 1-2        |                                       |
-| GBK            | Extended National Standard   | Simplified Chinese             | No     | 1-2        | WIN936, Windows936                    |
-| ISO\_8859\_5   | ISO 8859-5, ECMA 113         | Latin/Cyrillic                 | Yes    | 1          |                                       |
-| ISO\_8859\_6   | ISO 8859-6, ECMA 114         | Latin/Arabic                   | Yes    | 1          |                                       |
-| ISO\_8859\_7   | ISO 8859-7, ECMA 118         | Latin/Greek                    | Yes    | 1          |                                       |
-| ISO\_8859\_8   | ISO 8859-8, ECMA 121         | Latin/Hebrew                   | Yes    | 1          |                                       |
-| JOHAB          | JOHA                         | Korean (Hangul)                | Yes    | 1-3        |                                       |
-| KOI8           | KOI8-R(U)                    | Cyrillic                       | Yes    | 1          | KOI8R                                 |
-| LATIN1         | ISO 8859-1, ECMA 94          | Western European               | Yes    | 1          | ISO88591                              |
-| LATIN2         | ISO 8859-2, ECMA 94          | Central European               | Yes    | 1          | ISO88592                              |
-| LATIN3         | ISO 8859-3, ECMA 94          | South European                 | Yes    | 1          | ISO88593                              |
-| LATIN4         | ISO 8859-4, ECMA 94          | North European                 | Yes    | 1          | ISO88594                              |
-| LATIN5         | ISO 8859-9, ECMA 128         | Turkish                        | Yes    | 1          | ISO88599                              |
-| LATIN6         | ISO 8859-10, ECMA 144        | Nordic                         | Yes    | 1          | ISO885910                             |
-| LATIN7         | ISO 8859-13                  | Baltic                         | Yes    | 1          | ISO885913                             |
-| LATIN8         | ISO 8859-14                  | Celtic                         | Yes    | 1          | ISO885914                             |
-| LATIN9         | ISO 8859-15                  | LATIN1 with Euro and accents   | Yes    | 1          | ISO885915                             |
-| LATIN10        | ISO 8859-16, ASRO SR 14111   | Romanian                       | Yes    | 1          | ISO885916                             |
-| MULE\_INTERNAL | Mule internal code           | Multilingual Emacs             | Yes    | 1-4        |                                       |
-| SJIS           | Shift JIS                    | Japanese                       | No     | 1-2        | Mskanji, ShiftJIS, WIN932, Windows932 |
-| SQL\_ASCII     | unspecified                  | any                            | No     | 1          |                                       |
-| UHC            | Unified Hangul Code          | Korean                         | No     | 1-2        | WIN949, Windows949                    |
-| UTF8           | Unicode, 8-bit               | all                            | Yes    | 1-4        | Unicode                               |
-| WIN866         | Windows CP866                | Cyrillic                       | Yes    | 1          | ALT                                   |
-| WIN874         | Windows CP874                | Thai                           | Yes    | 1          |                                       |
-| WIN1250        | Windows CP1250               | Central European               | Yes    | 1          |                                       |
-| WIN1251        | Windows CP1251               | Cyrillic                       | Yes    | 1          | WIN                                   |
-| WIN1252        | Windows CP1252               | Western European               | Yes    | 1          |                                       |
-| WIN1253        | Windows CP1253               | Greek                          | Yes    | 1          |                                       |
-| WIN1254        | Windows CP1254               | Turkish                        | Yes    | 1          |                                       |
-| WIN1255        | Windows CP1255               | Hebrew                         | Yes    | 1          |                                       |
-| WIN1256        | Windows CP1256               | Arabic                         | Yes    | 1          |                                       |
-| WIN1257        | Windows CP1257               | Baltic                         | Yes    | 1          |                                       |
-| WIN1258        | Windows CP1258               | Vietnamese                     | Yes    | 1          | ABC, TCVN, TCVN5712, VSCII            |
-
-**Note:**
-
--   Not all the APIs support all the listed character sets. For example, the JDBC driver does not support MULE\_INTERNAL, LATIN6, LATIN8, and LATIN10.
--   The SQL\_ASCII setting behaves considerably differently from the other settings. Byte values 0-127 are interpreted according to the ASCII standard, while byte values 128-255 are taken as uninterpreted characters. If you are working with any non-ASCII data, it is unwise to use the SQL\_ASCII setting as a client encoding. SQL\_ASCII is not supported as a server encoding.
-
-## <a id="settingthecharacterset"></a>Setting the Character Set
-
-`hawq init` defines the default character set for a HAWQ system by reading the setting of the ENCODING parameter in the gp\_init\_config file at initialization time. The default character set is UNICODE or UTF8.
-
-You can create a database with a different character set besides what is used as the system-wide default. For example:
-
-``` sql
-CREATE DATABASE korean WITH ENCODING 'EUC_KR';
-```
-
-**Note:** Although you can specify any encoding you want for a database, it is unwise to choose an encoding that is not what is expected by the locale you have selected. The LC\_COLLATE and LC\_CTYPE settings imply a particular encoding, and locale-dependent operations (such as sorting) are likely to misinterpret data that is in an incompatible encoding.
-
-Since these locale settings are frozen by `hawq init`, the apparent flexibility to use different encodings in different databases is more theoretical than real.
-
-One way to use multiple encodings safely is to set the locale to C or POSIX during initialization time, thus disabling any real locale awareness.
-
-## <a id="charactersetconversionbetweenserverandclient"></a>Character Set Conversion Between Server and Client
-
-HAWQ supports automatic character set conversion between server and client for certain character set combinations. The conversion information is stored in the master `pg_conversion` system catalog table. HAWQ comes with some predefined conversions, or you can create a new conversion using the SQL command `CREATE CONVERSION`.
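-
-As a sketch only, a new conversion could be declared as follows; the conversion name `myconv` is hypothetical, and `utf8_to_iso8859_1` is assumed to be one of the conversion functions available in the catalog:
-
-``` sql
--- Declare a conversion from UTF8 to LATIN1 backed by an existing conversion function.
-CREATE CONVERSION myconv FOR 'UTF8' TO 'LATIN1' FROM utf8_to_iso8859_1;
-```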
-
-| Server Character Set | Available Client Sets                                                                                                          |
-|----------------------|--------------------------------------------------------------------------------------------------------------------------------|
-| BIG5                 | not supported as a server encoding                                                                                             |
-| EUC\_CN              | EUC\_CN, MULE\_INTERNAL, UTF8                                                                                                  |
-| EUC\_JP              | EUC\_JP, MULE\_INTERNAL, SJIS, UTF8                                                                                            |
-| EUC\_KR              | EUC\_KR, MULE\_INTERNAL, UTF8                                                                                                  |
-| EUC\_TW              | EUC\_TW, BIG5, MULE\_INTERNAL, UTF8                                                                                            |
-| GB18030              | not supported as a server encoding                                                                                             |
-| GBK                  | not supported as a server encoding                                                                                             |
-| ISO\_8859\_5         | ISO\_8859\_5, KOI8, MULE\_INTERNAL, UTF8, WIN866, WIN1251                                                                      |
-| ISO\_8859\_6         | ISO\_8859\_6, UTF8                                                                                                             |
-| ISO\_8859\_7         | ISO\_8859\_7, UTF8                                                                                                             |
-| ISO\_8859\_8         | ISO\_8859\_8, UTF8                                                                                                             |
-| JOHAB                | JOHAB, UTF8                                                                                                                    |
-| KOI8                 | KOI8, ISO\_8859\_5, MULE\_INTERNAL, UTF8, WIN866, WIN1251                                                                      |
-| LATIN1               | LATIN1, MULE\_INTERNAL, UTF8                                                                                                   |
-| LATIN2               | LATIN2, MULE\_INTERNAL, UTF8, WIN1250                                                                                          |
-| LATIN3               | LATIN3, MULE\_INTERNAL, UTF8                                                                                                   |
-| LATIN4               | LATIN4, MULE\_INTERNAL, UTF8                                                                                                   |
-| LATIN5               | LATIN5, UTF8                                                                                                                   |
-| LATIN6               | LATIN6, UTF8                                                                                                                   |
-| LATIN7               | LATIN7, UTF8                                                                                                                   |
-| LATIN8               | LATIN8, UTF8                                                                                                                   |
-| LATIN9               | LATIN9, UTF8                                                                                                                   |
-| LATIN10              | LATIN10, UTF8                                                                                                                  |
-| MULE\_INTERNAL       | MULE\_INTERNAL, BIG5, EUC\_CN, EUC\_JP, EUC\_KR, EUC\_TW, ISO\_8859\_5, KOI8, LATIN1 to LATIN4, SJIS, WIN866, WIN1250, WIN1251 |
-| SJIS                 | not supported as a server encoding                                                                                             |
-| SQL\_ASCII           | not supported as a server encoding                                                                                             |
-| UHC                  | not supported as a server encoding                                                                                             |
-| UTF8                 | all supported encodings                                                                                                        |
-| WIN866               | WIN866                                                                                                                         |
-| WIN874               | WIN874, UTF8                                                                                                                   |
-| WIN1250              | WIN1250, LATIN2, MULE\_INTERNAL, UTF8                                                                                          |
-| WIN1251              | WIN1251, ISO\_8859\_5, KOI8, MULE\_INTERNAL, UTF8, WIN866                                                                      |
-| WIN1252              | WIN1252, UTF8                                                                                                                  |
-| WIN1253              | WIN1253, UTF8                                                                                                                  |
-| WIN1254              | WIN1254, UTF8                                                                                                                  |
-| WIN1255              | WIN1255, UTF8                                                                                                                  |
-| WIN1256              | WIN1256, UTF8                                                                                                                  |
-| WIN1257              | WIN1257, UTF8                                                                                                                  |
-| WIN1258              | WIN1258, UTF8                                                                                                                  |
-
-To enable automatic character set conversion, you must tell HAWQ the character set (encoding) you want to use in the client. There are several ways to accomplish this:
-
--   Using the \\encoding command in psql, which allows you to change client encoding on the fly.
--   Using `SET client_encoding TO`. You can set the client encoding with this SQL command:
-
-    ``` sql
-    SET CLIENT_ENCODING TO 'value';
-    ```
-
-    To query the current client encoding:
-
-    ``` sql
-    SHOW client_encoding;
-    ```
-
-    To return to the default encoding:
-
-    ``` sql
-    RESET client_encoding;
-    ```
-
--   Using the PGCLIENTENCODING environment variable. When PGCLIENTENCODING is defined in the client's environment, that client encoding is automatically selected when a connection to the server is made. (This can subsequently be overridden using any of the other methods mentioned above.)
--   Setting the configuration parameter client\_encoding. If client\_encoding is set in the master `hawq-site.xml` file, that client encoding is automatically selected when a connection to HAWQ is made. (This can subsequently be overridden using any of the other methods mentioned above.)
-
-If the conversion of a particular character is not possible (suppose you chose EUC\_JP for the server and LATIN1 for the client, and some Japanese characters have no representation in LATIN1), then an error is reported.
-
-If the client character set is defined as SQL\_ASCII, encoding conversion is disabled, regardless of the server's character set. The use of SQL\_ASCII is unwise unless you are working with all-ASCII data. SQL\_ASCII is not supported as a server encoding.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/reference/HAWQDataTypes.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/HAWQDataTypes.html.md.erb b/reference/HAWQDataTypes.html.md.erb
deleted file mode 100644
index fe5cff7..0000000
--- a/reference/HAWQDataTypes.html.md.erb
+++ /dev/null
@@ -1,139 +0,0 @@
----
-title: Data Types
----
-
-This topic provides a reference of the data types supported in HAWQ.
-
-HAWQ has a rich set of native data types available to users. Users may also define new data types using the `CREATE TYPE` command. This reference shows all of the built-in data types. In addition to the types listed here, there are also some internally used data types, such as **oid** (object identifier), but those are not documented in this guide.
-
-The following data types are specified by SQL:
-
--   array (*)
--   bit
--   bit varying
--   boolean
--   character varying
--   char
--   character
--   date
--   decimal
--   double precision
--   integer
--   interval
--   numeric
--   real
--   smallint
--   time (with or without time zone)
--   timestamp (with or without time zone)
--   varchar
-
-**Note**(\*): HAWQ supports the array data type for append-only tables; parquet table storage does *not* support the array type. 
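-
-As a minimal sketch of that distinction (the table names are illustrative, and the parquet `WITH` clause reflects the storage options assumed for your deployment):
-
-``` sql
--- Array columns work with the default append-only, row-oriented storage:
-CREATE TABLE tag_lists (id int, tags text[]);
-INSERT INTO tag_lists VALUES (1, ARRAY['red', 'green']);
-
--- A parquet-oriented table cannot include an array column, so a definition
--- like the following would be rejected:
--- CREATE TABLE tag_lists_pq (id int, tags text[])
---   WITH (appendonly=true, orientation=parquet);
-```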
-
-Each data type has an external representation determined by its input and output functions. Many of the built-in types have obvious external formats. However, several types are unique to HAWQ, such as geometric paths, or have several possibilities for formats, such as the date and time types. Some of the input and output functions are not invertible. That is, the result of an output function may lose accuracy when compared to the original input.
-
- 
- <span class="tablecap">Table 1. HAWQ Built-in Data Types</span>
- 
-
-| Name                                        | Alias               | Size                   | Range                                        | Description                                                                       |
-|---------------------------------------------|---------------------|------------------------|----------------------------------------------|-----------------------------------------------------------------------------------|
-| array                                       | \[ \]               | variable (ignored)     | multi-dimensional                            | any built-in or user-defined base type, enum type, or composite type              |
-| bigint                                      | int8                | 8 bytes                | -9223372036854775808 to 9223372036854775807 | large range integer                                                                |
-| bigserial                                   | serial8             | 8 bytes                | 1 to 9223372036854775807                     | large autoincrementing integer                                                     |
-| bit \[ (n) \]                               |                     | n bits                 | bit string constant                          | fixed-length bit string                                                            |
-| bit varying \[ (n) \]                       | varbit              | actual number of bits  | bit string constant                          | variable-length bit string                                                         |
-| boolean                                     | bool                | 1 byte                 | true/false, t/f, yes/no, y/n, 1/0            | logical Boolean (true/false)                                                       |
-| box                                         |                     | 32 bytes               | ((x1,y1),(x2,y2))                            | rectangular box in the plane - not allowed in distribution key columns.            |
-| bytea                                       |                     | 1 byte + binary string | sequence of octets                           | variable-length binary string                                                      |
-| character \[ (n) \]                         | char \[ (n) \]      | 1 byte + n             | strings up to n characters in length         | fixed-length, blank padded                                                         |
-| character varying \[ (n) \]                 | varchar \[ (n) \]   | 1 byte + string size   | strings up to n characters in length         | variable-length with limit                                                         |
-| cidr                                        |                     | 12 or 24 bytes         |                                              | IPv4 networks                                                                      |
-| circle                                      |                     | 24 bytes               | &lt;(x,y),r&gt; (center and radius)          | circle in the plane - not allowed in distribution key columns.                     |
-| date                                        |                     | 4 bytes                | 4713 BC - 294,277 AD                         | calendar date (year, month, day)                                                   |
-| decimal \[ (p, s) \]                        | numeric \[ (p,s) \] | variable               | no limit                                     | user-specified, inexact                                                            |
-| double precision                            | float8, float       | 8 bytes                | 15 decimal digits precision                  | variable-precision, inexact                                                        |
-| inet                                        |                     | 12 or 24 bytes         |                                              | IPv4 hosts and networks                                                            |
-| integer                                     | int, int4           | 4 bytes                | -2147483648 to +2147483647                   | usual choice for integer                                                           |
-| interval \[ (p) \]                          |                     | 12 bytes               | -178000000 years - 178000000 years           | time span                                                                          |
-| lseg                                        |                     | 32 bytes               | ((x1,y1),(x2,y2))                            | line segment in the plane - not allowed in distribution key columns.               |
-| macaddr                                     |                     | 6 bytes                |                                              | MAC addresses                                                                      |
-| money                                       |                     | 4 bytes                | -21474836.48 to +21474836.47                 | currency amount                                                                    |
-| path                                        |                     | 16+16n bytes           | \[(x1,y1),...\]                              | geometric path in the plane - not allowed in distribution key columns.             |
-| point                                       |                     | 16 bytes               | (x, y)                                       | geometric point in the plane - not allowed in distribution key columns.            |
-| polygon                                     |                     | 40+16n bytes           | \[(x1,y1),...\]                              | closed geometric path in the plane - not allowed in distribution key columns.      |
-| real                                        | float4              | 4 bytes                | 6 decimal digits precision                   | variable-precision, inexact                                                        |
-| serial                                      | serial4             | 4 bytes                | 1 to 2147483647                              | autoincrementing integer                                                           |
-| smallint                                    | int2                | 2 bytes                | -32768 to +32767                             | small range integer                                                                |
-| text                                        |                     | 1 byte + string size   | strings of any length                        | variable unlimited length                                                          |
-| time \[ (p) \] \[ without time zone \]      |                     | 8 bytes                | 00:00:00\[.000000\] - 24:00:00\[.000000\]    | time of day only                                                                   |
-| time \[ (p) \] with time zone               | timetz              | 12 bytes               | 00:00:00+1359 - 24:00:00-1359                | time of day only, with time zone                                                   |
-| timestamp \[ (p) \] \[ without time zone \] |                     | 8 bytes                | 4713 BC - 294,277 AD                         | both date and time                                                                 |
-| timestamp \[ (p) \] with time zone          | timestamptz         | 8 bytes                | 4713 BC - 294,277 AD                         | both date and time, with time zone                                                 |
-| xml                                         |                     | 1 byte + xml size      | xml of any length                            | variable unlimited length                                                          |
-
- 
-For variable-length data types (such as char, varchar, text, and xml), if the data is greater than or equal to 127 bytes, the storage overhead is 4 bytes instead of 1.
-
-**Note**: Use these documented built-in types when creating user tables.  Any other data types that might be visible in the source code are for internal use only.
-
-## <a id="timezones"></a>Time Zones
-
-Time zones, and time-zone conventions, are influenced by political decisions, not just earth geometry. Time zones around the world became somewhat standardized during the 1900s, but continue to be prone to arbitrary changes, particularly with respect to daylight-savings rules. HAWQ uses the widely-used zoneinfo time zone database for information about historical time zone rules. For times in the future, the assumption is that the latest known rules for a given time zone will continue to be observed indefinitely far into the future.
-
-HAWQ is compatible with the SQL standard definitions for typical usage. However, the SQL standard has an odd mix of date and time types and capabilities. Two obvious problems are:
-
--   Although the date type cannot have an associated time zone, the time type can. Time zones in the real world have little meaning unless associated with a date as well as a time, since the offset can vary through the year with daylight-saving time boundaries.
--   The default time zone is specified as a constant numeric offset from UTC. It is therefore impossible to adapt to daylight-saving time when doing date/time arithmetic across DST boundaries.
-
-To address these difficulties, use date/time types that contain both date and time when using time zones. Do not use the type time with time zone (although HAWQ supports this for legacy applications and for compliance with the SQL standard). HAWQ assumes your local time zone for any type containing only date or time.
-
-All timezone-aware dates and times are stored internally in UTC. They are converted to local time in the zone specified by the timezone configuration parameter before being displayed to the client.
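-
-For example (the zone names below are only examples), the session time zone controls how a stored UTC value is displayed, and the `AT TIME ZONE` operator converts a value explicitly:
-
-``` sql
--- Display values in a chosen session time zone.
-SET timezone TO 'America/New_York';
-SELECT now();
-
--- Convert a timestamp with time zone to another zone explicitly.
-SELECT TIMESTAMP WITH TIME ZONE '2016-07-04 12:00+00' AT TIME ZONE 'PST';
-```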
-
-HAWQ allows you to specify time zones in three different forms:
-
--   A full time zone name, for example America/New\_York. HAWQ uses the widely-used zoneinfo time zone data for this purpose, so the same names are also recognized by much other software.
--   A time zone abbreviation, for example PST. Such a specification merely defines a particular offset from UTC, in contrast to full time zone names which can imply a set of daylight savings transition-date rules as well. You cannot set the configuration parameters timezone or log\_timezone to a time zone abbreviation, but you can use abbreviations in date/time input values and with the AT TIME ZONE operator.
--   In addition to the timezone names and abbreviations, HAWQ accepts POSIX-style time zone specifications of the form STDoffset or STDoffsetDST, where STD is a zone abbreviation, offset is a numeric offset in hours west from UTC, and DST is an optional daylight-savings zone abbreviation, assumed to stand for one hour ahead of the given offset. For example, if EST5EDT were not already a recognized zone name, it would be accepted and would be functionally equivalent to United States East Coast time. When a daylight-savings zone name is present, it is assumed to be used according to the same daylight-savings transition rules used in the zoneinfo time zone database's posixrules entry. In a standard HAWQ installation, posixrules is the same as US/Eastern, so that POSIX-style time zone specifications follow USA daylight-savings rules. If needed, you can adjust this behavior by replacing the posixrules file.
-
-In short, this is the difference between abbreviations and full names: abbreviations always represent a fixed offset from UTC, whereas most of the full names imply a local daylight-savings time rule, and so have two possible UTC offsets.
-
-One should be wary that the POSIX-style time zone feature can lead to silently accepting bogus input, since there is no check on the reasonableness of the zone abbreviations. For example, SET TIMEZONE TO FOOBAR0 will work, leaving the system effectively using a rather peculiar abbreviation for UTC. Another issue to keep in mind is that in POSIX time zone names, positive offsets are used for locations west of Greenwich. Everywhere else, PostgreSQL follows the ISO-8601 convention that positive timezone offsets are east of Greenwich.
-
-In all cases, timezone names are recognized case-insensitively.
-
-Neither full names nor abbreviations are hard-wired into the server; see [Date and Time Configuration Files](#dateandtimeconfigurationfiles).
-
-The timezone configuration parameter can be set in the file `hawq-site.xml`. There are also several special ways to set it:
-
--   If timezone is not specified in `hawq-site.xml` or as a server command-line option, the server attempts to use the value of the TZ environment variable as the default time zone. If TZ is not defined or is not any of the time zone names known to PostgreSQL, the server attempts to determine the operating system's default time zone by checking the behavior of the C library function localtime(). The default time zone is selected as the closest match from the known time zones.
--   The SQL command SET TIME ZONE sets the time zone for the session. This is an alternative spelling of SET TIMEZONE TO with a more SQL-spec-compatible syntax.
--   The PGTZ environment variable is used by libpq clients to send a SET TIME ZONE command to the server upon connection.
-
-## <a id="dateandtimeconfigurationfiles"></a>Date and Time Configuration Files
-
-Since timezone abbreviations are not well standardized, HAWQ provides a means to customize the set of abbreviations accepted by the server. The timezone\_abbreviations run-time parameter determines the active set of abbreviations. While this parameter can be altered by any database user, the possible values for it are under the control of the database administrator; they are in fact names of configuration files stored in .../share/timezonesets/ of the installation directory. By adding or altering files in that directory, the administrator can set local policy for timezone abbreviations.
-
-timezone\_abbreviations can be set to any file name found in .../share/timezonesets/, if the file's name is entirely alphabetic. (The prohibition against non-alphabetic characters in timezone\_abbreviations prevents reading files outside the intended directory, as well as reading editor backup files and other extraneous files.)
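-
-For example, switching the active abbreviation set to the Australia file could look like this (a sketch; the named file must exist under .../share/timezonesets/):
-
-``` sql
-SET timezone_abbreviations TO 'Australia';
-```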
-
-A timezone abbreviation file can contain blank lines and comments beginning with \#. Non-comment lines must have one of these formats:
-
-``` pre
-time_zone_name offset
-time_zone_name offset D
-@INCLUDE file_name
-@OVERRIDE
-```
-
-A time\_zone\_name is just the abbreviation being defined. The offset is the zone's offset in seconds from UTC, positive being east from Greenwich and negative being west. For example, -18000 would be five hours west of Greenwich, or North American east coast standard time. D indicates that the zone name represents local daylight-savings time rather than standard time. Since all known time zone offsets are on 15 minute boundaries, the number of seconds has to be a multiple of 900.
-
-The @INCLUDE syntax allows inclusion of another file in the .../share/timezonesets/ directory. Inclusion can be nested, to a limited depth.
-
-The @OVERRIDE syntax indicates that subsequent entries in the file can override previous entries (i.e., entries obtained from included files). Without this, conflicting definitions of the same timezone abbreviation are considered an error.
-
-In an unmodified installation, the file Default contains all the non-conflicting time zone abbreviations for most of the world. Additional files Australia and India are provided for those regions: these files first include the Default file and then add or modify timezones as needed.
-
-For reference purposes, a standard installation also contains files Africa.txt, America.txt, and so on, containing information about every time zone abbreviation known to be in use according to the zoneinfo timezone database. The zone name definitions found in these files can be copied and pasted into a custom configuration file as needed.
-
-**Note:** These files cannot be directly referenced as timezone\_abbreviations settings, because of the dot embedded in their names.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/reference/HAWQEnvironmentVariables.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/HAWQEnvironmentVariables.html.md.erb b/reference/HAWQEnvironmentVariables.html.md.erb
deleted file mode 100644
index ce21798..0000000
--- a/reference/HAWQEnvironmentVariables.html.md.erb
+++ /dev/null
@@ -1,97 +0,0 @@
----
-title: Environment Variables
----
-
-This topic contains a reference of the environment variables that you set for HAWQ.
-
-Set these in your user's startup shell profile (such as `~/.bashrc` or `~/.bash_profile`), or in `/etc/profile` if you want to set them for all users.
-
-## <a id="requiredenvironmentvariables"></a>Required Environment Variables
-
-**Note:** `GPHOME`, `PATH` and `LD_LIBRARY_PATH` can be set by sourcing the `greenplum_path.sh` file from your HAWQ installation directory.
-
-### <a id="gphome"></a>GPHOME
-
-This is the installed location of your HAWQ software. For example:
-
-``` pre
-GPHOME=/usr/local/hawq  
-export GPHOME
-```
-
-### <a id="path"></a>PATH
-
-Your `PATH` environment variable should point to the location of the HAWQ bin directory. For example:
-
-``` pre
-PATH=$GPHOME/bin:$PATH
-export PATH 
-```
-
-### <a id="ld_library_path"></a>LD\_LIBRARY\_PATH
-
-The `LD_LIBRARY_PATH` environment variable should point to the location of the `HAWQ/PostgreSQL` library files. For example:
-
-``` pre
-LD_LIBRARY_PATH=$GPHOME/lib
-export LD_LIBRARY_PATH
-```
-
-## <a id="optionalenvironmentvariables"></a>Optional Environment Variables
-
-The following are HAWQ environment variables. You may want to add the connection-related environment variables to your profile, for convenience. That way, you do not have to type so many options on the command line for client connections. Note that these environment variables should be set on the HAWQ master host only.
-
-
-### <a id="pgappname"></a>PGAPPNAME
-
-The name of the application, usually set by the application itself when it connects to the server. This name is displayed in the activity view and in log entries. The `PGAPPNAME` environment variable behaves the same as the `application_name` connection parameter. The default value for `application_name` is `psql`. The name cannot be longer than 63 characters.
-
-### <a id="pgdatabase"></a>PGDATABASE
-
-The name of the default database to use when connecting.
-
-### <a id="pghost"></a>PGHOST
-
-The HAWQ master host name.
-
-### <a id="pghostaddr"></a>PGHOSTADDR
-
-The numeric IP address of the master host. This can be set instead of, or in addition to, `PGHOST`, to avoid DNS lookup overhead.
-
-### <a id="pgpassword"></a>PGPASSWORD
-
-The password used if the server demands password authentication. Use of this environment variable is not recommended, for security reasons (some operating systems allow non-root users to see process environment variables via ps). Instead, consider using the `~/.pgpass` file.
-
-### <a id="pgpassfile"></a>PGPASSFILE
-
-The name of the password file to use for lookups. If not set, it defaults to `~/.pgpass`.
-
-See "The Password File" under [Configuring Client Authentication](../clientaccess/client_auth.html).
-
-### <a id="pgoptions"></a>PGOPTIONS
-
-Sets additional configuration parameters for the HAWQ master server.
-
-### <a id="pgport"></a>PGPORT
-
-The port number of the HAWQ server on the master host. The default port is 5432.
-
-### <a id="pguser"></a>PGUSER
-
-The HAWQ user name used to connect.
-
-### <a id="pgdatestyle"></a>PGDATESTYLE
-
-Sets the default style of date/time representation for a session. (Equivalent to `SET datestyle TO ...`.)
-
-### <a id="pgtz"></a>PGTZ
-
-Sets the default time zone for a session. (Equivalent to `SET timezone TO ...`.)
-
-### <a id="pgclientencoding"></a>PGCLIENTENCODING
-
-Sets the default client character set encoding for a session. (Equivalent to `SET client_encoding TO ...`.)
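-
-For reference, the session-level equivalents of `PGDATESTYLE`, `PGTZ`, and `PGCLIENTENCODING` look like this (the values shown are only examples):
-
-``` sql
-SET datestyle TO 'ISO, MDY';
-SET timezone TO 'UTC';
-SET client_encoding TO 'UTF8';
-```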
-
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/reference/HAWQSampleSiteConfig.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/HAWQSampleSiteConfig.html.md.erb b/reference/HAWQSampleSiteConfig.html.md.erb
deleted file mode 100644
index d4cae5a..0000000
--- a/reference/HAWQSampleSiteConfig.html.md.erb
+++ /dev/null
@@ -1,120 +0,0 @@
----
-title: Sample hawq-site.xml Configuration File
----
-
-```xml
-<configuration>
-        <property>
-                <name>default_hash_table_bucket_number</name>
-                <value>18</value>
-        </property>
-
-        <property>
-                <name>hawq_dfs_url</name>
-                <value>hawq.example.com:8020/hawq_default</value>
-        </property>
-
-        <property>
-                <name>hawq_global_rm_type</name>
-                <value>none</value>
-        </property>
-
-        <property>
-                <name>hawq_master_address_host</name>
-                <value>hawq.example.com</value>
-        </property>
-
-        <property>
-                <name>hawq_master_address_port</name>
-                <value>5432</value>
-        </property>
-
-        <property>
-                <name>hawq_master_directory</name>
-                <value>/data/hawq/master</value>
-        </property>
-
-        <property>
-                <name>hawq_master_temp_directory</name>
-                <value>/tmp/hawq/master</value>
-        </property>
-
-        <property>
-                <name>hawq_re_cgroup_hierarchy_name</name>
-                <value>hawq</value>
-        </property>
-
-        <property>
-                <name>hawq_re_cgroup_mount_point</name>
-                <value>/sys/fs/cgroup</value>
-        </property>
-
-        <property>
-                <name>hawq_re_cpu_enable</name>
-                <value>false</value>
-        </property>
-
-        <property>
-                <name>hawq_rm_memory_limit_perseg</name>
-                <value>64GB</value>
-        </property>
-
-        <property>
-                <name>hawq_rm_nvcore_limit_perseg</name>
-                <value>16</value>
-        </property>
-
-        <property>
-                <name>hawq_rm_nvseg_perquery_limit</name>
-                <value>512</value>
-        </property>
-
-        <property>
-                <name>hawq_rm_nvseg_perquery_perseg_limit</name>
-                <value>6</value>
-        </property>
-
-        <property>
-                <name>hawq_rm_yarn_address</name>
-                <value>rm.example.com:8050</value>
-        </property>
-
-        <property>
-                <name>hawq_rm_yarn_app_name</name>
-                <value>hawq</value>
-        </property>
-
-        <property>
-                <name>hawq_rm_yarn_queue_name</name>
-                <value>default</value>
-        </property>
-
-        <property>
-                <name>hawq_rm_yarn_scheduler_address</name>
-                <value>rm.example.com:8030</value>
-        </property>
-
-        <property>
-                <name>hawq_segment_address_port</name>
-                <value>40000</value>
-        </property>
-
-        <property>
-                <name>hawq_segment_directory</name>
-                <value>/data/hawq/segment</value>
-        </property>
-
-        <property>
-                <name>hawq_segment_temp_directory</name>
-                <value>/tmp/hawq/segment</value>
-        </property>
-
-        <property>
-                <name>hawq_standby_address_host</name>
-                <value>standbyhost.example.com</value>
-        </property>
-
-</configuration>
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/reference/HAWQSiteConfig.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/HAWQSiteConfig.html.md.erb b/reference/HAWQSiteConfig.html.md.erb
deleted file mode 100644
index 3d20297..0000000
--- a/reference/HAWQSiteConfig.html.md.erb
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: Server Configuration Parameter Reference
----
-
-This section describes the server configuration parameters (GUCs) that are available in HAWQ.
-
-Configuration parameters are located in `$GPHOME/etc/hawq-site.xml`. This configuration file resides on all HAWQ instances and is managed either by Ambari or by using the `hawq config` utility. On HAWQ clusters installed and managed by Ambari, always use the Ambari administration interface, and not `hawq config`, to configure HAWQ properties; Ambari will overwrite any changes made using `hawq config`.
-
-You can use the same configuration file cluster-wide across both master and segments.
-
-**Note:** While `postgresql.conf` still exists in HAWQ, any parameters defined in `hawq-site.xml` will overwrite configurations in `postgresql.conf`. For this reason, we recommend that you only use `hawq-site.xml` to configure your HAWQ cluster.
-
-**Note:** If you install and manage HAWQ using Ambari, be aware that any property changes to `hawq-site.xml` made using the command line could be overwritten by Ambari. For Ambari-managed HAWQ clusters, always use the Ambari administration interface to set or change HAWQ configuration properties.
-
--   **[About Server Configuration Parameters](../reference/guc/guc_config.html)**
-
--   **[Configuration Parameter Categories](../reference/guc/guc_category-list.html)**
-
--   **[Configuration Parameters](../reference/guc/parameter_definitions.html)**
-
--   **[Sample hawq-site.xml Configuration File](../reference/HAWQSampleSiteConfig.html)**
-
-


[28/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/query/defining-queries.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/query/defining-queries.html.md.erb b/markdown/query/defining-queries.html.md.erb
new file mode 100644
index 0000000..b796511
--- /dev/null
+++ b/markdown/query/defining-queries.html.md.erb
@@ -0,0 +1,528 @@
+---
+title: Defining Queries
+---
+
+HAWQ is based on the PostgreSQL implementation of the SQL standard. SQL commands are typically entered using the standard PostgreSQL interactive terminal `psql`, but other programs that have similar functionality can be used as well.
+
+
+## <a id="topic3"></a>SQL Lexicon
+
+SQL is a standard language for accessing databases. The language consists of elements that enable data storage, retrieval, analysis, viewing, and so on. You use SQL commands to construct queries and commands that the HAWQ engine understands.
+
+SQL queries consist of a sequence of commands. Commands consist of a sequence of valid tokens in correct syntax order, terminated by a semicolon (`;`).
+
+HAWQ uses PostgreSQL's structure and syntax, with some exceptions. For more information about SQL rules and concepts in PostgreSQL, see "SQL Syntax" in the PostgreSQL documentation.
+
+## <a id="topic4"></a>SQL Value Expressions
+
+SQL value expressions consist of one or more values, symbols, operators, SQL functions, and data. The expressions compare data or perform calculations and return a value as the result. Calculations include logical, arithmetic, and set operations.
+
+The following are value expressions:
+
+-   Aggregate expressions
+-   Array constructors
+-   Column references
+-   Constant or literal values
+-   Correlated subqueries
+-   Field selection expressions
+-   Function calls
+-   New column values in an `INSERT`
+-   Operator invocations
+-   Positional parameter references, in the body of a function definition or prepared statement
+-   Row constructors
+-   Scalar subqueries
+-   Search conditions in a `WHERE` clause
+-   Target lists of a `SELECT` command
+-   Type casts
+-   Value expressions in parentheses, useful to group sub-expressions and override precedence
+-   Window expressions
+
+SQL constructs such as functions and operators are expressions but do not follow any general syntax rules. For more information about these constructs, see [Using Functions and Operators](functions-operators.html#topic26).
+
+### <a id="topic5"></a>Column References
+
+A column reference has the form:
+
+```
+correlation.columnname
+```
+
+Here, `correlation` is the name of a table (possibly qualified with a schema name) or an alias for a table defined with a `FROM` clause or one of the keywords `NEW` or `OLD`. `NEW` and `OLD` can appear only in rewrite rules, but you can use other correlation names in any SQL statement. If the column name is unique across all tables in the query, you can omit the "`correlation.`" part of the column reference.
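+
+For illustration, the following minimal sketch assumes a hypothetical table named `sales` with a `region` column; the qualified and unqualified forms refer to the same column because the name is unambiguous:
+
+``` sql
+-- Qualified and unqualified references to the same column
+SELECT sales.region, region
+FROM sales;
+```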
+
+### <a id="topic6"></a>Positional Parameters
+
+Positional parameters are arguments to SQL statements or functions that you reference by their positions in a series of arguments. For example, `$1` refers to the first argument, `$2` to the second argument, and so on. The values of positional parameters are set from arguments external to the SQL statement or supplied when SQL functions are invoked. Some client libraries support specifying data values separately from the SQL command, in which case parameters refer to the out-of-line data values. A parameter reference has the form:
+
+```
+$number
+```
+
+For example:
+
+``` pre
+CREATE FUNCTION dept(text) RETURNS dept
+    AS $$ SELECT * FROM dept WHERE name = $1 $$
+    LANGUAGE SQL;
+```
+
+Here, the `$1` references the value of the first function argument whenever the function is invoked.
+
+### <a id="topic7"></a>Subscripts
+
+If an expression yields a value of an array type, you can extract a specific element of the array value as follows:
+
+``` pre
+expression[subscript]
+```
+
+You can extract multiple adjacent elements, called an array slice, as follows (including the brackets):
+
+``` pre
+expression[lower_subscript:upper_subscript]
+```
+
+Each subscript is an expression and yields an integer value.
+
+Array expressions usually must be in parentheses, but you can omit the parentheses when the expression to be subscripted is a column reference or positional parameter. You can concatenate multiple subscripts when the original array is multidimensional. For example (including the parentheses):
+
+``` pre
+mytable.arraycolumn[4]
+```
+
+``` pre
+mytable.two_d_column[17][34]
+```
+
+``` pre
+$1[10:42]
+```
+
+``` pre
+(arrayfunction(a,b))[42]
+```
+
+### <a id="topic8"></a>Field Selections
+
+If an expression yields a value of a composite type (row type), you can extract a specific field of the row as follows:
+
+```
+expression.fieldname
+```
+
+The row expression usually must be in parentheses, but you can omit these parentheses when the expression to be selected from is a table reference or positional parameter. For example:
+
+``` pre
+mytable.mycolumn
+```
+
+``` pre
+$1.somecolumn
+```
+
+``` pre
+(rowfunction(a,b)).col3
+```
+
+A qualified column reference is a special case of field selection syntax.
+
+### <a id="topic9"></a>Operator Invocations
+
+Operator invocations have the following possible syntaxes:
+
+``` pre
+expression operator expression    (binary infix operator)
+```
+
+``` pre
+operator expression    (unary prefix operator)
+```
+
+``` pre
+expression operator    (unary postfix operator)
+```
+
+Where *operator* is an operator token, one of the key words `AND`, `OR`, or `NOT`, or a qualified operator name in the form:
+
+``` pre
+OPERATOR(schema.operatorname)
+```
+
+Available operators and whether they are unary or binary depends on the operators that the system or user defines. For more information about built-in operators, see [Built-in Functions and Operators](functions-operators.html#topic29).
+
+### <a id="topic10"></a>Function Calls
+
+The syntax for a function call is the name of a function (possibly qualified with a schema name), followed by its argument list enclosed in parentheses:
+
+``` pre
+function ([expression [, expression ... ]])
+```
+
+For example, the following function call computes the square root of 2:
+
+``` pre
+sqrt(2)
+```
+
+### <a id="topic11"></a>Aggregate Expressions
+
+An aggregate expression applies an aggregate function across the rows that a query selects. An aggregate function performs a calculation on a set of values and returns a single value, such as the sum or average of the set of values. The syntax of an aggregate expression is one of the following:
+
+-   `aggregate_name (expression [ , ... ] ) [FILTER (WHERE condition)]` - operates across all input rows for which the expected result value is non-null. `ALL` is the default.
+-   `aggregate_name (ALL expression [ , ... ] ) [FILTER (WHERE condition)]` - operates identically to the first form because `ALL` is the default.
+-   `aggregate_name (DISTINCT expression [ , ... ] ) [FILTER (WHERE condition)]` - operates across all distinct non-null values of input rows.
+-   `aggregate_name (*) [FILTER (WHERE condition)]` - operates on all rows with values both null and non-null. Generally, this form is most useful for the `count(*)` aggregate function.
+
+Where *aggregate\_name* is a previously defined aggregate (possibly schema-qualified) and *expression* is any value expression that does not contain an aggregate expression.
+
+For example, `count(*)` yields the total number of input rows, `count(f1)` yields the number of input rows in which `f1` is non-null, and `count(distinct f1)` yields the number of distinct non-null values of `f1`.
+
+You can specify a condition with the `FILTER` clause to limit the input rows to the aggregate function. For example:
+
+``` sql
+SELECT count(*) FILTER (WHERE gender='F') FROM employee;
+```
+
+The `WHERE condition` of the `FILTER` clause cannot contain a set-returning function, subquery, window function, or outer reference. If you use a user-defined aggregate function, declare the state transition function as `STRICT` (see `CREATE AGGREGATE`).
+
+For predefined aggregate functions, see [Built-in Functions and Operators](functions-operators.html#topic29). You can also add custom aggregate functions.
+
+HAWQ provides the `MEDIAN` aggregate function, which returns the fiftieth percentile of the `PERCENTILE_CONT` result, and special aggregate expressions for inverse distribution functions, as follows:
+
+``` sql
+PERCENTILE_CONT(percentage) WITHIN GROUP (ORDER BY expression)
+```
+
+``` sql
+PERCENTILE_DISC(percentage) WITHIN GROUP (ORDER BY expression)
+```
+
+Currently you can use only these two expressions with the keyword `WITHIN GROUP`.
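+
+As an illustration, the following example assumes a hypothetical `employees` table with a `salary` column:
+
+``` sql
+-- Median salary via the continuous and discrete inverse distribution functions
+SELECT PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY salary) AS median_cont,
+       PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY salary) AS median_disc
+FROM employees;
+```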
+
+#### <a id="topic12"></a>Limitations of Aggregate Expressions
+
+The following are current limitations of the aggregate expressions:
+
+-   HAWQ does not support the following keywords: ALL, DISTINCT, FILTER and OVER. See [Advanced Aggregate Functions](functions-operators.html#topic31__in2073121) for more details.
+-   An aggregate expression can appear only in the result list or HAVING clause of a SELECT command. It is forbidden in other clauses, such as WHERE, because those clauses are logically evaluated before the results of aggregates form. This restriction applies to the query level to which the aggregate belongs.
+-   When an aggregate expression appears in a subquery, the aggregate is normally evaluated over the rows of the subquery. If the aggregate's arguments contain only outer-level variables, the aggregate belongs to the nearest such outer level and evaluates over the rows of that query. The aggregate expression as a whole is then an outer reference for the subquery in which it appears, and the aggregate expression acts as a constant over any one evaluation of that subquery. See [Scalar Subqueries](#topic15) and [Built-in functions and operators](functions-operators.html#topic29__in204913).
+-   HAWQ does not support DISTINCT with multiple input expressions.
+
+### <a id="topic13"></a>Window Expressions
+
+Window expressions allow application developers to more easily compose complex online analytical processing (OLAP) queries using standard SQL commands. For example, with window expressions, users can calculate moving averages or sums over various intervals, reset aggregations and ranks as selected column values change, and express complex ratios in simple terms.
+
+A window expression represents the application of a *window function* to a *window frame*, which is defined in a special `OVER()` clause. A window partition is a set of rows that are grouped together to apply a window function. Unlike aggregate functions, which return a result value for each group of rows, window functions return a result value for every row, but that value is calculated with respect to the rows in a particular window partition. If no partition is specified, the window function is computed over the complete intermediate result set.
+
+The syntax of a window expression is:
+
+``` pre
+window_function ( [expression [, ...]] ) OVER ( window_specification )
+```
+
+Where *`window_function`* is one of the functions listed in [Window functions](functions-operators.html#topic30__in164369), *`expression`* is any value expression that does not contain a window expression, and *`window_specification`* is:
+
+```
+[window_name]
+[PARTITION BY expression [, ...]]
+[ORDER BY expression [ASC | DESC | USING operator] [, ...]
+    [{RANGE | ROWS}
+       { UNBOUNDED PRECEDING
+       | expression PRECEDING
+       | CURRENT ROW
+       | BETWEEN window_frame_bound AND window_frame_bound }]]
+```
+
+and where `window_frame_bound` can be one of:
+
+``` 
+    UNBOUNDED PRECEDING
+    expression PRECEDING
+    CURRENT ROW
+    expression FOLLOWING
+    UNBOUNDED FOLLOWING
+```
+
+A window expression can appear only in the select list of a `SELECT` command. For example:
+
+``` sql
+SELECT count(*) OVER(PARTITION BY customer_id), * FROM sales;
+```
+
+The `OVER` clause differentiates window functions from other aggregate or reporting functions. The `OVER` clause defines the *`window_specification`* to which the window function is applied. A window specification has the following characteristics:
+
+-   The `PARTITION BY` clause defines the window partitions to which the window function is applied. If omitted, the entire result set is treated as one partition.
+-   The `ORDER BY` clause defines the expression(s) for sorting rows within a window partition. The `ORDER BY` clause of a window specification is separate and distinct from the `ORDER BY` clause of a regular query expression. The `ORDER BY` clause is required for the window functions that calculate rankings, as it identifies the measure(s) for the ranking values. For OLAP aggregations, the `ORDER BY` clause is required to use window frames (the `ROWS` | `RANGE` clause).
+
+**Note:** Columns of data types without a coherent ordering, such as `time`, are not good candidates for use in the `ORDER BY` clause of a window specification. `Time`, with or without a specified time zone, lacks a coherent ordering because addition and subtraction do not have the expected effects. For example, the following is not generally true: `x::time < x::time + '2 hour'::interval`
+
+-   The `ROWS/RANGE` clause defines a window frame for aggregate (non-ranking) window functions. A window frame defines a set of rows within a window partition. When a window frame is defined, the window function computes on the contents of this moving frame rather than the fixed contents of the entire window partition. Window frames are row-based (`ROWS`) or value-based (`RANGE`).
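+
+The following sketch assumes a hypothetical `sales` table with `customer_id`, `sale_date`, and `amount` columns; it uses a row-based window frame to compute a moving sum within each partition:
+
+``` sql
+-- Moving sum over the current row and the two preceding rows,
+-- per customer, ordered by sale date
+SELECT customer_id, sale_date, amount,
+       sum(amount) OVER (PARTITION BY customer_id
+                         ORDER BY sale_date
+                         ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS moving_sum
+FROM sales;
+```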
+
+### <a id="topic14"></a>Type Casts
+
+A type cast specifies a conversion from one data type to another. HAWQ accepts two equivalent syntaxes for type casts:
+
+``` sql
+CAST ( expression AS type )
+expression::type
+```
+
+The `CAST` syntax conforms to SQL; the syntax with `::` is historical PostgreSQL usage.
+
+A cast applied to a value expression of a known type is a run-time type conversion. The cast succeeds only if a suitable type conversion function is defined. This differs from the use of casts with constants. A cast applied to a string literal represents the initial assignment of a type to a literal constant value, so it succeeds for any type if the contents of the string literal are acceptable input syntax for the data type.
+
+You can usually omit an explicit type cast if there is no ambiguity about the type a value expression must produce; for example, when it is assigned to a table column, the system automatically applies a type cast. The system applies automatic casting only to casts marked "OK to apply implicitly" in system catalogs. Other casts must be invoked with explicit casting syntax to prevent unexpected conversions from being applied without the user's knowledge.
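+
+For example, both of the following produce a `date` value from the same string literal:
+
+``` sql
+SELECT CAST('2017-01-06' AS date);
+SELECT '2017-01-06'::date;
+```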
+
+### <a id="topic15"></a>Scalar Subqueries
+
+A scalar subquery is a `SELECT` query in parentheses that returns exactly one row with one column. Do not use a `SELECT` query that returns multiple rows or columns as a scalar subquery. The query runs and uses the returned value in the surrounding value expression. A correlated scalar subquery contains references to the outer query block.
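+
+For example, assuming a hypothetical `employees` table, the scalar subquery below returns a single value (the overall average salary) that is compared against each outer row:
+
+``` sql
+SELECT name, salary
+FROM employees
+WHERE salary > (SELECT avg(salary) FROM employees);
+```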
+
+### <a id="topic16"></a>Correlated Subqueries
+
+A correlated subquery (CSQ) is a `SELECT` query with a `WHERE` clause or target list that contains references to the parent outer clause. CSQs efficiently express results in terms of results of another query. HAWQ supports correlated subqueries that provide compatibility with many existing applications. A CSQ is a scalar or table subquery, depending on whether it returns one or multiple rows. HAWQ does not support correlated subqueries with skip-level correlations.
+
+### <a id="topic17"></a>Correlated Subquery Examples
+
+#### <a id="topic18"></a>Example 1 \u2013 Scalar correlated subquery
+
+``` sql
+SELECT * FROM t1 WHERE t1.x 
+> (SELECT MAX(t2.x) FROM t2 WHERE t2.y = t1.y);
+```
+
+#### <a id="topic19"></a>Example 2 \u2013 Correlated EXISTS subquery
+
+``` sql
+SELECT * FROM t1 WHERE 
+EXISTS (SELECT 1 FROM t2 WHERE t2.x = t1.x);
+```
+
+HAWQ uses one of the following methods to run CSQs:
+
+-   Unnest the CSQ into join operations - This method is most efficient, and it is how HAWQ runs most CSQs, including queries from the TPC-H benchmark.
+-   Run the CSQ on every row of the outer query - This method is relatively inefficient, and it is how HAWQ runs queries that contain CSQs in the `SELECT` list or are connected by `OR` conditions.
+
+The following examples illustrate how to rewrite some of these types of queries to improve performance.
+
+#### <a id="topic20"></a>Example 3 - CSQ in the Select List
+
+*Original Query*
+
+``` sql
+SELECT T1.a,
+(SELECT COUNT(DISTINCT T2.z) FROM t2 WHERE t1.x = t2.y) dt2 
+FROM t1;
+```
+
+Rewrite this query to perform an inner join with `t1` first and then perform a left join with `t1` again. The rewrite applies only when the correlated condition is an equijoin.
+
+*Rewritten Query*
+
+``` sql
+SELECT t1.a, dt2 FROM t1 
+LEFT JOIN 
+(SELECT t2.y AS csq_y, COUNT(DISTINCT t2.z) AS dt2 
+FROM t1, t2 WHERE t1.x = t2.y 
+GROUP BY t1.x) 
+ON (t1.x = csq_y);
+```
+
+### <a id="topic21"></a>Example 4 - CSQs connected by OR Clauses
+
+*Original Query*
+
+``` sql
+SELECT * FROM t1 
+WHERE 
+x > (SELECT COUNT(*) FROM t2 WHERE t1.x = t2.x) 
+OR x < (SELECT COUNT(*) FROM t3 WHERE t1.y = t3.y)
+```
+
+Rewrite this query to separate it into two parts with a union on the `OR` conditions.
+
+*Rewritten Query*
+
+``` sql
+SELECT * FROM t1 
+WHERE x > (SELECT count(*) FROM t2 WHERE t1.x = t2.x) 
+UNION 
+SELECT * FROM t1 
+WHERE x < (SELECT count(*) FROM t3 WHERE t1.y = t3.y)
+```
+
+To view the query plan, use `EXPLAIN SELECT` or `EXPLAIN ANALYZE SELECT`. Subplan nodes in the query plan indicate that the query will run on every row of the outer query, and the query is a candidate for rewriting. For more information about these statements, see [Query Profiling](query-profiling.html#topic39).
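+
+For example, to inspect the plan for one of the correlated queries above (using the same assumed `t1` and `t2` tables):
+
+``` sql
+EXPLAIN SELECT * FROM t1
+WHERE x > (SELECT count(*) FROM t2 WHERE t1.x = t2.x);
+```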
+
+### <a id="topic22"></a>Advanced Table Functions
+
+HAWQ supports table functions with `TABLE` value expressions. You can sort input rows for advanced table functions with an `ORDER BY` clause. You can redistribute them with a `SCATTER BY` clause to specify one or more columns or an expression for which rows with the specified characteristics are available to the same process. This usage is similar to using a `DISTRIBUTED BY` clause when creating a table, but the redistribution occurs when the query runs.
+
+**Note:**
+Based on the distribution of data, HAWQ automatically parallelizes table functions with `TABLE` value parameters over the nodes of the cluster.
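+
+A minimal sketch of the general form, assuming a hypothetical table function `sessionize` and an input table `clicks`, is shown below; the `TABLE` value expression sorts the input rows by `click_time` and redistributes them by `user_id` before the function runs:
+
+``` sql
+SELECT *
+FROM sessionize( TABLE( SELECT * FROM clicks
+                        ORDER BY click_time
+                        SCATTER BY user_id ) );
+```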
+
+### <a id="topic23"></a>Array Constructors
+
+An array constructor is an expression that builds an array value from values for its member elements. A simple array constructor consists of the key word `ARRAY`, a left square bracket `[`, one or more expressions separated by commas for the array element values, and a right square bracket `]`. For example,
+
+``` sql
+SELECT ARRAY[1,2,3+4];
+```
+
+```
+  array
+---------
+ {1,2,7}
+```
+
+The array element type is the common type of its member expressions, determined using the same rules as for `UNION` or `CASE` constructs.
+
+You can build multidimensional array values by nesting array constructors. In the inner constructors, you can omit the keyword `ARRAY`. For example, the following two `SELECT` statements produce the same result:
+
+``` sql
+SELECT ARRAY[ARRAY[1,2], ARRAY[3,4]];
+SELECT ARRAY[[1,2],[3,4]];
+```
+
+```
+     array
+---------------
+ {{1,2},{3,4}}
+```
+
+Since multidimensional arrays must be rectangular, inner constructors at the same level must produce sub-arrays of identical dimensions.
+
+Multidimensional array constructor elements are not limited to a sub-`ARRAY` construct; they are anything that produces an array of the proper kind. For example:
+
+``` sql
+CREATE TABLE arr(f1 int[], f2 int[]);
+INSERT INTO arr VALUES (ARRAY[[1,2],[3,4]], 
+ARRAY[[5,6],[7,8]]);
+SELECT ARRAY[f1, f2, '{{9,10},{11,12}}'::int[]] FROM arr;
+```
+
+```
+                     array
+------------------------------------------------
+ {{{1,2},{3,4}},{{5,6},{7,8}},{{9,10},{11,12}}}
+```
+
+You can construct an array from the results of a subquery. Write the array constructor with the keyword `ARRAY` followed by a subquery in parentheses. For example:
+
+``` sql
+SELECT ARRAY(SELECT oid FROM pg_proc WHERE proname LIKE 'bytea%');
+```
+
+```
+                          ?column?
+-----------------------------------------------------------
+ {2011,1954,1948,1952,1951,1244,1950,2005,1949,1953,2006,31}
+```
+
+The subquery must return a single column. The resulting one-dimensional array has an element for each row in the subquery result, with an element type matching that of the subquery's output column. The subscripts of an array value built with `ARRAY` always begin with `1`.
+
+### <a id="topic24"></a>Row Constructors
+
+A row constructor is an expression that builds a row value (also called a composite value) from values for its member fields. For example,
+
+``` sql
+SELECT ROW(1,2.5,'this is a test');
+```
+
+A row constructor can include the syntax `rowvalue.*`, which expands to a list of the elements of the row value, just as when you use the syntax `.*` at the top level of a `SELECT` list. For example, if table `t` has columns `f1` and `f2`, the following queries are the same:
+
+``` sql
+SELECT ROW(t.*, 42) FROM t;
+SELECT ROW(t.f1, t.f2, 42) FROM t;
+```
+
+By default, the value created by a `ROW` expression has an anonymous record type. If necessary, it can be cast to a named composite type: either the row type of a table, or a composite type created with `CREATE TYPE AS`. To avoid ambiguity, you can explicitly cast the value. For example:
+
+``` sql
+CREATE TABLE mytable(f1 int, f2 float, f3 text);
+CREATE FUNCTION getf1(mytable) RETURNS int AS 'SELECT $1.f1' 
+LANGUAGE SQL;
+```
+
+In the following query, you do not need to cast the value because there is only one `getf1()` function and therefore no ambiguity:
+
+``` sql
+SELECT getf1(ROW(1,2.5,'this is a test'));
+```
+
+```
+ getf1
+-------
+     1
+```
+
+``` sql
+CREATE TYPE myrowtype AS (f1 int, f2 text, f3 numeric);
+CREATE FUNCTION getf1(myrowtype) RETURNS int AS 'SELECT 
+$1.f1' LANGUAGE SQL;
+```
+
+Now we need a cast to indicate which function to call:
+
+``` sql
+SELECT getf1(ROW(1,2.5,'this is a test'));
+```
+```
+ERROR:  function getf1(record) is not unique
+```
+
+``` sql
+SELECT getf1(ROW(1,2.5,'this is a test')::mytable);
+```
+
+```
+ getf1
+-------
+     1
+```
+
+``` sql
+SELECT getf1(CAST(ROW(11,'this is a test',2.5) AS myrowtype));
+```
+
+```
+ getf1
+-------
+    11
+```
+
+You can use row constructors to build composite values to be stored in a composite-type table column or to be passed to a function that accepts a composite parameter.
+
+### <a id="topic25"></a>Expression Evaluation Rules
+
+The order of evaluation of subexpressions is undefined. The inputs of an operator or function are not necessarily evaluated left-to-right or in any other fixed order.
+
+If you can determine the result of an expression by evaluating only some parts of the expression, then other subexpressions might not be evaluated at all. For example, in the following expression:
+
+``` sql
+SELECT true OR somefunc();
+```
+
+`somefunc()` would probably not be called at all. The same is true in the following expression:
+
+``` sql
+SELECT somefunc() OR true;
+```
+
+This is not the same as the left-to-right evaluation order that Boolean operators enforce in some programming languages.
+
+Do not use functions with side effects as part of complex expressions, especially in `WHERE` and `HAVING` clauses, because those clauses are extensively reprocessed when developing an execution plan. Boolean expressions (`AND`/`OR`/`NOT` combinations) in those clauses can be reorganized in any manner that Boolean algebra laws allow.
+
+Use a `CASE` construct to force evaluation order. The following example is an untrustworthy way to avoid division by zero in a `WHERE` clause:
+
+``` sql
+SELECT ... WHERE x <> 0 AND y/x > 1.5;
+```
+
+The following example shows a trustworthy evaluation order:
+
+``` sql
+SELECT ... WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false 
+END;
+```
+
+This `CASE` construct usage defeats optimization attempts; use it only when necessary.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/query/functions-operators.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/query/functions-operators.html.md.erb b/markdown/query/functions-operators.html.md.erb
new file mode 100644
index 0000000..8f14ee6
--- /dev/null
+++ b/markdown/query/functions-operators.html.md.erb
@@ -0,0 +1,437 @@
+---
+title: Using Functions and Operators
+---
+
+HAWQ evaluates functions and operators used in SQL expressions.
+
+## <a id="topic27"></a>Using Functions in HAWQ
+
+In HAWQ, functions can be run only on the master.
+
+<a id="topic27__in201681"></a>
+
+<span class="tablecap">Table 1. Functions in HAWQ</span>
+
+
+| Function Type | HAWQ Support       | Description | Comments |
+|---------------|--------------------|-------------|----------|
+| IMMUTABLE     | Yes                | Relies only on information directly in its argument list. Given the same argument values, always returns the same result. |  |
+| STABLE        | Yes, in most cases | Within a single table scan, returns the same result for same argument values, but results change across SQL statements. | Results depend on database lookups or parameter values. The `current_timestamp` family of functions is `STABLE`; values do not change within an execution. |
+| VOLATILE      | Restricted         | Function values can change within a single table scan. For example: `random()`, `currval()`, `timeofday()`. | Any function with side effects is volatile, even if its result is predictable. For example: `setval()`. |
+
+HAWQ does not support functions that return a table reference (`rangeFuncs`) or functions that use the `refCursor` datatype.
+
+## <a id="topic28"></a>User-Defined Functions
+
+HAWQ supports user-defined functions. See [Extending SQL](http://www.postgresql.org/docs/8.2/static/extend.html) in the PostgreSQL documentation for more information.
+
+In HAWQ, the shared library files for user-created functions must reside in the same library path location on every host in the HAWQ array (masters and segments).
+
+**Important:**
+HAWQ does not support the following:
+
+-   Enhanced table functions
+-   PL/Java Type Maps
+
+
+Use the `CREATE FUNCTION` statement to register user-defined functions that are used as described in [Using Functions in HAWQ](#topic27). By default, user-defined functions are declared as `VOLATILE`, so if your user-defined function is `IMMUTABLE` or `STABLE`, you must specify the correct volatility level when you register your function.
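+
+For example, a simple SQL function whose result depends only on its arguments can be registered as `IMMUTABLE` (the function name and body here are illustrative only):
+
+``` sql
+CREATE FUNCTION add_one(int) RETURNS int
+    AS 'SELECT $1 + 1'
+    LANGUAGE SQL IMMUTABLE;
+```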
+
+### <a id="functionvolatility"></a>Function Volatility
+
+Every function has a **volatility** classification, with the possibilities being `VOLATILE`, `STABLE`, or `IMMUTABLE`. `VOLATILE` is the default if the [CREATE FUNCTION](../reference/sql/CREATE-FUNCTION.html) command does not specify a category. The volatility category is a promise to the optimizer about the behavior of the function:
+
+-   A `VOLATILE` function can do anything, including modifying the database. It can return different results on successive calls with the same arguments. The optimizer makes no assumptions about the behavior of such functions. A query using a volatile function will re-evaluate the function at every row where its value is needed.
+-   A `STABLE` function cannot modify the database and is guaranteed to return the same results given the same arguments for all rows within a single statement. This category allows the optimizer to optimize multiple calls of the function to a single call.
+-   An `IMMUTABLE` function cannot modify the database and is guaranteed to return the same results given the same arguments forever. This category allows the optimizer to pre-evaluate the function when a query calls it with constant arguments. For example, a query like `SELECT ... WHERE x = 2 + 2` can be simplified on sight to `SELECT ... WHERE x = 4`, because the function underlying the integer addition operator is marked `IMMUTABLE`.
+
+For best optimization results, you should label your functions with the strictest volatility category that is valid for them.
+
+Any function with side effects must be labeled `VOLATILE`, so that calls to it cannot be optimized away. Even a function with no side effects needs to be labeled `VOLATILE` if its value can change within a single query; some examples are `random()`, `currval()`, and `timeofday()`.
+
+Another important example is that the `current_timestamp` family of functions qualify as `STABLE`, since their values do not change within a transaction.
+
+There is relatively little difference between the `STABLE` and `IMMUTABLE` categories when considering simple interactive queries that are planned and immediately executed: it doesn't matter much whether a function is executed once during planning or once during query execution startup. But there is a big difference if the plan is saved and reused later. Labeling a function `IMMUTABLE` when it really isn't might allow it to be prematurely folded to a constant during planning, resulting in a stale value being re-used during subsequent uses of the plan. This is a hazard when using prepared statements or when using function languages that cache plans (such as PL/pgSQL).
+
+For functions written in SQL or in any of the standard procedural languages, there is a second important property determined by the volatility category, namely the visibility of any data changes that have been made by the SQL command that is calling the function. A `VOLATILE` function will see such changes, a `STABLE` or `IMMUTABLE` function will not. `STABLE` and `IMMUTABLE` functions use a snapshot established as of the start of the calling query, whereas `VOLATILE` functions obtain a fresh snapshot at the start of each query they execute.
+
+Because of this snapshotting behavior, a function containing only `SELECT` commands can safely be marked `STABLE`, even if it selects from tables that might be undergoing modifications by concurrent queries. PostgreSQL will execute all commands of a `STABLE` function using the snapshot established for the calling query, and so it will see a fixed view of the database throughout that query.
+
+The same snapshotting behavior is used for `SELECT` commands within `IMMUTABLE` functions. It is generally unwise to select from database tables within an `IMMUTABLE` function at all, since the immutability will be broken if the table contents ever change. However, PostgreSQL does not enforce that you do not do that.
+
+A common error is to label a function `IMMUTABLE` when its results depend on a configuration parameter. For example, a function that manipulates timestamps might well have results that depend on the `timezone` setting. For safety, such functions should be labeled `STABLE` instead.
+
+When you create user-defined functions, avoid using fatal errors or destructive calls. HAWQ may respond to such errors with a sudden shutdown or restart.
+
+### <a id="nestedUDFs"></a>Nested Function Query Limitations
+
+HAWQ queries that employ nested user-defined functions will fail when dispatched to segment nodes.
+
+HAWQ stores the system catalog only on the master node. User-defined functions are stored in system catalog tables. HAWQ has no built-in knowledge of how to interpret the source text of a user-defined function, so the text is not parsed by HAWQ.
+
+This behavior can be problematic in queries where a user-defined function includes nested functions. When a query includes a user-defined function, the metadata passed to the query executor includes function invocation information. If the query runs on the HAWQ master node, the nested function is recognized. If the query is dispatched to a segment, the nested function is not found and the query throws an error.
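+
+The following sketch, with illustrative function and table names, shows the pattern that can fail when the outer query is dispatched to segments:
+
+``` sql
+-- inner_fn and outer_fn are hypothetical user-defined functions
+CREATE FUNCTION inner_fn(int) RETURNS int
+    AS 'SELECT $1 * 2' LANGUAGE SQL;
+
+CREATE FUNCTION outer_fn(int) RETURNS int
+    AS 'SELECT inner_fn($1) + 1' LANGUAGE SQL;
+
+-- Evaluated on the master, so the nested call can be resolved
+SELECT outer_fn(10);
+
+-- May be dispatched to segments, where inner_fn cannot be resolved,
+-- causing the query to fail ("some_table" is a hypothetical table)
+SELECT outer_fn(col) FROM some_table;
+```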
+
+## <a id="userdefinedtypes"></a>User Defined Types
+
+HAWQ can be extended to support new data types. This section describes how to define new base types, which are data types defined below the level of the SQL language. Creating a new base type requires implementing functions to operate on the type in a low-level language, usually C.
+
+A user-defined type must always have input and output functions. These functions determine how the type appears in strings (for input by the user and output to the user) and how the type is organized in memory. The input function takes a null-terminated character string as its argument and returns the internal (in memory) representation of the type. The output function takes the internal representation of the type as argument and returns a null-terminated character string. If we want to do anything more with the type than merely store it, we must provide additional functions to implement whatever operations we'd like to have for the type.
+
+You should be careful to make the input and output functions inverses of each other. If you do not, you will have severe problems when you need to dump your data into a file and then read it back in. This is a particularly common problem when floating-point numbers are involved.
+
+Optionally, a user-defined type can provide binary input and output routines. Binary I/O is normally faster but less portable than textual I/O. As with textual I/O, it is up to you to define exactly what the external binary representation is. Most of the built-in data types try to provide a machine-independent binary representation.
+
+Once we have written the I/O functions and compiled them into a shared library, we can define the `complex` type in SQL. First we declare it as a shell type:
+
+``` sql
+CREATE TYPE complex;
+```
+
+This serves as a placeholder that allows us to reference the type while defining its I/O functions. Now we can define the I/O functions:
+
+``` sql
+CREATE FUNCTION complex_in(cstring)
+    RETURNS complex
+    AS 'filename'
+    LANGUAGE C IMMUTABLE STRICT;
+
+CREATE FUNCTION complex_out(complex)
+    RETURNS cstring
+    AS 'filename'
+    LANGUAGE C IMMUTABLE STRICT;
+
+CREATE FUNCTION complex_recv(internal)
+   RETURNS complex
+   AS 'filename'
+   LANGUAGE C IMMUTABLE STRICT;
+
+CREATE FUNCTION complex_send(complex)
+   RETURNS bytea
+   AS 'filename'
+   LANGUAGE C IMMUTABLE STRICT;
+```
+
+Finally, we can provide the full definition of the data type:
+
+``` sql
+CREATE TYPE complex (
+   internallength = 16, 
+   input = complex_in,
+   output = complex_out,
+   receive = complex_recv,
+   send = complex_send,
+   alignment = double
+);
+```
+
+When you define a new base type, HAWQ automatically provides support for arrays of that type. For historical reasons, the array type has the same name as the base type with the underscore character (\_) prepended.
+
+Once the data type exists, we can declare additional functions to provide useful operations on the data type. Operators can then be defined atop the functions, and if needed, operator classes can be created to support indexing of the data type.
+
+For further details, see the description of the [CREATE TYPE](../reference/sql/CREATE-TYPE.html) command.
+
+## <a id="userdefinedoperators"></a>User Defined Operators
+
+Every operator is "syntactic sugar" for a call to an underlying function that does the real work; so you must first create the underlying function before you can create the operator. However, an operator is *not merely* syntactic sugar, because it carries additional information that helps the query planner optimize queries that use the operator. The next section will be devoted to explaining that additional information.
+
+HAWQ supports left unary, right unary, and binary operators. Operators can be overloaded; that is, the same operator name can be used for different operators that have different numbers and types of operands. When a query is executed, the system determines the operator to call from the number and types of the provided operands.
+
+Here is an example of creating an operator for adding two complex numbers. We assume we've already created the definition of type `complex`. First we need a function that does the work, then we can define the operator:
+
+``` sql
+CREATE FUNCTION complex_add(complex, complex)
+    RETURNS complex
+    AS 'filename', 'complex_add'
+    LANGUAGE C IMMUTABLE STRICT;
+
+CREATE OPERATOR + (
+    leftarg = complex,
+    rightarg = complex,
+    procedure = complex_add,
+    commutator = +
+);
+```
+
+Now we could execute a query like this:
+
+``` sql
+SELECT (a + b) AS c FROM test_complex;
+```
+
+```
+        c
+-----------------
+ (5.2,6.05)
+ (133.42,144.95)
+```
+
+We've shown how to create a binary operator here. To create unary operators, just omit one of `leftarg` (for left unary) or `rightarg` (for right unary). The `procedure` clause and the argument clauses are the only required items in `CREATE OPERATOR`. The `commutator` clause shown in the example is an optional hint to the query optimizer. Further details about `commutator` and other optimizer hints appear in the PostgreSQL documentation.
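+
+For example, a left unary (prefix) negation operator for the `complex` type could be sketched as follows, assuming a supporting `complex_neg` C function (illustrative only) has been compiled into the shared library:
+
+``` sql
+CREATE FUNCTION complex_neg(complex)
+    RETURNS complex
+    AS 'filename', 'complex_neg'
+    LANGUAGE C IMMUTABLE STRICT;
+
+CREATE OPERATOR - (
+    rightarg = complex,
+    procedure = complex_neg
+);
+```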
+
+## <a id="topic29"></a>Built-in Functions and Operators
+
+The following table lists the categories of built-in functions and operators supported by PostgreSQL. All functions and operators are supported in HAWQ as in PostgreSQL with the exception of `STABLE` and `VOLATILE` functions, which are subject to the restrictions noted in [Using Functions in HAWQ](#topic27). See the [Functions and Operators](http://www.postgresql.org/docs/8.2/static/functions.html) section of the PostgreSQL documentation for more information about these built-in functions and operators.
+
+<a id="topic29__in204913"></a>
+
+<table>
+<caption><span class="tablecap">Table 2. Built-in functions and operators</span></caption>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Operator/Function Category</th>
+<th>VOLATILE Functions</th>
+<th>STABLE Functions</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions.html#FUNCTIONS-LOGICAL">Logical Operators</a></td>
+<td>&nbsp;</td>
+<td>&nbsp;</td>
+</tr>
+<tr class="even">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions-comparison.html">Comparison Operators</a></td>
+<td>&nbsp;</td>
+<td>&nbsp;</td>
+</tr>
+<tr class="odd">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions-math.html">Mathematical Functions and Operators</a></td>
+<td>random
+<p>setseed</p></td>
+<td>&nbsp;</td>
+</tr>
+<tr class="even">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions-string.html">String Functions and Operators</a></td>
+<td><em>All built-in conversion functions</em></td>
+<td>convert
+<p>pg_client_encoding</p></td>
+</tr>
+<tr class="odd">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions-binarystring.html">Binary String Functions and Operators</a></td>
+<td>&nbsp;</td>
+<td>&nbsp;</td>
+</tr>
+<tr class="even">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions-bitstring.html">Bit String Functions and Operators</a></td>
+<td>&nbsp;</td>
+<td>&nbsp;</td>
+</tr>
+<tr class="odd">
+<td><a href="http://www.postgresql.org/docs/8.3/static/functions-matching.html">Pattern Matching</a></td>
+<td>&nbsp;</td>
+<td>&nbsp;</td>
+</tr>
+<tr class="even">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions-formatting.html">Data Type Formatting Functions</a></td>
+<td>&nbsp;</td>
+<td>to_char
+<p>to_timestamp</p></td>
+</tr>
+<tr class="odd">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions-datetime.html">Date/Time Functions and Operators</a></td>
+<td>timeofday</td>
+<td>age
+<p>current_date</p>
+<p>current_time</p>
+<p>current_timestamp</p>
+<p>localtime</p>
+<p>localtimestamp</p>
+<p>now</p></td>
+</tr>
+<tr class="even">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions-geometry.html">Geometric Functions and Operators</a></td>
+<td>&nbsp;</td>
+<td>&nbsp;</td>
+</tr>
+<tr class="odd">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions-net.html">Network Address Functions and Operators</a></td>
+<td>&nbsp;</td>
+<td>&nbsp;</td>
+</tr>
+<tr class="even">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions-sequence.html">Sequence Manipulation Functions</a></td>
+<td>currval
+<p>lastval</p>
+<p>nextval</p>
+<p>setval</p></td>
+<td>&nbsp;</td>
+</tr>
+<tr class="odd">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions-conditional.html">Conditional Expressions</a></td>
+<td>&nbsp;</td>
+<td>&nbsp;</td>
+</tr>
+<tr class="even">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions-array.html">Array Functions and Operators</a></td>
+<td>&nbsp;</td>
+<td><em>All array functions</em></td>
+</tr>
+<tr class="odd">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions-aggregate.html">Aggregate Functions</a></td>
+<td>&nbsp;</td>
+<td>&nbsp;</td>
+</tr>
+<tr class="even">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions-subquery.html">Subquery Expressions</a></td>
+<td>&nbsp;</td>
+<td>&nbsp;</td>
+</tr>
+<tr class="odd">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions-comparisons.html">Row and Array Comparisons</a></td>
+<td>&nbsp;</td>
+<td>&nbsp;</td>
+</tr>
+<tr class="even">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions-srf.html">Set Returning Functions</a></td>
+<td>generate_series</td>
+<td>&nbsp;</td>
+</tr>
+<tr class="odd">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions-info.html">System Information Functions</a></td>
+<td>&nbsp;</td>
+<td><em>All session information functions</em>
+<p><em>All access privilege inquiry functions</em></p>
+<p><em>All schema visibility inquiry functions</em></p>
+<p><em>All system catalog information functions</em></p>
+<p><em>All comment information functions</em></p></td>
+</tr>
+<tr class="even">
+<td><a href="http://www.postgresql.org/docs/8.2/static/functions-admin.html">System Administration Functions</a></td>
+<td>set_config
+<p>pg_cancel_backend</p>
+<p>pg_reload_conf</p>
+<p>pg_rotate_logfile</p>
+<p>pg_start_backup</p>
+<p>pg_stop_backup</p>
+<p>pg_size_pretty</p>
+<p>pg_ls_dir</p>
+<p>pg_read_file</p>
+<p>pg_stat_file</p></td>
+<td>current_setting
+<p><em>All database object size functions</em></p></td>
+</tr>
+<tr class="odd">
+<td><a href="http://www.postgresql.org/docs/9.1/interactive/functions-xml.html">XML Functions</a></td>
+<td>&nbsp;</td>
+<td>xmlagg(xml)
+<p>xmlexists(text, xml)</p>
+<p>xml_is_well_formed(text)</p>
+<p>xml_is_well_formed_document(text)</p>
+<p>xml_is_well_formed_content(text)</p>
+<p>xpath(text, xml)</p>
+<p>xpath(text, xml, text[])</p>
+<p>xpath_exists(text, xml)</p>
+<p>xpath_exists(text, xml, text[])</p>
+<p>xml(text)</p>
+<p>text(xml)</p>
+<p>xmlcomment(xml)</p>
+<p>xmlconcat2(xml, xml)</p></td>
+</tr>
+</tbody>
+</table>
+
+## <a id="topic30"></a>Window Functions
+
+The following built-in window functions are HAWQ extensions to the PostgreSQL database. All window functions are *immutable*. For more information about window functions, see [Window Expressions](defining-queries.html#topic13).
+
+<a id="topic30__in164369"></a>
+
+<span class="tablecap">Table 3. Window functions</span>
+
+| Function | Return Type | Full Syntax | Description |
+|----------|-------------|-------------|-------------|
+| `cume_dist()` | `double precision` | `CUME_DIST() OVER ( [PARTITION BY expr] ORDER BY expr )` | Calculates the cumulative distribution of a value in a group of values. Rows with equal values always evaluate to the same cumulative distribution value. |
+| `dense_rank()` | `bigint` | `DENSE_RANK () OVER ( [PARTITION BY expr] ORDER BY expr )` | Computes the rank of a row in an ordered group of rows without skipping rank values. Rows with equal values are given the same rank value. |
+| `first_value(expr)` | same as input *expr* type | `FIRST_VALUE(expr) OVER ( [PARTITION BY expr] ORDER BY expr [ROWS\|RANGE frame_expr] )` | Returns the first value in an ordered set of values. |
+| `lag(expr [,offset] [,default])` | same as input *expr* type | `LAG(expr [,offset] [,default]) OVER ( [PARTITION BY expr] ORDER BY expr )` | Provides access to more than one row of the same table without doing a self join. Given a series of rows returned from a query and a position of the cursor, `LAG` provides access to a row at a given physical offset prior to that position. The default *offset* is 1. *default* sets the value that is returned if the offset goes beyond the scope of the window. If *default* is not specified, the default value is null. |
+| `last_value(expr)` | same as input *expr* type | `LAST_VALUE(expr) OVER ( [PARTITION BY expr] ORDER BY expr [ROWS\|RANGE frame_expr] )` | Returns the last value in an ordered set of values. |
+| `lead(expr [,offset] [,default])` | same as input *expr* type | `LEAD(expr [,offset] [,default]) OVER ( [PARTITION BY expr] ORDER BY expr )` | Provides access to more than one row of the same table without doing a self join. Given a series of rows returned from a query and a position of the cursor, `LEAD` provides access to a row at a given physical offset after that position. If *offset* is not specified, the default offset is 1. *default* sets the value that is returned if the offset goes beyond the scope of the window. If *default* is not specified, the default value is null. |
+| `ntile(expr)` | `bigint` | `NTILE(expr) OVER ( [PARTITION BY expr] ORDER BY expr )` | Divides an ordered data set into a number of buckets (as defined by *expr*) and assigns a bucket number to each row. |
+| `percent_rank()` | `double precision` | `PERCENT_RANK () OVER ( [PARTITION BY expr] ORDER BY expr )` | Calculates the rank of a hypothetical row `R` minus 1, divided by 1 less than the number of rows being evaluated (within a window partition). |
+| `rank()` | `bigint` | `RANK () OVER ( [PARTITION BY expr] ORDER BY expr )` | Calculates the rank of a row in an ordered group of values. Rows with equal values for the ranking criteria receive the same rank. The number of tied rows is added to the rank number to calculate the next rank value. Ranks may not be consecutive numbers in this case. |
+| `row_number()` | `bigint` | `ROW_NUMBER () OVER ( [PARTITION BY expr] ORDER BY expr )` | Assigns a unique number to each row to which it is applied (either each row in a window partition or each row of the query). |
+
+
+## <a id="topic31"></a>Advanced Aggregate Functions
+
+The following built-in advanced aggregate functions are HAWQ extensions of the PostgreSQL database.
+
+<a id="topic31__in2073121"></a>
+
+<table>
+
+<caption><span class="tablecap">Table 4. Advanced Aggregate Functions</span></caption>
+<colgroup>
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+<col width="25%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Function</th>
+<th>Return Type</th>
+<th>Full Syntax</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><code class="ph codeph">MEDIAN (expr)</code></td>
+<td><code class="ph codeph">timestamp, timestampz, interval, float</code></td>
+<td><code class="ph codeph">MEDIAN (expression)</code>
+<p><em>Example:</em></p>
+<pre class="pre codeblock"><code>SELECT department_id, MEDIAN(salary) 
+FROM employees 
+GROUP BY department_id; </code></pre></td>
+<td>Can take a two-dimensional array as input. Treats such arrays as matrices.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">PERCENTILE_CONT (expr) WITHIN GROUP (ORDER BY expr                   [DESC/ASC])</code></td>
+<td><code class="ph codeph">timestamp, timestampz, interval, float</code></td>
+<td><code class="ph codeph">PERCENTILE_CONT(percentage) WITHIN GROUP (ORDER BY                   expression)</code>
+<p><em>Example:</em></p>
+<pre class="pre codeblock"><code>SELECT department_id,
+PERCENTILE_CONT (0.5) WITHIN GROUP (ORDER BY salary DESC)
+&quot;Median_cont&quot;
+FROM employees GROUP BY department_id;</code></pre></td>
+<td>Performs an inverse distribution function that assumes a continuous distribution model. It takes a percentile value and a sort specification and returns the same data type as the numeric data type of the argument. This returned value is a computed result after performing linear interpolation. Nulls are ignored in this calculation.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">PERCENTILE_DISC (expr) WITHIN GROUP (ORDER BY                     expr [DESC/ASC]</code>)</td>
+<td><code class="ph codeph">timestamp, timestampz, interval, float</code></td>
+<td><code class="ph codeph">PERCENTILE_DISC(percentage) WITHIN GROUP (ORDER BY                   expression)</code>
+<p><em>Example:</em></p>
+<pre class="pre codeblock"><code>SELECT department_id, 
+PERCENTILE_DISC (0.5) WITHIN GROUP (ORDER BY salary DESC)
+&quot;Median_desc&quot;
+FROM employees GROUP BY department_id;</code></pre></td>
+<td>Performs an inverse distribution function that assumes a discrete distribution model. It takes a percentile value and a sort specification. This returned value is an element from the set. Nulls are ignored in this calculation.</td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">sum(array[])</code></td>
+<td><code class="ph codeph">smallint[]int[], bigint[], float[]</code></td>
+<td><code class="ph codeph">sum(array[[1,2],[3,4]])</code>
+<p><em>Example:</em></p>
+<pre class="pre codeblock"><code>CREATE TABLE mymatrix (myvalue int[]);
+INSERT INTO mymatrix VALUES (array[[1,2],[3,4]]);
+INSERT INTO mymatrix VALUES (array[[0,1],[1,0]]);
+SELECT sum(myvalue) FROM mymatrix;
+ sum 
+---------------
+ {{1,3},{4,4}}</code></pre></td>
+<td>Performs matrix summation. Can take as input a two-dimensional array that is treated as a matrix.</td>
+</tr>
+<tr class="odd">
+<td><code class="ph codeph">pivot_sum (label[], label, expr)</code></td>
+<td><code class="ph codeph">int[], bigint[], float[]</code></td>
+<td><code class="ph codeph">pivot_sum( array['A1','A2'], attr, value)</code></td>
+<td>A pivot aggregation using sum to resolve duplicate entries.</td>
+</tr>
+</tbody>
+</table>
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/query/gporca/query-gporca-changed.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/query/gporca/query-gporca-changed.html.md.erb b/markdown/query/gporca/query-gporca-changed.html.md.erb
new file mode 100644
index 0000000..041aa4b
--- /dev/null
+++ b/markdown/query/gporca/query-gporca-changed.html.md.erb
@@ -0,0 +1,17 @@
+---
+title: Changed Behavior with GPORCA
+---
+
+<span class="shortdesc">When GPORCA is enabled, HAWQ's behavior changes. This topic describes these changes.</span>
+
+-   The command `CREATE TABLE AS` distributes table data randomly if the `DISTRIBUTED BY` clause is not specified and no primary or unique keys are specified.
+-   Statistics are required on the root table of a partitioned table. The `ANALYZE` command generates statistics on both root and individual partition tables (leaf child tables). See the `ROOTPARTITION` clause for `ANALYZE` command.
+-   Additional Result nodes in the query plan:
+    -   Query plan `Assert` operator.
+    -   Query plan `Partition selector` operator.
+    -   Query plan `Split` operator.
+-   When running `EXPLAIN`, the query plan generated by GPORCA is different than the plan generated by the legacy query optimizer.
+-   HAWQ adds the log file message `Planner produced plan` when GPORCA is enabled and HAWQ falls back to the legacy query optimizer to generate the query plan.
+-   HAWQ issues a warning when statistics are missing from one or more table columns. When executing an SQL command with GPORCA, HAWQ issues a warning if the command performance could be improved by collecting statistics on a column or set of columns referenced by the command. The warning is issued on the command line and information is added to the HAWQ log file. For information about collecting statistics on table columns, see the `ANALYZE` command.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/query/gporca/query-gporca-enable.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/query/gporca/query-gporca-enable.html.md.erb b/markdown/query/gporca/query-gporca-enable.html.md.erb
new file mode 100644
index 0000000..e8cc93f
--- /dev/null
+++ b/markdown/query/gporca/query-gporca-enable.html.md.erb
@@ -0,0 +1,95 @@
+---
+title: Enabling GPORCA
+---
+
+<span class="shortdesc">Precompiled versions of HAWQ that include the GPORCA query optimizer enable it by default, no additional configuration is required. To use the GPORCA query optimizer in a HAWQ built from source, your build must include GPORCA. You must also enable specific HAWQ server configuration parameters at or after install time: </span>
+
+-   [Set the <code class="ph codeph">optimizer\_analyze\_root\_partition</code> parameter to <code class="ph codeph">on</code>](#topic_r5d_hv1_kr) to enable statistics collection for the root partition of a partitioned table.
+-   Set the `optimizer` parameter to `on` to enable GPORCA. You can set the parameter at these levels:
+    -   [A HAWQ system](#topic_byp_lqk_br)
+    -   [A specific HAWQ database](#topic_pzr_3db_3r)
+    -   [A session or query](#topic_lx4_vqk_br)
+
+**Important:** If you intend to execute queries on partitioned tables with GPORCA enabled, you must collect statistics on the partitioned table root partition with the `ANALYZE ROOTPARTITION` command. The command `ANALYZE ROOTPARTITION` collects statistics on the root partition of a partitioned table without collecting statistics on the leaf partitions. If you specify a list of column names for a partitioned table, the statistics for the columns and the root partition are collected. For information on the `ANALYZE` command, see [ANALYZE](../../reference/sql/ANALYZE.html).
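+
+For example, a statement of this form collects root-partition statistics only; the table name here is a placeholder:
+
+``` sql
+ANALYZE ROOTPARTITION sales;
+```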
+
+You can also use the HAWQ utility `analyzedb` to update table statistics. The HAWQ utility `analyzedb` can update statistics for multiple tables in parallel. The utility can also check table statistics and update statistics only if the statistics are not current or do not exist. For information about the `analyzedb` utility, see [analyzedb](../../reference/cli/admin_utilities/analyzedb.html#topic1).
+
+As part of routine database maintenance, you should refresh statistics on the root partition when there are significant changes to child leaf partition data.
+
+## <a id="topic_r5d_hv1_kr"></a>Setting the optimizer\_analyze\_root\_partition Parameter
+
+When the configuration parameter `optimizer_analyze_root_partition` is set to `on`, root partition statistics will be collected when `ANALYZE` is run on a partitioned table. Root partition statistics are required by GPORCA.
+
+You will perform different procedures to set optimizer configuration parameters for your whole HAWQ cluster depending upon whether you manage your cluster from the command line or use Ambari. If you use Ambari to manage your HAWQ cluster, you must ensure that you update server configuration parameters only via the Ambari Web UI. If you manage your HAWQ cluster from the command line, you will use the `hawq config` command line utility to set optimizer server configuration parameters.
+
+If you use Ambari to manage your HAWQ cluster:
+
+1. Set the `optimizer_analyze_root_partition` configuration property to `on` via the HAWQ service **Configs > Advanced > Custom hawq-site** drop down. 
+2. Select **Service Actions > Restart All** to load the updated configuration.
+
+If you manage your HAWQ cluster from the command line:
+
+1.  Log in to the HAWQ master host as a HAWQ administrator and source the file `/usr/local/hawq/greenplum_path.sh`.
+
+    ``` shell
+    $ source /usr/local/hawq/greenplum_path.sh
+    ```
+
+1. Use the `hawq config` utility to set `optimizer_analyze_root_partition`:
+
+    ``` shell
+    $ hawq config -c optimizer_analyze_root_partition -v on
+    ```
+2. Reload the HAWQ configuration:
+
+    ``` shell
+    $ hawq stop cluster -u
+    ```
+
+## <a id="topic_byp_lqk_br"></a>Enabling GPORCA for a System
+
+Set the server configuration parameter `optimizer` for the HAWQ system.
+
+If you use Ambari to manage your HAWQ cluster:
+
+1. Set the `optimizer` configuration property to `on` via the HAWQ service **Configs > Advanced > Custom hawq-site** drop down. 
+2. Select **Service Actions > Restart All** to load the updated configuration.
+
+If you manage your HAWQ cluster from the command line:
+
+1.  Log in to the HAWQ master host as a HAWQ administrator and source the file `/usr/local/hawq/greenplum_path.sh`.
+
+    ``` shell
+    $ source /usr/local/hawq/greenplum_path.sh
+    ```
+
+1. Use the `hawq config` utility to set `optimizer`:
+
+    ``` shell
+    $ hawq config -c optimizer -v on
+    ```
+2. Reload the HAWQ configuration:
+
+    ``` shell
+    $ hawq stop cluster -u
+    ```
+
+## <a id="topic_pzr_3db_3r"></a>Enabling GPORCA for a Database
+
+Set the server configuration parameter `optimizer` for individual HAWQ databases with the `ALTER DATABASE` command. For example, this command enables GPORCA for the database *test\_db*.
+
+``` sql
+=> ALTER DATABASE test_db SET optimizer = ON ;
+```
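+
+To confirm the database-level setting, you can connect to the database in a new session and display the current value of the parameter; this is an optional check in a HAWQ build that includes GPORCA, not a required step:
+
+``` sql
+=> SHOW optimizer;
+```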
+
+## <a id="topic_lx4_vqk_br"></a>Enabling GPORCA for a Session or a Query
+
+You can use the `SET` command to set the `optimizer` server configuration parameter for a session. For example, after you use the `psql` utility to connect to HAWQ, this `SET` command enables GPORCA:
+
+``` sql
+=> SET optimizer = on ;
+```
+
+To set the parameter for a specific query, include the `SET` command prior to running the query.
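+
+For example, the following is a minimal sketch in which the `SET` command precedes the query it affects; the table name is a placeholder:
+
+``` sql
+=> SET optimizer = on;
+=> SELECT count(*) FROM sales;
+```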
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/query/gporca/query-gporca-fallback.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/query/gporca/query-gporca-fallback.html.md.erb b/markdown/query/gporca/query-gporca-fallback.html.md.erb
new file mode 100644
index 0000000..999e9a7
--- /dev/null
+++ b/markdown/query/gporca/query-gporca-fallback.html.md.erb
@@ -0,0 +1,142 @@
+---
+title: Determining The Query Optimizer In Use
+---
+
+<span class="shortdesc"> When GPORCA is enabled, you can determine if HAWQ is using GPORCA or is falling back to the legacy query optimizer. </span>
+
+There are two ways to determine which query optimizer HAWQ used to execute a query:
+
+-   Examine `EXPLAIN` query plan output for the query. (Your output may include other settings.)
+    -   When GPORCA generates the query plan, the GPORCA version is displayed near the end of the query plan. For example:
+
+        ``` pre
+         Settings:  optimizer=on
+         Optimizer status:  PQO version 1.627
+        ```
+
+        When HAWQ falls back to the legacy optimizer to generate the plan, `legacy query optimizer` is displayed near the end of the query plan. For example:
+
+        ``` pre
+         Settings:  optimizer=on
+         Optimizer status: legacy query optimizer
+        ```
+
+        When the server configuration parameter `optimizer` is `off`, the following lines are displayed near the end of a query plan:
+
+        ``` pre
+         Settings:  optimizer=off
+         Optimizer status: legacy query optimizer
+        ```
+
+    -   These plan items appear only in the `EXPLAIN` plan output generated by GPORCA. The items are not supported in a legacy optimizer query plan.
+        -   Assert operator
+        -   Sequence operator
+        -   DynamicIndexScan
+        -   DynamicTableScan
+        -   Table Scan
+    -   When GPORCA generates the plan for a query against a partitioned table, the `EXPLAIN` output lists only the number of partitions that are eliminated; the scanned partitions are not shown. The `EXPLAIN` plan generated by the legacy optimizer lists the scanned partitions.
+
+-   View the log messages in the HAWQ log file.
+
+    The log file contains messages that indicate which query optimizer was used. In the log file message, the `[OPT]` flag appears when GPORCA attempts to optimize a query. If HAWQ falls back to the legacy optimizer, an error message is added to the log file, indicating the unsupported feature. Also, in the message, the label `Planner produced plan:` appears before the query when HAWQ falls back to the legacy optimizer.
+
+    **Note:** You can configure HAWQ to display log messages on the psql command line by setting the HAWQ server configuration parameter `client_min_messages` to `LOG`. See [Server Configuration Parameter Reference](../../reference/HAWQSiteConfig.html) for information about the parameter.
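+
+    For example, this `SET` command makes `LOG`-level messages, including the optimizer messages, visible in your `psql` session:
+
+    ``` sql
+    SET client_min_messages = LOG;
+    ```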
+
+## <a id="topic_n4w_nb5_xr"></a>Example
+
+This example shows the differences for a query that is run against partitioned tables when GPORCA is enabled.
+
+This `CREATE TABLE` statement creates a table with single level partitions:
+
+``` sql
+CREATE TABLE sales (trans_id int, date date, 
+    amount decimal(9,2), region text)
+   DISTRIBUTED BY (trans_id)
+   PARTITION BY RANGE (date)
+      (START (date '2011-01-01')
+       INCLUSIVE END (date '2012-01-01')
+       EXCLUSIVE EVERY (INTERVAL '1 month'),
+   DEFAULT PARTITION outlying_dates);
+```
+
+This query against the table is supported by GPORCA and does not generate errors in the log file:
+
+``` sql
+SELECT * FROM sales;
+```
+
+The `EXPLAIN` plan output lists only the number of selected partitions.
+
+``` 
+ ->  Partition Selector for sales (dynamic scan id: 1)  (cost=10.00..100.00 rows=50 width=4)
+       Partitions selected:  13 (out of 13)
+```
+
+Output from the log file indicates that GPORCA attempted to optimize the query:
+
+``` 
+2015-05-06 15:00:53.293451 PDT,"gpadmin","test",p2809,th297883424,"[local]",
+  ,2015-05-06 14:59:21 PDT,1120,con6,cmd1,seg-1,,dx3,x1120,sx1,"LOG","00000"
+  ,"statement: explain select * from sales
+;",,,,,,"explain select * from sales
+;",0,,"postgres.c",1566,
+
+2015-05-06 15:00:54.258412 PDT,"gpadmin","test",p2809,th297883424,"[local]",
+  ,2015-05-06 14:59:21 PDT,1120,con6,cmd1,seg-1,,dx3,x1120,sx1,"LOG","00000","
+[OPT]: Using default search strategy",,,,,,"explain select * from sales
+;",0,,"COptTasks.cpp",677,
+```
+
+The following cube query is not supported by GPORCA.
+
+``` sql
+SELECT count(*) FROM foo GROUP BY cube(a,b);
+```
+
+The following EXPLAIN plan output includes the message "Feature not supported by GPORCA."
+
+``` sql
+postgres=# EXPLAIN SELECT count(*) FROM foo GROUP BY cube(a,b);
+```
+```
+LOG:  statement: explain select count(*) from foo group by cube(a,b);
+LOG:  2016-04-14 16:26:15:487935 PDT,THD000,NOTICE,"Feature not supported by the GPORCA: Cube",
+LOG:  Planner produced plan :0
+                                                        QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------
+ Gather Motion 3:1  (slice3; segments: 3)  (cost=9643.62..19400.26 rows=40897 width=28)
+   ->  Append  (cost=9643.62..19400.26 rows=13633 width=28)
+         ->  HashAggregate  (cost=9643.62..9993.39 rows=9328 width=28)
+               Group By: "rollup".unnamed_attr_2, "rollup".unnamed_attr_1, "rollup"."grouping", "rollup"."group_id"
+               ->  Subquery Scan "rollup"  (cost=8018.50..9589.81 rows=1435 width=28)
+                     ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=8018.50..9546.76 rows=1435 width=28)
+                           Hash Key: "rollup".unnamed_attr_2, "rollup".unnamed_attr_1, "grouping", group_id()
+                           ->  GroupAggregate  (cost=8018.50..9460.66 rows=1435 width=28)
+                                 Group By: "rollup"."grouping", "rollup"."group_id"
+                                 ->  Subquery Scan "rollup"  (cost=8018.50..9326.13 rows=2153 width=28)
+                                       ->  GroupAggregate  (cost=8018.50..9261.56 rows=2153 width=28)
+                                             Group By: "rollup".unnamed_attr_2, "rollup"."grouping", "rollup"."group_id"
+                                             ->  Subquery Scan "rollup"  (cost=8018.50..9073.22 rows=2870 width=28)
+                                                   ->  GroupAggregate  (cost=8018.50..8987.12 rows=2870 width=28)
+                                                         Group By: public.foo.b, public.foo.a
+                                                         ->  Sort  (cost=8018.50..8233.75 rows=28700 width=8)
+                                                               Sort Key: public.foo.b, public.foo.a
+                                                               ->  Seq Scan on foo  (cost=0.00..961.00 rows=28700 width=8)
+         ->  HashAggregate  (cost=9116.27..9277.71 rows=4305 width=28)
+               Group By: "rollup".unnamed_attr_1, "rollup".unnamed_attr_2, "rollup"."grouping", "rollup"."group_id"
+               ->  Subquery Scan "rollup"  (cost=8018.50..9062.46 rows=1435 width=28)
+                     ->  Redistribute Motion 3:3  (slice2; segments: 3)  (cost=8018.50..9019.41 rows=1435 width=28)
+                           Hash Key: public.foo.a, public.foo.b, "grouping", group_id()
+                           ->  GroupAggregate  (cost=8018.50..8933.31 rows=1435 width=28)
+                                 Group By: public.foo.a
+                                 ->  Sort  (cost=8018.50..8233.75 rows=28700 width=8)
+                                       Sort Key: public.foo.a
+                                       ->  Seq Scan on foo  (cost=0.00..961.00 rows=28700 width=8)
+ Settings:  optimizer=on
+ Optimizer status: legacy query optimizer
+(30 rows)
+```
+
+Since this query is not supported by GPORCA, HAWQ falls back to the legacy optimizer.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/query/gporca/query-gporca-features.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/query/gporca/query-gporca-features.html.md.erb b/markdown/query/gporca/query-gporca-features.html.md.erb
new file mode 100644
index 0000000..4941866
--- /dev/null
+++ b/markdown/query/gporca/query-gporca-features.html.md.erb
@@ -0,0 +1,215 @@
+---
+title: GPORCA Features and Enhancements
+---
+
+GPORCA includes enhancements for specific types of queries and operations.  GPORCA also includes these optimization enhancements:
+
+-   Improved join ordering
+-   Join-Aggregate reordering
+-   Sort order optimization
+-   Data skew estimates included in query optimization
+
+## <a id="topic_dwy_zml_gr"></a>Queries Against Partitioned Tables
+
+GPORCA includes these enhancements for queries against partitioned tables:
+
+-   Partition elimination is improved.
+-   Query plan can contain the `Partition selector` operator.
+-   Partitions are not enumerated in `EXPLAIN` plans.
+
+    For queries that involve static partition selection where the partitioning key is compared to a constant, GPORCA lists the number of partitions to be scanned in the `EXPLAIN` output under the Partition Selector operator. This example Partition Selector operator shows the filter and number of partitions selected:
+
+    ``` pre
+    Partition Selector for Part_Table (dynamic scan id: 1) 
+           Filter: a > 10
+           Partitions selected:  1 (out of 3)
+    ```
+
+    For queries that involve dynamic partition selection where the partitioning key is compared to a variable, the number of partitions that are scanned will be known only during query execution. The partitions selected are not shown in the `EXPLAIN` output.
+
+-   Plan size is independent of number of partitions.
+-   Out of memory errors caused by number of partitions are reduced.
+
+This example `CREATE TABLE` command creates a range partitioned table.
+
+``` sql
+CREATE TABLE sales(order_id int, item_id int, amount numeric(15,2), 
+      date date, yr_qtr int)
+   RANGE PARTITIONED BY yr_qtr;
+```
+
+GPORCA improves on these types of queries against partitioned tables:
+
+-   Full table scan. Partitions are not enumerated in plans.
+
+    ``` sql
+    SELECT * FROM sales;
+    ```
+
+-   Query with a constant filter predicate. Partition elimination is performed.
+
+    ``` sql
+    SELECT * FROM sales WHERE yr_qtr = 201201;
+    ```
+
+-   Range selection. Partition elimination is performed.
+
+    ``` sql
+    SELECT * FROM sales WHERE yr_qtr BETWEEN 201301 AND 201404 ;
+    ```
+
+-   Joins involving partitioned tables. In this example, the partitioned dimension table *date\_dim* is joined with fact table *catalog\_sales*:
+
+    ``` sql
+    SELECT * FROM catalog_sales
+       WHERE date_id IN (SELECT id FROM date_dim WHERE month=12);
+    ```
+
+## <a id="topic_vph_wml_gr"></a>Queries that Contain Subqueries
+
+GPORCA handles subqueries more efficiently. A subquery is a query that is nested inside an outer query block. In the following query, the `SELECT` in the `WHERE` clause is a subquery.
+
+``` sql
+SELECT * FROM part
+  WHERE price > (SELECT avg(price) FROM part);
+```
+
+GPORCA also handles queries that contain a correlated subquery (CSQ) more efficiently. A correlated subquery is a subquery that uses values from the outer query. In the following query, the `price` column is used in both the outer query and the subquery.
+
+``` sql
+SELECT * FROM part p1
+  WHERE price > (SELECT avg(price) FROM part p2 
+  WHERE  p2.brand = p1.brand);
+```
+
+GPORCA generates more efficient plans for the following types of subqueries:
+
+-   CSQ in the `SELECT` list.
+
+    ``` sql
+    SELECT *,
+     (SELECT min(price) FROM part p2 WHERE p1.brand = p2.brand)
+     AS foo
+    FROM part p1;
+    ```
+
+-   CSQ in disjunctive (`OR`) filters.
+
+    ``` sql
+    SELECT * FROM part p1 WHERE p_size > 40 OR 
+          p_retailprice > 
+          (SELECT avg(p_retailprice) 
+              FROM part p2 
+              WHERE p2.p_brand = p1.p_brand);
+    ```
+
+-   Nested CSQ with skip level correlations
+
+    ``` sql
+    SELECT * FROM part p1 WHERE p1.p_partkey 
+    IN (SELECT p_partkey FROM part p2 WHERE p2.p_retailprice = 
+         (SELECT min(p_retailprice)
+           FROM part p3 
+           WHERE p3.p_brand = p1.p_brand)
+    );
+    ```
+
+    **Note:** Nested CSQ with skip level correlations are not supported by the legacy query optimizer.
+
+-   CSQ with aggregate and inequality. This example contains a CSQ with an inequality.
+
+    ``` sql
+    SELECT * FROM part p1 WHERE p1.p_retailprice =
+     (SELECT min(p_retailprice) FROM part p2 WHERE p2.p_brand <> p1.p_brand);
+    ```
+
+<!-- -->
+
+-   CSQ that must return one row.
+
+    ``` sql
+    SELECT p_partkey, 
+      (SELECT p_retailprice FROM part p2 WHERE p2.p_brand = p1.p_brand )
+    FROM part p1;
+    ```
+
+## <a id="topic_c3v_rml_gr"></a>Queries that Contain Common Table Expressions
+
+GPORCA handles queries that contain the `WITH` clause. The `WITH` clause, also known as a common table expression (CTE), generates temporary tables that exist only for the query. This example query contains a CTE.
+
+``` sql
+WITH v AS (SELECT a, sum(b) as s FROM T WHERE c < 10 GROUP BY a)
+  SELECT * FROM v AS v1, v AS v2
+  WHERE v1.a <> v2.a AND v1.s < v2.s;
+```
+
+As part of query optimization, GPORCA can push down predicates into a CTE. In the following example query, GPORCA pushes the equality predicates into the CTE.
+
+``` sql
+WITH v AS (SELECT a, sum(b) as s FROM T GROUP BY a)
+  SELECT *
+  FROM v as v1, v as v2, v as v3
+  WHERE v1.a < v2.a
+    AND v1.s < v3.s
+    AND v1.a = 10
+    AND v2.a = 20
+    AND v3.a = 30;
+```
+
+GPORCA can handle these types of CTEs:
+
+-   CTE that defines one or multiple tables. In this query, the CTE defines two tables.
+
+    ``` sql
+    WITH cte1 AS (SELECT a, sum(b) as s FROM T 
+                   where c < 10 GROUP BY a),
+          cte2 AS (SELECT a, s FROM cte1 where s > 1000)
+      SELECT *
+      FROM cte1 as v1, cte2 as v2, cte2 as v3
+      WHERE v1.a < v2.a AND v1.s < v3.s;
+    ```
+
+-   Nested CTEs.
+
+    ``` sql
+    WITH v AS (WITH w AS (SELECT a, b FROM foo 
+                          WHERE b < 5) 
+               SELECT w1.a, w2.b 
+               FROM w AS w1, w AS w2 
+               WHERE w1.a = w2.a AND w1.a > 2)
+      SELECT v1.a, v2.a, v2.b
+      FROM v as v1, v as v2
+      WHERE v1.a < v2.a; 
+    ```
+
+## <a id="topic_plx_mml_gr"></a>DML Operation Enhancements with GPORCA
+
+GPORCA contains enhancements for DML operations such as `INSERT`.
+
+-   A DML node in a query plan is a query plan operator.
+    -   It can appear anywhere in the plan as a regular node (top slice only for now)
+    -   It can have consumers
+-   New query plan operator `Assert` is used for constraints checking.
+
+    This example plan shows the `Assert` operator.
+
+    ```
+    QUERY PLAN
+    ------------------------------------------------------------
+     Insert  (cost=0.00..4.61 rows=3 width=8)
+       ->  Assert  (cost=0.00..3.37 rows=3 width=24)
+             Assert Cond: (dmlsource.a > 2) IS DISTINCT FROM 
+    false
+             ->  Assert  (cost=0.00..2.25 rows=3 width=24)
+                   Assert Cond: NOT dmlsource.b IS NULL
+                   ->  Result  (cost=0.00..1.14 rows=3 width=24)
+                         ->  Table Scan on dmlsource
+    ```
+
+## <a id="topic_anl_t3t_pv"></a>Queries with Distinct Qualified Aggregates (DQA)
+
+GPORCA improves performance for queries that contain distinct qualified aggregates (DQA) without a grouping column and when the table is not distributed on the columns used by the DQA. When encountering these types of queries, GPORCA uses an alternative plan that evaluates the aggregate functions in three stages (local, intermediate, and global aggregations).
+
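+The following query is a minimal sketch of this query shape: a single distinct-qualified aggregate with no grouping column (the table and column names are placeholders):
+
+``` sql
+SELECT count(DISTINCT customer_id) FROM sales;
+```
+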
+See [optimizer\_prefer\_scalar\_dqa\_multistage\_agg](../../reference/guc/parameter_definitions.html#optimizer_prefer_scalar_dqa_multistage_agg) for information on the configuration parameter that controls this behavior.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/query/gporca/query-gporca-limitations.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/query/gporca/query-gporca-limitations.html.md.erb b/markdown/query/gporca/query-gporca-limitations.html.md.erb
new file mode 100644
index 0000000..b63f0d2
--- /dev/null
+++ b/markdown/query/gporca/query-gporca-limitations.html.md.erb
@@ -0,0 +1,37 @@
+---
+title: GPORCA Limitations
+---
+
+<span class="shortdesc">There are limitations in HAWQ when GPORCA is enabled. GPORCA and the legacy query optimizer currently coexist in HAWQ because GPORCA does not support all HAWQ features. </span>
+
+
+## <a id="topic_kgn_vxl_vp"></a>Unsupported SQL Query Features
+
+These HAWQ features are unsupported when GPORCA is enabled:
+
+-   Indexed expressions
+-   `PERCENTILE` window function
+-   External parameters
+-   SortMergeJoin (SMJ)
+-   Ordered aggregations
+-   These analytics extensions:
+    -   CUBE
+    -   Multiple grouping sets
+-   These scalar operators:
+    -   `ROW`
+    -   `ROWCOMPARE`
+    -   `FIELDSELECT`
+-   Multiple `DISTINCT` qualified aggregate functions
+-   Inverse distribution functions
+
+## <a id="topic_u4t_vxl_vp"></a>Performance Regressions
+
+When GPORCA is enabled in HAWQ, the following features are known performance regressions:
+
+-   Short running queries - For GPORCA, short running queries might encounter additional overhead due to GPORCA enhancements for determining an optimal query execution plan.
+-   `ANALYZE` - For GPORCA, the `ANALYZE` command generates root partition statistics for partitioned tables. For the legacy optimizer, these statistics are not generated.
+-   DML operations - For GPORCA, DML enhancements including the support of updates on partition and distribution keys might require additional overhead.
+
+Also, the enhanced functionality that these features provide compared to previous versions can add to the time GPORCA requires to execute SQL statements that use them.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/query/gporca/query-gporca-notes.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/query/gporca/query-gporca-notes.html.md.erb b/markdown/query/gporca/query-gporca-notes.html.md.erb
new file mode 100644
index 0000000..ed943e4
--- /dev/null
+++ b/markdown/query/gporca/query-gporca-notes.html.md.erb
@@ -0,0 +1,28 @@
+---
+title: Considerations when Using GPORCA
+---
+
+<span class="shortdesc"> To execute queries optimally with GPORCA, consider certain criteria for the query. </span>
+
+Ensure the following criteria are met:
+
+-   The table does not contain multi-column partition keys.
+-   The table does not contain multi-level partitioning.
+-   The query does not run against master-only tables such as the system table *pg\_attribute*.
+-   Statistics have been collected on the root partition of a partitioned table.
+
+If the partitioned table contains more than 20,000 partitions, consider a redesign of the table schema.
+
+GPORCA generates minidumps to describe the optimization context for a given query. Use the minidump files to analyze HAWQ issues. The minidump file is located under the master data directory and uses the following naming format:
+
+`Minidump_date_time.mdp`
+
+For information about the minidump file, see the server configuration parameter `optimizer_minidump`.
+
+When the `EXPLAIN ANALYZE` command uses GPORCA, the `EXPLAIN` plan shows only the number of partitions that are eliminated. The scanned partitions are not shown. To show the names of the scanned partitions in the segment logs, set the server configuration parameter `gp_log_dynamic_partition_pruning` to `on`. This example `SET` command enables the parameter:
+
+``` sql
+SET gp_log_dynamic_partition_pruning = on;
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/query/gporca/query-gporca-optimizer.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/query/gporca/query-gporca-optimizer.html.md.erb b/markdown/query/gporca/query-gporca-optimizer.html.md.erb
new file mode 100644
index 0000000..11814f8
--- /dev/null
+++ b/markdown/query/gporca/query-gporca-optimizer.html.md.erb
@@ -0,0 +1,39 @@
+---
+title: About GPORCA
+---
+
+In HAWQ, you can use GPORCA or the legacy query optimizer.
+
+**Note:** To use the GPORCA query optimizer, you must be running a version of HAWQ built with GPORCA, and GPORCA must be enabled in your HAWQ deployment.
+
+These sections describe GPORCA functionality and usage:
+
+-   **[Overview of GPORCA](../../query/gporca/query-gporca-overview.html)**
+
+    GPORCA extends the planning and optimization capabilities of the HAWQ legacy optimizer.
+
+-   **[GPORCA Features and Enhancements](../../query/gporca/query-gporca-features.html)**
+
+    GPORCA includes enhancements for specific types of queries and operations:
+
+-   **[Enabling GPORCA](../../query/gporca/query-gporca-enable.html)**
+
+    Precompiled versions of HAWQ that include the GPORCA query optimizer enable it by default; no additional configuration is required. To use the GPORCA query optimizer in a HAWQ installation built from source, your build must include GPORCA. You must also enable specific HAWQ server configuration parameters at or after install time:
+
+-   **[Considerations when Using GPORCA](../../query/gporca/query-gporca-notes.html)**
+
+    To execute queries optimally with GPORCA, consider certain criteria for the query.
+
+-   **[Determining The Query Optimizer In Use](../../query/gporca/query-gporca-fallback.html)**
+
+    When GPORCA is enabled, you can determine if HAWQ is using GPORCA or is falling back to the legacy query optimizer.
+
+-   **[Changed Behavior with GPORCA](../../query/gporca/query-gporca-changed.html)**
+
+    When GPORCA is enabled, HAWQ's behavior changes. This topic describes these changes.
+
+-   **[GPORCA Limitations](../../query/gporca/query-gporca-limitations.html)**
+
+    There are limitations in HAWQ when GPORCA is enabled. GPORCA and the legacy query optimizer currently coexist in HAWQ because GPORCA does not support all HAWQ features.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/query/gporca/query-gporca-overview.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/query/gporca/query-gporca-overview.html.md.erb b/markdown/query/gporca/query-gporca-overview.html.md.erb
new file mode 100644
index 0000000..56f97eb
--- /dev/null
+++ b/markdown/query/gporca/query-gporca-overview.html.md.erb
@@ -0,0 +1,23 @@
+---
+title: Overview of GPORCA
+---
+
+<span class="shortdesc">GPORCA extends the planning and optimization capabilities of the HAWQ legacy optimizer. </span> GPORCA is extensible and achieves better optimization in multi-core architecture environments. When GPORCA is available in your HAWQ installation and enabled, HAWQ uses GPORCA to generate an execution plan for a query when possible.
+
+GPORCA also enhances HAWQ query performance tuning in the following areas:
+
+-   Queries against partitioned tables
+-   Queries that contain a common table expression (CTE)
+-   Queries that contain subqueries
+
+The legacy and GPORCA query optimizers coexist in HAWQ. The default query optimizer is GPORCA. When GPORCA is available and enabled in your HAWQ installation, HAWQ uses GPORCA to generate an execution plan for a query when possible. If GPORCA cannot be used, the legacy query optimizer is used.
+
+The following flow chart shows how GPORCA fits into the query planning architecture:
+
+<img src="../../images/gporca.png" id="topic1__image_rf5_svc_fv" class="image" width="672" />
+
+You can inspect the log to determine whether GPORCA or the legacy query optimizer produced the plan. The log message "Optimizer produced plan" indicates that GPORCA generated the plan for your query. If the legacy query optimizer generated the plan, the log message reads "Planner produced plan". See [Determining The Query Optimizer In Use](query-gporca-fallback.html#topic1).
+
+**Note:** All legacy query optimizer (planner) server configuration parameters are ignored by GPORCA. However, if HAWQ falls back to the legacy optimizer, the planner server configuration parameters will impact the query plan generation.
+
+



[42/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/admin/BackingUpandRestoringHAWQDatabases.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/admin/BackingUpandRestoringHAWQDatabases.html.md.erb b/markdown/admin/BackingUpandRestoringHAWQDatabases.html.md.erb
new file mode 100644
index 0000000..78b0dec
--- /dev/null
+++ b/markdown/admin/BackingUpandRestoringHAWQDatabases.html.md.erb
@@ -0,0 +1,373 @@
+---
+title: Backing Up and Restoring HAWQ
+---
+
+This chapter provides information on backing up and restoring databases in a HAWQ system.
+
+As an administrator, you will need to back up and restore your database. HAWQ provides three utilities to help you back up your data:
+
+-   `gpfdist`
+-   PXF
+-   `pg_dump`
+
+`gpfdist` and PXF are parallel loading and unloading tools that provide the best performance. You can use `pg_dump`, a non-parallel utility inherited from PostgreSQL.
+
+In addition, in some situations you should back up your raw data from ETL processes.
+
+This section describes these three utilities, as well as raw data backup, to help you decide what fits your needs.
+
+## <a id="usinggpfdistorpxf"></a>About gpfdist and PXF 
+
+You can perform a parallel backup in HAWQ using `gpfdist` or PXF to unload all data to external tables. Backup files can reside on a local file system or HDFS. To recover tables, you can load data back from external tables to the database.
+
+### <a id="performingaparallelbackup"></a>Performing a Parallel Backup 
+
+1.  Check the database size to ensure that the file system has enough space to save the backed up files.
+2.  Use the `pg_dump` utility to dump the schema of the target database.
+3.  Create a writable external table for each table to back up to that database.
+4.  Load table data into the newly created external tables.
+
+>    **Note:** Put the insert statements in a single transaction to prevent problems if you perform any update operations during the backup.
+
+
+### <a id="restoringfromabackup"></a>Restoring from a Backup 
+
+1.  Create a database to recover to.
+2.  Recreate the schema from the schema file \(created during the `pg_dump` process\).
+3.  Create a readable external table for each table in the database.
+4.  Load data from the external table to the actual table.
+5.  Run the `ANALYZE` command once loading is complete. This ensures that the query planner generates an optimal plan based on up-to-date table statistics.
+
+### <a id="differencesbetweengpfdistandpxf"></a>Differences between gpfdist and PXF 
+
+`gpfdist` and PXF differ in the following ways:
+
+-   `gpfdist` stores backup files on the local file system, while PXF stores files on HDFS.
+-   `gpfdist` supports only plain text format, while PXF also supports binary formats such as Avro, as well as custom formats.
+-   `gpfdist` does not support generating compressed files, while PXF supports compression \(you can specify a compression codec used in Hadoop such as `org.apache.hadoop.io.compress.GzipCodec`\).
+-   Both `gpfdist` and PXF have fast loading performance, but `gpfdist` is much faster than PXF.
+
+## <a id="usingpg_dumpandpg_restore"></a>About pg\_dump and pg\_restore 
+
+HAWQ supports the PostgreSQL backup and restore utilities, `pg_dump` and `pg_restore`. The `pg_dump` utility creates a single, large dump file in the master host containing the data from all active segments. The `pg_restore` utility restores a HAWQ database from the archive created by `pg_dump`. In most cases, this is probably not practical, as there is most likely not enough disk space in the master host for creating a single backup file of an entire distributed database. HAWQ supports these utilities in case you are migrating data from PostgreSQL to HAWQ.
+
+To create a backup archive for database `mydb`:
+
+```shell
+$ pg_dump -Ft -f mydb.tar mydb
+```
+
+To create a compressed backup using custom format and compression level 3:
+
+```shell
+$ pg_dump -Fc -Z3 -f mydb.dump mydb
+```
+
+To restore from an archive using `pg_restore`:
+
+```shell
+$ pg_restore -d new_db mydb.dump
+```
+
+## <a id="aboutbackinguprawdata"></a>About Backing Up Raw Data 
+
+Parallel backup using `gpfdist` or PXF works fine in most cases. There are a couple of situations where you cannot perform parallel backup and restore operations:
+
+-   Performing periodic incremental backups.
+-   Dumping a large data volume to external tables - this process takes a long time.
+
+In such situations, you can back up raw data generated during ETL processes and reload it into HAWQ. This provides the flexibility to choose where you store backup files.
+
+## <a id="estimatingthebestpractice"></a>Selecting a Backup Strategy/Utility 
+
+The table below summarizes the differences between the four approaches discussed above.
+
+<table>
+  <tr>
+    <th></th>
+    <th><code>gpfdist</code></th>
+    <th>PXF</th>
+    <th><code>pg_dump</code></th>
+    <th>Raw Data Backup</th>
+  </tr>
+  <tr>
+    <td><b>Parallel</b></td>
+    <td>Yes</td>
+    <td>Yes</td>
+    <td>No</td>
+    <td>No</td>
+  </tr>
+  <tr>
+    <td><b>Incremental Backup</b></td>
+    <td>No</td>
+    <td>No</td>
+    <td>No</td>
+    <td>Yes</td>
+  </tr>
+  <tr>
+    <td><b>Backup Location</b></td>
+    <td>Local FS</td>
+    <td>HDFS</td>
+    <td>Local FS</td>
+    <td>Local FS, HDFS</td>
+  </tr>
+  <tr>
+    <td><b>Format</b></td>
+    <td>Text, CSV</td>
+    <td>Text, CSV, Custom</td>
+    <td>Text, Tar, Custom</td>
+    <td>Depends on the format of the raw data</td>
+  </tr>
+  <tr>
+<td><b>Compression</b></td><td>No</td><td>Yes</td><td>Custom format only</td><td>Optional</td></tr>
+<tr><td><b>Scalability</b></td><td>Good</td><td>Good</td><td>---</td><td>Good</td></tr>
+<tr><td><b>Performance</b></td><td>Fast loading, Fast unloading</td><td>Fast loading, Normal unloading</td><td>---</td><td>Fast (Just file copy)</td></tr>
+</table>
+
+## <a id="estimatingspacerequirements"></a>Estimating Space Requirements 
+
+Before you back up your database, ensure that you have enough space to store backup files. This section describes how to get the database size and estimate space requirements.
+
+-   Use `hawq_toolkit` to query the size of the database you want to back up.
+
+    ```
+    mydb=# SELECT sodddatsize FROM hawq_toolkit.hawq_size_of_database WHERE sodddatname='mydb';
+    ```
+
+    If tables in your database are compressed, this query shows the compressed size of the database.
+
+-   Estimate the total size of the backup files.
+    -   If your database tables and backup files are both compressed, you can use the value `sodddatsize` as an estimate value.
+    -   If your database tables are compressed and backup files are not, you need to multiply `sodddatsize` by the compression ratio. Although this depends on the compression algorithms, you can use an empirical value such as 300%.
+    -   If your backup files are compressed and database tables are not, you need to divide `sodddatsize` by the compression ratio.
+-   Get space requirement.
+    -   If you use HDFS with PXF, the space requirement is `size_of_backup_files * replication_factor`.
+
+    -   If you use gpfdist, the space requirement for each gpfdist instance is `size_of_backup_files / num_gpfdist_instances` since table data will be evenly distributed to all `gpfdist` instances.
+
+
+## <a id="usinggpfdist"></a>Using gpfdist 
+
+This section discusses `gpfdist` and shows an example of how to backup and restore HAWQ database.
+
+`gpfdist` is HAWQ's parallel file distribution program. It is used by readable external tables and `hawq load` to serve external table files to all HAWQ segments in parallel. It is used by writable external tables to accept output streams from HAWQ segments in parallel and write them out to a file.
+
+To use `gpfdist`, start the `gpfdist` server program on the host where you want to store backup files. You can start multiple `gpfdist` instances on the same host or on different hosts. For each `gpfdist` instance, you specify a directory from which `gpfdist` will serve files for readable external tables or create output files for writable external tables. For example, if you have a dedicated machine for backup with two disks, you can start two `gpfdist` instances, each using one disk:
+
+![](../mdimages/gpfdist_instances_backup.png "Deploying multiple gpfdist instances on a backup host")
+
+You can also run `gpfdist` instances on each segment host. During backup, table data will be evenly distributed to all `gpfdist` instances specified in the `LOCATION` clause in the `CREATE EXTERNAL TABLE` definition.
+
+![](../mdimages/gpfdist_instances.png "Deploying gpfdist instances on each segment host")
+
+### <a id="example"></a>Example 
+
+This example of using `gpfdist` backs up and restores a 1TB `tpch` database. To do so, start two `gpfdist` instances on the backup host `sdw1` with two 1TB disks \(one disk mounts at `/data1`, the other at `/data2`\).
+
+#### <a id="usinggpfdisttobackupthetpchdatabase"></a>Using gpfdist to Back Up the tpch Database 
+
+1.  Create backup locations and start the `gpfdist` instances.
+
+    In this example, issuing the first command creates two folders on two different disks with the same postfix `backup/tpch_20140627`. These folders are labeled as backups of the `tpch` database on 2014-06-27. In the next two commands, the example shows two `gpfdist` instances, one using port 8080, and another using port 8081:
+
+    ```shell
+    sdw1$ mkdir -p /data1/gpadmin/backup/tpch_20140627 /data2/gpadmin/backup/tpch_20140627
+    sdw1$ gpfdist -d /data1/gpadmin/backup/tpch_20140627 -p 8080 &
+    sdw1$ gpfdist -d /data2/gpadmin/backup/tpch_20140627 -p 8081 &
+    ```
+
+2.  Save the schema for the database:
+
+    ```shell
+    master_host$ pg_dump --schema-only -f tpch.schema tpch
+    master_host$ scp tpch.schema sdw1:/data1/gpadmin/backup/tpch_20140627
+    ```
+
+    On the HAWQ master host, use the�`pg_dump` utility to save the schema of the tpch database to the file tpch.schema. Copy the schema file to the backup location to restore the database schema.
+
+3.  Create a writable external table for each table in the database:
+
+    ```shell
+    master_host$ psql tpch
+    ```
+    ```sql
+    tpch=# CREATE WRITABLE EXTERNAL TABLE wext_orders (LIKE orders)
+    tpch-# LOCATION('gpfdist://sdw1:8080/orders1.csv', 'gpfdist://sdw1:8081/orders2.csv') FORMAT 'CSV';
+    tpch=# CREATE WRITABLE EXTERNAL TABLE wext_lineitem (LIKE lineitem)
+    tpch-# LOCATION('gpfdist://sdw1:8080/lineitem1.csv', 'gpfdist://sdw1:8081/lineitem2.csv') FORMAT 'CSV';
+    ```
+
+    The sample shows two tables in the `tpch` database, `orders` and `lineitem`, and creates two corresponding writable external tables. Specify a location for each `gpfdist` instance in the `LOCATION` clause. This sample uses the CSV text format, but you can also choose other delimited text formats. For more information, see the `CREATE EXTERNAL TABLE` SQL command.
+
+4.  Unload data to the external tables:
+
+    ```sql
+    tpch=# BEGIN;
+    tpch=# INSERT INTO wext_orders SELECT * FROM orders;
+    tpch=# INSERT INTO wext_lineitem SELECT * FROM lineitem;
+    tpch=# COMMIT;
+    ```
+
+5.  **\(Optional\)** Stop `gpfdist` servers to free ports for other processes:
+
+    Find the process IDs and kill the processes:
+
+    ```shell
+    sdw1$ ps -ef | grep gpfdist
+    sdw1$ kill 612368; kill 612369
+    ```
+
+
+#### <a id="torecoverusinggpfdist"></a>Recovering Using gpfdist 
+
+1.  Restart `gpfdist` instances if they aren't running:
+
+    ```shell
+    sdw1$ gpfdist -d /data1/gpadmin/backup/tpch_20140627 -p 8080 &
+    sdw1$ gpfdist -d /data2/gpadmin/backup/tpch_20140627 -p 8081 &
+    ```
+
+2.  Create a new database and restore the schema:
+
+    ```shell
+    master_host$ createdb tpch2
+    master_host$ scp sdw1:/data1/gpadmin/backup/tpch_20140627/tpch.schema .
+    master_host$ psql -f tpch.schema -d tpch2
+    ```
+
+3.  Create a readable external table for each table:
+
+    ```shell
+    master_host$ psql tpch2
+    ```
+    
+    ```sql
+    tpch2=# CREATE EXTERNAL TABLE rext_orders (LIKE orders) LOCATION('gpfdist://sdw1:8080/orders1.csv', 'gpfdist://sdw1:8081/orders2.csv') FORMAT 'CSV';
+    tpch2=# CREATE EXTERNAL TABLE rext_lineitem (LIKE lineitem) LOCATION('gpfdist://sdw1:8080/lineitem1.csv', 'gpfdist://sdw1:8081/lineitem2.csv') FORMAT 'CSV';
+    ```
+
+    **Note:** The location clause is the same as the writable external table above.
+
+4.  Load data back from external tables:
+
+    ```sql
+    tpch2=# INSERT INTO orders SELECT * FROM rext_orders;
+    tpch2=# INSERT INTO lineitem SELECT * FROM rext_lineitem;
+    ```
+
+5.  Run the `ANALYZE` command after data loading:
+
+    ```sql
+    tpch2=# ANALYZE;
+    ```
+
+
+### <a id="troubleshootinggpfdist"></a>Troubleshooting gpfdist 
+
+Keep in mind that `gpfdist` is accessed at runtime by the segment instances. Therefore, you must ensure that the HAWQ segment hosts have network access to gpfdist. Since the `gpfdist` program is a web server, to test connectivity you can run the following command from each host in your HAWQ array \(segments and master\):
+
+```shell
+$ wget http://gpfdist_hostname:port/filename
+```
+
+Also, make sure that your `CREATE EXTERNAL TABLE` definition has the correct host name, port, and file names for `gpfdist`. The file names and paths specified should be relative to the directory where gpfdist is serving files \(the directory path used when you started the `gpfdist` program\). See “Defining External Tables - Examples”.
+
+## <a id="usingpxf"></a>Using PXF 
+
+HAWQ Extension Framework \(PXF\) is an extensible framework that allows HAWQ to query external system data. The details of how to install and use PXF can be found in [Using PXF with Unmanaged Data](../pxf/HawqExtensionFrameworkPXF.html).
+
+### <a id="usingpxftobackupthetpchdatabase"></a>Using PXF to Back Up the tpch Database 
+
+1.  Create a folder on HDFS for this backup:
+
+    ```shell
+    master_host$ hdfs dfs -mkdir -p /backup/tpch-2014-06-27
+    ```
+
+2.  Dump the database schema using `pg_dump` and store the schema file in a backup folder:
+
+    ```shell
+    master_host$ pg_dump --schema-only -f tpch.schema tpch
+    master_host$ hdfs dfs -copyFromLocal tpch.schema /backup/tpch-2014-06-27
+    ```
+
+3.  Create a writable external table for each table in the database:
+
+    ```shell
+    master_host$ psql tpch
+    ```
+    
+    ```sql
+    tpch=# CREATE WRITABLE EXTERNAL TABLE wext_orders (LIKE orders)
+    tpch-# LOCATION('pxf://namenode_host:51200/backup/tpch-2014-06-27/orders'
+    tpch-#           '?Profile=HdfsTextSimple'
+    tpch-#           '&COMPRESSION_CODEC=org.apache.hadoop.io.compress.SnappyCodec'
+    tpch-#          )
+    tpch-# FORMAT 'TEXT';
+
+    tpch=# CREATE WRITABLE EXTERNAL TABLE wext_lineitem (LIKE lineitem)
+    tpch-# LOCATION('pxf://namenode_host:51200/backup/tpch-2014-06-27/lineitem'
+    tpch-#           '?Profile=HdfsTextSimple'
+    tpch-#           '&COMPRESSION_CODEC=org.apache.hadoop.io.compress.SnappyCodec')
+    tpch-# FORMAT 'TEXT';
+    ```
+
+    Here, all backup files for the `orders` table go in the /backup/tpch-2014-06-27/orders folder, and all backup files for the `lineitem` table go in the /backup/tpch-2014-06-27/lineitem folder. This example uses Snappy compression to save disk space.
+
+4.  Unload the data to external tables:
+
+    ```sql
+    tpch=# BEGIN;
+    tpch=# INSERT INTO wext_orders SELECT * FROM orders;
+    tpch=# INSERT INTO wext_lineitem SELECT * FROM lineitem;
+    tpch=# COMMIT;
+    ```
+
+5.  **\(Optional\)** Change the HDFS file replication factor for the backup folder. HDFS replicates each block into three blocks by default for reliability. You can decrease this number for your backup files if you need to:
+
+    ```shell
+    master_host$ hdfs dfs -setrep 2 /backup/tpch-2014-06-27
+    ```
+
+    **Note:** This only changes the replication factor for existing files; new files will still use the default replication factor.
+
+
+### <a id="torecoverfromapxfbackup"></a>Recovering a PXF Backup 
+
+1.  Create a new database and restore the schema:
+
+    ```shell
+    master_host$ createdb tpch2
+    master_host$ hdfs dfs -copyToLocal /backup/tpch-2014-06-27/tpch.schema .
+    master_host$ psql -f tpch.schema -d tpch2
+    ```
+
+2.  Create a readable external table for each table to restore:
+
+    ```shell
+    master_host$ psql tpch2
+    ```
+    
+    ```sql
+    tpch2=# CREATE EXTERNAL TABLE rext_orders (LIKE orders)
+    tpch2-# LOCATION('pxf://namenode_host:51200/backup/tpch-2014-06-27/orders?Profile=HdfsTextSimple')
+    tpch2-# FORMAT 'TEXT';
+    tpch2=# CREATE EXTERNAL TABLE rext_lineitem (LIKE lineitem)
+    tpch2-# LOCATION('pxf://namenode_host:51200/backup/tpch-2014-06-27/lineitem?Profile=HdfsTextSimple')
+    tpch2-# FORMAT 'TEXT';
+    ```
+
+    The location clause is almost the same as above, except you don't have to specify the `COMPRESSION_CODEC` because PXF will automatically detect it.
+
+3.  Load data back from external tables:
+
+    ```sql
+    tpch2=# INSERT INTO ORDERS SELECT * FROM rext_orders;
+    tpch2=# INSERT INTO LINEITEM SELECT * FROM rext_lineitem;
+    ```
+
+4.  Run `ANALYZE` after data loading:
+
+    ```sql
+    tpch2=# ANALYZE;
+    ```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/admin/ClusterExpansion.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/admin/ClusterExpansion.html.md.erb b/markdown/admin/ClusterExpansion.html.md.erb
new file mode 100644
index 0000000..d3d921b
--- /dev/null
+++ b/markdown/admin/ClusterExpansion.html.md.erb
@@ -0,0 +1,226 @@
+---
+title: Expanding a Cluster
+---
+
+Apache HAWQ supports dynamic node expansion. You can add segment nodes while HAWQ is running without having to suspend or terminate cluster operations.
+
+**Note:** This topic describes how to expand a cluster using the command-line interface. If you are using Ambari to manage your HAWQ cluster, see [Expanding the HAWQ Cluster](../admin/ambari-admin.html#amb-expand) in [Managing HAWQ Using Ambari](../admin/ambari-admin.html).
+
+## <a id="topic_kkc_tgb_h5"></a>Guidelines for Cluster Expansion 
+
+This topic provides some guidelines around expanding your HAWQ cluster.
+
+There are several recommendations to keep in mind when modifying the size of your running HAWQ cluster:
+
+-   When you add a new node, install both a DataNode and a physical segment on the new node. If you are using YARN to manage HAWQ resources, you must also configure a YARN NodeManager on the new node.
+-   After adding a new node, you should always rebalance HDFS data to maintain cluster performance.
+-   Adding or removing a node also necessitates an update to the HDFS metadata cache. This update will happen eventually, but can take some time. To speed the update of the metadata cache, execute **`select gp_metadata_cache_clear();`**.
+-   Note that for hash distributed tables, expanding the cluster will not immediately improve performance since hash distributed tables use a fixed number of virtual segments. In order to obtain better performance with hash distributed tables, you must redistribute the table to the updated cluster by either the [ALTER TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE AS](../reference/sql/CREATE-TABLE-AS.html) command (see the sketch following this list).
+-   If you are using hash tables, consider updating the `default_hash_table_bucket_number` server configuration parameter to a larger value after expanding the cluster but before redistributing the hash tables.
+
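+The following is a minimal sketch of redistributing a hash-distributed table with `CREATE TABLE AS`; the table, column, and new table names are placeholders. After verifying the copy, you would typically drop the original table and rename the new one in its place.
+
+```sql
+CREATE TABLE sales_redistributed AS SELECT * FROM sales DISTRIBUTED BY (trans_id);
+```
+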
+## <a id="task_hawq_expand"></a>Adding a New Node to an Existing HAWQ Cluster 
+
+The following procedure describes the steps required to add a node to an existing HAWQ cluster.  First ensure that the new node has been configured per the instructions found in [Apache HAWQ System Requirements](../requirements/system-requirements.html) and [Select HAWQ Host Machines](../install/select-hosts.html).
+
+For example purposes in this procedure, we are adding a new node named `sdw4`.
+
+1.  Prepare the target machine by checking operating system configurations and passwordless ssh. HAWQ requires passwordless ssh access to all cluster nodes. To set up passwordless ssh on the new node, perform the following steps:
+    1.  Log in to the master HAWQ node as gpadmin. If you are logged in as a different user, switch to the gpadmin user and source the `greenplum_path.sh` file.
+
+        ```shell
+        $ su - gpadmin
+        $ source /usr/local/hawq/greenplum_path.sh
+        ```
+
+    2.  On the HAWQ master node, change directories to /usr/local/hawq/etc. In this location, create a file called `new_hosts` and add the hostname\(s\) of the node\(s\) you wish to add to the existing HAWQ cluster, one per line. For example:
+
+        ```
+        sdw4
+        ```
+
+    3.  Log in to the master HAWQ node as root and source the `greenplum_path.sh` file.
+
+        ```shell
+        $ su - root
+        $ source /usr/local/hawq/greenplum_path.sh
+        ```
+
+    4.  Execute the following hawq command to set up passwordless ssh for root on the new host machine:
+
+        ```shell
+        $ hawq ssh-exkeys -e hawq_hosts -x new_hosts
+        ```
+
+    5.  Create the gpadmin user on the new host\(s\).
+
+        ```shell
+        $ hawq ssh -f new_hosts -e '/usr/sbin/useradd gpadmin'
+        $ hawq ssh -f new_hosts -e 'echo -e "changeme\nchangeme" | passwd gpadmin'
+        ```
+
+    6.  Switch to the gpadmin user and source the `greenplum_path.sh` file again.
+
+        ```shell
+        $ su - gpadmin
+        $ source /usr/local/hawq/greenplum_path.sh
+        ```
+
+    7.  Execute the following hawq command a second time to set up passwordless ssh for the gpadmin user:
+
+        ```shell
+        $ hawq ssh-exkeys -e hawq_hosts -x new_hosts
+        ```
+
+    8.  (Optional) If you enabled temporary password-based authentication while preparing/configuring your new HAWQ host system, turn off password-based authentication as described in [Apache HAWQ System Requirements](../requirements/system-requirements.html#topic_pwdlessssh).
+
+    8.  After setting up passwordless ssh, you can execute the following hawq command to check the target machine's configuration.
+
+        ```shell
+        $ hawq check -f new_hosts
+        ```
+
+        Configure operating system parameters as needed on the host machine. See the HAWQ installation documentation for a list of specific operating system parameters to configure.
+
+2.  Log in to the target host machine `sdw4` as the root user. If you are logged in as a different user, switch to the root account:
+
+    ```shell
+    $ su - root
+    ```
+
+3.  If not already installed, install the target machine \(`sdw4`\) as an HDFS DataNode.
+4.  If you have any user-defined function (UDF) libraries installed in your existing HAWQ cluster, install them on the new node.
+4.  Download and install HAWQ on the target machine \(`sdw4`\) as described in the [software build instructions](https://cwiki.apache.org/confluence/display/HAWQ/Build+and+Install) or in the distribution installation documentation.
+5.  On the HAWQ master node, check current cluster and host information using `psql`.
+
+    ```shell
+    $ psql -d postgres
+    ```
+    
+    ```sql
+    postgres=# SELECT * FROM gp_segment_configuration;
+    ```
+    
+    ```
+     registration_order | role | status | port  | hostname |    address    
+    --------------------+------+--------+-------+----------+---------------
+                     -1 | s    | u      |  5432 | sdw1     | 192.0.2.0
+                      0 | m    | u      |  5432 | mdw      | rhel64-1
+                      1 | p    | u      | 40000 | sdw3     | 192.0.2.2
+                      2 | p    | u      | 40000 | sdw2     | 192.0.2.1
+    (4 rows)
+    ```
+
+    At this point the new node does not appear in the cluster.
+
+6.  Execute the following command to confirm that HAWQ was installed on the new host:
+
+    ```shell
+    $ hawq ssh -f new_hosts -e "ls -l $GPHOME"
+    ```
+
+7.  On the master node, use a text editor to add hostname `sdw4` into the `hawq_hosts` file you created during HAWQ installation. \(If you do not already have this file, then you create it first and list all the nodes in your cluster.\)
+
+    ```
+    mdw
+    smdw
+    sdw1
+    sdw2
+    sdw3
+    sdw4
+    ```
+
+8.  On the master node, use a text editor to add hostname `sdw4` to the `$GPHOME/etc/slaves` file. This file lists all the segment host names for your cluster. For example:
+
+    ```
+    sdw1
+    sdw2
+    sdw3
+    sdw4
+    ```
+
+9.  Sync the `hawq-site.xml` and `slaves` configuration files to all nodes in the cluster \(as listed in hawq\_hosts\).
+
+    ```shell
+    $ hawq scp -f hawq_hosts hawq-site.xml slaves =:$GPHOME/etc/
+    ```
+
+10. Make sure that the HDFS DataNode service has started on the new node.
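+
+    One way to verify this is to check the HDFS DataNode report for the new host; for example (run as a user with HDFS superuser privileges, filtering for the new hostname):
+
+    ```shell
+    $ sudo -u hdfs hdfs dfsadmin -report | grep -A 2 sdw4
+    ```
+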
+11. On `sdw4`, create directories based on the values assigned to the following properties in `hawq-site.xml`. These new directories must be owned by the same database user \(for example, `gpadmin`\) who will execute the `hawq init segment` command in the next step.
+    -   `hawq_segment_directory`
+    -   `hawq_segment_temp_directory`
+
+    **Note:** The directory specified by `hawq_segment_directory` must be empty.
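+
+    For example, assuming hypothetical directory values of `/data/hawq/segment` and `/data/hawq/tmp` (substitute the values from your `hawq-site.xml`), you might run the following as root on `sdw4`:
+
+    ```shell
+    sdw4$ mkdir -p /data/hawq/segment /data/hawq/tmp
+    sdw4$ chown -R gpadmin:gpadmin /data/hawq/segment /data/hawq/tmp
+    ```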
+
+12. On `sdw4`, switch to the database user \(for example, `gpadmin`\), and initialize the segment.
+
+    ```shell
+    $ su - gpadmin
+    $ hawq init segment
+    ```
+
+13. On the master node, check current cluster and host information using `psql` to verify that the new `sdw4` node has initialized successfully.
+
+    ```shell
+    $ psql -d postgres
+    ```
+    
+    ```sql
+    postgres=# SELECT * FROM gp_segment_configuration ;
+    ```
+    
+    ```
+     registration_order | role | status | port  | hostname |    address    
+    --------------------+------+--------+-------+----------+---------------
+                     -1 | s    | u      |  5432 | sdw1     | 192.0.2.0
+                      0 | m    | u      |  5432 | mdw      | rhel64-1
+                      1 | p    | u      | 40000 | sdw3     | 192.0.2.2
+                      2 | p    | u      | 40000 | sdw2     | 192.0.2.1
+                      3 | p    | u      | 40000 | sdw4     | 192.0.2.3
+    (5 rows)
+    ```
+
+14. To maintain optimal cluster performance, rebalance HDFS data by running the following command:
+
+    ```shell
+    $ sudo -u hdfs hdfs balancer -threshold threshold_value
+    ```
+    
+    where *threshold\_value* represents the percentage by which a DataNode's disk usage may differ from overall disk usage in the cluster. Adjust the threshold value according to the needs of your production data and disks. The smaller the value, the longer the rebalance takes.
+
+    **Note:** If you do not specify a threshold, a default value of 20 is used. If a DataNode's disk usage differs from the cluster's overall disk usage by less than 20 percentage points, data on that node is not rebalanced. For example, if disk usage across all DataNodes in the cluster is 40% of the cluster's total disk-storage capacity, the balancer ensures that each DataNode's disk usage stays between 20% and 60% of that DataNode's disk-storage capacity. DataNodes whose disk usage falls within that range are not rebalanced.
+
+    Rebalance time is also affected by network bandwidth. You can adjust network bandwidth used by the balancer by using the following command:
+    
+    ```shell
+    $ sudo -u hdfs hdfs dfsadmin -setBalancerBandwidth network_bandwidth
+    ```
+    
+    The default value is 1 MB/s. Adjust the value according to your network.
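+
+    For example, to let the balancer use up to roughly 100 MB/s per DataNode (the value is specified in bytes per second; adjust for your network):
+
+    ```shell
+    $ sudo -u hdfs hdfs dfsadmin -setBalancerBandwidth 104857600
+    ```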
+
+15. Speed up the clearing of the metadata cache by using the following command:
+
+    ```shell
+    $ psql -d postgres
+    ```
+    
+    ```sql
+    postgres=# SELECT gp_metadata_cache_clear();
+    ```
+
+16. After expansion, if the new size of your cluster is greater than or equal to 4 nodes \(#nodes >= 4\), change the value of the `output.replace-datanode-on-failure` HDFS parameter in `hdfs-client.xml` to `false`.
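+
+    The corresponding entry in `hdfs-client.xml` would look similar to the following sketch (keep any other attributes already present for this property in your file):
+
+    ```xml
+    <property>
+        <name>output.replace-datanode-on-failure</name>
+        <value>false</value>
+    </property>
+    ```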
+
+17. (Optional) If you are using hash tables, adjust the `default_hash_table_bucket_number` server configuration property to reflect the cluster's new size. Update this configuration's value by multiplying the new number of nodes in the cluster by the appropriate amount indicated below.
+
+	|Number of Nodes After Expansion|Suggested default\_hash\_table\_bucket\_number value|
+	|---------------|------------------------------------------|
+	|<= 85|6 \* \#nodes|
+	|\> 85 and <= 102|5 \* \#nodes|
+	|\> 102 and <= 128|4 \* \#nodes|
+	|\> 128 and <= 170|3 \* \#nodes|
+	|\> 170 and <= 256|2 \* \#nodes|
+	|\> 256 and <= 512|1 \* \#nodes|
+	|\> 512|512| 
+   
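+    For example, for a cluster that now has 16 nodes (an illustrative value), you might set the parameter to 6 \* 16 = 96 using the `hawq config` utility on the master, then restart the cluster so the change takes effect:
+
+    ```shell
+    $ hawq config -c default_hash_table_bucket_number -v 96
+    $ hawq restart cluster
+    ```
+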
+18. If you are using hash distributed tables and wish to take advantage of the performance benefits of using a larger cluster, redistribute the data in all hash-distributed tables by using either the [ALTER TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE AS](../reference/sql/CREATE-TABLE-AS.html) command. You should redistribute the table data if you modified the `default_hash_table_bucket_number` configuration parameter. 
+
+
+	**Note:** The redistribution of table data can take a significant amount of time.
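+
+    For example, one way to redistribute a hash-distributed table (shown here with a hypothetical table named `sales` distributed by `id`) is to rebuild it with `CREATE TABLE AS` and then swap the names:
+
+    ```sql
+    CREATE TABLE sales_redistributed AS SELECT * FROM sales DISTRIBUTED BY (id);
+    DROP TABLE sales;
+    ALTER TABLE sales_redistributed RENAME TO sales;
+    ```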

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/admin/ClusterShrink.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/admin/ClusterShrink.html.md.erb b/markdown/admin/ClusterShrink.html.md.erb
new file mode 100644
index 0000000..33c5cc2
--- /dev/null
+++ b/markdown/admin/ClusterShrink.html.md.erb
@@ -0,0 +1,55 @@
+---
+title: Removing a Node
+---
+
+This topic outlines the proper procedure for removing a node from a HAWQ cluster.
+
+In general, you should not need to remove nodes manually from running HAWQ clusters. HAWQ isolates any nodes that HAWQ detects as failing due to hardware or other types of errors.
+
+## <a id="topic_p53_ct3_kv"></a>Guidelines for Removing a Node 
+
+If you do need to remove a node from a HAWQ cluster, keep in mind the following guidelines around removing nodes:
+
+-   Never remove more than two nodes at a time since the risk of data loss is high.
+-   Only remove nodes during system maintenance windows when the cluster is not busy or running queries.
+
+## <a id="task_oy5_ct3_kv"></a>Removing a Node from a Running HAWQ Cluster 
+
+The following is a typical procedure to remove a node from a running HAWQ cluster:
+
+1.  Login as gpadmin to the node that you wish to remove and source `greenplum_path.sh`.
+
+    ```shell
+    $ su - gpadmin
+    $ source /usr/local/hawq/greenplum_path.sh
+    ```
+
+2.  Make sure that there are no running QEs on the segment. Execute the following command to check for running QE processes:
+
+    ```shell
+    $ ps -ef | grep postgres
+    ```
+
+    In the output, look for processes that contain SQL commands such as INSERT or SELECT. For example:
+
+    ```shell
+    [gpadmin@rhel64-3 ~]$ ps -ef | grep postgres
+    gpadmin 3000 2999 0 Mar21 ? 00:00:08 postgres: port 40000, logger process
+    gpadmin 3003 2999 0 Mar21 ? 00:00:03 postgres: port 40000, stats collector process
+    gpadmin 3004 2999 0 Mar21 ? 00:00:50 postgres: port 40000, writer process
+    gpadmin 3005 2999 0 Mar21 ? 00:00:06 postgres: port 40000, checkpoint process
+    gpadmin 3006 2999 0 Mar21 ? 00:01:25 postgres: port 40000, segment resource manager
+    gpadmin 7880 2999 0 02:08 ? 00:00:00 postgres: port 40000, gpadmin postgres 192.0.2.0(33874) con11 seg0 cmd18 MPPEXEC INSERT
+    ```
+
+3.  Stop hawq on this segment by executing the following command:
+
+    ```shell
+    $ hawq stop segment
+    ```
+
+4.  On HAWQ master, remove the hostname of the segment from the `slaves` file. Then sync the `slaves` file to all nodes in the cluster by executing the following command:
+
+    ```shell
+    $ hawq scp -f hostfile slaves =:$GPHOME/etc/slaves
+    ```
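+
+    For the removal itself, you can edit `$GPHOME/etc/slaves` in a text editor, or use a one-line edit such as the following sketch, which assumes the segment being removed is `sdw3`:
+
+    ```shell
+    $ sed -i '/^sdw3$/d' $GPHOME/etc/slaves
+    ```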

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/admin/FaultTolerance.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/admin/FaultTolerance.html.md.erb b/markdown/admin/FaultTolerance.html.md.erb
new file mode 100644
index 0000000..fc9de93
--- /dev/null
+++ b/markdown/admin/FaultTolerance.html.md.erb
@@ -0,0 +1,52 @@
+---
+title: Understanding the Fault Tolerance Service
+---
+
+The fault tolerance service (FTS) enables HAWQ to continue operating in the event that a segment node fails. The fault tolerance service runs automatically and requires no additional configuration.
+
+Each segment runs a resource manager process that periodically sends (by default, every 30 seconds) the segment's status to the master's resource manager process. This interval is controlled by the `hawq_rm_segment_heartbeat_interval` server configuration parameter.
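+
+You can check the current interval with the `hawq config` utility on the master; this is a sketch, and if your version of `hawq config` does not support the `-s` option, view the parameter in `hawq-site.xml` instead:
+
+```shell
+$ hawq config -s hawq_rm_segment_heartbeat_interval
+```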
+
+When a segment encounters a critical error -- for example, a temporary directory on the segment fails due to a hardware error -- the segment reports the temporary directory failure to the HAWQ master through a heartbeat report. When the master receives the report, it marks the segment as DOWN in the `gp_segment_configuration` table. All changes to a segment's status are recorded in the `gp_configuration_history` catalog table, including the reason why the segment is marked as DOWN. When a segment is marked DOWN, the master does not run query executors on it. The failed segment is fault-isolated from the rest of the cluster.
+
+Besides disk failure, there are other reasons why a segment can be marked as DOWN. For example, if HAWQ is running in YARN mode, every segment should have a NodeManager (Hadoop's YARN service) running on it, so that the segment can be considered a resource to HAWQ. However, if the NodeManager on a segment is not operating properly, the segment will also be marked as DOWN in the `gp_segment_configuration` table. The corresponding reason for the failure is recorded in `gp_configuration_history`.
+
+**Note:** If a disk fails in a particular segment, the failure may cause either an HDFS error or a temporary directory error in HAWQ. HDFS errors are handled by the Hadoop HDFS service.
+
+##Viewing the Current Status of a Segment <a id="view_segment_status"></a>
+
+To view the current status of the segment, query the `gp_segment_configuration` table.
+
+If the status of a segment is DOWN, the "description" column displays the reason. The description can include one or more of the following reasons, separated by semicolons (";").
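+
+For example, a query along the following lines lists the segments that are not UP, together with the recorded reason:
+
+```sql
+SELECT registration_order, hostname, status, description
+FROM gp_segment_configuration
+WHERE status <> 'u';
+```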
+
+**Reason: heartbeat timeout**
+
+Master has not received a heartbeat from the segment. If you see this reason, make sure that HAWQ is running on the segment.
+
+If the segment reports a heartbeat at a later time, the segment is marked as UP.
+
+**Reason: failed probing segment**
+
+Master has probed the segment to verify that it is operating normally, and the segment response is NO.
+
+While a HAWQ instance is running, the query dispatcher may find that some query executors on a segment are not working normally. The resource manager process on the master then sends a message to that segment. When the segment's resource manager receives the message from the master, it checks whether its PostgreSQL postmaster process is working normally and sends a reply to the master. If the reply indicates that the segment's postmaster process is not working normally, the master marks the segment as DOWN with the reason "failed probing segment."
+
+Check the logs of the failed segment and try to restart the HAWQ instance.
+
+**Reason: communication error**
+
+Master cannot connect to the segment.
+
+Check the network connection between the master and the segment.
+
+**Reason: resource manager process was reset**
+
+If the timestamp of the segment resource manager process doesn't match the previous timestamp, the resource manager process on the segment has been restarted. In this case, the HAWQ master must return the resources on this segment and mark the segment as DOWN. If the master later receives a new heartbeat from this segment, it marks the segment as UP again.
+
+**Reason: no global node report**
+
+HAWQ is using YARN for resource management. No cluster report has been received for this segment. 
+
+Check that NodeManager is operating normally on this segment. 
+
+If not, try to start NodeManager on the segment. 
+After NodeManager is started, run `yarn node --list` to see if the node is in the list. If so, this segment is set to UP.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb b/markdown/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb
new file mode 100644
index 0000000..b4284be
--- /dev/null
+++ b/markdown/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb
@@ -0,0 +1,223 @@
+---
+title: HAWQ Filespaces and High Availability Enabled HDFS
+---
+
+If you initialized HAWQ without the HDFS High Availability \(HA\) feature, you can enable it by using the following procedure.
+
+## <a id="enablingthehdfsnamenodehafeature"></a>Enabling the HDFS NameNode HA Feature 
+
+To enable the HDFS NameNode HA feature for use with HAWQ, you need to perform the following tasks:
+
+1. Enable high availability in your HDFS cluster.
+1. Collect information about the target filespace.
+1. Stop the HAWQ cluster and back up the catalog. (**Note:** Ambari users must perform this manual step.)
+1. Move the filespace location using the command line tool. (**Note:** Ambari users must perform this manual step.)
+1. Reconfigure `${GPHOME}/etc/hdfs-client.xml` and `${GPHOME}/etc/hawq-site.xml` files. Then, synchronize updated configuration files to all HAWQ nodes.
+1. Start the HAWQ cluster and resynchronize the standby master after moving the filespace.
+
+
+### <a id="enablehahdfs"></a>Step 1: Enable High Availability in Your HDFS Cluster 
+
+Enable high availability for NameNodes in your HDFS cluster. See the documentation for your Hadoop distribution for instructions on how to do this. 
+
+**Note:** If you're using Ambari to manage your HDFS cluster, you can use the Enable NameNode HA Wizard. For example, [this Hortonworks HDP procedure](https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-user-guide/content/how_to_configure_namenode_high_availability.html) outlines how to do this in Ambari for HDP.
+
+### <a id="collectinginformationaboutthetargetfilespace"></a>Step 2: Collect Information about the Target Filespace 
+
+A default filespace named dfs\_system exists in the pg\_filespace catalog, and the pg\_filespace\_entry catalog table contains detailed information for each filespace.
+
+To move the filespace location to a HA-enabled HDFS location, you must move the data to a new path on your HA-enabled HDFS cluster.
+
+1.  Use the following SQL query to gather information about the filespace located on HDFS:
+
+    ```sql
+    SELECT
+        fsname, fsedbid, fselocation
+    FROM
+        pg_filespace AS sp, pg_filespace_entry AS entry, pg_filesystem AS fs
+    WHERE
+        sp.fsfsys = fs.oid AND fs.fsysname = 'hdfs' AND sp.oid = entry.fsefsoid
+    ORDER BY
+        entry.fsedbid;
+    ```
+
+    The sample output is as follows:
+
+    ```
+		  fsname | fsedbid | fselocation
+	--------------+---------+-------------------------------------------------
+	cdbfast_fs_c | 0       | hdfs://hdfs-cluster/hawq//cdbfast_fs_c
+	cdbfast_fs_b | 0       | hdfs://hdfs-cluster/hawq//cdbfast_fs_b
+	cdbfast_fs_a | 0       | hdfs://hdfs-cluster/hawq//cdbfast_fs_a
+	dfs_system   | 0       | hdfs://test5:9000/hawq/hawq-1459499690
+	(4 rows)
+    ```
+
+    The output contains the following:
+    - HDFS paths that share the same prefix
+    - Current filespace location
+
+    **Note:** If you see `{replica=3}` in the filespace location, ignore this part of the prefix. This is a known issue.
+
+2.  To enable HA HDFS, you need the filespace name and the common prefix of your HDFS paths. The filespace location is formatted like a URL.
+
+	If the previous filespace location is 'hdfs://test5:9000/hawq/hawq-1459499690' and the HA HDFS common prefix is 'hdfs://hdfs-cluster', then the new filespace location should be 'hdfs://hdfs-cluster/hawq/hawq-1459499690'.
+
+    ```
+    Filespace Name: dfs_system
+    Old location: hdfs://test5:9000/hawq/hawq-1459499690
+    New location: hdfs://hdfs-cluster/hawq/hawq-1459499690
+    ```
+
+### <a id="stoppinghawqclusterandbackupcatalog"></a>Step 3: Stop the HAWQ Cluster and Back Up the Catalog 
+
+**Note:** Ambari users must perform this manual step.
+
+When you enable HA HDFS, you are changing the HAWQ catalog and persistent tables. You cannot perform transactions while persistent tables are being updated. Therefore, before you move the filespace location, back up the catalog. This ensures that you do not lose data due to a hardware failure or during an operation \(such as killing the HAWQ process\).
+
+
+1. If you defined a custom port for HAWQ master, export the `PGPORT` environment variable. For example:
+
+	```shell
+	export PGPORT=9000
+	```
+
+1. Save the HAWQ master data directory, found in the `hawq_master_directory` property value in `hawq-site.xml`, to an environment variable.
+ 
+	```bash
+	export MDATA_DIR=/path/to/hawq_master_directory
+	```
+
+1.  Disconnect all workload connections. Check the active connections with:
+
+    ```shell
+    $ psql -p ${PGPORT} -c "SELECT * FROM pg_catalog.pg_stat_activity" -d template1
+    ```
+    where `${PGPORT}` corresponds to the port number you optionally customized for HAWQ master. 
+    
+
+2.  Issue a checkpoint:
+
+    ```shell
+    $ psql -p ${PGPORT} -c "CHECKPOINT" -d template1
+    ```
+
+3.  Shut down the HAWQ cluster:
+
+    ```shell
+    $ hawq stop cluster -a -M fast
+    ```
+
+4.  Copy the master data directory to a backup location:
+
+    ```shell
+    $ cp -r ${MDATA_DIR} /catalog/backup/location
+    ```
+	The master data directory contains the catalog. Fatal errors can occur due to hardware failure or if you fail to kill a HAWQ process before attempting a filespace location change. Make sure you back this directory up.
+
+### <a id="movingthefilespacelocation"></a>Step 4: Move the Filespace Location 
+
+**Note:** Ambari users must perform this manual step.
+
+HAWQ provides the command line tool, `hawq filespace`, to move the location of the filespace.
+
+1. If you defined a custom port for HAWQ master, export the `PGPORT` environment variable. For example:
+
+	```shell
+	export PGPORT=9000
+	```
+1. Run the following command to move a filespace location:
+
+	```shell
+	$ hawq filespace --movefilespace default --location=hdfs://hdfs-cluster/hawq_new_filespace
+	```
+	Specify `default` as the value of the `--movefilespace` option. Replace `hdfs://hdfs-cluster/hawq_new_filespace` with the new filespace location.
+
+#### **Important:** Potential Errors During Filespace Move
+
+Non-fatal errors can occur if you provide invalid input or if you have not stopped HAWQ before attempting a filespace location change. Check that you have followed the instructions from the beginning, or correct the input error before you re-run `hawq filespace`.
+
+Fatal errors can occur due to hardware failure or if you fail to kill a HAWQ process before attempting a filespace location change. When a fatal error occurs, you will see the message "PLEASE RESTORE MASTER DATA DIRECTORY" in the output. If this occurs, shut down the database and restore the `${MDATA_DIR}` that you backed up in Step 3.
+
+### <a id="configuregphomeetchdfsclientxml"></a>Step 5: Update HAWQ to Use NameNode HA by Reconfiguring hdfs-client.xml and hawq-site.xml 
+
+If you install and manage your cluster using command-line utilities, follow these steps to modify your HAWQ configuration to use the NameNode HA service.
+
+**Note:** These steps are not required if you use Ambari to manage HDFS and HAWQ, because Ambari makes these changes automatically after you enable NameNode HA.
+
+For command-line administrators:
+
+1. Edit the `${GPHOME}/etc/hdfs-client.xml` file on each segment and add the following NameNode properties:
+
+    ```xml
+    <property>
+     <name>dfs.ha.namenodes.hdpcluster</name>
+     <value>nn1,nn2</value>
+    </property>
+
+    <property>
+     <name>dfs.namenode.http-address.hdpcluster.nn1</name>
+     <value>ip-address-1.mycompany.com:50070</value>
+    </property>
+
+    <property>
+     <name>dfs.namenode.http-address.hdpcluster.nn2</name>
+     <value>ip-address-2.mycompany.com:50070</value>
+    </property>
+
+    <property>
+     <name>dfs.namenode.rpc-address.hdpcluster.nn1</name>
+     <value>ip-address-1.mycompany.com:8020</value>
+    </property>
+
+    <property>
+     <name>dfs.namenode.rpc-address.hdpcluster.nn2</name>
+     <value>ip-address-2.mycompany.com:8020</value>
+    </property>
+
+    <property>
+     <name>dfs.nameservices</name>
+     <value>hdpcluster</value>
+    </property>
+    ```
+
+    In the listing above:
+    * Replace `hdpcluster` with the actual service ID that is configured in HDFS.
+    * Replace `ip-address-1.mycompany.com:50070` and `ip-address-2.mycompany.com:50070` with the actual NameNode HTTP hosts and port numbers that are configured in HDFS.
+    * Replace `ip-address-1.mycompany.com:8020` and `ip-address-2.mycompany.com:8020` with the actual NameNode RPC hosts and port numbers that are configured in HDFS.
+    * The order of the NameNodes listed in `dfs.ha.namenodes.hdpcluster` is important for performance, especially when running secure HDFS. The first entry (`nn1` in the example above) should correspond to the active NameNode.
+
+2.  Change the following parameter in the `$GPHOME/etc/hawq-site.xml` file:
+
+    ```xml
+    <property>
+        <name>hawq_dfs_url</name>
+        <value>hdpcluster/hawq_default</value>
+        <description>URL for accessing HDFS.</description>
+    </property>
+    ```
+
+    In the listing above:
+    * Replace `hdpcluster` with the actual service ID that is configured in HDFS.
+    * Replace `/hawq_default` with the directory you want to use for storing data on HDFS. Make sure this directory exists and is writable.
+
+3. Copy the updated configuration files to all nodes in the cluster (as listed in `hawq_hosts`).
+
+	```shell
+	$ hawq scp -f hawq_hosts hdfs-client.xml hawq-site.xml =:$GPHOME/etc/
+	```
+
+### <a id="reinitializethestandbymaster"></a>Step 6: Restart the HAWQ Cluster and Resynchronize the Standby Master 
+
+1. Restart the HAWQ cluster:
+
+	```shell
+	$ hawq start cluster -a
+	```
+
+1. Moving the filespace to a new location renders the standby master catalog invalid. To update the standby, run the following command on the active master to resynchronize the standby master's catalog with the active master.
+
+	```shell
+	$ hawq init standby -n -M fast
+	```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/admin/HighAvailability.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/admin/HighAvailability.html.md.erb b/markdown/admin/HighAvailability.html.md.erb
new file mode 100644
index 0000000..0c2e32b
--- /dev/null
+++ b/markdown/admin/HighAvailability.html.md.erb
@@ -0,0 +1,37 @@
+---
+title: High Availability in HAWQ
+---
+
+A HAWQ cluster can be made highly available by providing fault-tolerant hardware, by enabling HAWQ or HDFS high-availability features, and by performing regular monitoring and maintenance procedures to ensure the health of all system components.
+
+Hardware components eventually fail either due to normal wear or to unexpected circumstances. Loss of power can lead to temporarily unavailable components. You can make a system highly available by providing redundant standbys for components that can fail so services can continue uninterrupted when a failure does occur. In some cases, the cost of redundancy is higher than a user's tolerance for interruption in service. When this is the case, the goal is to ensure that full service can be restored within an expected timeframe.
+
+With HAWQ, fault tolerance and data availability is achieved with:
+
+* [Hardware Level Redundancy (RAID and JBOD)](#ha_raid)
+* [Master Mirroring](#ha_master_mirroring)
+* [Dual Clusters](#ha_dual_clusters)
+
+## <a id="ha_raid"></a>Hardware Level Redundancy (RAID and JBOD) 
+
+As a best practice, HAWQ deployments should use RAID for master nodes and JBOD for segment nodes. Using these hardware-level systems provides high performance redundancy for single disk failure without having to go into database level fault tolerance. RAID and JBOD provide a lower level of redundancy at the disk level.
+
+## <a id="ha_master_mirroring"></a>Master Mirroring 
+
+There are two masters in a highly available cluster, a primary and a standby. As with segments, the master and standby should be deployed on different hosts so that the cluster can tolerate a single host failure. Clients connect to the primary master and queries can be executed only on the primary master. The standby master is kept up-to-date by replicating the write-ahead log (WAL) from the primary to the standby.
+
+## <a id="ha_dual_clusters"></a>Dual Clusters 
+
+You can add another level of redundancy to your deployment by maintaining two HAWQ clusters, both storing the same data.
+
+The two main methods for keeping data synchronized on dual clusters are "dual ETL" and "backup/restore."
+
+Dual ETL provides a complete standby cluster with the same data as the primary cluster. ETL (extract, transform, and load) refers to the process of cleansing, transforming, validating, and loading incoming data into a data warehouse. With dual ETL, this process is executed twice in parallel, once on each cluster, and is validated each time. It also allows data to be queried on both clusters, doubling the query throughput.
+
+Applications can take advantage of both clusters and also ensure that the ETL is successful and validated on both clusters.
+
+To maintain a dual cluster with the backup/restore method, create backups of the primary cluster and restore them on the secondary cluster. This method takes longer to synchronize data on the secondary cluster than the dual ETL strategy, but requires less application logic to be developed. Populating a second cluster with backups is ideal in use cases where data modifications and ETL are performed daily or less frequently.
+
+See [Backing Up and Restoring HAWQ](BackingUpandRestoringHAWQDatabases.html) for instructions on how to back up and restore HAWQ.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/admin/MasterMirroring.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/admin/MasterMirroring.html.md.erb b/markdown/admin/MasterMirroring.html.md.erb
new file mode 100644
index 0000000..b9352f0
--- /dev/null
+++ b/markdown/admin/MasterMirroring.html.md.erb
@@ -0,0 +1,144 @@
+---
+title: Using Master Mirroring
+---
+
+There are two masters in a HAWQ cluster -- a primary master and a standby master. Clients connect to the primary master, and queries can be executed only on the primary master.
+
+You deploy a backup or mirror of the master instance on a separate host machine from the primary master so that the cluster can tolerate a single host failure. A backup master or standby master serves as a warm standby if the primary master becomes non-operational. You create a standby master from the primary master while the primary is online.
+
+The primary master continues to provide services to users while HAWQ takes a transactional snapshot of the primary master instance. In addition to taking a transactional snapshot and deploying it to the standby master, HAWQ also records changes to the primary master. After HAWQ deploys the snapshot to the standby master, HAWQ deploys the updates to synchronize the standby master with the primary master.
+
+After the primary master and standby master are synchronized, HAWQ keeps the standby master up to date using walsender and walreceiver, write-ahead log (WAL)-based replication processes. The walreceiver is a standby master process. The walsender process is a primary master process. The two processes use WAL-based streaming replication to keep the primary and standby masters synchronized.
+
+Since the master does not house user data, only system catalog tables are synchronized between the primary and standby masters. When these tables are updated, changes are automatically copied to the standby master to keep it current with the primary.
+
+*Figure 1: Master Mirroring in HAWQ*
+
+![](../mdimages/standby_master.jpg)
+
+
+If the primary master fails, the replication process stops, and an administrator can activate the standby master. Upon activation of the standby master, the replicated logs reconstruct the state of the primary master at the time of the last successfully committed transaction. The activated standby then functions as the HAWQ master, accepting connections on the port specified when the standby master was initialized.
+
+If the master fails, the administrator uses command line tools or Ambari to instruct the standby master to take over as the new primary master. 
+
+**Tip:** You can configure a virtual IP address for the master and standby so that client programs do not have to switch to a different network address when the 'active' master changes. If the master host fails, the virtual IP address can be swapped to the actual acting master.
+
+##Configuring Master Mirroring <a id="standby_master_configure"></a>
+
+You can configure a new HAWQ system with a standby master during HAWQ's installation process, or you can add a standby master later. This topic assumes you are adding a standby master to an existing node in your HAWQ cluster.
+
+###Add a standby master to an existing system
+
+1. Ensure the host machine for the standby master has been installed with HAWQ and configured accordingly:
+    * The gpadmin system user has been created.
+    * HAWQ binaries are installed.
+    * HAWQ environment variables are set.
+    * SSH keys have been exchanged.
+    * HAWQ Master Data directory has been created.
+
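+    You can quickly sanity-check these prerequisites from the primary master; the sketch below assumes the standby host is named `smdw` and HAWQ is installed under `/usr/local/hawq`:
+
+    ```shell
+    $ ssh gpadmin@smdw 'ls -l /usr/local/hawq/greenplum_path.sh'
+    $ ssh gpadmin@smdw 'id gpadmin'
+    ```
+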
+2. Initialize the HAWQ master standby:
+
+    a. If you use Ambari to manage your cluster, follow the instructions in [Adding a HAWQ Standby Master](ambari-admin.html#amb-add-standby).
+
+    b. If you do not use Ambari, log in to the HAWQ master and re-initialize the HAWQ master standby node:
+ 
+    ``` shell
+    $ ssh gpadmin@<hawq_master>
+    hawq_master$ . /usr/local/hawq/greenplum_path.sh
+    hawq_master$ hawq init standby -s <new_standby_master>
+    ```
+
+    where \<new\_standby\_master\> identifies the hostname of the standby master.
+
+3. Check the status of master mirroring by querying the `gp_master_mirroring` system view. See [Checking on the State of Master Mirroring](#standby_check) for instructions.
+
+4. To activate or failover to the standby master, see [Failing Over to a Standby Master](#standby_failover).
+
+##Failing Over to a Standby Master<a id="standby_failover"></a>
+
+If the primary master fails, log replication stops. You must explicitly activate the standby master in this circumstance.
+
+Upon activation of the standby master, HAWQ reconstructs the state of the master at the time of the last successfully committed transaction.
+
+###To activate the standby master
+
+1. Ensure that a standby master host has been configured for the system.
+
+2. Activate the standby master:
+
+    a. If you use Ambari to manage your cluster, follow the instructions in [Activating the HAWQ Standby Master](ambari-admin.html#amb-activate-standby).
+
+    b. If you do not use Ambari, log in to the HAWQ master and activate the HAWQ master standby node:
+
+	``` shell
+	hawq_master$ hawq activate standby
+ 	```
+   After you activate the standby master, it becomes the active or primary master for the HAWQ cluster.
+
+4. (Optional, but recommended.) Configure a new standby master. See [Add a standby master to an existing system](#standby_master_configure) for instructions.
+	
+5. Check the status of the HAWQ cluster by executing the following command on the master:
+
+	```shell
+	hawq_master$ hawq state
+	```
+	
+	The newly-activated master's status should be **Active**. If you configured a new standby master, its status is **Passive**. When a standby master is not configured, the command displays `-No entries found`, a message indicating that no standby master instance is configured.
+
+6. Query the `gp_segment_configuration` table to verify that segments have registered themselves to the new master:
+
+    ``` shell
+    hawq_master$ psql dbname -c 'SELECT * FROM gp_segment_configuration;'
+    ```
+	
+7. Finally, check the status of master mirroring by querying the `gp_master_mirroring` system view. See [Checking on the State of Master Mirroring](#standby_check) for instructions.
+
+
+##Checking on the State of Master Mirroring <a id="standby_check"></a>
+
+To check on the status of master mirroring, query the `gp_master_mirroring` system view. This view provides information about the walsender process used for HAWQ master mirroring. 
+
+```shell
+hawq_master$ psql dbname -c 'SELECT * FROM gp_master_mirroring;'
+```
+
+If a standby master has not been set up for the cluster, you will see the following output:
+
+```
+ summary_state  | detail_state | log_time | error_message
+----------------+--------------+----------+---------------
+ Not Configured |              |          | 
+(1 row)
+```
+
+If the standby is configured and in sync with the master, you will see output similar to the following:
+
+```
+ summary_state | detail_state | log_time               | error_message
+---------------+--------------+------------------------+---------------
+ Synchronized  |              | 2016-01-22 21:53:47+00 |
+(1 row)
+```
+
+##Resynchronizing Standby with the Master <a id="resync_master"></a>
+
+The standby can become out-of-date if the log synchronization process between the master and standby has stopped or has fallen behind. If this occurs, you will observe output similar to the following after querying the `gp_master_mirroring` view:
+
+```
+   summary_state  | detail_state | log_time               | error_message
+------------------+--------------+------------------------+---------------
+ Not Synchronized |              |                        |
+(1 row)
+```
+
+To resynchronize the standby with the master:
+
+1. If you use Ambari to manage your cluster, follow the instructions in [Removing the HAWQ Standby Master](ambari-admin.html#amb-remove-standby).
+
+2. If you do not use Ambari, execute the following command on the HAWQ master:
+
+    ```shell
+    hawq_master$ hawq init standby -n
+    ```
+
+    This command stops and restarts the master and then synchronizes the standby.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/admin/RecommendedMonitoringTasks.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/admin/RecommendedMonitoringTasks.html.md.erb b/markdown/admin/RecommendedMonitoringTasks.html.md.erb
new file mode 100644
index 0000000..5083b44
--- /dev/null
+++ b/markdown/admin/RecommendedMonitoringTasks.html.md.erb
@@ -0,0 +1,259 @@
+---
+title: Recommended Monitoring and Maintenance Tasks
+---
+
+This section lists monitoring and maintenance activities recommended to ensure high availability and consistent performance of your HAWQ cluster.
+
+The tables in the following sections suggest activities that a HAWQ System Administrator can perform periodically to ensure that all components of the system are operating optimally. Monitoring activities help you to detect and diagnose problems early. Maintenance activities help you to keep the system up-to-date and avoid deteriorating performance, for example, from bloated system tables or diminishing free disk space.
+
+It is not necessary to implement all of these suggestions in every cluster; use the frequency and severity recommendations as a guide to implement measures according to your service requirements.
+
+## <a id="drr_5bg_rp"></a>Database State Monitoring Activities 
+
+<table>
+  <tr>
+    <th>Activity</th>
+    <th>Procedure</th>
+    <th>Corrective Actions</th>
+  </tr>
+  <tr>
+    <td><p>List segments that are currently down. If any rows are returned, this should generate a warning or alert.</p>
+    <p>Recommended frequency: run every 5 to 10 minutes</p><p>Severity: IMPORTANT</p></td>
+    <td>Run the following query in the `postgres` database:
+    <pre><code>SELECT * FROM gp_segment_configuration
+WHERE status <> 'u';
+</code></pre>
+  </td>
+  <td>If the query returns any rows, follow these steps to correct the problem:
+  <ol>
+    <li>Verify that the hosts with down segments are responsive.</li>
+    <li>If hosts are OK, check the pg_log files for the down segments to discover the root cause of the segments going down.</li>
+    </ol>
+    </td>
+    </tr>
+  <tr>
+    <td>
+      <p>Run a distributed query to test that it runs on all segments. One row should be returned for each segment.</p>
+      <p>Recommended frequency: run every 5 to 10 minutes</p>
+      <p>Severity: CRITICAL</p>
+    </td>
+    <td>
+      <p>Execute the following query in the `postgres` database:</p>
+      <pre><code>SELECT gp_segment_id, count(&#42;)
+FROM gp_dist_random('pg_class')
+GROUP BY 1;
+</code></pre>
+  </td>
+  <td>If this query fails, there is an issue dispatching to some segments in the cluster. This is a rare event. Check the hosts that are not able to be dispatched to ensure there is no hardware or networking issue.</td>
+  </tr>
+  <tr>
+    <td>
+      <p>Perform a basic check to see if the master is up and functioning.</p>
+      <p>Recommended frequency: run every 5 to 10 minutes</p>
+      <p>Severity: CRITICAL</p>
+    </td>
+    <td>
+      <p>Run the following query in the `postgres` database:</p>
+      <pre><code>SELECT count(&#42;) FROM gp_segment_configuration;</code></pre>
+    </td>
+    <td>
+      <p>If this query fails, the active master may be down. Try again several times and then inspect the active master manually. If the active master is down, reboot or power cycle the active master to ensure no processes remain on the active master, and then trigger the activation of the standby master.</p>
+    </td>
+  </tr>
+</table>
+
+## <a id="topic_y4c_4gg_rp"></a>Hardware and Operating System Monitoring 
+
+<table>
+  <tr>
+    <th>Activity</th>
+    <th>Procedure</th>
+    <th>Corrective Actions</th>
+  </tr>
+  <tr>
+    <td>
+      <p>Underlying platform check for maintenance required or system down of the hardware.</p>
+      <p>Recommended frequency: real-time, if possible, or every 15 minutes</p>
+      <p>Severity: CRITICAL</p>
+    </td>
+    <td>
+      <p>Set up system check for hardware and OS errors.</p>
+    </td>
+    <td>
+      <p>If required, remove a machine from the HAWQ cluster to resolve hardware and OS issues, then add it back to the cluster.</p>
+    </td>
+  </tr>
+  <tr>
+    <td>
+      <p>Check disk space usage on volumes used for HAWQ data storage and the OS.</p>
+      <p>Recommended frequency: every 5 to 30 minutes</p>
+      <p>Severity: CRITICAL</p>
+    </td>
+    <td>
+      <p>Set up a disk space check.</p>
+      <ul>
+        <li>Set a threshold to raise an alert when a disk reaches a percentage of capacity. The recommended threshold is 75% full.</li>
+        <li>It is not recommended to run the system with capacities approaching 100%.</li>
+      </ul>
+    </td>
+    <td>
+      <p>Free space on the system by removing some data or files.</p>
+    </td>
+  </tr>
+  <tr>
+    <td>
+      <p>Check for errors or dropped packets on the network interfaces.</p>
+      <p>Recommended frequency: hourly</p>
+      <p>Severity: IMPORTANT</p>
+    </td>
+    <td>
+      <p>Set up network interface checks.</p>
+    </td>
+    <td>
+      <p>Work with network and OS teams to resolve errors.</p>
+    </td>
+  </tr>
+  <tr>
+    <td>
+      <p>Check for RAID errors or degraded RAID performance.</p>
+      <p>Recommended frequency: every 5 minutes</p>
+      <p>Severity: CRITICAL</p>
+    </td>
+    <td>
+      <p>Set up a RAID check.</p>
+    </td>
+    <td>
+      <ul>
+        <li>Replace failed disks as soon as possible.</li>
+        <li>Work with system administration team to resolve other RAID or controller errors as soon as possible.</li>
+      </ul>
+    </td>
+  </tr>
+  <tr>
+    <td>
+      <p>Check for adequate I/O bandwidth and I/O skew.</p>
+      <p>Recommended frequency: when creating a cluster or when hardware issues are suspected.</p>
+    </td>
+    <td>
+      <p>Run the `hawq checkperf` utility.</p>
+    </td>
+    <td>
+      <p>The cluster may be under-specified if data transfer rates are not similar to the following:</p>
+      <ul>
+        <li>2GB per second disk read</li>
+        <li>1 GB per second disk write</li>
+        <li>10 Gigabit per second network read and write</li>
+      </ul>
+      <p>If transfer rates are lower than expected, consult with your data architect regarding performance expectations.</p>
+      <p>If the machines on the cluster display an uneven performance profile, work with the system administration team to fix faulty machines.</p>
+    </td>
+  </tr>
+</table>
+
+## <a id="maintentenance_check_scripts"></a>Data Maintenance 
+
+<table>
+  <tr>
+    <th>Activity</th>
+    <th>Procedure</th>
+    <th>Corrective Actions</th>
+  </tr>
+  <tr>
+    <td>Check for missing statistics on tables.</td>
+    <td>Check the `hawq_stats_missing` view in each database:
+    <pre><code>SELECT * FROM hawq_toolkit.hawq_stats_missing;</code></pre>
+    </td>
+    <td>Run <code>ANALYZE</code> on tables that are missing statistics.</td>
+  </tr>
+</table>
+
+## <a id="topic_dld_23h_rp"></a>Database Maintenance 
+
+<table>
+  <tr>
+    <th>Activity</th>
+    <th>Procedure</th>
+    <th>Corrective Actions</th>
+  </tr>
+  <tr>
+    <td>
+      <p>Mark deleted rows in HAWQ system catalogs (tables in the `pg_catalog` schema) so that the space they occupy can be reused.</p>
+      <p>Recommended frequency: daily</p>
+      <p>Severity: CRITICAL</p>
+    </td>
+    <td>
+      <p>Vacuum each system catalog:</p>
+      <pre><code>VACUUM &lt;<i>table</i>&gt;;</code></pre>
+    </td>
+    <td>Vacuum system catalogs regularly to prevent bloating.</td>
+  </tr>
+  <tr>
+    <td>
+    <p>Vacuum all system catalogs (tables in the <code>pg_catalog</code> schema) that are approaching <a href="../reference/guc/parameter_definitions.html">vacuum_freeze_min_age</a>.</p>
+    <p>Recommended frequency: daily</p>
+    <p>Severity: CRITICAL</p>
+    </td>
+    <td>
+      <p>Vacuum an individual system catalog table:</p>
+      <pre><code>VACUUM &lt;<i>table</i>&gt;;</code></pre>
+    </td>
+    <td>After the <a href="../reference/guc/parameter_definitions.html">vacuum_freeze_min_age</a> value is reached, VACUUM will no longer replace transaction IDs with <code>FrozenXID</code> while scanning a table. Perform vacuum on these tables before the limit is reached.</td>
+  </tr>
+  <tr>
+    <td>
+      <p>Update table statistics.</p>
+      <p>Recommended frequency: after loading data and before executing queries</p>
+      <p>Severity: CRITICAL</p>
+    </td>
+    <td>
+      <p>Analyze user tables:</p>
+      <pre><code>ANALYZEDB -d &lt;<i>database</i>&gt; -a</code></pre>
+    </td>
+    <td>Analyze updated tables regularly so that the optimizer can produce efficient query execution plans.</td>
+  </tr>
+  <tr>
+    <td>
+      <p>Backup the database data.</p>
+      <p>Recommended frequency: daily, or as required by your backup plan</p>
+      <p>Severity: CRITICAL</p>
+    </td>
+    <td>See <a href="BackingUpandRestoringHAWQDatabases.html">Backing Up and Restoring HAWQ</a> for a discussion of backup procedures.</td>
+    <td>Best practice is to have a current backup ready in case the database must be restored.</td>
+  </tr>
+  <tr>
+    <td>
+      <p>Vacuum system catalogs (tables in the <code>pg_catalog</code> schema) to maintain an efficient catalog.</p>
+      <p>Recommended frequency: weekly, or more often if database objects are created and dropped frequently</p>
+    </td>
+    <td>
+      <p><code>VACUUM</code> the system tables in each database.</p>
+    </td>
+    <td>The optimizer retrieves information from the system tables to create query plans. If system tables and indexes are allowed to become bloated over time, scanning the system tables increases query execution time.</td>
+  </tr>
+</table>
+
+## <a id="topic_idx_smh_rp"></a>Patching and Upgrading 
+
+<table>
+  <tr>
+    <th>Activity</th>
+    <th>Procedure</th>
+    <th>Corrective Actions</th>
+  </tr>
+  <tr>
+    <td>
+      <p>Ensure any bug fixes or enhancements are applied to the kernel.</p>
+      <p>Recommended frequency: at least every 6 months</p>
+      <p>Severity: IMPORTANT</p>
+    </td>
+    <td>Follow the vendor's instructions to update the Linux kernel.</td>
+    <td>Keep the kernel current to include bug fixes and security fixes, and to avoid difficult future upgrades.</td>
+  </tr>
+  <tr>
+    <td>
+      <p>Install HAWQ minor releases.</p>
+      <p>Recommended frequency: quarterly</p>
+      <p>Severity: IMPORTANT</p>
+    </td>
+    <td>Always upgrade to the latest in the series.</td>
+    <td>Keep the HAWQ software current to incorporate bug fixes, performance enhancements, and feature enhancements into your HAWQ cluster.</td>
+  </tr>
+</table>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/admin/RunningHAWQ.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/admin/RunningHAWQ.html.md.erb b/markdown/admin/RunningHAWQ.html.md.erb
new file mode 100644
index 0000000..c7de1d5
--- /dev/null
+++ b/markdown/admin/RunningHAWQ.html.md.erb
@@ -0,0 +1,37 @@
+---
+title: Running a HAWQ Cluster
+---
+
+This section provides information for system administrators responsible for administering a HAWQ deployment.
+
+You should have some knowledge of Linux/UNIX system administration, database management systems, database administration, and structured query language \(SQL\) to administer a HAWQ cluster. Because HAWQ is based on PostgreSQL, you should also have some familiarity with PostgreSQL. The HAWQ documentation calls out similarities between HAWQ and PostgreSQL features throughout.
+
+## <a id="hawq_users"></a>HAWQ Users
+
+HAWQ supports users with both administrative and operating privileges. The HAWQ administrator may choose to manage the HAWQ cluster using either Ambari or the command line. [Managing HAWQ Using Ambari](../admin/ambari-admin.html) provides Ambari-specific HAWQ cluster administration procedures. [Starting and Stopping HAWQ](startstop.html), [Expanding a Cluster](ClusterExpansion.html), and [Removing a Node](ClusterShrink.html) describe specific command-line-managed HAWQ cluster administration procedures. Other topics in this guide are applicable to both Ambari- and command-line-managed HAWQ clusters.
+
+The default HAWQ administrator user is named `gpadmin`. The HAWQ admin may choose to assign administrative and/or operating HAWQ privileges to additional users. Refer to [Configuring Client Authentication](../clientaccess/client_auth.html) and [Managing Roles and Privileges](../clientaccess/roles_privs.html) for additional information about HAWQ user configuration.
+
+## <a id="hawq_systems"></a>HAWQ Deployment Systems
+
+A typical HAWQ deployment includes single HDFS and HAWQ master and standby nodes and multiple HAWQ segment and HDFS data nodes. The HAWQ cluster may also include systems running the HAWQ Extension Framework (PXF) and other Hadoop services. Refer to [HAWQ Architecture](../overview/HAWQArchitecture.html) and [Select HAWQ Host Machines](../install/select-hosts.html) for information about the different systems in a HAWQ deployment and how they are configured.
+
+
+## <a id="hawq_env_databases"></a>HAWQ Databases
+
+[Creating and Managing Databases](../ddl/ddl-database.html) and [Creating and Managing Tables](../ddl/ddl-table.html) describe HAWQ database and table creation commands.
+
+You manage HAWQ databases at the command line using the [psql](../reference/cli/client_utilities/psql.html) utility, an interactive front-end to the HAWQ database. Configuring client access to HAWQ databases and tables may require information related to [Establishing a Database Session](../clientaccess/g-establishing-a-database-session.html).
+
+[HAWQ Database Drivers and APIs](../clientaccess/g-database-application-interfaces.html) identifies supported HAWQ database drivers and APIs for additional client access methods.
+
+## <a id="hawq_env_data"></a>HAWQ Data
+
+HAWQ internal data resides in HDFS. You may require access to data in different formats and locations in your data lake. You can use HAWQ and the HAWQ Extension Framework (PXF) to access and manage both this internal data and external data:
+
+- [Managing Data with HAWQ](../datamgmt/dml.html) discusses the basic data operations and details regarding the loading and unloading semantics for HAWQ internal tables.
+- [Using PXF with Unmanaged Data](../pxf/HawqExtensionFrameworkPXF.html) describes PXF, an extensible framework you may use to query data external to HAWQ.
+
+## <a id="hawq_env_setup"></a>HAWQ Operating Environment
+
+Refer to [Introducing the HAWQ Operating Environment](setuphawqopenv.html) for a discussion of the HAWQ operating environment, including a procedure to set up the HAWQ environment. This section also provides an introduction to the important files and directories in a HAWQ installation.


[06/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/overview/TableDistributionStorage.html.md.erb
----------------------------------------------------------------------
diff --git a/overview/TableDistributionStorage.html.md.erb b/overview/TableDistributionStorage.html.md.erb
deleted file mode 100755
index ec1d8b5..0000000
--- a/overview/TableDistributionStorage.html.md.erb
+++ /dev/null
@@ -1,41 +0,0 @@
----
-title: Table Distribution and Storage
----
-
-HAWQ stores all table data, except the system table, in HDFS. When a user creates a table, the metadata is stored on the master's local file system and the table content is stored in HDFS.
-
-In order to simplify table data management, all the data of one relation are saved under one HDFS folder.
-
-For all HAWQ table storage formats, AO \(Append-Only\) and Parquet, the data files are splittable, so that HAWQ can assign multiple virtual segments to consume one data file concurrently. This increases the degree of query parallelism.
-
-## Table Distribution Policy
-
-The default table distribution policy in HAWQ is random.
-
-Randomly distributed tables have some benefits over hash distributed tables. For example, after cluster expansion, HAWQ can use more resources automatically without redistributing the data. For huge tables, redistribution is very expensive, and data locality for randomly distributed tables is better after the underlying HDFS redistributes its data during rebalance or DataNode failures. This is quite common when the cluster is large.
-
-On the other hand, for some queries, hash distributed tables are faster than randomly distributed tables. For example, hash distributed tables have some performance benefits for some TPC-H queries. You should choose the distribution policy that is best suited for your application's scenario.
-
-See [Choosing the Table Distribution Policy](../ddl/ddl-table.html) for more details.
-
-## Data Locality
-
-Data is distributed across HDFS DataNodes. Since remote read involves network I/O, a data locality algorithm improves the local read ratio. HAWQ considers three aspects when allocating data blocks to virtual segments:
-
--   Ratio of local read
--   Continuity of file read
--   Data balance among virtual segments
-
-## External Data Access
-
-HAWQ can access data in external files using the HAWQ Extension Framework (PXF).
-PXF is an extensible framework that allows HAWQ to access data in external
-sources as readable or writable HAWQ tables. PXF has built-in connectors for
-accessing data inside HDFS files, Hive tables, and HBase tables. PXF also
-integrates with HCatalog to query Hive tables directly. See [Using PXF
-with Unmanaged Data](../pxf/HawqExtensionFrameworkPXF.html) for more
-details.
-
-Users can create custom PXF connectors to access other parallel data stores or
-processing engines. Connectors are Java plug-ins that use the PXF API. For more
-information see [PXF External Tables and API](../pxf/PXFExternalTableandAPIReference.html).

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/overview/system-overview.html.md.erb
----------------------------------------------------------------------
diff --git a/overview/system-overview.html.md.erb b/overview/system-overview.html.md.erb
deleted file mode 100644
index 9fc1c53..0000000
--- a/overview/system-overview.html.md.erb
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Apache HAWQ (Incubating) System Overview
----
-* <a href="./HAWQOverview.html" class="subnav">What is HAWQ?</a>
-* <a href="./HAWQArchitecture.html" class="subnav">HAWQ Architecture</a>
-* <a href="./TableDistributionStorage.html" class="subnav">Table Distribution and Storage</a>
-* <a href="./ElasticSegments.html" class="subnav">Elastic Virtual Segment Allocation</a>
-* <a href="./ResourceManagement.html" class="subnav">Resource Management</a>
-* <a href="./HDFSCatalogCache.html" class="subnav">HDFS Catalog Cache</a>
-* <a href="./ManagementTools.html" class="subnav">Management Tools</a>
-* <a href="./RedundancyFailover.html" class="subnav">Redundancy and Fault Tolerance</a>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/plext/UsingProceduralLanguages.html.md.erb
----------------------------------------------------------------------
diff --git a/plext/UsingProceduralLanguages.html.md.erb b/plext/UsingProceduralLanguages.html.md.erb
deleted file mode 100644
index bef1b93..0000000
--- a/plext/UsingProceduralLanguages.html.md.erb
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: Using Languages and Extensions in HAWQ
----
-
-HAWQ supports user-defined functions that are created with the SQL and C built-in languages, and also supports user-defined aliases for internal functions.
-
-HAWQ also supports user-defined functions written in languages other than SQL and C. These other languages are generically called *procedural languages* (PLs) and are extensions to the core HAWQ functionality. HAWQ specifically supports the PL/Java, PL/Perl, PL/pgSQL, PL/Python, and PL/R procedural languages. 
-
-HAWQ additionally provides the `pgcrypto` extension for password hashing and data encryption.
-
-This chapter describes these languages and extensions:
-
--   <a href="builtin_langs.html">Using HAWQ Built-In Languages</a>
--   <a href="using_pljava.html">Using PL/Java</a>
--   <a href="using_plperl.html">Using PL/Perl</a>
--   <a href="using_plpgsql.html">Using PL/pgSQL</a>
--   <a href="using_plpython.html">Using PL/Python</a>
--   <a href="using_plr.html">Using PL/R</a>
--   <a href="using_pgcrypto.html">Using pgcrypto</a>
-
-
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/plext/builtin_langs.html.md.erb
----------------------------------------------------------------------
diff --git a/plext/builtin_langs.html.md.erb b/plext/builtin_langs.html.md.erb
deleted file mode 100644
index 01891e8..0000000
--- a/plext/builtin_langs.html.md.erb
+++ /dev/null
@@ -1,110 +0,0 @@
----
-title: Using HAWQ Built-In Languages
----
-
-This section provides an introduction to using the HAWQ built-in languages.
-
-HAWQ supports user-defined functions created with the SQL and C built-in languages. HAWQ also supports user-defined aliases for internal functions.
-
-
-## <a id="enablebuiltin"></a>Enabling Built-in Language Support
-
-Support for SQL and C language user-defined functions and aliasing of internal functions is enabled by default for all HAWQ databases.
-
-## <a id="builtinsql"></a>Defining SQL Functions
-
-SQL functions execute an arbitrary list of SQL statements. The SQL statements in the body of a SQL function must be separated by semicolons. The final statement in a non-void-returning SQL function must be a [SELECT](../reference/sql/SELECT.html) that returns data of the type specified by the function's return type. The function will return a single or set of rows corresponding to this last SQL query.
-
-The following example creates and calls a SQL function to count the number of rows of the table named `orders`:
-
-``` sql
-gpadmin=# CREATE FUNCTION count_orders() RETURNS bigint AS $$
- SELECT count(*) FROM orders;
-$$ LANGUAGE SQL;
-CREATE FUNCTION
-gpadmin=# SELECT count_orders();
- count_orders 
---------------
-       830513
-(1 row)
-```
-
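-A SQL function can also return a set of rows. The following sketch assumes a hypothetical `amount` column on the `orders` table above and returns every row whose amount exceeds the given threshold:
-
-``` sql
-gpadmin=# CREATE FUNCTION large_orders(numeric) RETURNS SETOF orders AS $$
- SELECT * FROM orders WHERE amount > $1;
-$$ LANGUAGE SQL;
-CREATE FUNCTION
-gpadmin=# SELECT * FROM large_orders(1000.00);
-```
-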
-For additional information about creating SQL functions, refer to [Query Language (SQL) Functions](https://www.postgresql.org/docs/8.2/static/xfunc-sql.html) in the PostgreSQL documentation.
-
-## <a id="builtininternal"></a>Aliasing Internal Functions
-
-Many HAWQ internal functions are written in C. These functions are declared during initialization of the database cluster and statically linked to the HAWQ server. See [Built-in Functions and Operators](../query/functions-operators.html#topic29) for detailed information about HAWQ internal functions.
-
-You cannot define new internal functions, but you can create aliases for existing internal functions.
-
-The following example creates a new function named `all_caps` that is an alias for the `upper` HAWQ internal function:
-
-
-``` sql
-gpadmin=# CREATE FUNCTION all_caps (text) RETURNS text AS 'upper'
-            LANGUAGE internal STRICT;
-CREATE FUNCTION
-gpadmin=# SELECT all_caps('change me');
- all_caps  
------------
- CHANGE ME
-(1 row)
-
-```
-
-For more information about aliasing internal functions, refer to [Internal Functions](https://www.postgresql.org/docs/8.2/static/xfunc-internal.html) in the PostgreSQL documentation.
-
-## <a id="builtinc_lang"></a>Defining C Functions
-
-You must compile user-defined functions written in C into shared libraries so that the HAWQ server can load them on demand. This dynamic loading distinguishes C language functions from internal functions that are written in C.
-
-The [CREATE FUNCTION](../reference/sql/CREATE-FUNCTION.html) call for a user-defined C function must include both the name of the shared library and the name of the function.
-
-If an absolute path to the shared library is not provided, an attempt is made to locate the library relative to the: 
-
-1. HAWQ PostgreSQL library directory (obtained via the `pg_config --pkglibdir` command)
-2. `dynamic_library_path` configuration value
-3. current working directory
-
-in that order. 
-
-Example:
-
-``` c
-#include "postgres.h"
-#include "fmgr.h"
-
-#ifdef PG_MODULE_MAGIC
-PG_MODULE_MAGIC;
-#endif
-
-PG_FUNCTION_INFO_V1(double_it);
-         
-Datum
-double_it(PG_FUNCTION_ARGS)
-{
-    int32   arg = PG_GETARG_INT32(0);
-
-    PG_RETURN_INT64(arg + arg);
-}
-```
-
-If the above function is compiled into a shared object named `libdoubleit.so` located in `/share/libs`, you would register and invoke the function with HAWQ as follows:
-
-``` sql
-gpadmin=# CREATE FUNCTION double_it_c(integer) RETURNS integer
-            AS '/share/libs/libdoubleit', 'double_it'
-            LANGUAGE C STRICT;
-CREATE FUNCTION
-gpadmin=# SELECT double_it_c(27);
- double_it 
------------
-        54
-(1 row)
-
-```
-
-The shared library `.so` extension may be omitted.
-
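-As a sketch of the relative-path resolution described above, the same function could also be registered without an absolute path, assuming `libdoubleit.so` has been copied into the HAWQ PostgreSQL library directory or into a directory listed in `dynamic_library_path` (the `double_it_c2` name is illustrative only):
-
-``` sql
-gpadmin=# CREATE FUNCTION double_it_c2(integer) RETURNS integer
-            AS 'libdoubleit', 'double_it'
-            LANGUAGE C STRICT;
-```
-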
-For additional information about using the C language to create functions, refer to [C-Language Functions](https://www.postgresql.org/docs/8.2/static/xfunc-c.html) in the PostgreSQL documentation.
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/plext/using_pgcrypto.html.md.erb
----------------------------------------------------------------------
diff --git a/plext/using_pgcrypto.html.md.erb b/plext/using_pgcrypto.html.md.erb
deleted file mode 100644
index e3e9225..0000000
--- a/plext/using_pgcrypto.html.md.erb
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Enabling Cryptographic Functions for PostgreSQL (pgcrypto)
----
-
-`pgcrypto` is a package extension included in your HAWQ distribution. You must explicitly enable the cryptographic functions to use this extension.
-
-## <a id="pgcryptoprereq"></a>Prerequisites 
-
-
-Before you enable the `pgcrypto` software package, make sure that your HAWQ database is running, you have sourced `greenplum_path.sh`, and that the `$GPHOME` environment variable is set.
-
-## <a id="enablepgcrypto"></a>Enable pgcrypto 
-
-On every database in which you want to enable `pgcrypto`, run the following command:
-
-``` shell
-$ psql -d <dbname> -f $GPHOME/share/postgresql/contrib/pgcrypto.sql
-```
-	
-Replace \<dbname\> with the name of the target database.
-	
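-As a quick check that the functions were installed, the following sketch (assuming the standard `pgcrypto` function set provided by the script) hashes a password with a generated salt:
-
-``` sql
-SELECT crypt('mysecretpassword', gen_salt('md5'));
-```
-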
-## <a id="uninstallpgcrypto"></a>Disable pgcrypto 
-
-The `uninstall_pgcrypto.sql` script removes `pgcrypto` objects from your database.  On each database in which you enabled `pgcrypto` support, execute the following:
-
-``` shell
-$ psql -d <dbname> -f $GPHOME/share/postgresql/contrib/uninstall_pgcrypto.sql
-```
-
-Replace \<dbname\> with the name of the target database.
-	
-**Note:**  This script does not remove dependent user-created objects.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/plext/using_pljava.html.md.erb
----------------------------------------------------------------------
diff --git a/plext/using_pljava.html.md.erb b/plext/using_pljava.html.md.erb
deleted file mode 100644
index 99b5767..0000000
--- a/plext/using_pljava.html.md.erb
+++ /dev/null
@@ -1,709 +0,0 @@
----
-title: Using PL/Java
----
-
-This section contains an overview of the HAWQ PL/Java language. 
-
-
-## <a id="aboutpljava"></a>About PL/Java 
-
-With the HAWQ PL/Java extension, you can write Java methods using your favorite Java IDE and install the JAR files that implement the methods in your HAWQ cluster.
-
-**Note**: If building HAWQ from source, you must specify PL/Java as a build option when compiling HAWQ. To use PL/Java in a HAWQ deployment, you must explicitly enable the PL/Java extension in all desired databases.  
-
-The HAWQ PL/Java package is based on the open source PL/Java 1.4.0. HAWQ PL/Java provides the following features.
-
-- Ability to execute PL/Java functions with Java 1.6 or 1.7.
-- Standardized utilities (modeled after the SQL 2003 proposal) to install and maintain Java code in the database.
-- Standardized mappings of parameters and results. Complex types as well as sets are supported.
-- An embedded, high performance, JDBC driver utilizing the internal HAWQ Database SPI routines.
-- Metadata support for the JDBC driver. Both `DatabaseMetaData` and `ResultSetMetaData` are included.
-- The ability to return a `ResultSet` from a query as an alternative to building a ResultSet row by row.
-- Full support for savepoints and exception handling.
-- The ability to use IN, INOUT, and OUT parameters.
-- Two separate HAWQ languages:
-	- pljava, TRUSTED PL/Java language
-	- pljavau, UNTRUSTED PL/Java language
-- Transaction and Savepoint listeners enabling code execution when a transaction or savepoint is committed or rolled back.
-- Integration with GNU GCJ on selected platforms.
-
-A function in SQL will appoint a static method in a Java class. In order for the function to execute, the appointed class must be available on the class path specified by the HAWQ server configuration parameter `pljava_classpath`. The PL/Java extension adds a set of functions that help to install and maintain the Java classes. Classes are stored in normal Java archives (JAR files). A JAR file can optionally contain a deployment descriptor that in turn contains SQL commands to be executed when the JAR is deployed or undeployed. The functions are modeled after the standards proposed for SQL 2003.
-
-PL/Java implements a standard way of passing parameters and return values. Complex types and sets are passed using the standard JDBC ResultSet class.
-
-A JDBC driver is included in PL/Java. This driver calls HAWQ internal SPI routines. The driver is essential since it is common for functions to make calls back to the database to fetch data. When PL/Java functions fetch data, they must use the same transactional boundaries that are used by the main function that entered PL/Java execution context.
-
-PL/Java is optimized for performance. The Java virtual machine executes within the same process as the backend to minimize call overhead. PL/Java is designed with the objective of bringing the power of Java to the database itself, so that database-intensive business logic can execute as close to the actual data as possible.
-
-The standard Java Native Interface (JNI) is used when bridging calls between the backend and the Java VM.
-
-
-## <a id="abouthawqpljava"></a>About HAWQ PL/Java 
-
-There are a few key differences between the implementation of PL/Java in standard PostgreSQL and HAWQ.
-
-### <a id="pljavafunctions"></a>Functions 
-
-The following functions are not supported in HAWQ. The classpath is handled differently in a distributed HAWQ environment than in the PostgreSQL environment.
-
-- sqlj.install_jar
-- sqlj.replace_jar
-- sqlj.remove_jar
-- sqlj.get_classpath
-- sqlj.set_classpath
-
-HAWQ uses the `pljava_classpath` server configuration parameter in place of the `sqlj.set_classpath` function.
-
-### <a id="serverconfigparams"></a>Server Configuration Parameters 
-
-PL/Java uses server configuration parameters to configure classpath, Java VM, and other options. Refer to the [Server Configuration Parameter Reference](../reference/HAWQSiteConfig.html) for general information about HAWQ server configuration parameters.
-
-The following server configuration parameters are used by PL/Java in HAWQ. These parameters replace the `pljava.*` parameters that are used in the standard PostgreSQL PL/Java implementation.
-
-#### pljava\_classpath
-
-A colon (:) separated list of the jar files containing the Java classes used in any PL/Java functions. The jar files must be installed in the same locations on all HAWQ hosts. With the trusted PL/Java language handler, jar file paths must be relative to the `$GPHOME/lib/postgresql/java/` directory. With the untrusted language handler (javaU language tag), paths may be relative to `$GPHOME/lib/postgresql/java/` or absolute.
-
-#### pljava\_statement\_cache\_size
-
-Sets the size in KB of the Most Recently Used (MRU) cache for prepared statements.
-
-#### pljava\_release\_lingering\_savepoints
-
-If TRUE, lingering savepoints will be released on function exit. If FALSE, they will be rolled back.
-
-#### pljava\_vmoptions
-
-Defines the startup options for the Java VM.
-
-### <a id="setting_serverconfigparams"></a>Setting PL/Java Configuration Parameters 
-
-You can set PL/Java server configuration parameters at the session level, or globally across your whole cluster. Your HAWQ cluster configuration must be reloaded after setting a server configuration value globally.
-
-#### <a id="setsrvrcfg_global"></a>Cluster Level
-
-You will perform different procedures to set a PL/Java server configuration parameter for your whole HAWQ cluster depending upon whether you manage your cluster from the command line or use Ambari. If you use Ambari to manage your HAWQ cluster, you must ensure that you update server configuration parameters only via the Ambari Web UI. If you manage your HAWQ cluster from the command line, you will use the `hawq config` command line utility to set PL/Java server configuration parameters.
-
-The following examples add a JAR file named `myclasses.jar` to the `pljava_classpath` server configuration parameter for the entire HAWQ cluster.
-
-If you use Ambari to manage your HAWQ cluster:
-
-1. Set the `pljava_classpath` configuration property to include `myclasses.jar` via the HAWQ service **Configs > Advanced > Custom hawq-site** drop down. 
-2. Select **Service Actions > Restart All** to load the updated configuration.
-
-If you manage your HAWQ cluster from the command line:
-
-1.  Log in to the HAWQ master host as a HAWQ administrator and source the file `/usr/local/hawq/greenplum_path.sh`.
-
-    ``` shell
-    $ source /usr/local/hawq/greenplum_path.sh
-    ```
-
-1. Use the `hawq config` utility to set `pljava_classpath`:
-
-    ``` shell
-    $ hawq config -c pljava_classpath -v \'myclasses.jar\'
-    ```
-2. Reload the HAWQ configuration:
-
-    ``` shell
-    $ hawq stop cluster -u
-    ```
-
-#### <a id="setsrvrcfg_session"></a>Session Level 
-
-To set a PL/Java server configuration parameter for only the *current* database session, set the parameter within the `psql` subsystem. For example, to set `pljava_classpath`:
-	
-``` sql
-=> SET pljava_classpath='myclasses.jar';
-```
-
-
-## <a id="enablepljava"></a>Enabling and Removing PL/Java Support 
-
-The PL/Java extension must be explicitly enabled on each database in which it will be used.
-
-
-### <a id="pljavaprereq"></a>Prerequisites 
-
-Before you enable PL/Java:
-
-1. Ensure that you have installed a supported Java runtime environment and that the `$JAVA_HOME` variable is set to the same path on the master and all segment nodes.
-
-2. Perform the following step on all machines to set up `ldconfig` for the installed JDK:
-
-	``` shell
-	$ echo "$JAVA_HOME/jre/lib/amd64/server" > /etc/ld.so.conf.d/libjdk.conf
-	$ ldconfig
-	```
-3. Make sure that your HAWQ cluster is running, that you have sourced `greenplum_path.sh`, and that your `$GPHOME` environment variable is set.
-
-
-### <a id="enablepljava"></a>Enable PL/Java and Install JAR Files 
-
-To use PL/Java:
-
-1. Enable the language for each database.
-1. Install user-created JAR files on all HAWQ hosts.
-1. Add the names of the JAR files to the HAWQ `pljava_classpath` server configuration parameter. This parameter value should identify a list of the installed JAR files.
-
-#### <a id="enablepljava_steps"></a>Procedure 
-
-Perform the following steps as the `gpadmin` user:
-
-1. Enable PL/Java by running the `$GPHOME/share/postgresql/pljava/install.sql` SQL script in the databases that will use PL/Java. The `install.sql` script registers both the trusted and untrusted PL/Java languages. For example, the following command enables PL/Java on a database named `testdb`:
-
-	``` shell
-	$ psql -d testdb -f $GPHOME/share/postgresql/pljava/install.sql
-	```
-	
-	To enable the PL/Java extension in all new HAWQ databases, run the script on the `template1` database: 
-
-    ``` shell
-    $ psql -d template1 -f $GPHOME/share/postgresql/pljava/install.sql
-    ```
-
-    Use this option *only* if you are certain you want to enable PL/Java in all new databases.
-	
-2. Copy your Java archives (JAR files) to `$GPHOME/lib/postgresql/java/` on all HAWQ hosts. This example uses the `hawq scp` utility to copy the `myclasses.jar` file located in the current directory:
-
-	``` shell
-	$ hawq scp -f hawq_hosts myclasses.jar =:$GPHOME/lib/postgresql/java/
-	```
-	The `hawq_hosts` file contains a list of the HAWQ hosts.
-
-3. Add the JAR files to the `pljava_classpath` configuration parameter. Refer to [Setting PL/Java Configuration Parameters](#setting_serverconfigparams) for the specific procedure.
-
-4. (Optional) Your HAWQ installation includes an `examples.sql` file.  This script contains sample PL/Java functions that you can use for testing. Run the commands in this file to create and run test functions that use the Java classes in `examples.jar`:
-
-	``` shell
-	$ psql -f $GPHOME/share/postgresql/pljava/examples.sql
-	```
-
-#### Configuring PL/Java VM Options
-
-PL/Java JVM options can be configured via the `pljava_vmoptions` server configuration parameter. For example, `pljava_vmoptions=-Xmx512M` sets the maximum heap size of the JVM. The default `-Xmx` value is `64M`.
-
-Refer to [Setting PL/Java Configuration Parameters](#setting_serverconfigparams) for the specific procedure to set PL/Java server configuration parameters.
-
-	
-### <a id="uninstallpljava"></a>Disable PL/Java 
-
-To disable PL/Java, you should:
-
-1. Remove PL/Java support from each database in which it was added.
-2. Uninstall the Java JAR files.
-
-#### <a id="uninstallpljavasupport"></a>Remove PL/Java Support from Databases 
-
-For a database that no longer requires the PL/Java language, remove support for PL/Java by running the `uninstall.sql` script as the `gpadmin` user. For example, the following command disables the PL/Java language in the specified database:
-
-``` shell
-$ psql -d <dbname> -f $GPHOME/share/postgresql/pljava/uninstall.sql
-```
-
-Replace \<dbname\> with the name of the target database.
-
-
-#### <a id="uninstallpljavapackage"></a>Uninstall the Java JAR files 
-
-When no databases have PL/Java as a registered language, remove the Java JAR files.
-
-If you use Ambari to manage your cluster:
-
-1. Remove the `pljava_classpath` configuration property via the HAWQ service **Configs > Advanced > Custom hawq-site** drop down.
-
-2. Remove the JAR files from the `$GPHOME/lib/postgresql/java/` directory of each HAWQ host.
-
-3. Select **Service Actions > Restart All** to restart your HAWQ cluster.
-
-
-If you manage your cluster from the command line:
-
-1.  Log in to the HAWQ master host as a HAWQ administrator and source the file `/usr/local/hawq/greenplum_path.sh`.
-
-    ``` shell
-    $ source /usr/local/hawq/greenplum_path.sh
-    ```
-
-1. Use the `hawq config` utility to remove `pljava_classpath`:
-
-    ``` shell
-    $ hawq config -r pljava_classpath
-    ```
-    
-2. Remove the JAR files from the `$GPHOME/lib/postgresql/java/` directory of each HAWQ host.
-
-3. Restart your HAWQ cluster:
-
-    ``` shell
-    $ hawq restart cluster
-    ```
-
-
-## <a id="writingpljavafunc"></a>Writing PL/Java Functions 
-
-This section provides information about writing functions with PL/Java.
-
-- [SQL Declaration](#sqldeclaration)
-- [Type Mapping](#typemapping)
-- [NULL Handling](#nullhandling)
-- [Complex Types](#complextypes)
-- [Returning Complex Types](#returningcomplextypes)
-- [Functions That Return Sets](#functionreturnsets)
-- [Returning a SETOF \<scalar type\>](#returnsetofscalar)
-- [Returning a SETOF \<complex type\>](#returnsetofcomplex)
-
-
-### <a id="sqldeclaration"></a>SQL Declaration 
-
-A Java function is declared with the name of a class and a static method on that class. The class will be resolved using the classpath that has been defined for the schema where the function is declared. If no classpath has been defined for that schema, the public schema is used. If no classpath is found there either, the class is resolved using the system classloader.
-
-The following function can be declared to access the static method `getProperty` on the `java.lang.System` class:
-
-```sql
-=> CREATE FUNCTION getsysprop(VARCHAR)
-     RETURNS VARCHAR
-     AS 'java.lang.System.getProperty'
-   LANGUAGE java;
-```
-
-Run the following command to return the Java `user.home` property:
-
-```sql
-=> SELECT getsysprop('user.home');
-```
-
-### <a id="typemapping"></a>Type Mapping 
-
-Scalar types are mapped in a straightforward way. This table lists the current mappings.
-
-***Table 1: PL/Java data type mappings***
-
-| PostgreSQL | Java |
-|------------|------|
-| bool | boolean |
-| char | byte |
-| int2 | short |
-| int4 | int |
-| int8 | long |
-| varchar | java.lang.String |
-| text | java.lang.String |
-| bytea | byte[ ] |
-| date | java.sql.Date |
-| time | java.sql.Time (stored value treated as local time) |
-| timetz | java.sql.Time |
-| timestamp	| java.sql.Timestamp (stored value treated as local time) |
-| timestamptz |	java.sql.Timestamp |
-| complex |	java.sql.ResultSet |
-| setof complex	| java.sql.ResultSet |
-
-All other types are mapped to `java.lang.String` and will utilize the standard textin/textout routines registered for the respective type.
-
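-For example, the following hypothetical declaration assumes a user-supplied Java method `foo.fee.Fum.formatInterval(java.lang.String)` is available on the classpath; because `interval` is not in the mapping table, the argument is passed to Java as a `java.lang.String`:
-
-```sql
--- hypothetical: foo.fee.Fum.formatInterval(java.lang.String) is an assumed user-supplied method
-=> CREATE FUNCTION format_interval(interval)
-     RETURNS text
-     AS 'foo.fee.Fum.formatInterval'
-   LANGUAGE java;
-```
-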
-### <a id="nullhandling"></a>NULL Handling 
-
-The scalar types that map to Java primitives cannot be passed as NULL values. To pass NULL values, those types can have an alternative mapping. You enable this mapping by explicitly denoting it in the method reference.
-
-```sql
-=> CREATE FUNCTION trueIfEvenOrNull(integer)
-     RETURNS bool
-     AS 'foo.fee.Fum.trueIfEvenOrNull(java.lang.Integer)'
-   LANGUAGE java;
-```
-
-The Java code would be similar to this:
-
-```java
-package foo.fee;
-public class Fum
-{
-  static boolean trueIfEvenOrNull(Integer value)
-  {
-    return (value == null)
-      ? true
-      : (value.intValue() % 2) == 0;
-  }
-}
-```
-
-The following two statements both yield true:
-
-```sql
-=> SELECT trueIfEvenOrNull(NULL);
-=> SELECT trueIfEvenOrNull(4);
-```
-
-In order to return NULL values from a Java method, you use the object type that corresponds to the primitive (for example, you return `java.lang.Integer` instead of `int`). The PL/Java resolve mechanism finds the method regardless. Since Java cannot have different return types for methods with the same name, this does not introduce any ambiguity.
-
-### <a id="complextypes"></a>Complex Types 
-
-A complex type will always be passed as a read-only `java.sql.ResultSet` with exactly one row. The `ResultSet` is positioned on its row so a call to `next()` should not be made. The values of the complex type are retrieved using the standard getter methods of the `ResultSet`.
-
-Example:
-
-```sql
-=> CREATE TYPE complexTest
-     AS(base integer, incbase integer, ctime timestamptz);
-=> CREATE FUNCTION useComplexTest(complexTest)
-     RETURNS VARCHAR
-     AS 'foo.fee.Fum.useComplexTest'
-   IMMUTABLE LANGUAGE java;
-```
-
-In the Java class `Fum`, we add the following static method:
-
-```java
-public static String useComplexTest(ResultSet complexTest)
-throws SQLException
-{
-  int base = complexTest.getInt(1);
-  int incbase = complexTest.getInt(2);
-  Timestamp ctime = complexTest.getTimestamp(3);
-  return "Base = \"" + base +
-    "\", incbase = \"" + incbase +
-    "\", ctime = \"" + ctime + "\"";
-}
-```
-
-### <a id="returningcomplextypes"></a>Returning Complex Types 
-
-Java does not stipulate any way to create a `ResultSet`. Hence, returning a `ResultSet` is not an option. The SQL-2003 draft suggests that a complex return value should be handled as an IN/OUT parameter. PL/Java implements a `ResultSet` that way. If you declare a function that returns a complex type, you must use a Java method with a boolean return type whose last parameter is of type `java.sql.ResultSet`. That parameter is initialized to an empty updatable `ResultSet` that contains exactly one row.
-
-Assume that the complexTest type in the previous section has been created.
-
-```sql
-=> CREATE FUNCTION createComplexTest(int, int)
-     RETURNS complexTest
-     AS 'foo.fee.Fum.createComplexTest'
-   IMMUTABLE LANGUAGE java;
-```
-
-The PL/Java method resolution will now find the following method in the `Fum` class:
-
-```java
-public static boolean createComplexTest(int base, int increment,
-  ResultSet receiver)
-throws SQLException
-{
-  receiver.updateInt(1, base);
-  receiver.updateInt(2, base + increment);
-  receiver.updateTimestamp(3, new 
-    Timestamp(System.currentTimeMillis()));
-  return true;
-}
-```
-
-The return value denotes if the receiver should be considered as a valid tuple (true) or NULL (false).
-
-### <a id="functionreturnsets"></a>Functions that Return Sets 
-
-When returning a result set, do not build the entire result set before returning it, because building a large result set would consume a large amount of resources. It is better to produce one row at a time, which is also what the HAWQ backend expects of a function that returns SETOF. You can return a SETOF of a scalar type such as int, float, or varchar, or you can return a SETOF of a complex type.
-
-### <a id="returnsetofscalar"></a>Returning a SETOF \<scalar type\> 
-
-In order to return a set of a scalar type, you need to create a Java method that returns something that implements the `java.util.Iterator` interface. Here is an example of a method that returns a SETOF varchar:
-
-```sql
-=> CREATE FUNCTION javatest.getSystemProperties()
-     RETURNS SETOF varchar
-     AS 'foo.fee.Bar.getNames'
-   IMMUTABLE LANGUAGE java;
-```
-
-This simple Java method returns an iterator:
-
-```java
-package foo.fee;
-import java.util.ArrayList;
-import java.util.Iterator;
-
-public class Bar
-{
-    public static Iterator getNames()
-    {
-        ArrayList<String> names = new ArrayList<String>();
-        names.add("Lisa");
-        names.add("Bob");
-        names.add("Bill");
-        names.add("Sally");
-        return names.iterator();
-    }
-}
-```
-
-### <a id="returnsetofcomplex"></a>Returning a SETOF \<complex type\> 
-
-A method returning a SETOF <complex type> must use either the interface `org.postgresql.pljava.ResultSetProvider` or `org.postgresql.pljava.ResultSetHandle`. The reason for having two interfaces is that they cater for optimal handling of two distinct use cases. The former is for cases where you want to dynamically create each row that is to be returned from the SETOF function. The latter is for cases where you want to return the result of an executed query.
-
-#### Using the ResultSetProvider Interface
-
-This interface has two methods: `boolean assignRowValues(java.sql.ResultSet tupleBuilder, int rowNumber)` and `void close()`. The HAWQ query evaluator calls `assignRowValues` repeatedly until it returns false or until the evaluator decides that it does not need any more rows. Then it calls `close()`.
-
-You can use this interface the following way:
-
-```sql
-=> CREATE FUNCTION javatest.listComplexTests(int, int)
-     RETURNS SETOF complexTest
-     AS 'foo.fee.Fum.listComplexTest'
-   IMMUTABLE LANGUAGE java;
-```
-
-The function maps to a static java method that returns an instance that implements the `ResultSetProvider` interface.
-
-```java
-public class Fum implements ResultSetProvider
-{
-  private final int m_base;
-  private final int m_increment;
-  public Fum(int base, int increment)
-  {
-    m_base = base;
-    m_increment = increment;
-  }
-  public boolean assignRowValues(ResultSet receiver, int currentRow)
-  throws SQLException
-  {
-    // Stop when we reach 12 rows.
-    //
-    if(currentRow >= 12)
-      return false;
-    receiver.updateInt(1, m_base);
-    receiver.updateInt(2, m_base + m_increment * currentRow);
-    receiver.updateTimestamp(3, new Timestamp(System.currentTimeMillis()));
-    return true;
-  }
-  public void close()
-  {
-   // Nothing needed in this example
-  }
-  public static ResultSetProvider listComplexTests(int base, int increment)
-  throws SQLException
-  {
-    return new Fum(base, increment);
-  }
-}
-```
-
-The `listComplexTests` method is called once. It may return NULL if no results are available, or an instance of the `ResultSetProvider`. Here the Java class `Fum` implements this interface, so it returns an instance of itself. The method `assignRowValues` will then be called repeatedly until it returns false. At that time, `close()` will be called.
-
-#### Using the ResultSetHandle Interface
-
-This interface is similar to the `ResultSetProvider` interface in that it has a `close()` method that will be called at the end. But instead of having the evaluator call a method that builds one row at a time, this interface has a method that returns a `ResultSet`. The query evaluator will iterate over this set and deliver the `ResultSet` contents, one tuple at a time, to the caller until a call to `next()` returns false or the evaluator decides that no more rows are needed.
-
-Here is an example that executes a query using a statement that it obtained using the default connection. The SQL suitable for the deployment descriptor looks like this:
-
-```sql
-=> CREATE FUNCTION javatest.listSupers()
-     RETURNS SETOF pg_user
-     AS 'org.postgresql.pljava.example.Users.listSupers'
-   LANGUAGE java;
-=> CREATE FUNCTION javatest.listNonSupers()
-     RETURNS SETOF pg_user
-     AS 'org.postgresql.pljava.example.Users.listNonSupers'
-   LANGUAGE java;
-```
-
-And in the Java package `org.postgresql.pljava.example` a class `Users` is added:
-
-```java
-public class Users implements ResultSetHandle
-{
-  private final String m_filter;
-  private Statement m_statement;
-  public Users(String filter)
-  {
-    m_filter = filter;
-  }
-  public ResultSet getResultSet()
-  throws SQLException
-  {
-    m_statement =
-      DriverManager.getConnection("jdbc:default:connection").createStatement();
-    return m_statement.executeQuery("SELECT * FROM pg_user WHERE " + m_filter);
-  }
-
-  public void close()
-  throws SQLException
-  {
-    m_statement.close();
-  }
-
-  public static ResultSetHandle listSupers()
-  {
-    return new Users("usesuper = true");
-  }
-
-  public static ResultSetHandle listNonSupers()
-  {
-    return new Users("usesuper = false");
-  }
-}
-```
-## <a id="usingjdbc"></a>Using JDBC 
-
-PL/Java contains a JDBC driver that maps to the PostgreSQL SPI functions. A connection that maps to the current transaction can be obtained using the following statement:
-
-```java
-Connection conn = 
-  DriverManager.getConnection("jdbc:default:connection"); 
-```
-
-After obtaining a connection, you can prepare and execute statements as you would with other JDBC connections. The PL/Java JDBC driver has the following limitations:
-
-- The transaction cannot be managed in any way. Thus, you cannot use methods on the connection such as:
-   - `commit()`
-   - `rollback()`
-   - `setAutoCommit()`
-   - `setTransactionIsolation()`
-- Savepoints are available with some restrictions. A savepoint cannot outlive the function in which it was set and it must be rolled back or released by that same function.
-- A `ResultSet` returned from `executeQuery()` is always `FETCH_FORWARD` and `CONCUR_READ_ONLY`.
-- Meta-data is only available in PL/Java 1.1 or higher.
-- `CallableStatement` (for stored procedures) is not implemented.
-- The types `Clob` and `Blob` are not completely implemented. The types `byte[]` and `String` can be used for `bytea` and `text`, respectively.
-
-## <a id="exceptionhandling"></a>Exception Handling 
-
-You can catch and handle an exception in the HAWQ backend just like any other exception. The backend `ErrorData` structure is exposed as a property in a class called `org.postgresql.pljava.ServerException` (derived from `java.sql.SQLException`) and the Java try/catch mechanism is synchronized with the backend mechanism.
-
-**Important:** When the backend generates an exception, you cannot continue executing backend functions until your function has returned and the error has been propagated, unless you have used a savepoint. When a savepoint is rolled back, the exceptional condition is reset and you can continue your execution.
-
-## <a id="savepoints"></a>Savepoints 
-
-HAWQ savepoints are exposed using the `java.sql.Connection` interface. Two restrictions apply.
-
-- A savepoint must be rolled back or released in the function where it was set.
-- A savepoint must not outlive the function where it was set.
-
-## <a id="logging"></a>Logging 
-
-PL/Java uses the standard Java Logger. Hence, you can write things like:
-
-```java
-Logger.getAnonymousLogger().info("Time is " + new Date(System.currentTimeMillis()));
-```
-
-At present, the logger uses a handler that maps the current state of the HAWQ configuration setting `log_min_messages` to a valid Logger level and that outputs all messages using the HAWQ backend function `elog()`.
-
-**Note:** The `log_min_messages` setting is read from the database the first time a PL/Java function in a session is executed. On the Java side, the setting does not change after the first PL/Java function execution in a specific session until the HAWQ session that is working with PL/Java is restarted.
-
-The following mappings apply between the Logger levels and the HAWQ backend levels.
-
-***Table 2: PL/Java Logging Levels Mappings***
-
-| java.util.logging.Level | HAWQ Level |
-|-------------------------|------------|
-| SEVERE | ERROR |
-| WARNING |	WARNING |
-| CONFIG |	LOG |
-| INFO | INFO |
-| FINE | DEBUG1 |
-| FINER | DEBUG2 |
-| FINEST | DEBUG3 |
-
-## <a id="security"></a>Security 
-
-This section describes security aspects of using PL/Java.
-
-### <a id="installation"></a>Installation 
-
-Only a database superuser can install PL/Java. The PL/Java utility functions are installed using SECURITY DEFINER so that they execute with the access permissions that were granted to the creator of the functions.
-
-### <a id="trustedlang"></a>Trusted Language 
-
-PL/Java is a trusted language. The trusted PL/Java language has no access to the file system, as stipulated by the PostgreSQL definition of a trusted language. Any database user can create and access functions in a trusted language.
-
-PL/Java also installs a language handler for the language `javau`. This version is not trusted and only a superuser can create new functions that use it. Any user can call the functions.
-
-
-## <a id="pljavaexample"></a>Example 
-
-The following simple Java example creates a JAR file that contains a single method and runs the method.
-
-<p class="note"><b>Note:</b> The example requires Java SDK to compile the Java file.</p>
-
-The following method returns a substring.
-
-```java
-public class Example
-{
-    public static String substring(String text, int beginIndex, int endIndex)
-    {
-        return text.substring(beginIndex, endIndex);
-    }
-}
-```
-
-Enter the Java code in a text file named `Example.java`.
-
-Contents of the file `manifest.txt`:
-
-```plaintext
-Manifest-Version: 1.0
-Main-Class: Example
-Specification-Title: "Example"
-Specification-Version: "1.0"
-Created-By: 1.6.0_35-b10-428-11M3811
-Build-Date: 01../2013 10:09 AM
-```
-
-Compile the Java code:
-
-```shell
-$ javac *.java
-```
-
-Create a JAR archive named `analytics.jar` that contains the class file and the manifest file in the JAR:
-
-```shell
-$ jar cfm analytics.jar manifest.txt *.class
-```
-
-Upload the JAR file to the HAWQ master host.
-
-Run the `hawq scp` utility to copy the jar file to the HAWQ Java directory. Use the `-f` option to specify the file that contains a list of the master and segment hosts:
-
-```shell
-$ hawq scp -f hawq_hosts analytics.jar =:/usr/local/hawq/lib/postgresql/java/
-```
-
-Add the `analytics.jar` JAR file to the `pljava_classpath` configuration parameter. Refer to [Setting PL/Java Configuration Parameters](#setting_serverconfigparams) for the specific procedure.
-
-From the `psql` subsystem, run the following command to show the installed JAR files:
-
-``` sql
-=> SHOW pljava_classpath;
-```
-
-The following SQL commands create a table and define a Java function to test the method in the JAR file:
-
-```sql
-=> CREATE TABLE temp (a varchar) DISTRIBUTED randomly; 
-=> INSERT INTO temp values ('my string'); 
---Example function 
-=> CREATE OR REPLACE FUNCTION java_substring(varchar, int, int) 
-     RETURNS varchar AS 'Example.substring' 
-   LANGUAGE java; 
---Example execution 
-=> SELECT java_substring(a, 1, 5) FROM temp;
-```
-
-If you add these SQL commands to a file named `mysample.sql`, you can run the commands from the `psql` subsystem using the `\i` meta-command:
-
-``` sql
-=> \i mysample.sql 
-```
-
-The output is similar to this:
-
-```shell
-java_substring
-----------------
- y st
-(1 row)
-```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/plext/using_plperl.html.md.erb
----------------------------------------------------------------------
diff --git a/plext/using_plperl.html.md.erb b/plext/using_plperl.html.md.erb
deleted file mode 100644
index d6ffa04..0000000
--- a/plext/using_plperl.html.md.erb
+++ /dev/null
@@ -1,27 +0,0 @@
----
-title: Using PL/Perl
----
-
-This section contains an overview of the HAWQ PL/Perl language extension.
-
-## <a id="enableplperl"></a>Enabling PL/Perl
-
-If PL/Perl was enabled at HAWQ build time, HAWQ installs the PL/Perl language extension automatically. To use PL/Perl, you must also enable it in each database in which you plan to use it.
-
-On every database where you want to enable PL/Perl, connect to the database using the psql client.
-
-``` shell
-$ psql -d <dbname>
-```
-
-Replace \<dbname\> with the name of the target database.
-
-Then, run the following SQL command:
-
-``` sql
-psql# CREATE LANGUAGE plperl;
-```
-
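-After registering the language, you can create and run PL/Perl functions in that database. The following minimal sketch (the `perl_max` function name is illustrative only) returns the larger of two integers:
-
-``` sql
-psql# CREATE FUNCTION perl_max (integer, integer) RETURNS integer AS $$
-    my ($x, $y) = @_;
-    return $x if $x > $y;
-    return $y;
-$$ LANGUAGE plperl;
-
-psql# SELECT perl_max(5, 7);
-```
-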
-## <a id="references"></a>References 
-
-For more information on using PL/Perl, see the PostgreSQL PL/Perl documentation at [https://www.postgresql.org/docs/8.2/static/plperl.html](https://www.postgresql.org/docs/8.2/static/plperl.html).
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/plext/using_plpgsql.html.md.erb
----------------------------------------------------------------------
diff --git a/plext/using_plpgsql.html.md.erb b/plext/using_plpgsql.html.md.erb
deleted file mode 100644
index 3661e9b..0000000
--- a/plext/using_plpgsql.html.md.erb
+++ /dev/null
@@ -1,142 +0,0 @@
----
-title: Using PL/pgSQL in HAWQ
----
-
-SQL is the language that most relational databases use as a query language. It is portable and easy to learn. But every SQL statement must be executed individually by the database server. 
-
-PL/pgSQL is a loadable procedural language. PL/pgSQL can do the following:
-
--   create functions
--   add control structures to the SQL language
--   perform complex computations
--   inherit all user-defined types, functions, and operators
--   be trusted by the server
-
-Functions created with PL/pgSQL can be used anywhere that built-in functions can be used. For example, it is possible to create complex conditional computation functions and later use them to define operators or use them in index expressions.
-
-Every SQL statement must be executed individually by the database server. Your client application must send each query to the database server, wait for it to be processed, receive and process the results, do some computation, then send further queries to the server. This requires interprocess communication and incurs network overhead if your client is on a different machine than the database server.
-
-With PL/pgSQL, you can group a block of computation and a series of queries inside the database server, thus having the power of a procedural language and the ease of use of SQL, but with considerable savings of client/server communication overhead.
-
--   Extra round trips between client and server are eliminated
--   Intermediate results that the client does not need do not have to be marshaled or transferred between server and client
--   Multiple rounds of query parsing can be avoided
-
-This can result in a considerable performance increase as compared to an application that does not use stored functions.
-
-PL/pgSQL supports all the data types, operators, and functions of SQL.
-
-**Note:**  PL/pgSQL is automatically installed and registered in all HAWQ databases.
-
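-The following minimal sketch shows a simple PL/pgSQL function; the `add_tax` name and the 8% rate are illustrative only:
-
-```sql
-CREATE FUNCTION add_tax(subtotal numeric) RETURNS numeric AS $$
-BEGIN
-    -- return 0 for a NULL input, otherwise add an 8% tax
-    IF subtotal IS NULL THEN
-        RETURN 0;
-    END IF;
-    RETURN subtotal * 1.08;
-END;
-$$ LANGUAGE plpgsql;
-
-SELECT add_tax(100.00);
-```
-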
-## <a id="supportedargumentandresultdatatypes"></a>Supported Data Types for Arguments and Results 
-
-Functions written in PL/pgSQL accept as arguments any scalar or array data type supported by the server, and they can return a result containing this data type. They can also accept or return any composite type (row type) specified by name. It is also possible to declare a PL/pgSQL function as returning record, which means that the result is a row type whose columns are determined by specification in the calling query. See <a href="#tablefunctions" class="xref">Table Functions</a>.
-
-PL/pgSQL functions can be declared to accept a variable number of arguments by using the VARIADIC marker. This works exactly the same way as for SQL functions. See <a href="#sqlfunctionswithvariablenumbersofarguments" class="xref">SQL Functions with Variable Numbers of Arguments</a>.
-
-PL/pgSQL functions can also be declared to accept and return the polymorphic types anyelement, anyarray, anynonarray, and anyenum. The actual data types handled by a polymorphic function can vary from call to call, as discussed in <a href="http://www.postgresql.org/docs/8.4/static/extend-type-system.html#EXTEND-TYPES-POLYMORPHIC" class="xref">Section 34.2.5</a>. An example is shown in <a href="http://www.postgresql.org/docs/8.4/static/plpgsql-declarations.html#PLPGSQL-DECLARATION-ALIASES" class="xref">Section 38.3.1</a>.
-
-PL/pgSQL functions can also be declared to return a "set" (or table) of any data type that can be returned as a single instance. Such a function generates its output by executing RETURN NEXT for each desired element of the result set, or by using RETURN QUERY to output the result of evaluating a query.
-
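-As a brief sketch of the RETURN NEXT form, the following function streams matching rows back to the caller one at a time; the function name and the `sales` table (with a text `region` column) are assumptions used only for illustration:
-
-```sql
-CREATE FUNCTION sales_for_region(r text) RETURNS SETOF sales AS $$
-DECLARE
-    row_data sales%ROWTYPE;
-BEGIN
-    FOR row_data IN SELECT * FROM sales WHERE region = r LOOP
-        RETURN NEXT row_data;
-    END LOOP;
-    RETURN;
-END;
-$$ LANGUAGE plpgsql;
-
-SELECT * FROM sales_for_region('usa');
-```
-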
-Finally, a PL/pgSQL function can be declared to return void if it has no useful return value.
-
-PL/pgSQL functions can also be declared with output parameters in place of an explicit specification of the return type. This does not add any fundamental capability to the language, but it is often convenient, especially for returning multiple values. The RETURNS TABLE notation can also be used in place of RETURNS SETOF.
-
-This topic describes the following PL/pgSQL concepts:
-
--   [Table Functions](#tablefunctions)
--   [SQL Functions with Variable Numbers of Arguments](#sqlfunctionswithvariablenumbersofarguments)
--   [Polymorphic Types](#polymorphictypes)
-
-
-## <a id="tablefunctions"></a>Table Functions 
-
-
-Table functions are functions that produce a set of rows, made up of either base data types (scalar types) or composite data types (table rows). They are used like a table, view, or subquery in the FROM clause of a query. Columns returned by table functions can be included in SELECT, JOIN, or WHERE clauses in the same manner as a table, view, or subquery column.
-
-If a table function returns a base data type, the single result column name matches the function name. If the function returns a composite type, the result columns get the same names as the individual attributes of the type.
-
-A table function can be aliased in the FROM clause, but it also can be left unaliased. If a function is used in the FROM clause with no alias, the function name is used as the resulting table name.
-
-Some examples:
-
-```sql
-CREATE TABLE foo (fooid int, foosubid int, fooname text);
-
-CREATE FUNCTION getfoo(int) RETURNS SETOF foo AS $$
-    SELECT * FROM foo WHERE fooid = $1;
-$$ LANGUAGE SQL;
-
-SELECT * FROM getfoo(1) AS t1;
-
-SELECT * FROM foo
-    WHERE foosubid IN (
-                        SELECT foosubid
-                        FROM getfoo(foo.fooid) z
-                        WHERE z.fooid = foo.fooid
-                      );
-
-CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);
-
-SELECT * FROM vw_getfoo;
-```
-
-In some cases, it is useful to define table functions that can return different column sets depending on how they are invoked. To support this, the table function can be declared as returning the pseudotype record. When such a function is used in a query, the expected row structure must be specified in the query itself, so that the system can know how to parse and plan the query. Consider this example:
-
-```sql
-SELECT *
-    FROM dblink('dbname=mydb', 'SELECT proname, prosrc FROM pg_proc')
-      AS t1(proname name, prosrc text)
-    WHERE proname LIKE 'bytea%';
-```
-
-The `dblink` function executes a remote query (see `contrib/dblink`). It is declared to return `record` since it might be used for any kind of query. The actual column set must be specified in the calling query so that the parser knows, for example, what `*` should expand to.
-
-
-## <a id="sqlfunctionswithvariablenumbersofarguments"></a>SQL Functions with Variable Numbers of Arguments 
-
-SQL functions can be declared to accept variable numbers of arguments, so long as all the "optional" arguments are of the same data type. The optional arguments will be passed to the function as an array. The function is declared by marking the last parameter as VARIADIC; this parameter must be declared as being of an array type. For example:
-
-```sql
-CREATE FUNCTION mleast(VARIADIC numeric[]) RETURNS numeric AS $$
-    SELECT min($1[i]) FROM generate_subscripts($1, 1) g(i);
-$$ LANGUAGE SQL;
-
-SELECT mleast(10, -1, 5, 4.4);
- mleast 
---------
-     -1
-(1 row)
-```
-
-Effectively, all the actual arguments at or beyond the VARIADIC position are gathered up into a one-dimensional array, as if you had written
-
-```sql
-SELECT mleast(ARRAY[10, -1, 5, 4.4]);    -- doesn't work
-```
-
-You can't actually write that, though; or at least, it will not match this function definition. A parameter marked VARIADIC matches one or more occurrences of its element type, not of its own type.
-
-Sometimes it is useful to be able to pass an already-constructed array to a variadic function; this is particularly handy when one variadic function wants to pass on its array parameter to another one. You can do that by specifying VARIADIC in the call:
-
-```sql
-SELECT mleast(VARIADIC ARRAY[10, -1, 5, 4.4]);
-```
-
-This prevents expansion of the function's variadic parameter into its element type, thereby allowing the array argument value to match normally. VARIADIC can only be attached to the last actual argument of a function call.
-
-
-
-## <a id="polymorphictypes"></a>Polymorphic Types 
-
-Four pseudo-types of special interest are anyelement, anyarray, anynonarray, and anyenum, which are collectively called *polymorphic types*. Any function declared using these types is said to be a *polymorphic function*. A polymorphic function can operate on many different data types, with the specific data type(s) being determined by the data types actually passed to it in a particular call.
-
-Polymorphic arguments and results are tied to each other and are resolved to a specific data type when a query calling a polymorphic function is parsed. Each position (either argument or return value) declared as anyelement is allowed to have any specific actual data type, but in any given call they must all be the same actual type. Each position declared as anyarray can have any array data type, but similarly they must all be the same type. If there are positions declared anyarray and others declared anyelement, the actual array type in the anyarray positions must be an array whose elements are the same type appearing in the anyelement positions. anynonarray is treated exactly the same as anyelement, but adds the additional constraint that the actual type must not be an array type. anyenum is treated exactly the same as anyelement, but adds the additional constraint that the actual type must be an enum type.
-
-Thus, when more than one argument position is declared with a polymorphic type, the net effect is that only certain combinations of actual argument types are allowed. For example, a function declared as equal(anyelement, anyelement) will take any two input values, so long as they are of the same data type.
-
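-A minimal sketch of such a function, written here as a SQL-language function for brevity (the `is_equal` name is illustrative only):
-
-```sql
-CREATE FUNCTION is_equal(anyelement, anyelement) RETURNS boolean AS $$
-    SELECT $1 = $2;
-$$ LANGUAGE SQL;
-
--- both arguments must resolve to the same actual type
-SELECT is_equal(1, 1), is_equal('a'::text, 'b'::text);
-```
-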
-When the return value of a function is declared as a polymorphic type, there must be at least one argument position that is also polymorphic, and the actual data type supplied as the argument determines the actual result type for that call. For example, if there were not already an array subscripting mechanism, one could define a function that implements subscripting as `subscript(anyarray, integer) returns anyelement`. This declaration constrains the actual first argument to be an array type, and allows the parser to infer the correct result type from the actual first argument's type. Another example is that a function declared as `f(anyarray) returns anyenum` will only accept arrays of `enum` types.
-
-Note that `anynonarray` and `anyenum` do not represent separate type variables; they are the same type as `anyelement`, just with an additional constraint. For example, declaring a function as `f(anyelement, anyenum)` is equivalent to declaring it as `f(anyenum, anyenum)`; both actual arguments have to be the same enum type.
-
-Variadic functions described in <a href="#sqlfunctionswithvariablenumbersofarguments" class="xref">SQL Functions with Variable Numbers of Arguments</a> can be polymorphic: this is accomplished by declaring its last parameter as `VARIADIC anyarray`. For purposes of argument matching and determining the actual result type, such a function behaves the same as if you had written the appropriate number of `anynonarray` parameters.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/plext/using_plpython.html.md.erb
----------------------------------------------------------------------
diff --git a/plext/using_plpython.html.md.erb b/plext/using_plpython.html.md.erb
deleted file mode 100644
index 063509a..0000000
--- a/plext/using_plpython.html.md.erb
+++ /dev/null
@@ -1,789 +0,0 @@
----
-title: Using PL/Python in HAWQ
----
-
-This section provides an overview of the HAWQ PL/Python procedural language extension.
-
-## <a id="abouthawqplpython"></a>About HAWQ PL/Python 
-
-PL/Python is embedded in your HAWQ product distribution or within your HAWQ build if you chose to enable it as a build option. 
-
-With the HAWQ PL/Python extension, you can write user-defined functions in Python that take advantage of Python features and modules, enabling you to quickly build robust HAWQ database applications.
-
-HAWQ uses the system Python installation.
-
-### <a id="hawqlimitations"></a>HAWQ PL/Python Limitations 
-
-- HAWQ does not support PL/Python trigger functions.
-- PL/Python is available only as a HAWQ untrusted language.
- 
-## <a id="enableplpython"></a>Enabling and Removing PL/Python Support 
-
-To use PL/Python in HAWQ, you must either install a binary version of HAWQ that includes PL/Python or specify PL/Python as a build option when you compile HAWQ from source.
-
-You must register the PL/Python language with a database before you can create and execute a PL/Python UDF on that database. You must be a database superuser to register and remove new languages in HAWQ databases.
-
-On every database to which you want to install and enable PL/Python:
-
-1. Connect to the database using the `psql` client:
-
-    ``` shell
-    gpadmin@hawq-node$ psql -d <dbname>
-    ```
-
-    Replace \<dbname\> with the name of the target database.
-
-2. Run the following SQL command to register the PL/Python procedural language:
-
-    ``` sql
-    dbname=# CREATE LANGUAGE plpythonu;
-    ```
-
-    **Note**: `plpythonu` is installed as an *untrusted* language; it offers no way of restricting what you can program in UDFs created with the language. Creating and executing PL/Python UDFs is permitted only by database superusers and other database users explicitly `GRANT`ed the permissions.
-
-To remove support for `plpythonu` from a database, run the following SQL command; you must be a database superuser to remove a registered procedural language:
-
-``` sql
-dbname=# DROP LANGUAGE plpythonu;
-```
-
-## <a id="developfunctions"></a>Developing Functions with PL/Python 
-
-PL/Python functions are defined using the standard SQL [CREATE FUNCTION](../reference/sql/CREATE-FUNCTION.html) syntax.
-
-The body of a PL/Python user-defined function is a Python script. When the function is called, its arguments are passed as elements of the array `args[]`. You can also pass named arguments as ordinary variables to the Python script. 
-
-PL/Python function results are returned with a `return` statement, or with a `yield` statement in the case of a set-returning function.
-
-The following PL/Python function computes and returns the maximum of two integers:
-
-``` sql
-=# CREATE FUNCTION mypymax (a integer, b integer)
-     RETURNS integer
-   AS $$
-     if (a is None) or (b is None):
-       return None
-     if a > b:
-       return a
-     return b
-   $$ LANGUAGE plpythonu;
-```
-
-To execute the `mypymax` function:
-
-``` sql
-=# SELECT mypymax(5, 7);
- mypymax 
----------
-       7
-(1 row)
-```
-
-Adding the `STRICT` keyword to the `LANGUAGE` subclause instructs HAWQ to return null when any of the input arguments are null. When created as `STRICT`, the function itself need not perform null checks.
-
-The following example uses an unnamed argument, the built-in Python `max()` function, and the `STRICT` keyword to create a UDF named `mypymax2`:
-
-``` sql
-=# CREATE FUNCTION mypymax2 (a integer, integer)
-     RETURNS integer AS $$ 
-   return max(a, args[1]) 
-   $$ LANGUAGE plpythonu STRICT;
-=# SELECT mypymax2(5, 3);
- mypymax2
-----------
-        5
-(1 row)
-=# SELECT mypymax2(5, null);
- mypymax2
-----------
-       
-(1 row)
-```
-
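-The following minimal sketch shows the `yield` form for a set-returning PL/Python function; the `mypy_sequence` name is illustrative only:
-
-``` sql
-=# CREATE FUNCTION mypy_sequence (n integer)
-     RETURNS SETOF integer
-   AS $$
-     for i in range(n):
-       yield i
-   $$ LANGUAGE plpythonu;
-=# SELECT mypy_sequence(3);
- mypy_sequence 
----------------
-             0
-             1
-             2
-(3 rows)
-```
-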
-## <a id="example_createtbl"></a>Creating the Sample Data
-
-Perform the following steps to create, and insert data into, a simple table. This table will be used in later exercises.
-
-1. Create a database named `testdb`:
-
-    ``` shell
-    gpadmin@hawq-node$ createdb testdb
-    ```
-
-1. Create a table named `sales`:
-
-    ``` shell
-    gpadmin@hawq-node$ psql -d testdb
-    ```
-    ``` sql
-    testdb=> CREATE TABLE sales (id int, year int, qtr int, day int, region text)
-               DISTRIBUTED BY (id);
-    ```
-
-2. Insert data into the table:
-
-    ``` sql
-    testdb=> INSERT INTO sales VALUES
-     (1, 2014, 1,1, 'usa'),
-     (2, 2002, 2,2, 'europe'),
-     (3, 2014, 3,3, 'asia'),
-     (4, 2014, 4,4, 'usa'),
-     (5, 2014, 1,5, 'europe'),
-     (6, 2014, 2,6, 'asia'),
-     (7, 2002, 3,7, 'usa') ;
-    ```
-
-## <a id="pymod_intro"></a>Python Modules 
-A Python module is a text file that contains Python statements and definitions. The file name of a Python module follows the `<python-module-name>.py` naming convention.
-
-Should you need to build a Python module, ensure that the appropriate software is installed on the build system. Also be sure that you are building for the correct deployment architecture, i.e. 64-bit.
-
-### <a id="pymod_intro_hawq"></a>HAWQ Considerations 
-
-When installing a Python module in HAWQ, you must add the module to all segment nodes in the cluster. You must also add all Python modules to any new segment hosts when you expand your HAWQ cluster.
-
-PL/Python supports the built-in HAWQ Python module named `plpy`.  You can also install 3rd party Python modules.
-
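-Once a module is available on every segment host, you import it inside the function body just as you would in a standalone Python script. The following minimal sketch uses the standard-library `math` module, so no extra installation is required; the `mypy_sqrt` name is illustrative only:
-
-``` sql
-=# CREATE OR REPLACE FUNCTION mypy_sqrt(x float8)
-     RETURNS float8
-   AS $$
-     import math
-     return math.sqrt(x)
-   $$ LANGUAGE plpythonu;
-=# SELECT mypy_sqrt(16.0);
-```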
-
-## <a id="modules_plpy"></a>plpy Module 
-
-The HAWQ PL/Python procedural language extension automatically imports the Python module `plpy`. `plpy` implements functions to execute SQL queries and prepare execution plans for queries.  The `plpy` module also includes functions to manage errors and messages.
-   
-### <a id="executepreparesql"></a>Executing and Preparing SQL Queries 
-
-Use the PL/Python `plpy` module `plpy.execute()` function to execute a SQL query. Use the `plpy.prepare()` function to prepare an execution plan for a query. Preparing the execution plan for a query is useful if you want to run the query from multiple Python functions.
-
-#### <a id="plpyexecute"></a>plpy.execute() 
-
-Invoking `plpy.execute()` with a query string and an optional limit argument runs the query, returning the result in a Python result object. This result object:
-
-- emulates a list or dictionary object
-- returns rows that can be accessed by row number and column name; row numbering starts with 0 (zero)
-- can be modified
-- includes an `nrows()` method that returns the number of rows returned by the query
-- includes a `status()` method that returns the `SPI_execute()` return value
-
-For example, the following Python statement when present in a PL/Python user-defined function will execute a `SELECT * FROM mytable` query:
-
-``` python
-rv = plpy.execute("SELECT * FROM my_table", 3)
-```
-
-As instructed by the limit argument `3`, the `plpy.execute` function will return up to 3 rows from `my_table`. The result set is stored in the `rv` object.
-
-Access specific columns in the table by name. For example, if `my_table` has a column named `my_column`:
-
-``` python
-my_col_data = rv[i]["my_column"]
-```
-
-You specified that the function return a maximum of 3 rows in the `plpy.execute()` command above. As such, the index `i` used to access the result value `rv` must specify an integer between 0 and 2, inclusive.
-
-##### <a id="plpyexecute_example"></a>Example: plpy.execute()
-
-Example: Use `plpy.execute()` to run a similar query on the `sales` table you created in an earlier section:
-
-1. Define a PL/Python UDF that executes a query to return at most 5 rows from the `sales` table:
-
-    ``` sql
-    =# CREATE OR REPLACE FUNCTION mypytest(a integer) 
-         RETURNS text 
-       AS $$ 
-         rv = plpy.execute("SELECT * FROM sales ORDER BY id", 5)
-         region = rv[a-1]["region"]
-         return region
-       $$ LANGUAGE plpythonu;
-    ```
-
-    When executed, this UDF returns the `region` value from the `id` identified by the input value `a`. Since row numbering of the result set starts at 0, you must access the result set with index `a - 1`. 
-    
-    Specifying the `ORDER BY id` clause in the `SELECT` statement ensures that subsequent invocations of `mypytest` with the same input argument will return identical result sets.
-
-2. Run `mypytest` with an argument identifying `id` `3`:
-
-    ```sql
-    =# SELECT mypytest(3);
-     mypytest 
-    ----------
-     asia
-    (1 row)
-    ```
-    
-    Recall that row numbering in the Python result set starts at 0. Because the function accesses the result set with the index `a - 1`, the valid input argument for the `mypytest` function is an integer between 1 and 5, inclusive.
-
-    The query returns the `region` from the row with `id = 3`, `asia`.
-    
-Note: This example demonstrates some of the concepts discussed previously. It may not be the ideal way to return a specific column value.
-
-#### <a id="plpyprepare"></a>plpy.prepare() 
-
-The function `plpy.prepare()` prepares the execution plan for a query. Preparing the execution plan for a query is useful if you plan to run the query from multiple Python functions.
-
-You invoke `plpy.prepare()` with a query string. Also include a list of parameter types if you are using parameter references in the query. For example, the following statement in a PL/Python user-defined function returns the execution plan for a query:
-
-``` python
-plan = plpy.prepare("SELECT * FROM sales WHERE
-  region = $1 ORDER BY id", [ "text" ])
-```
-
-The string `text` identifies the data type of the variable `$1`. 
-
-After preparing an execution plan, you use the function `plpy.execute()` to run it.  For example:
-
-``` python
-rv = plpy.execute(plan, [ "usa" ])
-```
-
-After the plan executes, `rv` contains all rows in the `sales` table where `region = 'usa'`.
-
-The following sections describe how to pass data between PL/Python function calls.
-
-##### <a id="plpyprepare_dictionaries"></a>Saving Execution Plans
-
-When you prepare an execution plan using the PL/Python module, the plan is automatically saved. See the [Postgres Server Programming Interface (SPI)](http://www.postgresql.org/docs/8.2/static/spi.html) documentation for information about execution plans.
-
-To make effective use of saved plans across function calls, you use one of the Python persistent storage dictionaries, SD or GD.
-
-The global dictionary SD is available to store data between function calls. This variable is private static data. The global dictionary GD is public data, and is available to all Python functions within a session. *Use GD with care*.
-
-Each function gets its own execution environment in the Python interpreter, so that global data and function arguments from `myfunc1` are not available to `myfunc2`. The exception is the data in the GD dictionary, as mentioned previously.
-
-This example saves an execution plan to the SD dictionary and then executes the plan:
-
-```sql
-=# CREATE FUNCTION usesavedplan() RETURNS text AS $$
-     select1plan = plpy.prepare("SELECT region FROM sales WHERE id=1")
-     SD["s1plan"] = select1plan
-     # other function processing
-     # execute the saved plan
-     rv = plpy.execute(SD["s1plan"])
-     return rv[0]["region"]
-   $$ LANGUAGE plpythonu;
-=# SELECT usesavedplan();
-```
-
-##### <a id="plpyprepare_example"></a>Example: plpy.prepare()
-
-Example: Use `plpy.prepare()` and `plpy.execute()` to prepare and run an execution plan using the GD dictionary:
-
-1. Define a PL/Python UDF to prepare and save an execution plan to the GD. Also  return the name of the plan:
-
-    ``` sql
-    =# CREATE OR REPLACE FUNCTION mypy_prepplan() 
-         RETURNS text 
-       AS $$ 
-         plan = plpy.prepare("SELECT * FROM sales WHERE region = $1 ORDER BY id", [ "text" ])
-         GD["getregionplan"] = plan
-         return "getregionplan"
-       $$ LANGUAGE plpythonu;
-    ```
-
-    This UDF, when run, will return the name (key) of the execution plan generated from the `plpy.prepare()` call.
-
-1. Define a PL/Python UDF to run the execution plan; this function will take the plan name and `region` name as an input:
-
-    ``` sql
-    =# CREATE OR REPLACE FUNCTION mypy_execplan(planname text, regionname text)
-         RETURNS integer 
-       AS $$ 
-         rv = plpy.execute(GD[planname], [ regionname ], 5)
-         year = rv[0]["year"]
-         return year
-       $$ LANGUAGE plpythonu STRICT;
-    ```
-
-    This UDF executes the `planname` plan that was previously saved to the GD. You will call `mypy_execplan()` with the `planname` returned from the `plpy.prepare()` call.
-
-3. Execute the `mypy_prepplan()` and `mypy_execplan()` UDFs, passing `region` `usa`:
-
-    ``` sql
-    =# SELECT mypy_execplan( mypy_prepplan(), 'usa' );
-     mypy_execplan
-    ---------------
-         2014
-    (1 row)
-    ```
-
-### <a id="pythonerrors"></a>Handling Python Errors and Messages 
-
-The `plpy` module implements the following message- and error-related functions, each of which takes a message string as an argument:
-
-- `plpy.debug(msg)`
-- `plpy.log(msg)`
-- `plpy.info(msg)`
-- `plpy.notice(msg)`
-- `plpy.warning(msg)`
-- `plpy.error(msg)`
-- `plpy.fatal(msg)`
-
-`plpy.error()` and `plpy.fatal()` raise a Python exception which, if uncaught, propagates out to the calling query, possibly aborting the current transaction or subtransaction. `raise plpy.ERROR(msg)` and `raise plpy.FATAL(msg)` are equivalent to calling `plpy.error()` and `plpy.fatal()`, respectively. Use the other message functions to generate messages of different priority levels.
-
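-Because `plpy.error()` raises an exception, you can also use it to reject invalid input and abort the calling query. The following sketch (a hypothetical function named `mypytest_validate`, reusing the `sales` table from the earlier examples) raises an error when its argument is out of range:
-
-``` sql
-=# CREATE OR REPLACE FUNCTION mypytest_validate(a integer) RETURNS text AS $$
-     # plpy.error() raises an exception; if uncaught, the calling query fails with this message
-     if a < 1 or a > 5:
-         plpy.error('mypytest_validate: input must be between 1 and 5')
-     rv = plpy.execute("SELECT * FROM sales ORDER BY id", 5)
-     return rv[a-1]["region"]
-   $$ LANGUAGE plpythonu;
-```
-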
-Messages may be reported to the client and/or written to the HAWQ server log file.  The HAWQ server configuration parameters [`log_min_messages`](../reference/guc/parameter_definitions.html#log_min_messages) and [`client_min_messages`](../reference/guc/parameter_definitions.html#client_min_messages) control where messages are reported.
-
-#### <a id="plpymessages_example"></a>Example: Generating Messages
-
-In this example, you will create a PL/Python UDF that includes some debug log messages. You will also configure your `psql` session to enable debug-level client logging.
-
-1. Define a PL/Python UDF that executes a query that will return at most 5 rows from the `sales` table. Invoke the `plpy.debug()` method to display some additional information:
-
-    ``` sql
-    =# CREATE OR REPLACE FUNCTION mypytest_debug(a integer) 
-         RETURNS text 
-       AS $$ 
-         plpy.debug('mypytest_debug executing query:  SELECT * FROM sales ORDER BY id')
-         rv = plpy.execute("SELECT * FROM sales ORDER BY id", 5)
-         plpy.debug('mypytest_debug: query returned ' + str(rv.nrows()) + ' rows')
-         region = rv[a]["region"]
-         return region
-       $$ LANGUAGE plpythonu;
-    ```
-
-2. Execute the `mypytest_debug()` UDF, passing the integer `2` as an argument:
-
-    ```sql
-    =# SELECT mypytest_debug(2);
-     mypytest_debug 
-    ----------------
-     asia
-    (1 row)
-    ```
-
-3. Enable `DEBUG2` level client logging:
-
-    ``` sql
-    =# SET client_min_messages=DEBUG2;
-    ```
-    
-2. Execute the `mypytest_debug()` UDF again:
-
-    ```sql
-    =# SELECT mypytest_debug(2);
-    ...
-    DEBUG2:  mypytest_debug executing query:  SELECT * FROM sales ORDER BY id
-    ...
-    DEBUG2:  mypytest_debug: query returned 5 rows
-    ...
-    ```
-
-    Debug output is very verbose. You may need to scan a large amount of output to find the `mypytest_debug` messages. *Hint*: look both near the start and end of the output.
-    
-6. Turn off client-level debug logging:
-
-    ```sql
-    =# SET client_min_messages=NOTICE;
-    ```
-
-## <a id="pythonmodules-3rdparty"></a>3rd-Party Python Modules 
-
-PL/Python supports installation and use of 3rd-party Python Modules. This section includes examples for installing the `setuptools` and NumPy Python modules.
-
-**Note**: You must have superuser privileges to install Python modules to the system Python directories.
-
-### <a id="simpleinstall"></a>Example: Installing setuptools 
-
-In this example, you will manually install the Python `setuptools` module from the Python Package Index repository. `setuptools` enables you to easily download, build, install, upgrade, and uninstall Python packages.
-
-You will first build the module from the downloaded package, installing it on a single host. You will then build and install the module on all segment nodes in your HAWQ cluster.
-
-1. Download the `setuptools` module package from the Python Package Index site. For example, run this `wget` command on a HAWQ node as the `gpadmin` user:
-
-    ``` shell
-    $ ssh gpadmin@<hawq-node>
-    gpadmin@hawq-node$ . /usr/local/hawq/greenplum_path.sh
-    gpadmin@hawq-node$ mkdir plpython_pkgs
-    gpadmin@hawq-node$ cd plpython_pkgs
-    gpadmin@hawq-node$ export PLPYPKGDIR=`pwd`
-    gpadmin@hawq-node$ wget --no-check-certificate https://pypi.python.org/packages/source/s/setuptools/setuptools-18.4.tar.gz
-    ```
-
-2. Extract the files from the `tar.gz` package:
-
-    ``` shell
-    gpadmin@hawq-node$ tar -xzvf setuptools-18.4.tar.gz
-    ```
-
-3. Run the Python scripts to build and install the Python package; you must have superuser privileges to install Python modules to the system Python installation:
-
-    ``` shell
-    gpadmin@hawq-node$ cd setuptools-18.4
-    gpadmin@hawq-node$ python setup.py build 
-    gpadmin@hawq-node$ sudo python setup.py install
-    ```
-
-4. Run the following command to verify the module is available to Python:
-
-    ``` shell
-    gpadmin@hawq-node$ python -c "import setuptools"
-    ```
-    
-    If no error is returned, the `setuptools` module was successfully imported.
-
-5. The `setuptools` package installs the `easy_install` utility. This utility enables you to install Python packages from the Python Package Index repository. For example, this command installs the Python `pip` utility from the Python Package Index site:
-
-    ``` shell
-    gpadmin@hawq-node$ sudo easy_install pip
-    ```
-
-5. Copy the `setuptools` package to all HAWQ nodes in your cluster. For example, this command copies the `tar.gz` file from the current host to the host systems listed in the file `hawq-hosts`:
-
-    ``` shell
-    gpadmin@hawq-node$ cd $PLPYPKGDIR
-    gpadmin@hawq-node$ hawq scp -f hawq-hosts setuptools-18.4.tar.gz =:/home/gpadmin
-    ```
-
-6. Run the commands to build, install, and test the `setuptools` package you just copied to all hosts in your HAWQ cluster. For example:
-
-    ``` shell
-    gpadmin@hawq-node$ hawq ssh -f hawq-hosts
-    >>> mkdir plpython_pkgs
-    >>> cd plpython_pkgs
-    >>> tar -xzvf ../setuptools-18.4.tar.gz
-    >>> cd setuptools-18.4
-    >>> python setup.py build 
-    >>> sudo python setup.py install
-    >>> python -c "import setuptools"
-    >>> exit
-    ```
-
-### <a id="complexinstall"></a>Example: Installing NumPy 
-
-In this example, you will build and install the Python module NumPy. NumPy is a module for scientific computing with Python. For additional information about NumPy, refer to [http://www.numpy.org/](http://www.numpy.org/).
-
-This example assumes `yum` is installed on all HAWQ segment nodes and that the `gpadmin` user is a member of `sudoers` with `root` privileges on the nodes.
-
-#### <a id="complexinstall_prereq"></a>Prerequisites
-Building the NumPy package requires the following software:
-
-- OpenBLAS libraries - an open source implementation of BLAS (Basic Linear Algebra Subprograms)
-- Python development packages - python-devel
-- gcc compilers - gcc, gcc-gfortran, and gcc-c++
-
-Perform the following steps to set up the OpenBLAS compilation environment on each HAWQ node:
-
-1. Use `yum` to install gcc compilers from system repositories. The compilers are required on all hosts where you compile OpenBLAS.  For example:
-
-	``` shell
-	root@hawq-node$ yum -y install gcc gcc-gfortran gcc-c++ python-devel
-	```
-
-2. (If required) If you cannot install the correct compiler versions with `yum`, you can download the gcc compiler source, including `gfortran`, and build and install the compilers manually. Refer to [Building gfortran from Source](https://gcc.gnu.org/wiki/GFortranBinaries#FromSource) for `gfortran` build and install information.
-
-2. Create a symbolic link to `g++`, naming it `gxx`:
-
-	``` bash
-	root@hawq-node$ ln -s /usr/bin/g++ /usr/bin/gxx
-	```
-
-3. You may also need to create symbolic links to any libraries that have different versions available; for example, linking `libppl_c.so.4` to `libppl_c.so.2`.
-
-4. You can use the `hawq scp` utility to copy files to HAWQ hosts and the `hawq ssh` utility to run commands on those hosts.
-
-
-#### <a id="complexinstall_downdist"></a>Obtaining Packages
-
-Perform the following steps to download and distribute the OpenBLAS and NumPy source packages:
-
-1. Download the OpenBLAS and NumPy source files. For example, these `wget` commands download `tar.gz` files into a `packages` directory in the current working directory:
-
-    ``` shell
-    $ ssh gpadmin@<hawq-node>
-    gpadmin@hawq-node$ wget --directory-prefix=packages http://github.com/xianyi/OpenBLAS/tarball/v0.2.8
-    gpadmin@hawq-node$ wget --directory-prefix=packages http://sourceforge.net/projects/numpy/files/NumPy/1.8.0/numpy-1.8.0.tar.gz/download
-    ```
-
-2. Distribute the software to all nodes in your HAWQ cluster. For example, if you downloaded the software to `/home/gpadmin/packages`, these commands create the `packages` directory on all nodes and copy the software to the nodes listed in the `hawq-hosts` file:
-
-    ``` shell
-    gpadmin@hawq-node$ hawq ssh -f hawq-hosts mkdir packages 
-    gpadmin@hawq-node$ hawq scp -f hawq-hosts packages/* =:/home/gpadmin/packages
-    ```
-
-#### <a id="buildopenblas"></a>Build and Install OpenBLAS Libraries 
-
-Before building and installing the NumPy module, you must first build and install the OpenBLAS libraries. This section describes how to build and install the libraries on a single HAWQ node.
-
-1. Extract the OpenBLAS files from the downloaded tar file:
-
-	``` shell
-	$ ssh gpadmin@<hawq-node>
-	gpadmin@hawq-node$ cd packages
-	gpadmin@hawq-node$ tar xzf v0.2.8 -C /home/gpadmin/packages
-	gpadmin@hawq-node$ mv /home/gpadmin/packages/xianyi-OpenBLAS-9c51cdf /home/gpadmin/packages/OpenBLAS
-	```
-	
-	These commands extract the OpenBLAS tar file and simplify the unpacked directory name.
-
-2. Compile OpenBLAS. You must set the `LIBRARY_PATH` environment variable to the current `$LD_LIBRARY_PATH`. For example:
-
-	``` shell
-	gpadmin@hawq-node$ cd OpenBLAS
-	gpadmin@hawq-node$ export LIBRARY_PATH=$LD_LIBRARY_PATH
-	gpadmin@hawq-node$ make FC=gfortran USE_THREAD=0 TARGET=SANDYBRIDGE
-	```
-	
-	Replace the `TARGET` argument with the target appropriate for your hardware. The `TargetList.txt` file identifies the list of supported OpenBLAS targets.
-	
-	Compiling OpenBLAS may take some time.
-
-3. Install the OpenBLAS libraries in `/usr/local` and then change the owner of the files to `gpadmin`. You must have `root` privileges. For example:
-
-	``` shell
-	gpadmin@hawq-node$ sudo make PREFIX=/usr/local install
-	gpadmin@hawq-node$ sudo ldconfig
-	gpadmin@hawq-node$ sudo chown -R gpadmin /usr/local/lib
-	```
-
-	The following libraries are installed to `/usr/local/lib`, along with symbolic links:
-
-	``` shell
-	gpadmin@hawq-node$ ls -l /usr/local/lib
-	    ...
-	    libopenblas.a -> libopenblas_sandybridge-r0.2.8.a
-	    libopenblas_sandybridge-r0.2.8.a
-	    libopenblas_sandybridge-r0.2.8.so
-	    libopenblas.so -> libopenblas_sandybridge-r0.2.8.so
-	    libopenblas.so.0 -> libopenblas_sandybridge-r0.2.8.so
-	    ...
-	```
-
-4. Install the OpenBLAS libraries on all nodes in your HAWQ cluster. You can use the `hawq ssh` utility to similarly build and install the OpenBLAS libraries on each of the nodes. 
-
-    Or, you may choose to copy the OpenBLAS libraries you just built to all of the HAWQ cluster nodes. For example, these `hawq ssh` and `hawq scp` commands install prerequisite packages, and copy and install the OpenBLAS libraries on the hosts listed in the `hawq-hosts` file.
-
-    ``` shell
-    $ hawq ssh -f hawq-hosts -e 'sudo yum -y install gcc gcc-gfortran gcc-c++ python-devel'
-    $ hawq ssh -f hawq-hosts -e 'ln -s /usr/bin/g++ /usr/bin/gxx'
-    $ hawq ssh -f hawq-hosts -e 'sudo chown gpadmin /usr/local/lib'
-    $ hawq scp -f hawq-hosts /usr/local/lib/libopen*sandy* =:/usr/local/lib
-    ```
-    ``` shell
-    $ hawq ssh -f hawq-hosts
-    >>> cd /usr/local/lib
-    >>> ln -s libopenblas_sandybridge-r0.2.8.a libopenblas.a
-    >>> ln -s libopenblas_sandybridge-r0.2.8.so libopenblas.so
-    >>> ln -s libopenblas_sandybridge-r0.2.8.so libopenblas.so.0
-    >>> sudo ldconfig
-    ```
-
-#### <a id="buildinstallnumpy"></a>Build and Install NumPy
-
-After you have installed the OpenBLAS libraries, you can build and install the NumPy module. These steps install the NumPy module on a single host. You can use the `hawq ssh` utility to build and install the NumPy module on multiple hosts.
-
-1. Extract the NumPy module source files:
-
-	``` shell
-	gpadmin@hawq-node$ cd /home/gpadmin/packages
-	gpadmin@hawq-node$ tar xzf numpy-1.8.0.tar.gz
-	```
-	
-	Unpacking the `numpy-1.8.0.tar.gz` file creates a directory named `numpy-1.8.0` in the current directory.
-
-2. Set up the environment for building and installing NumPy:
-
-	``` shell
-	gpadmin@hawq-node$ export BLAS=/usr/local/lib/libopenblas.a
-	gpadmin@hawq-node$ export LAPACK=/usr/local/lib/libopenblas.a
-	gpadmin@hawq-node$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/
-	gpadmin@hawq-node$ export LIBRARY_PATH=$LD_LIBRARY_PATH
-	```
-
-3. Build and install NumPy. (Building the NumPy package might take some time.)
-
-	``` shell
-	gpadmin@hawq-node$ cd numpy-1.8.0
-	gpadmin@hawq-node$ python setup.py build
-	gpadmin@hawq-node$ sudo python setup.py install
-	```
-
-	**Note:** If the NumPy module did not successfully build, the NumPy build process might need a `site.cfg` file that specifies the location of the OpenBLAS libraries. Create the `site.cfg` file in the NumPy package directory:
-
-	``` shell
-	gpadmin@hawq-node$ touch site.cfg
-	```
-
-	Add the following to the `site.cfg` file and run the NumPy build command again:
-
-	``` pre
-	[default]
-	library_dirs = /usr/local/lib
-
-	[atlas]
-	atlas_libs = openblas
-	library_dirs = /usr/local/lib
-
-	[lapack]
-	lapack_libs = openblas
-	library_dirs = /usr/local/lib
-
-	# added for scikit-learn 
-	[openblas]
-	libraries = openblas
-	library_dirs = /usr/local/lib
-	include_dirs = /usr/local/include
-	```
-
-4. Verify that the NumPy module is available for import by Python:
-
-	``` shell
-	gpadmin@hawq-node$ cd $HOME
-	gpadmin@hawq-node$ python -c "import numpy"
-	```
-	
-	If no error is returned, the NumPy module was successfully imported.
-
-5. As performed in the `setuptools` Python module installation, use the `hawq ssh` utility to build, install, and test the NumPy module on all HAWQ nodes.
-
-5. The environment variables that were required to build the NumPy module are also required in the `gpadmin` runtime environment to run Python NumPy functions. You can use the `echo` command to add the environment variables to `gpadmin`'s `.bashrc` file. For example, the following `echo` commands add the environment variables to the `.bashrc` file in `gpadmin`'s home directory:
-
-	``` shell
-	$ echo -e '\n#Needed for NumPy' >> ~/.bashrc
-	$ echo -e 'export BLAS=/usr/local/lib/libopenblas.a' >> ~/.bashrc
-	$ echo -e 'export LAPACK=/usr/local/lib/libopenblas.a' >> ~/.bashrc
-	$ echo -e 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib' >> ~/.bashrc
-	$ echo -e 'export LIBRARY_PATH=$LD_LIBRARY_PATH' >> ~/.bashrc
-	```
-
-    You can use the `hawq ssh` utility with these `echo` commands to add the environment variables to the `.bashrc` file on all nodes in your HAWQ cluster.
-
-### <a id="testingpythonmodules"></a>Testing Installed Python Modules 
-
-You can create a simple PL/Python user-defined function (UDF) to validate that a Python module is available in HAWQ. This example tests the NumPy module.
-
-1. Create a PL/Python UDF that imports the NumPy module:
-
-    ``` shell
-    gpadmin@hawq_node$ psql -d testdb
-    ```
-    ``` sql
-    =# CREATE OR REPLACE FUNCTION test_importnumpy(x int)
-       RETURNS text
-       AS $$
-         try:
-             from numpy import *
-             return 'SUCCESS'
-         except ImportError, e:
-             return 'FAILURE'
-       $$ LANGUAGE plpythonu;
-    ```
-
-    The function returns SUCCESS if the module is imported, and FAILURE if an import error occurs.
-
-2. Create a table that loads data on each HAWQ segment instance:
-
-    ``` sql
-    => CREATE TABLE disttbl AS (SELECT x FROM generate_series(1,50) x ) DISTRIBUTED BY (x);
-    ```
-    
-    Depending upon the size of your HAWQ installation, you may need to generate a larger series to ensure data is distributed to all segment instances.
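-
-    To confirm that data reached every segment, you can optionally check the per-segment row counts first; this quick sanity check uses the same `gp_segment_id` system column as the next step:
-
-    ``` sql
-    =# SELECT gp_segment_id, count(*)
-         FROM disttbl
-         GROUP BY gp_segment_id
-         ORDER BY gp_segment_id;
-    ```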
-
-3. Run the UDF on the segment nodes where data is stored in the primary segment instances.
-
-    ``` sql
-    =# SELECT gp_segment_id, test_importnumpy(1) AS status
-         FROM disttbl
-         GROUP BY gp_segment_id, status
-         ORDER BY gp_segment_id, status;
-    ```
-
-    The `SELECT` command returns SUCCESS if the UDF imported the Python module on the HAWQ segment instance. FAILURE is returned if the Python module could not be imported.
-   
-
-#### <a id="troubleshootingpythonmodules"></a>Troubleshooting Python Module Import Failures
-
-Possible causes of a Python module import failure include:
-
-- A problem accessing required libraries. For the NumPy example, HAWQ might have a problem accessing the OpenBLAS libraries or the Python libraries on a segment host.
-
-	*Try*: Test importing the module on the segment host. This `hawq ssh` command tests importing the NumPy module on the segment host named mdw1.
-
-	``` shell
-	gpadmin@hawq-node$ hawq ssh -h mdw1 python -c "import numpy"
-	```
-
-- Environment variables may not be configured in the HAWQ environment. The Python import command may not return an error in this case.
-
-	*Try*: Ensure that the environment variables are properly set. For the NumPy example, ensure that the environment variables listed at the end of the section [Build and Install NumPy](#buildinstallnumpy) are defined in the `.bashrc` file for the `gpadmin` user on the master and all segment nodes.
-	
-	**Note:** The `.bashrc` file for the `gpadmin` user on the HAWQ master and all segment nodes must source the `greenplum_path.sh` file.
-
-	
-- HAWQ might not have been restarted after adding environment variable settings to the `.bashrc` file. Again, the Python import command may not return an error in this case.
-
-	*Try*: Ensure that you have restarted HAWQ.
-	
-	``` shell
-	gpadmin@master$ hawq restart cluster
-	```
-
-## <a id="dictionarygd"></a>Using the GD Dictionary to Improve PL/Python Performance 
-
-Importing a Python module is an expensive operation that can adversely affect performance. If you are importing the same module frequently, you can use Python global variables to import the module on the first invocation and forego loading the module on subsequent imports. 
-
-The following PL/Python function uses the GD persistent storage dictionary to avoid importing the module NumPy if it has already been imported in the GD. The UDF includes a call to `plpy.notice()` to display a message when importing the module.
-
-``` sql
-=# CREATE FUNCTION mypy_import2gd() RETURNS text AS $$ 
-     if 'numpy' not in GD:
-       plpy.notice('mypy_import2gd: importing module numpy')
-       import numpy
-       GD['numpy'] = numpy
-     return 'numpy'
-   $$ LANGUAGE plpythonu;
-```
-``` sql
-=# SELECT mypy_import2gd();
-NOTICE:  mypy_import2gd: importing module numpy
-CONTEXT:  PL/Python function "mypy_import2gd"
- mypy_import2gd 
-----------------
- numpy
-(1 row)
-```
-``` sql
-=# SELECT mypy_import2gd();
- mypy_import2gd 
-----------------
- numpy
-(1 row)
-```
-
-The second `SELECT` call does not include the `NOTICE` message, indicating that the module was obtained from the GD.
-
-## <a id="references"></a>References 
-
-This section lists references for using PL/Python.
-
-### <a id="technicalreferences"></a>Technical References 
-
-For information about PL/Python in HAWQ, see the [PL/Python - Python Procedural Language](http://www.postgresql.org/docs/8.2/static/plpython.html) PostgreSQL documentation.
-
-For information about Python Package Index (PyPI), refer to [PyPI - the Python Package Index](https://pypi.python.org/pypi).
-
-The following Python modules may be of interest:
-
-- [SciPy library](http://www.scipy.org/scipylib/index.html) provides user-friendly and efficient numerical routines including those for numerical integration and optimization. To download the SciPy package tar file:
-
-    ``` shell
-    hawq-node$ wget http://sourceforge.net/projects/scipy/files/scipy/0.10.1/scipy-0.10.1.tar.gz
-    ```
-
-- [Natural Language Toolkit](http://www.nltk.org/) (`nltk`) is a platform for building Python programs to work with human language data. 
-
-    The Python [`distribute`](https://pypi.python.org/pypi/distribute/0.6.21) package is required for `nltk`. The `distribute` package should be installed before installing `ntlk`. To download the `distribute` package tar file:
-
-    ``` shell
-    hawq-node$ wget http://pypi.python.org/packages/source/d/distribute/distribute-0.6.21.tar.gz
-    ```
-
-    To download the `nltk` package tar file:
-
-    ``` shell
-    hawq-node$ wget http://pypi.python.org/packages/source/n/nltk/nltk-2.0.2.tar.gz#md5=6e714ff74c3398e88be084748df4e657
-    ```
-
-### <a id="usefulreading"></a>Useful Reading 
-
-For information about the Python language, see [http://www.python.org/](http://www.python.org/).
-
-A set of slides from a talk about how the Pivotal Data Science team uses the PyData stack in the Pivotal MPP databases and on Pivotal Cloud Foundry is available at [http://www.slideshare.net/SrivatsanRamanujam/all-thingspythonpivotal](http://www.slideshare.net/SrivatsanRamanujam/all-thingspythonpivotal).
-



[37/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-register_files.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-register_files.html.md.erb b/markdown/datamgmt/load/g-register_files.html.md.erb
new file mode 100644
index 0000000..25c24ca
--- /dev/null
+++ b/markdown/datamgmt/load/g-register_files.html.md.erb
@@ -0,0 +1,217 @@
+---
+title: Registering Files into HAWQ Internal Tables
+---
+
+The `hawq register` utility loads and registers HDFS data files or folders into HAWQ internal tables. Files can be read directly, rather than having to be copied or loaded, resulting in higher performance and more efficient transaction processing.
+
+Data from the file or directory specified by \<hdfsfilepath\> is loaded into the appropriate HAWQ table directory in HDFS and the utility updates the corresponding HAWQ metadata for the files. Either AO or Parquet-formatted tables in HDFS can be loaded into a corresponding table in HAWQ.
+
+You can use `hawq register` either to:
+
+-  Load and register external Parquet-formatted file data generated by an external system such as Hive or Spark.
+-  Recover cluster data from a backup cluster for disaster recovery. 
+
+Requirements for running `hawq register` on the server are:
+
+-   All hosts in your HAWQ cluster (master and segments) must have network access between them and the hosts containing the data to be loaded.
+-   The Hadoop client must be configured and the hdfs filepath specified.
+-   The files to be registered and the HAWQ table must be located in the same HDFS cluster.
+-   The target table DDL is configured with the correct data type mapping.
+
+##<a id="topic1__section2"></a>Registering Externally Generated HDFS File Data to an Existing Table
+
+Files or folders in HDFS can be registered into an existing table, allowing them to be managed as a HAWQ internal table. When registering files, you can optionally specify the maximum amount of data to be loaded, in bytes, using the `--eof` option. If registering a folder, the actual file sizes are used. 
+
+Only HAWQ or Hive-generated Parquet tables are supported. Partitioned tables are not supported. Attempting to register an unsupported table results in an error.
+
+Metadata for the Parquet file(s) and the destination table must be consistent. Different data types are used by HAWQ tables and Parquet files, so data must be mapped. You must verify that the structure of the Parquet files and the HAWQ table are compatible before running `hawq register`. Not all HIVE data types can be mapped to HAWQ equivalents. The currently-supported HIVE data types are: boolean, int, smallint, tinyint, bigint, float, double, string, binary, char, and varchar.
+
+As a best practice, create a copy of the Parquet file to be registered before running `hawq register`. You can then run `hawq register` on the copy, leaving the original file available for additional Hive queries or in case a data mapping error is encountered.
+
+###Limitations for Registering Hive Tables to HAWQ 
+
+The following HIVE data types cannot be converted to HAWQ equivalents: timestamp, decimal, array, struct, map, and union.   
+
+###Example: Registering a Hive-Generated Parquet File
+
+This example shows how to register a HIVE-generated parquet file in HDFS into the table `parquet_table` in HAWQ, which is in the database named `postgres`. The file path of the HIVE-generated file is `hdfs://localhost:8020/temp/hive.paq`.
+
+In this example, the location of the database is `hdfs://localhost:8020/hawq_default`, the tablespace id is 16385, the database id is 16387, the table filenode id is 77160, and the last file under the filenode is numbered 7.
+
+Run the `hawq register` command for the file location  `hdfs://localhost:8020/temp/hive.paq`:
+
+``` pre
+$ hawq register -d postgres -f hdfs://localhost:8020/temp/hive.paq parquet_table
+```
+
+After running the `hawq register` command, the corresponding new location of the file in HDFS is:  `hdfs://localhost:8020/hawq_default/16385/16387/77160/8`. 
+
+The command updates the metadata of the table `parquet_table` in HAWQ, which is contained in the table `pg_aoseg.pg_paqseg_77160`. The pg\_aoseg table is a fixed schema for row-oriented and Parquet AO tables. For row-oriented tables, the table name prefix is pg\_aoseg. For Parquet tables, the table name prefix is pg\_paqseg. 77160 is the relation id of the table.
+
+You can locate the table by one of two methods: by relation ID or by table name.
+
+To find the relation ID, run the following command on the catalog table pg\_class: 
+
+```
+SELECT oid FROM pg_class WHERE relname = '$relname'
+```
+To find the table name, run the command: 
+
+```
+SELECT segrelid FROM pg_appendonly WHERE relid = $relid
+```
+then run: 
+
+```
+SELECT relname FROM pg_class WHERE oid = segrelid
+```
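+
+For example, using the hypothetical values from the example above (the OIDs on your system will differ), the lookups for `parquet_table` would be:
+
+```
+SELECT oid FROM pg_class WHERE relname = 'parquet_table';
+-- suppose this returns 77160; then find the metadata table name:
+SELECT relname FROM pg_class
+  WHERE oid = (SELECT segrelid FROM pg_appendonly WHERE relid = 77160);
+-- returns pg_paqseg_77160
+```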
+
+## <a id="topic1__section3"></a>Registering Data Using Information from a YAML Configuration File
+ 
+The `hawq register` command can register HDFS files using metadata loaded from a YAML configuration file, specified with the `--config <yaml_config>` option. Both AO and Parquet tables can be registered. Tables need not exist in HAWQ before being registered. In disaster recovery, information in a YAML-format file created by the `hawq extract` command can re-create HAWQ tables using metadata from a backup checkpoint.
+
+You can also use a YAML configuration file to append HDFS files to an existing HAWQ table or to create a table and register it into HAWQ.
+
+For disaster recovery, tables can be re-registered using the HDFS files and a YAML file. The clusters are assumed to have data periodically imported from Cluster A to Cluster B. 
+
+Data is registered according to the following conditions: 
+
+-  Existing tables have files appended to the existing HAWQ table.
+-  If a table does not exist, it is created and registered into HAWQ. The catalog table will be updated with the file size specified by the YAML file.
+-  If the -\\\-force option is used, the data in existing catalog tables is erased and re-registered. All HDFS-related catalog contents in `pg_aoseg.pg_paqseg_$relid ` are cleared. The original files on HDFS are retained.
+
+Tables using random distribution are preferred for registering into HAWQ.
+
+There are additional restrictions when registering hash tables. When registering hash-distributed tables using a YAML file, the distribution policy in the YAML file must match that of the table being registered into and the order of the files in the YAML file should reflect the hash distribution. The size of the registered file should be identical to or a multiple of the hash table bucket number. 
+
+Only single-level partitioned tables can be registered into HAWQ.
+
+
+###Example: Registration using a YAML Configuration File
+
+This example shows how to use `hawq register` to register HDFS data using a YAML configuration file generated by `hawq extract`.
+
+First, create a table in SQL and insert some data into it.  
+
+```
+=> CREATE TABLE paq1(a int, b varchar(10)) WITH (appendonly=true, orientation=parquet);
+=> INSERT INTO paq1 VALUES(generate_series(1,1000), 'abcde');
+```
+
+Extract the table metadata by using the `hawq extract` utility.
+
+```
+hawq extract -o paq1.yml paq1
+```
+
+Register the data into new table paq2, using the -\\\-config option to identify the YAML file.
+
+```
+hawq register --config paq1.yml paq2
+```
+Query the new table to verify that the content has been registered.
+
+```
+=> SELECT count(*) FROM paq2;
+```
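+
+Assuming the 1,000-row insert into `paq1` above succeeded, the count returned for `paq2` should be 1000:
+
+```
+ count 
+-------
+  1000
+(1 row)
+```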
+
+
+## <a id="topic1__section4"></a>Data Type Mapping
+
+HIVE and Parquet tables use different data types than HAWQ tables and must be mapped for metadata compatibility. You are responsible for making sure your implementation is mapped to the appropriate data type before running `hawq register`. The tables below show equivalent data types, if available.
+
+<span class="tablecap">Table 1. HAWQ to Parquet Mapping</span>
+
+|HAWQ Data Type   | Parquet Data Type  |
+| :------------| :---------------|
+| bool        | boolean       |
+| int2/int4/date        | int32       |
+| int8/money       | int64      |
+| time/timestamptz/timestamp       | int64      |
+| float4        | float       |
+|float8        | double       |
+|bit/varbit/bytea/numeric       | Byte array       |
+|char/bpchar/varchar/name| Byte array |
+| text/xml/interval/timetz  | Byte array  |
+| macaddr/inet/cidr  | Byte array  |
+
+**Additional HAWQ-to-Parquet Mapping**
+
+**point**:  
+
+``` 
+group {
+    required int x;
+    required int y;
+}
+```
+
+**circle:** 
+
+```
+group {
+    required int x;
+    required int y;
+    required int r;
+}
+```
+
+**box:**  
+
+```
+group {
+    required int x1;
+    required int y1;
+    required int x2;
+    required int y2;
+}
+```
+
+**lseg:** 
+
+
+```
+group {
+    required int x1;
+    required int y1;
+    required int x2;
+    required int y2;
+}
+``` 
+
+**path**:
+  
+```
+group {
+    repeated group {
+        required int x;
+        required int y;
+    }
+}
+```
+
+
+<span class="tablecap">Table 2. HIVE to HAWQ Mapping</span>
+
+|HIVE Data Type   | HAWQ Data Type  |
+| :------------| :---------------|
+| boolean        | bool       |
+| tinyint        | int2       |
+| smallint       | int2/smallint      |
+| int            | int4 / int |
+| bigint         | int8 / bigint      |
+| float        | float4       |
+| double	| float8 |
+| string        | varchar       |
+| binary      | bytea       |
+| char | char |
+| varchar  | varchar  |
+
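+For example, a Hive Parquet table with columns of type `boolean`, `int`, `double`, and `string` could be registered into a HAWQ table created with the corresponding mapped types from Table 2. The table and column names below are hypothetical and are shown only as a sketch of the mapping:
+
+```
+CREATE TABLE hive_sales_target (
+    flag  bool,      -- maps a Hive boolean column
+    qty   int4,      -- maps a Hive int column
+    price float8,    -- maps a Hive double column
+    name  varchar    -- maps a Hive string column
+) WITH (appendonly=true, orientation=parquet);
+```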
+
+### Extracting Metadata
+
+For more information on extracting metadata to a YAML file and the output content of the YAML file, refer to the reference page for [hawq extract](../../reference/cli/admin_utilities/hawqextract.html#topic1).
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-representing-null-values.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-representing-null-values.html.md.erb b/markdown/datamgmt/load/g-representing-null-values.html.md.erb
new file mode 100644
index 0000000..4d4ffdd
--- /dev/null
+++ b/markdown/datamgmt/load/g-representing-null-values.html.md.erb
@@ -0,0 +1,7 @@
+---
+title: Representing NULL Values
+---
+
+`NULL` represents an unknown piece of data in a column or field. Within your data files you can designate a string to represent null values. The default string is `\N` (backslash-N) in `TEXT` mode, or an empty value with no quotations in `CSV` mode. You can also declare a different string using the `NULL` clause of `COPY`, `CREATE EXTERNAL TABLE`, or the `hawq load` control file when defining your data format. For example, you can use an empty string if you do not want to distinguish nulls from empty strings. When using the HAWQ loading tools, any data item that matches the designated null string is considered a null value.
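+
+For example, a minimal sketch (the table name and file path are illustrative) that loads a pipe-delimited text file in which an empty string represents `NULL`:
+
+``` sql
+=> COPY country FROM '/data/gpdb/country_data'
+   WITH DELIMITER '|' NULL '';
+```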
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-running-copy-in-single-row-error-isolation-mode.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-running-copy-in-single-row-error-isolation-mode.html.md.erb b/markdown/datamgmt/load/g-running-copy-in-single-row-error-isolation-mode.html.md.erb
new file mode 100644
index 0000000..ba0603c
--- /dev/null
+++ b/markdown/datamgmt/load/g-running-copy-in-single-row-error-isolation-mode.html.md.erb
@@ -0,0 +1,17 @@
+---
+title: Running COPY in Single Row Error Isolation Mode
+---
+
+By default, `COPY` stops an operation at the first error: if the data contains an error, the operation fails and no data loads. If you run `COPY FROM` in *single row error isolation mode*, HAWQ skips rows that contain format errors and loads properly formatted rows. Single row error isolation mode applies only to rows in the input file that contain format errors. If the data contains a constraint error such as violation of a `NOT NULL` or `CHECK` constraint, the operation fails and no data loads.
+
+Specifying `SEGMENT REJECT LIMIT` runs the `COPY` operation in single row error isolation mode. Specify the acceptable number of error rows on each segment, after which the entire `COPY FROM` operation fails and no rows load. The error row count is for each HAWQ segment, not for the entire load operation.
+
+If the `COPY` operation does not reach the error limit, HAWQ loads all correctly-formatted rows and discards the error rows. The `LOG ERRORS INTO` clause allows you to keep error rows for further examination. Use `LOG ERRORS` to capture data formatting errors internally in HAWQ. For example:
+
+``` sql
+=> COPY country FROM '/data/gpdb/country_data'
+   WITH DELIMITER '|' LOG ERRORS INTO errtable
+   SEGMENT REJECT LIMIT 10 ROWS;
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-starting-and-stopping-gpfdist.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-starting-and-stopping-gpfdist.html.md.erb b/markdown/datamgmt/load/g-starting-and-stopping-gpfdist.html.md.erb
new file mode 100644
index 0000000..7e2cca9
--- /dev/null
+++ b/markdown/datamgmt/load/g-starting-and-stopping-gpfdist.html.md.erb
@@ -0,0 +1,42 @@
+---
+title: Starting and Stopping gpfdist
+---
+
+You can start `gpfdist` in your current directory location or in any directory that you specify. The default port is `8080`.
+
+From your current directory, type:
+
+``` shell
+$ gpfdist &
+```
+
+From a different directory, specify the directory from which to serve files, and optionally, the HTTP port to run on.
+
+To start `gpfdist` in the background and log output messages and errors to a log file:
+
+``` shell
+$ gpfdist -d /var/load_files -p 8081 -l /home/gpadmin/log &
+```
+
+For multiple `gpfdist` instances on the same ETL host (see [External Tables Using Multiple gpfdist Instances with Multiple NICs](g-about-gpfdist-setup-and-performance.html#topic14__du165882)), use a different base directory and port for each instance. For example:
+
+``` shell
+$ gpfdist -d /var/load_files1 -p 8081 -l /home/gpadmin/log1 &
+$ gpfdist -d /var/load_files2 -p 8082 -l /home/gpadmin/log2 &
+```
+
+To stop `gpfdist` when it is running in the background:
+
+First find its process id:
+
+``` shell
+$ ps -ef | grep gpfdist
+```
+
+Then kill the process. In this example, the process ID is 3456:
+
+``` shell
+$ kill 3456
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-transfer-and-store-the-data.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-transfer-and-store-the-data.html.md.erb b/markdown/datamgmt/load/g-transfer-and-store-the-data.html.md.erb
new file mode 100644
index 0000000..8a6d7ab
--- /dev/null
+++ b/markdown/datamgmt/load/g-transfer-and-store-the-data.html.md.erb
@@ -0,0 +1,16 @@
+---
+title: Transfer and Store the Data
+---
+
+Use one of the following approaches to transform the data with `gpfdist`.
+
+-   `GPLOAD` supports only input transformations, but is easier to implement in many cases.
+-   `INSERT INTO SELECT FROM` supports both input and output transformations, but exposes more details.
+
+-   **[Transforming with GPLOAD](../../datamgmt/load/g-transforming-with-gpload.html)**
+
+-   **[Transforming with INSERT INTO SELECT FROM](../../datamgmt/load/g-transforming-with-insert-into-select-from.html)**
+
+-   **[Configuration File Format](../../datamgmt/load/g-configuration-file-format.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-transforming-with-gpload.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-transforming-with-gpload.html.md.erb b/markdown/datamgmt/load/g-transforming-with-gpload.html.md.erb
new file mode 100644
index 0000000..438fedb
--- /dev/null
+++ b/markdown/datamgmt/load/g-transforming-with-gpload.html.md.erb
@@ -0,0 +1,30 @@
+---
+title: Transforming with GPLOAD
+---
+
+To transform data using the `GPLOAD` control file, you must specify both the file name for the `TRANSFORM_CONFIG` file and the name of the `TRANSFORM` operation in the `INPUT` section of the `GPLOAD` control file.
+
+-   `TRANSFORM_CONFIG` specifies the name of the `gpfdist` configuration file.
+-   The `TRANSFORM` setting indicates the name of the transformation that is described in the file named in `TRANSFORM_CONFIG`.
+
+``` pre
+---
+VERSION: 1.0.0.1
+DATABASE: ops
+USER: gpadmin
+GPLOAD:
+INPUT:
+- TRANSFORM_CONFIG: config.yaml
+- TRANSFORM: prices_input
+- SOURCE:
+FILE: prices.xml
+```
+
+The transformation operation name must appear in two places: in the `TRANSFORM` setting of the `gpfdist` configuration file and in the `TRANSFORMATIONS` section of the file named in the `TRANSFORM_CONFIG` section.
+
+In the `GPLOAD` control file, the optional parameter `MAX_LINE_LENGTH` specifies the maximum length of a line in the XML transformation data that is passed to `hawq load`.
+
+The following diagram shows the relationships between the `GPLOAD` control file, the `gpfdist` configuration file, and the XML data file.
+
+<img src="../../images/03-gpload-files.jpg" class="image" width="415" height="258" />
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-transforming-with-insert-into-select-from.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-transforming-with-insert-into-select-from.html.md.erb b/markdown/datamgmt/load/g-transforming-with-insert-into-select-from.html.md.erb
new file mode 100644
index 0000000..d91cc93
--- /dev/null
+++ b/markdown/datamgmt/load/g-transforming-with-insert-into-select-from.html.md.erb
@@ -0,0 +1,22 @@
+---
+title: Transforming with INSERT INTO SELECT FROM
+---
+
+Specify the transformation in the `CREATE EXTERNAL TABLE` definition's `LOCATION` clause. For example, the transform `prices_input` is named in the `LOCATION` clause of the following command. (Run `gpfdist` first, using the command `gpfdist -c config.yaml`.)
+
+``` sql
+CREATE READABLE EXTERNAL TABLE prices_readable (LIKE prices)
+   LOCATION ('gpfdist://hostname:8081/prices.xml#transform=prices_input')
+   FORMAT 'TEXT' (DELIMITER '|')
+   LOG ERRORS INTO error_log SEGMENT REJECT LIMIT 10;
+```
+
+In the command above, change *hostname* to your hostname. `prices_input` comes from the configuration file.
+
+The following query loads data into the `prices` table.
+
+``` sql
+INSERT INTO prices SELECT * FROM prices_readable;
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-transforming-xml-data.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-transforming-xml-data.html.md.erb b/markdown/datamgmt/load/g-transforming-xml-data.html.md.erb
new file mode 100644
index 0000000..f9520bb
--- /dev/null
+++ b/markdown/datamgmt/load/g-transforming-xml-data.html.md.erb
@@ -0,0 +1,34 @@
+---
+title: Transforming XML Data
+---
+
+The HAWQ data loader *gpfdist* provides transformation features to load XML data into a table and to write data from HAWQ to XML files. The following diagram shows *gpfdist* performing an XML transform.
+
+<a id="topic75__du185408"></a>
+<span class="figtitleprefix">Figure: </span>External Tables using XML Transformations
+
+<img src="../../images/ext-tables-xml.png" class="image" />
+
+To load or extract XML data:
+
+-   [Determine the Transformation Schema](g-determine-the-transformation-schema.html#topic76)
+-   [Write a Transform](g-write-a-transform.html#topic77)
+-   [Write the gpfdist Configuration](g-write-the-gpfdist-configuration.html#topic78)
+-   [Load the Data](g-load-the-data.html#topic79)
+-   [Transfer and Store the Data](g-transfer-and-store-the-data.html#topic80)
+
+The first three steps comprise most of the development effort. The last two steps are straightforward and repeatable, suitable for production.
+
+-   **[Determine the Transformation Schema](../../datamgmt/load/g-determine-the-transformation-schema.html)**
+
+-   **[Write a Transform](../../datamgmt/load/g-write-a-transform.html)**
+
+-   **[Write the gpfdist Configuration](../../datamgmt/load/g-write-the-gpfdist-configuration.html)**
+
+-   **[Load the Data](../../datamgmt/load/g-load-the-data.html)**
+
+-   **[Transfer and Store the Data](../../datamgmt/load/g-transfer-and-store-the-data.html)**
+
+-   **[XML Transformation Examples](../../datamgmt/load/g-xml-transformation-examples.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-troubleshooting-gpfdist.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-troubleshooting-gpfdist.html.md.erb b/markdown/datamgmt/load/g-troubleshooting-gpfdist.html.md.erb
new file mode 100644
index 0000000..2e6a450
--- /dev/null
+++ b/markdown/datamgmt/load/g-troubleshooting-gpfdist.html.md.erb
@@ -0,0 +1,23 @@
+---
+title: Troubleshooting gpfdist
+---
+
+The segments access `gpfdist` at runtime. Ensure that the HAWQ segment hosts have network access to `gpfdist`. `gpfdist` is a web server: test connectivity by running the following command from each host in the HAWQ array (segments and master):
+
+``` shell
+$ wget http://gpfdist_hostname:port/filename      
+```
+
+The `CREATE EXTERNAL TABLE` definition must have the correct host name, port, and file names for `gpfdist`. Specify file names and paths relative to the directory from which `gpfdist` serves files (the directory path specified when `gpfdist` started). See [Creating External Tables - Examples](creating-external-tables-examples.html#topic44).
+
+If you start `gpfdist` on your system and IPv6 networking is disabled, `gpfdist` displays this warning message when testing for an IPv6 port.
+
+``` pre
+[WRN gpfdist.c:2050] Creating the socket failed
+```
+
+If the corresponding IPv4 port is available, `gpfdist` uses that port and the warning for IPv6 port can be ignored. To see information about the ports that `gpfdist` tests, use the `-V` option.
+
+For information about IPv6 and IPv4 networking, see your operating system documentation.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-unloading-data-from-hawq-database.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-unloading-data-from-hawq-database.html.md.erb b/markdown/datamgmt/load/g-unloading-data-from-hawq-database.html.md.erb
new file mode 100644
index 0000000..e0690ad
--- /dev/null
+++ b/markdown/datamgmt/load/g-unloading-data-from-hawq-database.html.md.erb
@@ -0,0 +1,17 @@
+---
+title: Unloading Data from HAWQ
+---
+
+A writable external table allows you to select rows from other database tables and output the rows to files, named pipes, or applications, or to use them as output targets for parallel MapReduce calculations. You can define file-based and web-based writable external tables.
+
+This topic describes how to unload data from HAWQ using parallel unload (writable external tables) and non-parallel unload (`COPY`).
+
+-   **[Defining a File-Based Writable External Table](../../datamgmt/load/g-defining-a-file-based-writable-external-table.html)**
+
+-   **[Defining a Command-Based Writable External Web Table](../../datamgmt/load/g-defining-a-command-based-writable-external-web-table.html)**
+
+-   **[Unloading Data Using a Writable External Table](../../datamgmt/load/g-unloading-data-using-a-writable-external-table.html)**
+
+-   **[Unloading Data Using COPY](../../datamgmt/load/g-unloading-data-using-copy.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-unloading-data-using-a-writable-external-table.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-unloading-data-using-a-writable-external-table.html.md.erb b/markdown/datamgmt/load/g-unloading-data-using-a-writable-external-table.html.md.erb
new file mode 100644
index 0000000..377f2d6
--- /dev/null
+++ b/markdown/datamgmt/load/g-unloading-data-using-a-writable-external-table.html.md.erb
@@ -0,0 +1,17 @@
+---
+title: Unloading Data Using a Writable External Table
+---
+
+Writable external tables allow only `INSERT` operations. You must grant `INSERT` permission on a table to enable access to users who are not the table owner or a superuser. For example:
+
+``` sql
+GRANT INSERT ON writable_ext_table TO admin;
+```
+
+To unload data using a writable external table, select the data from the source table(s) and insert it into the writable external table. The resulting rows are output to the writable external table. For example:
+
+``` sql
+INSERT INTO writable_ext_table SELECT * FROM regular_table;
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-unloading-data-using-copy.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-unloading-data-using-copy.html.md.erb b/markdown/datamgmt/load/g-unloading-data-using-copy.html.md.erb
new file mode 100644
index 0000000..816a2b5
--- /dev/null
+++ b/markdown/datamgmt/load/g-unloading-data-using-copy.html.md.erb
@@ -0,0 +1,12 @@
+---
+title: Unloading Data Using COPY
+---
+
+`COPY TO` copies data from a table to a file (or standard output) on the HAWQ master host using a single process on the HAWQ master instance. Use `COPY` to output a table's entire contents, or filter the output using a `SELECT` statement. For example:
+
+``` sql
+COPY (SELECT * FROM country WHERE country_name LIKE 'A%') 
+TO '/home/gpadmin/a_list_countries.out';
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-url-based-web-external-tables.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-url-based-web-external-tables.html.md.erb b/markdown/datamgmt/load/g-url-based-web-external-tables.html.md.erb
new file mode 100644
index 0000000..a115972
--- /dev/null
+++ b/markdown/datamgmt/load/g-url-based-web-external-tables.html.md.erb
@@ -0,0 +1,24 @@
+---
+title: URL-based Web External Tables
+---
+
+A URL-based web table accesses data from a web server using the HTTP protocol. Web table data is dynamic; the data is not rescannable.
+
+Specify the `LOCATION` of files on a web server using `http://`. The web data file(s) must reside on a web server that HAWQ segment hosts can access. The number of URLs specified corresponds to the minimum number of virtual segments that work in parallel to access the web table.
+
+The following sample command defines a web table that gets data from several URLs.
+
+``` sql
+=# CREATE EXTERNAL WEB TABLE ext_expenses (
+    name text, date date, amount float4, category text, description text) 
+LOCATION ('http://intranet.company.com/expenses/sales/file.csv',
+          'http://intranet.company.com/expenses/exec/file.csv',
+          'http://intranet.company.com/expenses/finance/file.csv',
+          'http://intranet.company.com/expenses/ops/file.csv',
+          'http://intranet.company.com/expenses/marketing/file.csv',
+          'http://intranet.company.com/expenses/eng/file.csv' 
+      )
+FORMAT 'CSV' ( HEADER );
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-using-a-custom-format.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-using-a-custom-format.html.md.erb b/markdown/datamgmt/load/g-using-a-custom-format.html.md.erb
new file mode 100644
index 0000000..e83744a
--- /dev/null
+++ b/markdown/datamgmt/load/g-using-a-custom-format.html.md.erb
@@ -0,0 +1,23 @@
+---
+title: Using a Custom Format
+---
+
+You specify a custom data format in the `FORMAT` clause of `CREATE EXTERNAL TABLE`.
+
+```
+FORMAT 'CUSTOM' (formatter=format_function, key1=val1,...keyn=valn)
+```
+
+Where the `'CUSTOM'` keyword indicates that the data has a custom format and `formatter` specifies the function to use to format the data, followed by comma-separated parameters to the formatter function.
+
+HAWQ provides functions for formatting fixed-width data, but you must author the formatter functions for variable-width data. The steps are as follows.
+
+1.  Author and compile input and output functions as a shared library.
+2.  Specify the shared library function with `CREATE FUNCTION` in HAWQ.
+3.  Use the `formatter` parameter of `CREATE EXTERNAL TABLE`'s `FORMAT` clause to call the function, as shown in the sketch following this list.
+
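+The following sketch illustrates step 3 using the built-in `fixedwidth_in` formatter (described in [Importing and Exporting Fixed Width Data](../../datamgmt/load/g-importing-and-exporting-fixed-width-data.html)); the table, column widths, and location shown are hypothetical. For a variable-width custom format, substitute the formatter function you created in steps 1 and 2:
+
+``` sql
+CREATE READABLE EXTERNAL TABLE students (
+    name varchar(20), address varchar(30), age int)
+LOCATION ('gpfdist://<host>:<port>/file/path/')
+FORMAT 'CUSTOM' (formatter=fixedwidth_in,
+                 name='20', address='30', age='4');
+```
+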
+-   **[Importing and Exporting Fixed Width Data](../../datamgmt/load/g-importing-and-exporting-fixed-width-data.html)**
+
+-   **[Examples - Read Fixed-Width Data](../../datamgmt/load/g-examples-read-fixed-width-data.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-using-the-hawq-file-server--gpfdist-.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-using-the-hawq-file-server--gpfdist-.html.md.erb b/markdown/datamgmt/load/g-using-the-hawq-file-server--gpfdist-.html.md.erb
new file mode 100644
index 0000000..0c68b2c
--- /dev/null
+++ b/markdown/datamgmt/load/g-using-the-hawq-file-server--gpfdist-.html.md.erb
@@ -0,0 +1,19 @@
+---
+title: Using the HAWQ File Server (gpfdist)
+---
+
+The `gpfdist` protocol provides the best performance and is the easiest to set up. `gpfdist` ensures optimum use of all segments in your HAWQ system for external table reads.
+
+This topic describes the setup and management tasks for using `gpfdist` with external tables.
+
+-   **[About gpfdist Setup and Performance](../../datamgmt/load/g-about-gpfdist-setup-and-performance.html)**
+
+-   **[Controlling Segment Parallelism](../../datamgmt/load/g-controlling-segment-parallelism.html)**
+
+-   **[Installing gpfdist](../../datamgmt/load/g-installing-gpfdist.html)**
+
+-   **[Starting and Stopping gpfdist](../../datamgmt/load/g-starting-and-stopping-gpfdist.html)**
+
+-   **[Troubleshooting gpfdist](../../datamgmt/load/g-troubleshooting-gpfdist.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-working-with-file-based-ext-tables.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-working-with-file-based-ext-tables.html.md.erb b/markdown/datamgmt/load/g-working-with-file-based-ext-tables.html.md.erb
new file mode 100644
index 0000000..e024a7d
--- /dev/null
+++ b/markdown/datamgmt/load/g-working-with-file-based-ext-tables.html.md.erb
@@ -0,0 +1,21 @@
+---
+title: Working with File-Based External Tables
+---
+
+External tables provide access to data stored in data sources outside of HAWQ as if the data were stored in regular database tables. Data can be read from or written to external tables.
+
+An external table is a HAWQ database table backed with data that resides outside of the database. An external table is either readable or writable. It can be used like a regular database table in SQL commands such as `SELECT` and `INSERT` and joined with other tables. External tables are most often used to load and unload database data.
+
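+For example, a readable external table served by `gpfdist` can be queried like a regular table or used to load one. This sketch is illustrative only; the host, port, file pattern, and target `expenses` table are hypothetical:
+
+``` sql
+CREATE EXTERNAL TABLE ext_expenses (name text, date date, amount float4)
+    LOCATION ('gpfdist://etlhost:8081/expenses/*.csv')
+    FORMAT 'CSV' (HEADER);
+
+SELECT name, sum(amount) FROM ext_expenses GROUP BY name;
+INSERT INTO expenses SELECT * FROM ext_expenses;
+```
+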
+Web-based external tables provide access to data served by an HTTP server or an operating system process. See [Creating and Using Web External Tables](g-creating-and-using-web-external-tables.html#topic31) for more about web-based tables.
+
+-   **[Accessing File-Based External Tables](../../datamgmt/load/g-external-tables.html)**
+
+    External tables enable accessing external files as if they are regular database tables. They are often used to move data into and out of a HAWQ database.
+
+-   **[gpfdist Protocol](../../datamgmt/load/g-gpfdist-protocol.html)**
+
+-   **[gpfdists Protocol](../../datamgmt/load/g-gpfdists-protocol.html)**
+
+-   **[Handling Errors in External Table Data](../../datamgmt/load/g-handling-errors-ext-table-data.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-write-a-transform.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-write-a-transform.html.md.erb b/markdown/datamgmt/load/g-write-a-transform.html.md.erb
new file mode 100644
index 0000000..6b35ab2
--- /dev/null
+++ b/markdown/datamgmt/load/g-write-a-transform.html.md.erb
@@ -0,0 +1,48 @@
+---
+title: Write a Transform
+---
+
+The transform specifies what to extract from the data. You can use any authoring environment and language appropriate for your project. For XML transformations, choose a technology such as XSLT, Joost (STX), Java, Python, or Perl, based on the goals and scope of the project.
+
+In the price example, the next step is to transform the XML data into a simple two-column delimited format.
+
+``` pre
+708421|19.99
+708466|59.25
+711121|24.99
+```
+
+The following STX transform, called *input\_transform.stx*, completes the data transformation.
+
+``` xml
+<?xml version="1.0"?>
+<stx:transform version="1.0"
+   xmlns:stx="http://stx.sourceforge.net/2002/ns"
+   pass-through="none">
+  <!-- declare variables -->
+  <stx:variable name="itemnumber"/>
+  <stx:variable name="price"/>
+  <!-- match and output prices as columns delimited by | -->
+  <stx:template match="/prices/pricerecord">
+    <stx:process-children/>
+    <stx:value-of select="$itemnumber"/>
+    <stx:text>|</stx:text>
+    <stx:value-of select="$price"/>
+    <stx:text>
+</stx:text>
+  </stx:template>
+  <stx:template match="itemnumber">
+    <stx:assign name="itemnumber" select="."/>
+  </stx:template>
+  <stx:template match="price">
+    <stx:assign name="price" select="."/>
+  </stx:template>
+</stx:transform>
+```
+
+This STX transform declares two temporary variables, `itemnumber` and `price`, and the following rules.
+
+1.  When an element that satisfies the XPath expression `/prices/pricerecord` is found, examine the child elements and generate output that contains the value of the `itemnumber` variable, a `|` character, the value of the `price` variable, and a newline.
+2.  When an `<itemnumber>` element is found, store the content of that element in the variable `itemnumber`.
+3.  When a `<price>` element is found, store the content of that element in the variable `price`.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-write-the-gpfdist-configuration.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-write-the-gpfdist-configuration.html.md.erb b/markdown/datamgmt/load/g-write-the-gpfdist-configuration.html.md.erb
new file mode 100644
index 0000000..89733cd
--- /dev/null
+++ b/markdown/datamgmt/load/g-write-the-gpfdist-configuration.html.md.erb
@@ -0,0 +1,61 @@
+---
+title: Write the gpfdist Configuration
+---
+
+The `gpfdist` configuration is specified as a YAML 1.1 document. It specifies rules that `gpfdist` uses to select a Transform to apply when loading or extracting data.
+
+This example `gpfdist` configuration contains the following items:
+
+-   the `config.yaml` file defining `TRANSFORMATIONS`
+-   the `input_transform.sh` wrapper script, referenced in the `config.yaml` file
+-   the `input_transform.stx` joost transformation, called from `input_transform.sh`
+
+Aside from the ordinary YAML rules, such as starting the document with three dashes (`---`), a `gpfdist` configuration must conform to the following restrictions:
+
+1.  a `VERSION` setting must be present with the value `1.0.0.1`.
+2.  a `TRANSFORMATIONS` setting must be present and contain one or more mappings.
+3.  Each mapping in `TRANSFORMATIONS` must contain:
+    -   a `TYPE` with the value 'input' or 'output'
+    -   a `COMMAND` indicating how the transform is run.
+
+4.  Each mapping in `TRANSFORMATIONS` can contain optional `CONTENT`, `SAFE`, and `STDERR` settings.
+
+The following `gpfdist` configuration called `config.yaml` applies to the prices example. The initial indentation on each line is significant and reflects the hierarchical nature of the specification. The name `prices_input` in the following example will be referenced later when creating the table in SQL.
+
+``` pre
+---
+VERSION: 1.0.0.1
+TRANSFORMATIONS:
+  prices_input:
+    TYPE:     input
+    COMMAND:  /bin/bash input_transform.sh %filename%
+```
+
+The `COMMAND` setting uses a wrapper script called `input_transform.sh` with a `%filename%` placeholder. When `gpfdist` runs the `prices_input` transform, it invokes `input_transform.sh` with `/bin/bash` and replaces the `%filename%` placeholder with the path to the input file to transform. The wrapper script called `input_transform.sh` contains the logic to invoke the STX transformation and return the output.
+
+If Joost is used, the Joost STX engine must be installed.
+
+``` bash
+#!/bin/bash
+# input_transform.sh - sample input transformation, 
+# demonstrating use of Java and Joost STX to convert XML into
+# text to load into HAWQ.
+# java arguments:
+#   -jar joost.jar         the Joost STX engine
+#   -nodecl                  don't generate a <?xml?> declaration
+#   $1                        filename to process
+#   input_transform.stx    the STX transformation
+#
+# the AWK step eliminates a blank line joost emits at the end
+java \
+    -jar joost.jar \
+    -nodecl \
+    $1 \
+    input_transform.stx \
+ | awk 'NF>0'
+```
+
+The `input_transform.sh` file uses the Joost STX engine with the AWK interpreter. The following diagram shows the process flow as `gpfdist` runs the transformation.
+
+<img src="../../images/02-pipeline.png" class="image" width="462" height="190" />
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/g-xml-transformation-examples.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/g-xml-transformation-examples.html.md.erb b/markdown/datamgmt/load/g-xml-transformation-examples.html.md.erb
new file mode 100644
index 0000000..12ad1d6
--- /dev/null
+++ b/markdown/datamgmt/load/g-xml-transformation-examples.html.md.erb
@@ -0,0 +1,13 @@
+---
+title: XML Transformation Examples
+---
+
+The following examples demonstrate the complete process for different types of XML data and STX transformations. Files and detailed instructions associated with these examples can be downloaded from the Apache site `gpfdist_transform` tools demo page. Read the README file before you run the examples.
+
+-   **[Example using DBLP Database Publications (In demo Directory)](../../datamgmt/load/g-example-1-dblp-database-publications-in-demo-directory.html)**
+
+-   **[Example using IRS MeF XML Files (In demo Directory)](../../datamgmt/load/g-example-irs-mef-xml-files-in-demo-directory.html)**
+
+-   **[Example using WITSML™ Files (In demo Directory)](../../datamgmt/load/g-example-witsml-files-in-demo-directory.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/ddl/ddl-database.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/ddl/ddl-database.html.md.erb b/markdown/ddl/ddl-database.html.md.erb
new file mode 100644
index 0000000..2ef9f9f
--- /dev/null
+++ b/markdown/ddl/ddl-database.html.md.erb
@@ -0,0 +1,78 @@
+---
+title: Creating and Managing Databases
+---
+
+A HAWQ system is a single instance of HAWQ. There can be several separate HAWQ systems installed, but usually just one is selected by environment variable settings. See your HAWQ administrator for details.
+
+There can be multiple databases in a HAWQ system. This is different from some database management systems \(such as Oracle\) where the database instance *is* the database. Although you can create many databases in a HAWQ system, client programs can connect to and access only one database at a time; you cannot cross-query between databases.
+
+## <a id="topic3"></a>About Template Databases 
+
+Each new database you create is based on a *template*. HAWQ provides a default database, *template1*. Use *template1* to connect to HAWQ for the first time. HAWQ uses *template1* to create databases unless you specify another template. Do not create any objects in *template1* unless you want those objects to be in every database you create.
+
+HAWQ uses two other database templates, *template0* and *postgres*, internally. Do not drop or modify *template0* or *postgres*. You can use *template0* to create a completely clean database containing only the standard objects predefined by HAWQ at initialization, especially if you modified *template1*.
+
+## <a id="topic4"></a>Creating a Database 
+
+The `CREATE DATABASE` command creates a new database. For example:
+
+``` sql
+=> CREATE DATABASE new_dbname;
+```
+
+To create a database, you must have privileges to create a database or be a HAWQ superuser. If you do not have the correct privileges, you cannot create a database. The HAWQ administrator must either give you the necessary privileges or create a database for you.
+
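+For example, an administrator might grant the privilege with a command such as the following \(the role name `jsmith` is illustrative\):
+
+``` sql
+=> ALTER ROLE jsmith CREATEDB;
+```
+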
+You can also use the client program `createdb` to create a database. For example, running the following command in a command line terminal connects to HAWQ using the provided host name and port and creates a database named *mydatabase*:
+
+``` shell
+$ createdb -h masterhost -p 5432 mydatabase
+```
+
+The host name and port must match the host name and port of the installed HAWQ system.
+
+Some objects, such as roles, are shared by all the databases in a HAWQ system. Other objects, such as tables that you create, are known only in the database in which you create them.
+
+### <a id="topic5"></a>Cloning a Database 
+
+By default, a new database is created by cloning the standard system database template, *template1*. Any database can be used as a template when creating a new database, thereby providing the capability to 'clone' or copy an existing database and all objects and data within that database. For example:
+
+``` sql
+=> CREATE DATABASE new_dbname TEMPLATE old_dbname;
+```
+
+## <a id="topic6"></a>Viewing the List of Databases 
+
+If you are working in the `psql` client program, you can use the `\l` meta-command to show the list of databases and templates in your HAWQ system. If using another client program and you are a superuser, you can query the list of databases from the `pg_database` system catalog table. For example:
+
+``` sql
+=> SELECT datname FROM pg_database;
+```
+
+## <a id="topic7"></a>Altering a Database 
+
+The `ALTER DATABASE` command changes database attributes such as owner, name, or default configuration attributes. For example, the following command alters a database by setting its default schema search path \(the `search_path` configuration parameter\):
+
+``` sql
+=> ALTER DATABASE mydatabase SET search_path TO myschema, public, pg_catalog;
+```
+
+To alter a database, you must be the owner of the database or a superuser.
+
+## <a id="topic8"></a>Dropping a Database 
+
+The `DROP DATABASE` command drops \(or deletes\) a database. It removes the system catalog entries for the database and deletes the database directory on disk that contains the data. You must be the database owner or a superuser to drop a database, and you cannot drop a database while you or anyone else is connected to it. Connect to `template1` \(or another database\) before dropping a database. For example:
+
+``` shell
+=> \c template1
+```
+``` sql
+=> DROP DATABASE mydatabase;
+```
+
+You can also use the client program `dropdb` to drop a database. For example, the following command connects to HAWQ using the provided host name and port and drops the database *mydatabase*:
+
+``` shell
+$ dropdb -h masterhost -p 5432 mydatabase
+```
+
+**Warning:** Dropping a database cannot be undone.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/ddl/ddl-partition.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/ddl/ddl-partition.html.md.erb b/markdown/ddl/ddl-partition.html.md.erb
new file mode 100644
index 0000000..f790161
--- /dev/null
+++ b/markdown/ddl/ddl-partition.html.md.erb
@@ -0,0 +1,483 @@
+---
+title: Partitioning Large Tables
+---
+
+Table partitioning enables supporting very large tables, such as fact tables, by logically dividing them into smaller, more manageable pieces. Partitioned tables can improve query performance by allowing the HAWQ query optimizer to scan only the data needed to satisfy a given query instead of scanning all the contents of a large table.
+
+Partitioning does not change the physical distribution of table data across the segments. Table distribution is physical: HAWQ physically divides partitioned tables and non-partitioned tables across segments to enable parallel query processing. Table *partitioning* is logical: HAWQ logically divides big tables to improve query performance and facilitate data warehouse maintenance tasks, such as rolling old data out of the data warehouse.
+
+HAWQ supports:
+
+-   *range partitioning*: division of data based on a numerical range, such as date or price.
+-   *list partitioning*: division of data based on a list of values, such as sales territory or product line.
+-   A combination of both types.
+<a id="im207241"></a>
+
+![](../mdimages/partitions.jpg "Example Multi-level Partition Design")
+
+## <a id="topic64"></a>Table Partitioning in HAWQ 
+
+HAWQ divides tables into parts \(also known as partitions\) to enable massively parallel processing. Tables are partitioned during `CREATE TABLE` using the `PARTITION BY` \(and optionally the `SUBPARTITION BY`\) clause. Partitioning creates a top-level \(or parent\) table with one or more levels of sub-tables \(or child tables\). Internally, HAWQ creates an inheritance relationship between the top-level table and its underlying partitions, similar to the functionality of the `INHERITS` clause of PostgreSQL.
+
+HAWQ uses the partition criteria defined during table creation to create each partition with a distinct `CHECK` constraint, which limits the data that table can contain. The query optimizer uses `CHECK` constraints to determine which table partitions to scan to satisfy a given query predicate.
+
+The HAWQ system catalog stores partition hierarchy information so that rows inserted into the top-level parent table propagate correctly to the child table partitions. To change the partition design or table structure, alter the parent table using `ALTER TABLE` with the `PARTITION` clause.
+
+To insert data into a partitioned table, you specify the root partitioned table, the table created with the `CREATE TABLE` command. You also can specify a leaf child table of the partitioned table in an `INSERT` command. An error is returned if the data is not valid for the specified leaf child table. Specifying a child table that is not a leaf child table in the `INSERT` command is not supported.
+
+## <a id="topic65"></a>Deciding on a Table Partitioning Strategy 
+
+Not all tables are good candidates for partitioning. If the answer is *yes* to all or most of the following questions, table partitioning is a viable database design strategy for improving query performance. If the answer is *no* to most of the following questions, table partitioning is not the right solution for that table. Test your design strategy to ensure that query performance improves as expected.
+
+-   **Is the table large enough?** Large fact tables are good candidates for table partitioning. If you have millions or billions of records in a table, you may see performance benefits from logically breaking that data up into smaller chunks. For smaller tables with only a few thousand rows or less, the administrative overhead of maintaining the partitions will outweigh any performance benefits you might see.
+-   **Are you experiencing unsatisfactory performance?** As with any performance tuning initiative, a table should be partitioned only if queries against that table are producing slower response times than desired.
+-   **Do your query predicates have identifiable access patterns?** Examine the `WHERE` clauses of your query workload and look for table columns that are consistently used to access data. For example, if most of your queries tend to look up records by date, then a monthly or weekly date-partitioning design might be beneficial. Or if you tend to access records by region, consider a list-partitioning design to divide the table by region.
+-   **Does your data warehouse maintain a window of historical data?** Another consideration for partition design is your organization's business requirements for maintaining historical data. For example, your data warehouse may require that you keep data for the past twelve months. If the data is partitioned by month, you can easily drop the oldest monthly partition from the warehouse and load current data into the most recent monthly partition.
+-   **Can the data be divided into somewhat equal parts based on some defining criteria?** Choose partitioning criteria that will divide your data as evenly as possible. If the partitions contain a relatively equal number of records, query performance improves based on the number of partitions created. For example, by dividing a large table into 10 partitions, a query will execute 10 times faster than it would against the unpartitioned table, provided that the partitions are designed to support the query's criteria.
+
+Do not create more partitions than are needed. Creating too many partitions can slow down management and maintenance jobs, such as vacuuming, recovering segments, expanding the cluster, checking disk usage, and others.
+
+Partitioning does not improve query performance unless the query optimizer can eliminate partitions based on the query predicates. Queries that scan every partition run slower than if the table were not partitioned, so avoid partitioning if few of your queries achieve partition elimination. Check the explain plan for queries to make sure that partitions are eliminated. See [Query Profiling](../query/query-profiling.html) for more about partition elimination.
+
+Be very careful with multi-level partitioning because the number of partition files can grow very quickly. For example, if a table is partitioned by both day and city, and there are 1,000 days of data and 1,000 cities, the total number of partitions is one million. Column-oriented tables store each column in a physical table, so if this table has 100 columns, the system would be required to manage 100 million files for the table.
+
+Before settling on a multi-level partitioning strategy, consider a single level partition with bitmap indexes. Indexes slow down data loads, so consider performance testing with your data and schema to decide on the best strategy.
+
+## <a id="topic66"></a>Creating Partitioned Tables 
+
+You partition tables when you create them with `CREATE TABLE`. This topic provides examples of SQL syntax for creating a table with various partition designs.
+
+To partition a table:
+
+1.  Decide on the partition design: date range, numeric range, or list of values.
+2.  Choose the column\(s\) on which to partition the table.
+3.  Decide how many levels of partitions you want. For example, you can create a date range partition table by month and then subpartition the monthly partitions by sales region.
+
+-   [Defining Date Range Table Partitions](#topic67)
+-   [Defining Numeric Range Table Partitions](#topic68)
+-   [Defining List Table Partitions](#topic69)
+-   [Defining Multi-level Partitions](#topic70)
+-   [Partitioning an Existing Table](#topic71)
+
+### <a id="topic67"></a>Defining Date Range Table Partitions 
+
+A date range partitioned table uses a single `date` or `timestamp` column as the partition key column. You can use the same partition key column to create subpartitions if necessary, for example, to partition by month and then subpartition by day. Consider partitioning by the most granular level. For example, for a table partitioned by date, you can partition by day and have 365 daily partitions, rather than partition by year then subpartition by month then subpartition by day. A multi-level design can reduce query planning time, but a flat partition design runs faster.
+
+You can have HAWQ automatically generate partitions by giving a `START` value, an `END` value, and an `EVERY` clause that defines the partition increment value. By default, `START` values are always inclusive and `END` values are always exclusive. For example:
+
+``` sql
+CREATE TABLE sales (id int, date date, amt decimal(10,2))
+DISTRIBUTED BY (id)
+PARTITION BY RANGE (date)
+( START (date '2008-01-01') INCLUSIVE
+   END (date '2009-01-01') EXCLUSIVE
+   EVERY (INTERVAL '1 day') );
+```
+
+You can also declare and name each partition individually. For example:
+
+``` sql
+CREATE TABLE sales (id int, date date, amt decimal(10,2))
+DISTRIBUTED BY (id)
+PARTITION BY RANGE (date)
+( PARTITION Jan08 START (date '2008-01-01') INCLUSIVE ,
+  PARTITION Feb08 START (date '2008-02-01') INCLUSIVE ,
+  PARTITION Mar08 START (date '2008-03-01') INCLUSIVE ,
+  PARTITION Apr08 START (date '2008-04-01') INCLUSIVE ,
+  PARTITION May08 START (date '2008-05-01') INCLUSIVE ,
+  PARTITION Jun08 START (date '2008-06-01') INCLUSIVE ,
+  PARTITION Jul08 START (date '2008-07-01') INCLUSIVE ,
+  PARTITION Aug08 START (date '2008-08-01') INCLUSIVE ,
+  PARTITION Sep08 START (date '2008-09-01') INCLUSIVE ,
+  PARTITION Oct08 START (date '2008-10-01') INCLUSIVE ,
+  PARTITION Nov08 START (date '2008-11-01') INCLUSIVE ,
+  PARTITION Dec08 START (date '2008-12-01') INCLUSIVE
+                  END (date '2009-01-01') EXCLUSIVE );
+```
+
+You do not have to declare an `END` value for each partition, only the last one. In this example, `Jan08` ends where `Feb08` starts.
+
+### <a id="topic68"></a>Defining Numeric Range Table Partitions 
+
+A numeric range partitioned table uses a single numeric data type column as the partition key column. For example:
+
+``` sql
+CREATE TABLE rank (id int, rank int, year int, gender
+char(1), count int)
+DISTRIBUTED BY (id)
+PARTITION BY RANGE (year)
+( START (2001) END (2008) EVERY (1),
+  DEFAULT PARTITION extra );
+```
+
+For more information about default partitions, see [Adding a Default Partition](#topic80).
+
+### <a id="topic69"></a>Defining List Table Partitions 
+
+A list partitioned table can use any data type column that allows equality comparisons as its partition key column. A list partition can also have a multi-column \(composite\) partition key, whereas a range partition only allows a single column as the partition key. For list partitions, you must declare a partition specification for every partition \(list value\) you want to create. For example:
+
+``` sql
+CREATE TABLE rank (id int, rank int, year int, gender
+char(1), count int )
+DISTRIBUTED BY (id)
+PARTITION BY LIST (gender)
+( PARTITION girls VALUES ('F'),
+  PARTITION boys VALUES ('M'),
+  DEFAULT PARTITION other );
+```
+
+**Note:** The HAWQ legacy optimizer allows list partitions with multi-column \(composite\) partition keys. A range partition only allows a single column as the partition key. GPORCA does not support composite keys.
+
+For more information about default partitions, see [Adding a Default Partition](#topic80).
+
+### <a id="topic70"></a>Defining Multi-level Partitions 
+
+You can create a multi-level partition design with subpartitions of partitions. Using a *subpartition template* ensures that every partition has the same subpartition design, including partitions that you add later. For example, the following SQL creates the two-level partition design shown in [Figure 1](#im207241):
+
+``` sql
+CREATE TABLE sales (trans_id int, date date, amount
+decimal(9,2), region text)
+DISTRIBUTED BY (trans_id)
+PARTITION BY RANGE (date)
+SUBPARTITION BY LIST (region)
+SUBPARTITION TEMPLATE
+( SUBPARTITION usa VALUES ('usa'),
+  SUBPARTITION asia VALUES ('asia'),
+  SUBPARTITION europe VALUES ('europe'),
+  DEFAULT SUBPARTITION other_regions)
+  (START (date '2011-01-01') INCLUSIVE
+   END (date '2012-01-01') EXCLUSIVE
+   EVERY (INTERVAL '1 month'),
+   DEFAULT PARTITION outlying_dates );
+```
+
+The following example shows a three-level partition design where the `sales` table is partitioned by `year`, then `month`, then `region`. The `SUBPARTITION TEMPLATE` clauses ensure that each yearly partition has the same subpartition structure. The example declares a `DEFAULT` partition at each level of the hierarchy.
+
+``` sql
+CREATE TABLE p3_sales (id int, year int, month int, day int,
+region text)
+DISTRIBUTED BY (id)
+PARTITION BY RANGE (year)
+    SUBPARTITION BY RANGE (month)
+      SUBPARTITION TEMPLATE (
+        START (1) END (13) EVERY (1),
+        DEFAULT SUBPARTITION other_months )
+           SUBPARTITION BY LIST (region)
+             SUBPARTITION TEMPLATE (
+               SUBPARTITION usa VALUES ('usa'),
+               SUBPARTITION europe VALUES ('europe'),
+               SUBPARTITION asia VALUES ('asia'),
+               DEFAULT SUBPARTITION other_regions )
+( START (2002) END (2012) EVERY (1),
+  DEFAULT PARTITION outlying_years );
+```
+
+**CAUTION**:
+
+When you create multi-level partitions on ranges, it is easy to create a large number of subpartitions, some containing little or no data. This can add many entries to the system tables, which increases the time and memory required to optimize and execute queries. Increase the range interval or choose a different partitioning strategy to reduce the number of subpartitions created.
+
+### <a id="topic71"></a>Partitioning an Existing Table 
+
+Tables can be partitioned only at creation. If you have a table that you want to partition, you must create a partitioned table, load the data from the original table into the new table, drop the original table, and rename the partitioned table with the original table's name. You must also re-grant any table permissions. For example:
+
+``` sql
+CREATE TABLE sales2 (LIKE sales)
+PARTITION BY RANGE (date)
+( START (date '2008-01-01') INCLUSIVE
+   END (date '2009-01-01') EXCLUSIVE
+   EVERY (INTERVAL '1 month') );
+INSERT INTO sales2 SELECT * FROM sales;
+DROP TABLE sales;
+ALTER TABLE sales2 RENAME TO sales;
+GRANT ALL PRIVILEGES ON sales TO admin;
+GRANT SELECT ON sales TO guest;
+```
+
+## <a id="topic73"></a>Loading Partitioned Tables 
+
+After you create the partitioned table structure, top-level parent tables are empty. Data is routed to the bottom-level child table partitions. In a multi-level partition design, only the subpartitions at the bottom of the hierarchy can contain data.
+
+Rows that cannot be mapped to a child table partition are rejected and the load fails. To avoid unmapped rows being rejected at load time, define your partition hierarchy with a `DEFAULT` partition. Any rows that do not match a partition's `CHECK` constraints load into the `DEFAULT` partition. See [Adding a Default Partition](#topic80).
+
+At runtime, the query optimizer scans the entire table inheritance hierarchy and uses the `CHECK` table constraints to determine which of the child table partitions to scan to satisfy the query's conditions. The `DEFAULT` partition \(if your hierarchy has one\) is always scanned. `DEFAULT` partitions that contain data slow down the overall scan time.
+
+When you use `COPY` or `INSERT` to load data into a parent table, the data is automatically rerouted to the correct partition, just like a regular table.
+
+Best practice for loading data into partitioned tables is to create an intermediate staging table, load it, and then exchange it into your partition design. See [Exchanging a Partition](#topic83).
+
+## <a id="topic74"></a>Verifying Your Partition Strategy 
+
+When a table is partitioned based on the query predicate, you can use `EXPLAIN` to examine the query plan and verify that the query optimizer scans only the relevant data.
+
+For example, suppose a *sales* table is date-range partitioned by month and subpartitioned by region as shown in [Figure 1](#im207241). For the following query:
+
+``` sql
+EXPLAIN SELECT * FROM sales WHERE date='01-07-12' AND
+region='usa';
+```
+
+The query plan for this query should show a table scan of only the following tables:
+
+-   the default partition returning 0-1 rows \(if your partition design has one\)
+-   the January 2012 partition \(*sales\_1\_prt\_1*\) returning 0-1 rows
+-   the USA region subpartition \(*sales\_1\_2\_prt\_usa*\) returning *some number* of rows.
+
+The following example shows the relevant portion of the query plan.
+
+``` pre
+->  Seq Scan on sales_1_prt_1 sales (cost=0.00..0.00 rows=0 width=0)
+      Filter: "date" = '01-07-12'::date AND region = 'usa'::text
+->  Seq Scan on sales_1_2_prt_usa sales (cost=0.00..9.87 rows=20 width=40)
+```
+
+Ensure that the query optimizer does not scan unnecessary partitions or subpartitions \(for example, scans of months or regions not specified in the query predicate\), and that scans of the top-level tables return 0-1 rows.
+
+### <a id="topic75"></a>Troubleshooting Selective Partition Scanning 
+
+The following limitations can result in a query plan that shows a non-selective scan of your partition hierarchy.
+
+-   The query optimizer can selectively scan partitioned tables only when the query contains a direct and simple restriction of the table using immutable operators such as:
+
+    =, <, <=, \>, \>=, and <\>
+
+-   Selective scanning recognizes `STABLE` and `IMMUTABLE` functions, but does not recognize `VOLATILE` functions within a query. For example, `WHERE` clauses such as `date > CURRENT_DATE` cause the query optimizer to selectively scan partitioned tables, but `time > TIMEOFDAY` does not. A comparison of the two cases is sketched after this list.
+
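+The following sketch, using the *sales* table from the earlier examples, shows the kind of comparison you can make with `EXPLAIN`; the exact plans depend on your data and HAWQ version:
+
+``` sql
+-- CURRENT_DATE is STABLE, so the optimizer can eliminate partitions
+EXPLAIN SELECT * FROM sales WHERE date > CURRENT_DATE;
+
+-- timeofday() is VOLATILE, so partitions are not eliminated
+EXPLAIN SELECT * FROM sales WHERE date > timeofday()::date;
+```
+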
+## <a id="topic76"></a>Viewing Your Partition Design 
+
+You can look up information about your partition design using the *pg\_partitions* view. For example, to see the partition design of the *sales* table:
+
+``` sql
+SELECT partitionboundary, partitiontablename, partitionname,
+partitionlevel, partitionrank
+FROM pg_partitions
+WHERE tablename='sales';
+```
+
+The following table and views show information about partitioned tables; a sample query against one of the views follows the list.
+
+-   *pg\_partition* - Tracks partitioned tables and their inheritance level relationships.
+-   *pg\_partition\_templates* - Shows the subpartitions created using a subpartition template.
+-   *pg\_partition\_columns* - Shows the partition key columns used in a partition design.
+
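+For example, the following query, assuming the view columns documented in the HAWQ catalog reference, lists the partition key columns of the *sales* table:
+
+``` sql
+SELECT columnname, partitionlevel, position_in_partition_key
+FROM pg_partition_columns
+WHERE tablename = 'sales';
+```
+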
+## <a id="topic77"></a>Maintaining Partitioned Tables 
+
+To maintain a partitioned table, use the `ALTER TABLE` command against the top-level parent table. The most common scenario is to drop old partitions and add new ones to maintain a rolling window of data in a range partition design. If you have a default partition in your partition design, you add a partition by *splitting* the default partition.
+
+-   [Adding a Partition](#topic78)
+-   [Renaming a Partition](#topic79)
+-   [Adding a Default Partition](#topic80)
+-   [Dropping a Partition](#topic81)
+-   [Truncating a Partition](#topic82)
+-   [Exchanging a Partition](#topic83)
+-   [Splitting a Partition](#topic84)
+-   [Modifying a Subpartition Template](#topic85)
+
+**Note:** When using multi-level partition designs, the following operations are not supported with ALTER TABLE:
+
+-   ADD DEFAULT PARTITION
+-   ADD PARTITION
+-   DROP DEFAULT PARTITION
+-   DROP PARTITION
+-   SPLIT PARTITION
+-   All operations that involve modifying subpartitions.
+
+**Important:** When defining and altering partition designs, use the given partition name, not the table object name. Although you can query and load any table \(including partitioned tables\) directly using SQL commands, you can only modify the structure of a partitioned table using the `ALTER TABLE...PARTITION` clauses.
+
+Partitions are not required to have names. If a partition does not have a name, use one of the following expressions to specify a part: `PARTITION FOR (value)` or `PARTITION FOR (RANK(number))`.
+
+### <a id="topic78"></a>Adding a Partition 
+
+You can add a partition to a partition design with the `ALTER TABLE` command. If the original partition design included subpartitions defined by a *subpartition template*, the newly added partition is subpartitioned according to that template. For example:
+
+``` sql
+ALTER TABLE sales ADD PARTITION
+    START (date '2009-02-01') INCLUSIVE
+    END (date '2009-03-01') EXCLUSIVE;
+```
+
+If you did not use a subpartition template when you created the table, you define subpartitions when adding a partition:
+
+``` sql
+ALTER TABLE sales ADD PARTITION
+    START (date '2009-02-01') INCLUSIVE
+    END (date '2009-03-01') EXCLUSIVE
+     ( SUBPARTITION usa VALUES ('usa'),
+       SUBPARTITION asia VALUES ('asia'),
+       SUBPARTITION europe VALUES ('europe') );
+```
+
+When you add a subpartition to an existing partition, you can specify the partition to alter. For example:
+
+``` sql
+ALTER TABLE sales ALTER PARTITION FOR (RANK(12))
+      ADD PARTITION africa VALUES ('africa');
+```
+
+**Note:** You cannot add a partition to a partition design that has a default partition. You must split the default partition to add a partition. See [Splitting a Partition](#topic84).
+
+### <a id="topic79"></a>Renaming a Partition 
+
+Partitioned tables use the following naming convention. Partitioned subtable names are subject to uniqueness requirements and length limitations.
+
+<pre><code><i>&lt;parentname&gt;</i>_<i>&lt;level&gt;</i>_prt_<i>&lt;partition_name&gt;</i></code></pre>
+
+For example:
+
+```
+sales_1_prt_jan08
+```
+
+For auto-generated range partitions, where a number is assigned when no name is given:
+
+```
+sales_1_prt_1
+```
+
+To rename a partitioned child table, rename the top-level parent table. The *&lt;parentname&gt;* changes in the table names of all associated child table partitions. For example, the following command:
+
+``` sql
+ALTER TABLE sales RENAME TO globalsales;
+```
+
+Changes the associated table names:
+
+```
+globalsales_1_prt_1
+```
+
+You can change the name of a partition to make it easier to identify. For example:
+
+``` sql
+ALTER TABLE sales RENAME PARTITION FOR ('2008-01-01') TO jan08;
+```
+
+Changes the associated table name as follows:
+
+```
+sales_1_prt_jan08
+```
+
+When altering partitioned tables with the `ALTER TABLE` command, always refer to the tables by their partition name \(*jan08*\) and not their full table name \(*sales\_1\_prt\_jan08*\).
+
+**Note:** Always refer to the top-level table name, not a partition's full table name, in an `ALTER TABLE` statement. For example, `ALTER TABLE sales...` is correct; `ALTER TABLE sales_1_prt_jan08...` is not allowed.
+
+### <a id="topic80"></a>Adding a Default Partition 
+
+You can add a default partition to a partition design with the `ALTER TABLE` command.
+
+``` sql
+ALTER TABLE sales ADD DEFAULT PARTITION other;
+```
+
+If incoming data does not match a partition's `CHECK` constraint and there is no default partition, the data is rejected. Default partitions ensure that incoming data that does not match a partition is inserted into the default partition.
+
+### <a id="topic81"></a>Dropping a Partition 
+
+You can drop a partition from your partition design using the `ALTER TABLE` command. When you drop a partition that has subpartitions, the subpartitions \(and all data in them\) are automatically dropped as well. For range partitions, it is common to drop the older partitions from the range as old data is rolled out of the data warehouse. For example:
+
+``` sql
+ALTER TABLE sales DROP PARTITION FOR (RANK(1));
+```
+
+### <a id="topic_enm_vrk_kv"></a>Sorting AORO Partitioned Tables 
+
+HDFS read access for large numbers of append-only, row-oriented \(AORO\) tables with large numbers of partitions can be tuned by using the `optimizer_parts_to_force_sort_on_insert` parameter to control how HDFS opens files. This parameter controls the way the optimizer sorts tuples during INSERT operations, to maximize HDFS performance.
+
+The user-tunable parameter `optimizer_parts_to_force_sort_on_insert` can force the GPORCA query optimizer to generate a plan for sorting tuples during insertion into append-only, row-oriented \(AORO\) partitioned tables. Sorting the insert tuples reduces the number of partition switches, thus improving the overall INSERT performance. For a given AORO table, if its number of leaf-partitioned tables is greater than or equal to the number specified in `optimizer_parts_to_force_sort_on_insert`, the plan generated by GPORCA sorts inserts by their partition IDs before performing the INSERT operation. Otherwise, the inserts are not sorted. The default value for `optimizer_parts_to_force_sort_on_insert` is 160.
+
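+For example, assuming the parameter can be set at the session level \(verify this against the server configuration parameter reference for your release\), the following raises the threshold so that only very highly partitioned AORO tables get sorted inserts:
+
+``` sql
+-- Illustrative only: sort INSERT tuples only when the target AORO table
+-- has 320 or more leaf partitions
+SET optimizer_parts_to_force_sort_on_insert = 320;
+```
+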
+### <a id="topic82"></a>Truncating a Partition 
+
+You can truncate a partition using the `ALTER TABLE` command. When you truncate a partition that has subpartitions, the subpartitions are automatically truncated as well.
+
+``` sql
+ALTER TABLE sales TRUNCATE PARTITION FOR (RANK(1));
+```
+
+### <a id="topic83"></a>Exchanging a Partition 
+
+You can exchange a partition using the `ALTER TABLE` command. Exchanging a partition swaps one table in place of an existing partition. You can exchange partitions only at the lowest level of your partition hierarchy \(only partitions that contain data can be exchanged\).
+
+Partition exchange can be useful for data loading. For example, load a staging table and swap the loaded table into your partition design. You can use partition exchange to change the storage type of older partitions to append-only tables. For example:
+
+``` sql
+CREATE TABLE jan12 (LIKE sales) WITH (appendonly=true);
+INSERT INTO jan12 SELECT * FROM sales_1_prt_1 ;
+ALTER TABLE sales EXCHANGE PARTITION FOR (DATE '2012-01-01')
+WITH TABLE jan12;
+```
+
+**Note:** This example refers to the single-level definition of the table `sales`, before partitions were added and altered in the previous examples.
+
+### <a id="topic84"></a>Splitting a Partition 
+
+Splitting a partition divides a partition into two partitions. You can split a partition using the `ALTER TABLE` command. You can split partitions only at the lowest level of your partition hierarchy: only partitions that contain data can be split. The split value you specify goes into the *latter* partition.
+
+For example, to split a monthly partition into two with the first partition containing dates January 1-15 and the second partition containing dates January 16-31:
+
+``` sql
+ALTER TABLE sales SPLIT PARTITION FOR ('2008-01-01')
+AT ('2008-01-16')
+INTO (PARTITION jan081to15, PARTITION jan0816to31);
+```
+
+If your partition design has a default partition, you must split the default partition to add a partition.
+
+When using the `INTO` clause, specify the current default partition as the second partition name. For example, to split a default range partition to add a new monthly partition for January 2009:
+
+``` sql
+ALTER TABLE sales SPLIT DEFAULT PARTITION
+START ('2009-01-01') INCLUSIVE
+END ('2009-02-01') EXCLUSIVE
+INTO (PARTITION jan09, default partition);
+```
+
+### <a id="topic85"></a>Modifying a Subpartition Template 
+
+Use `ALTER TABLE SET SUBPARTITION TEMPLATE` to modify the subpartition template of a partitioned table. Partitions added after you set a new subpartition template have the new partition design. Existing partitions are not modified.
+
+The following example alters the subpartition template of this partitioned table:
+
+``` sql
+CREATE TABLE sales (trans_id int, date date, amount decimal(9,2), region text)
+  DISTRIBUTED BY (trans_id)
+  PARTITION BY RANGE (date)
+  SUBPARTITION BY LIST (region)
+  SUBPARTITION TEMPLATE
+    ( SUBPARTITION usa VALUES ('usa'),
+      SUBPARTITION asia VALUES ('asia'),
+      SUBPARTITION europe VALUES ('europe'),
+      DEFAULT SUBPARTITION other_regions )
+  ( START (date '2014-01-01') INCLUSIVE
+    END (date '2014-04-01') EXCLUSIVE
+    EVERY (INTERVAL '1 month') );
+```
+
+This `ALTER TABLE` command modifies the subpartition template.
+
+``` sql
+ALTER TABLE sales SET SUBPARTITION TEMPLATE
+( SUBPARTITION usa VALUES ('usa'),
+  SUBPARTITION asia VALUES ('asia'),
+  SUBPARTITION europe VALUES ('europe'),
+  SUBPARTITION africa VALUES ('africa'),
+  DEFAULT SUBPARTITION regions );
+```
+
+When you add a date-range partition to the table *sales*, it includes the new regional list subpartition for Africa. For example, the following command creates the subpartitions `usa`, `asia`, `europe`, `africa`, and a default subpartition named `regions`:
+
+``` sql
+ALTER TABLE sales ADD PARTITION "4"
+  START ('2014-04-01') INCLUSIVE
+  END ('2014-05-01') EXCLUSIVE ;
+```
+
+To view the tables created for the partitioned table `sales`, you can use the command `\dt sales*` from the psql command line.
+
+To remove a subpartition template, use `SET SUBPARTITION TEMPLATE` with empty parentheses. For example, to clear the sales table subpartition template:
+
+``` sql
+ALTER TABLE sales SET SUBPARTITION TEMPLATE ();
+```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/ddl/ddl-schema.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/ddl/ddl-schema.html.md.erb b/markdown/ddl/ddl-schema.html.md.erb
new file mode 100644
index 0000000..7c361ba
--- /dev/null
+++ b/markdown/ddl/ddl-schema.html.md.erb
@@ -0,0 +1,88 @@
+---
+title: Creating and Managing Schemas
+---
+
+Schemas logically organize objects and data in a database. Schemas allow you to have more than one object \(such as tables\) with the same name in the database without conflict if the objects are in different schemas.
+
+## <a id="topic18"></a>The Default "Public" Schema 
+
+Every database has a default schema named *public*. If you do not create any schemas, objects are created in the *public* schema. All database roles \(users\) have `CREATE` and `USAGE` privileges in the *public* schema. When you create a schema, you grant privileges to your users to allow access to the schema.
+
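+For example, a schema owner or superuser might grant access with a command such as the following \(the schema and role names are illustrative\):
+
+``` sql
+=> GRANT USAGE, CREATE ON SCHEMA myschema TO analyst_role;
+```
+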
+## <a id="topic19"></a>Creating a Schema 
+
+Use the `CREATE SCHEMA` command to create a new schema. For example:
+
+``` sql
+=> CREATE SCHEMA myschema;
+```
+
+To create or access objects in a schema, write a qualified name consisting of the schema name and table name separated by a period. For example:
+
+```
+myschema.table
+```
+
+See [Schema Search Paths](#topic20) for information about accessing a schema.
+
+You can create a schema owned by someone else, for example, to restrict the activities of your users to well-defined namespaces. The syntax is:
+
+``` sql
+=> CREATE SCHEMA schemaname AUTHORIZATION username;
+```
+
+## <a id="topic20"></a>Schema Search Paths 
+
+To specify an object's location in a database, use the schema-qualified name. For example:
+
+``` sql
+=> SELECT * FROM myschema.mytable;
+```
+
+You can set the `search_path` configuration parameter to specify the order in which to search the available schemas for objects. The schema listed first in the search path becomes the *default* schema. If a schema is not specified, objects are created in the default schema.
+
+### <a id="topic21"></a>Setting the Schema Search Path 
+
+The `search_path` configuration parameter sets the schema search order. The `ALTER DATABASE` command sets the search path. For example:
+
+``` sql
+=> ALTER DATABASE mydatabase SET search_path TO myschema,
+public, pg_catalog;
+```
+
+### <a id="topic22"></a>Viewing the Current Schema 
+
+Use the `current_schema()` function to view the current schema. For example:
+
+``` sql
+=> SELECT current_schema();
+```
+
+Use the `SHOW` command to view the current search path. For example:
+
+``` sql
+=> SHOW search_path;
+```
+
+## <a id="topic23"></a>Dropping a Schema 
+
+Use the `DROP SCHEMA` command to drop \(delete\) a schema. For example:
+
+``` sql
+=> DROP SCHEMA myschema;
+```
+
+By default, the schema must be empty before you can drop it. To drop a schema and all of its objects \(tables, data, functions, and so on\) use:
+
+``` sql
+=> DROP SCHEMA myschema CASCADE;
+```
+
+## <a id="topic24"></a>System Schemas 
+
+The following system-level schemas exist in every database:
+
+-   `pg_catalog` contains the system catalog tables, built-in data types, functions, and operators. It is always part of the schema search path, even if it is not explicitly named in the search path.
+-   `information_schema` consists of a standardized set of views that contain information about the objects in the database. These views get system information from the system catalog tables in a standardized way.
+-   `pg_toast` stores large objects such as records that exceed the page size. This schema is used internally by the HAWQ system.
+-   `pg_bitmapindex` stores bitmap index objects such as lists of values. This schema is used internally by the HAWQ system.
+-   `hawq_toolkit` is an administrative schema that contains external tables, views, and functions that you can access with SQL commands. All database users can access `hawq_toolkit` to view and query the system log files and other system metrics.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/ddl/ddl-storage.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/ddl/ddl-storage.html.md.erb b/markdown/ddl/ddl-storage.html.md.erb
new file mode 100644
index 0000000..264e552
--- /dev/null
+++ b/markdown/ddl/ddl-storage.html.md.erb
@@ -0,0 +1,71 @@
+---
+title: Table Storage Model and Distribution Policy
+---
+
+HAWQ supports several storage models and a mix of storage models. When you create a table, you choose how to store its data. This topic explains the options for table storage and how to choose the best storage model for your workload.
+
+**Note:** To simplify the creation of database tables, you can specify the default values for some table storage options with the HAWQ server configuration parameter `gp_default_storage_options`.
+
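+For example, the following session-level setting is a sketch only; verify the option names and values supported by your HAWQ release in the server configuration parameter reference:
+
+``` sql
+-- Illustrative only: make append-only Parquet storage the default
+-- for tables created in this session
+SET gp_default_storage_options = 'appendonly=true,orientation=parquet';
+```
+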
+## <a id="topic39"></a>Row-Oriented Storage 
+
+HAWQ provides two storage orientation models: row-oriented and Parquet. Evaluate performance using your own data and query workloads to determine the best choice for each table.
+
+-   Row-oriented storage: good for OLTP types of workloads with many iterative transactions and many columns of a single row needed all at once, so retrieving is efficient.
+
+    **Note:** Column-oriented storage is no longer available. Parquet storage should be used instead.
+
+Row-oriented storage is the best option in the following situations \(a brief sketch of both storage models follows this list\):
+
+-   **Frequent INSERTs.** Where rows are frequently inserted into the table.
+-   **Number of columns requested in queries.** Where you typically request all or the majority of columns in the `SELECT` list or `WHERE` clause of your queries, choose a row-oriented model. 
+-   **Number of columns in the table.** Row-oriented storage is most efficient when many columns are required at the same time, or when the row-size of a table is relatively small. 
+
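+A minimal sketch of the two storage models \(the table and column definitions are illustrative\):
+
+``` sql
+-- Row-oriented table: suited to frequent INSERTs and to queries
+-- that read most columns of a row
+CREATE TABLE events_row (id int, payload text)
+WITH (appendonly=true, orientation=row)
+DISTRIBUTED BY (id);
+
+-- Parquet table: suited to scans that read a subset of many columns
+CREATE TABLE events_parquet (id int, payload text)
+WITH (appendonly=true, orientation=parquet)
+DISTRIBUTED BY (id);
+```
+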
+## <a id="topic55"></a>Altering a Table 
+
+The `ALTER TABLE` command changes the definition of a table. Use `ALTER TABLE` to change table attributes such as column definitions, distribution policy, storage model, and partition structure \(see also [Maintaining Partitioned Tables](ddl-partition.html)\). For example, to add a not-null constraint to a table column:
+
+``` sql
+=> ALTER TABLE address ALTER COLUMN street SET NOT NULL;
+```
+
+### <a id="topic56"></a>Altering Table Distribution 
+
+`ALTER TABLE` provides options to change a table's distribution policy. When the table distribution options change, the table data is redistributed on disk, which can be resource intensive. You can also redistribute table data using the existing distribution policy.
+
+### <a id="topic57"></a>Changing the Distribution Policy 
+
+For partitioned tables, changes to the distribution policy apply recursively to the child partitions. This operation preserves the ownership and all other attributes of the table. For example, the following command redistributes the table sales across all segments using the customer\_id column as the distribution key:
+
+``` sql
+ALTER TABLE sales SET DISTRIBUTED BY (customer_id);
+```
+
+When you change the hash distribution of a table, table data is automatically redistributed. Changing the distribution policy to a random distribution does not cause the data to be redistributed. For example:
+
+``` sql
+ALTER TABLE sales SET DISTRIBUTED RANDOMLY;
+```
+
+### <a id="topic58"></a>Redistributing Table Data 
+
+To redistribute table data for tables with a random distribution policy \(or when the hash distribution policy has not changed\) use `REORGANIZE=TRUE`. Reorganizing data may be necessary to correct a data skew problem, or when segment resources are added to the system. For example, the following command redistributes table data across all segments using the current distribution policy, including random distribution.
+
+``` sql
+ALTER TABLE sales SET WITH (REORGANIZE=TRUE);
+```
+
+## <a id="topic62"></a>Dropping a Table 
+
+The `DROP TABLE` command removes tables from the database. For example:
+
+``` sql
+DROP TABLE mytable;
+```
+
+`DROP TABLE` always removes any indexes, rules, triggers, and constraints that exist for the target table. Specify `CASCADE` to drop a table that is referenced by a view. `CASCADE` removes dependent views.
+
+To empty a table of rows without removing the table definition, use `TRUNCATE`. For example:
+
+``` sql
+TRUNCATE mytable;
+```


[49/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/ambari-rest-api.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/ambari-rest-api.html.md.erb b/admin/ambari-rest-api.html.md.erb
deleted file mode 100644
index 2cc79e4..0000000
--- a/admin/ambari-rest-api.html.md.erb
+++ /dev/null
@@ -1,163 +0,0 @@
----
-title: Using the Ambari REST API
----
-
-You can monitor and manage the resources in your HAWQ cluster using the Ambari REST API.  In addition to providing access to the metrics information in your cluster, the API supports viewing, creating, deleting, and updating cluster resources.
-
-This section will provide an introduction to using the Ambari REST APIs for HAWQ-related cluster management activities.
-
-Refer to [Ambari API Reference v1](https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md) for the official Ambari API documentation, including full REST resource definitions and response semantics. *Note*: These APIs may change in new versions of Ambari.
-
-
-## <a id="ambari-rest-uri"></a>Manageable HAWQ Resources
-
-HAWQ provides several REST resources to support starting and stopping services, executing service checks, and viewing configuration information among other activities. HAWQ resources you can manage using the Ambari REST API include:
-
-| Ambari Resource      | Description     |
-|----------------------|------------------------|
-| cluster | The HAWQ cluster. |
-| service | The HAWQ and PXF service. You can manage other Hadoop services as well. |
-| component | A specific HAWQ/PXF service component, i.e. the HAWQ Master, PXF. |
-| configuration | A specific HAWQ/PXF configuration entity, for example the hawq-site or pxf-profiles configuration files, or a specific single HAWQ or PXF configuration property. |
-| request | A group of tasks. |
-
-## <a id="ambari-rest-uri"></a>URI Structure
-
-The Ambari REST API provides access to HAWQ cluster resources via URI (uniform resource identifier) paths. To use the Ambari REST API, you will send HTTP requests and parse JSON-formatted HTTP responses.
-
-The Ambari REST API supports standard HTTP request methods including:
-
-- `GET` - read resource properties, metrics
-- `POST` - create new resource
-- `PUT` - update resource
-- `DELETE` - delete resource
-
-URIs for Ambari REST API resources have the following structure:
-
-``` shell
-http://<ambari-server-host>:<port>/api/v1/<resource-path>
-```
-
-The Ambari REST API supports the following HAWQ-related \<resource-paths\>:
-
-| REST Resource Path              | Description     |
-|----------------------|------------------------|
-| clusters/\<cluster\-name\> | The HAWQ cluster name. |
-| clusters/\<cluster\-name\>/services/PXF | The PXF service. |
-| clusters/\<cluster\-name\>/services/HAWQ | The HAWQ service. |
-| clusters/\<cluster\-name\>/services/HAWQ/components | All HAWQ service components. |
-| clusters/\<cluster\-name\>/services/HAWQ/components/\<name\> | A specific HAWQ service component, i.e. HAWQMASTER. |
-| clusters/\<cluster\-name\>/configurations | Cluster configurations. |
-| clusters/\<cluster\-name\>/requests | Group of tasks that run a command. |
-
-## <a id="ambari-rest-curl"></a>Submitting Requests with cURL
-
-Your HTTP request to the Ambari REST API should include the following information:
-
-- User name and password for basic authentication.
-- An HTTP request header.
-- The HTTP request method.
-- JSON-formatted request data, if required.
-- The URI identifying the Ambari REST resource.
-
-You can use the `curl` command to transfer HTTP request data to, and receive data from, the Ambari server using the HTTP protocol.
-
-Use the following syntax to issue a `curl` command for Ambari HAWQ/PXF management operations:
-
-``` shell
-$ curl -u <user>:<passwd> -H <header> -X GET|POST|PUT|DELETE -d <data> <URI>
-```
-
-`curl` options relevant to Ambari REST API communication include:
-
-| Option              | Description     |
-|----------------------|------------------------|
-| -u \<user\>:\<passwd\> | Identify the username and password for basic authentication to the HTTP server. |
-| -H \<header\>   | Identify an extra header to include in the HTTP request. \<header\> must specify `'X-Requested-By:ambari'`.   |
-| -X \<command\>   | Identify the request method. \<command\> may specify `GET` (the default), `POST`, `PUT`, and `DELETE`. |
-| -d \<data\>     | Send the specified \<data\> to the HTTP server along with the request. The \<command\> and \<URI\> determine if \<data\> is required, and if so, its content.  |
-| \<URI\>    | Path to the Ambari REST resource.  |
-
-
-## <a id="ambari-rest-api-auth"></a>Authenticating with the Ambari REST API
-
-The first step in using the Ambari REST API is to authenticate with the Ambari server. The Ambari REST API supports HTTP basic authentication. With this authentication method, you provide a username and password that is internally encoded and sent in the HTTP header.
-
-Example: Testing Authentication
-
-1. Set up some environment variables; replace the values with those appropriate for your operating environment.  For example:
-
-    ``` shell
-    $ export AMBUSER=admin
-    $ export AMBPASSWD=admin
-    $ export AMBHOST=<ambari-server>
-    $ export AMBPORT=8080
-    ```
-
-2. Submit a `curl` request to the Ambari server:
-
-    ``` shell
-    $ curl -u $AMBUSER:$AMBPASSWD http://$AMBHOST:$AMBPORT
-    ```
-    
-    If authentication succeeds, Apache license information is displayed.
-
-
-## <a id="ambari-rest-using"></a>Using the Ambari REST API for HAWQ Management
-
-
-### <a id="ambari-rest-ex-clustname"></a>Example: Retrieving the HAWQ Cluster Name
-
-1. Set up additional environment variables:
-
-    ``` shell
-    $ export AMBCREDS="$AMBUSER:$AMBPASSWD"
-    $ export AMBURLBASE="http://${AMBHOST}:${AMBPORT}/api/v1/clusters"
-    ```
-    
-    You will use these variables in upcoming examples to simplify `curl` calls.
-    
-2. Use the Ambari REST API to determine the name of your HAWQ cluster; also set `$AMBURLBASE` to include the cluster name:
-
-    ``` shell
-    $ export CLUSTER_NAME="$(curl -u ${AMBCREDS} -i -H 'X-Requested-By:ambari' $AMBURLBASE | sed -n 's/.*"cluster_name" : "\([^\"]*\)".*/\1/p')"
-    $ echo $CLUSTER_NAME
-    TestCluster
-    $ export AMBURLBASE=$AMBURLBASE/$CLUSTER_NAME
-    ```
-
-### <a id="ambari-rest-ex-mgmt"></a>Examples: Managing the HAWQ and PXF Services
-
-The following subsections provide `curl` commands for common HAWQ cluster management activities.
-
-Refer to [API usage scenarios, troubleshooting, and other FAQs](https://cwiki.apache.org/confluence/display/AMBARI/API+usage+scenarios%2C+troubleshooting%2C+and+other+FAQs) for additional Ambari REST API usage examples.
-
-
-#### <a id="ambari-rest-ex-get"></a>Viewing HAWQ Cluster Service and Configuration Information
-
-| Task              |Command           |
-|----------------------|------------------------|
-| View HAWQ service information. | `curl -u $AMBCREDS -X GET -H 'X-Requested-By:ambari' $AMBURLBASE/services/HAWQ` |
-| List all HAWQ components. | `curl -u $AMBCREDS -X GET -H 'X-Requested-By:ambari' $AMBURLBASE/services/HAWQ/components` |
-| View information about the HAWQ master. | `curl -u $AMBCREDS -X GET -H 'X-Requested-By:ambari' $AMBURLBASE/services/HAWQ/components/HAWQMASTER` |
-| View the `hawq-site` configuration settings. | `curl -u $AMBCREDS -X GET -H 'X-Requested-By:ambari' "$AMBURLBASE/configurations?type=hawq-site&tag=TOPOLOGY_RESOLVED"` |
-| View the initial `core-site` configuration settings. | `curl -u $AMBCREDS -X GET -H 'X-Requested-By:ambari' "$AMBURLBASE/configurations?type=core-site&tag=INITIAL"` |
-| View the `pxf-profiles` configuration file. | `curl -u $AMBCREDS -X GET -H 'X-Requested-By:ambari' "$AMBURLBASE/configurations?type=pxf-profiles&tag=INITIAL"` |
-| View all components on a specific HAWQ node. | `curl -u $AMBCREDS -i -X GET -H 'X-Requested-By:ambari' $AMBURLBASE/hosts/<hawq-node>` |
-
-
-#### <a id="ambari-rest-ex-put"></a>Starting/Stopping HAWQ and PXF Services
-
-| Task              |Command           |
-|----------------------|------------------------|
-| Start the HAWQ service. | `curl -u $AMBCREDS -X PUT -H 'X-Requested-By:ambari' -d '{"RequestInfo": {"context" :"Start HAWQ via REST"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' $AMBURLBASE/services/HAWQ` |
-| Stop the HAWQ service. | `curl -u $AMBCREDS -X PUT -H 'X-Requested-By:ambari' -d '{"RequestInfo": {"context" :"Stop HAWQ via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' $AMBURLBASE/services/HAWQ` |
-| Start the PXF service. | `curl -u $AMBCREDS -X PUT -H 'X-Requested-By:ambari' -d '{"RequestInfo": {"context" :"Start PXF via REST"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' $AMBURLBASE/services/PXF` |
-| Stop the PXF service. | `curl -u $AMBCREDS -X PUT -H 'X-Requested-By:ambari' -d '{"RequestInfo": {"context" :"Stop PXF via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' $AMBURLBASE/services/PXF` |
-
-#### <a id="ambari-rest-ex-post"></a>Invoking HAWQ and PXF Service Actions
-
-| Task              |Command           |
-|----------------------|------------------------|
-| Run a HAWQ service check. | `curl -u $AMBCREDS -X POST -H 'X-Requested-By:ambari' -d '{"RequestInfo":{"context":"HAWQ Service Check","command":"HAWQ_SERVICE_CHECK"}, "Requests/resource_filters":[{ "service_name":"HAWQ"}]}'  $AMBURLBASE/requests` |
-| Run a PXF service check. | `curl -u $AMBCREDS -X POST -H 'X-Requested-By:ambari' -d '{"RequestInfo":{"context":"PXF Service Check","command":"PXF_SERVICE_CHECK"}, "Requests/resource_filters":[{ "service_name":"PXF"}]}'  $AMBURLBASE/requests` |
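-
-The start/stop and service check requests above run asynchronously. Ambari typically responds with a reference to a request resource (see the `requests` resource path above) that you can poll for status. A sketch, assuming a hypothetical request id of `42`:
-
-``` shell
-$ curl -u $AMBCREDS -X GET -H 'X-Requested-By:ambari' $AMBURLBASE/requests/42
-```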

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/maintain.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/maintain.html.md.erb b/admin/maintain.html.md.erb
deleted file mode 100644
index f4b1491..0000000
--- a/admin/maintain.html.md.erb
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Routine System Maintenance Tasks
----
-
-## <a id="overview-topic"></a>Overview
-
-To keep a HAWQ system running efficiently, the database must be regularly cleared of expired data and the table statistics must be updated so that the query optimizer has accurate information.
-
-HAWQ requires that certain tasks be performed regularly to achieve optimal performance. The tasks discussed here are required, but database administrators can automate them using standard UNIX tools such as `cron` scripts. An administrator sets up the appropriate scripts and checks that they execute successfully. See [Recommended Monitoring and Maintenance Tasks](RecommendedMonitoringTasks.html) for additional suggested maintenance activities you can implement to keep your HAWQ system running optimally.
-
-## <a id="topic10"></a>Database Server Log Files 
-
-HAWQ log output tends to be voluminous, especially at higher debug levels, and you do not need to save it indefinitely. Administrators rotate the log files periodically so new log files are started and old ones are removed.
-
-HAWQ has log file rotation enabled on the master and all segment instances. Daily log files are created in the `pg_log` subdirectory of the master and each segment data directory using the following naming convention: <code>hawq-<i>YYYY-MM-DD\_hhmmss</i>.csv</code>. Although log files are rolled over daily, they are not automatically truncated or deleted. Administrators need to implement scripts or programs to periodically clean up old log files in the `pg_log` directory of the master and of every segment instance.
-
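-A minimal cleanup sketch, assuming the default `/data/hawq/master` data directory and a 30-day retention policy (adjust the path and age for your site, and schedule a similar command for each segment data directory via `cron`):
-
-``` shell
-$ find /data/hawq/master/pg_log -name 'hawq-*.csv' -mtime +30 -delete
-```
-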
-For information about viewing the database server log files, see [Viewing the Database Server Log Files](monitor.html).
-
-## <a id="topic11"></a>Management Utility Log Files 
-
-Log files for the HAWQ management utilities are written to `~/hawqAdminLogs` by default. The naming convention for management log files is:
-
-<pre><code><i>script_name</i>_<i>date</i>.log
-</code></pre>
-
-The log entry format is:
-
-<pre><code><i>timestamp:utility:host:user</i>:[INFO|WARN|FATAL]:<i>message</i>
-</code></pre>
-
-The log file for a particular utility execution is appended to its daily log file each time that utility is run.
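-
-For example, to scan a management utility log for warnings and fatal errors (a sketch; the log file name shown is hypothetical and follows the naming convention above):
-
-``` shell
-$ grep -E ':(WARN|FATAL):' ~/hawqAdminLogs/hawq_init_20170106.log
-```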

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/monitor.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/monitor.html.md.erb b/admin/monitor.html.md.erb
deleted file mode 100644
index 418c8c3..0000000
--- a/admin/monitor.html.md.erb
+++ /dev/null
@@ -1,444 +0,0 @@
----
-title: Monitoring a HAWQ System
----
-
-You can monitor a HAWQ system using a variety of tools included with the system or available as add-ons.
-
-Observing the HAWQ system day-to-day performance helps administrators understand the system behavior, plan workflow, and troubleshoot problems. This chapter discusses tools for monitoring database performance and activity.
-
-Also, be sure to review [Recommended Monitoring and Maintenance Tasks](RecommendedMonitoringTasks.html) for monitoring activities you can script to quickly detect problems in the system.
-
-
-## <a id="topic31"></a>Using hawq\_toolkit 
-
-Use HAWQ's administrative schema [*hawq\_toolkit*](../reference/toolkit/hawq_toolkit.html) to query the system catalogs, log files, and operating environment for system status information. The *hawq\_toolkit* schema contains several views you can access using SQL commands. The *hawq\_toolkit* schema is accessible to all database users. Some objects require superuser permissions. Use a command similar to the following to add the *hawq\_toolkit* schema to your schema search path:
-
-```sql
-=> SET ROLE 'gpadmin' ;
-=# SET search_path TO myschema, hawq_toolkit ;
-```
-
-## <a id="topic3"></a>Monitoring System State 
-
-As a HAWQ administrator, you must monitor the system for problem events such as a segment going down or running out of disk space on a segment host. The following topics describe how to monitor the health of a HAWQ system and examine certain state information for a HAWQ system.
-
--   [Checking System State](#topic12)
--   [Checking Disk Space Usage](#topic15)
--   [Viewing Metadata Information about Database Objects](#topic24)
--   [Viewing Query Workfile Usage Information](#topic27)
-
-### <a id="topic12"></a>Checking System State 
-
-A HAWQ system is comprised of multiple PostgreSQL instances \(the master and segments\) spanning multiple machines. To monitor a HAWQ system, you need to know information about the system as a whole, as well as status information of the individual instances. The `hawq state` utility provides status information about a HAWQ system.
-
-#### <a id="topic13"></a>Viewing Master and Segment Status and Configuration 
-
-The default `hawq state` action is to check segment instances and show a brief status of the valid and failed segments. For example, to see a quick status of your HAWQ system:
-
-```shell
-$ hawq state -b
-```
-
-You can also display information about the HAWQ master data directory by invoking `hawq state` with the `-d` option:
-
-```shell
-$ hawq state -d <master_data_dir>
-```
-
-
-### <a id="topic15"></a>Checking Disk Space Usage 
-
-#### <a id="topic16"></a>Checking Sizing of Distributed Databases and Tables 
-
-The *hawq\_toolkit* administrative schema contains several views that you can use to determine the disk space usage for a distributed HAWQ database, schema, table, or index.
-
-##### <a id="topic17"></a>Viewing Disk Space Usage for a Database 
-
-To see the total size of a database \(in bytes\), use the *hawq\_size\_of\_database* view in the *hawq\_toolkit* administrative schema. For example:
-
-```sql
-=> SELECT * FROM hawq_toolkit.hawq_size_of_database
-     ORDER BY sodddatname;
-```
-
-##### <a id="topic18"></a>Viewing Disk Space Usage for a Table 
-
-The *hawq\_toolkit* administrative schema contains several views for checking the size of a table. The table sizing views list the table by object ID \(not by name\). To check the size of a table by name, you must look up the relation name \(`relname`\) in the *pg\_class* table. For example:
-
-```sql
-=> SELECT relname AS name, sotdsize AS size, sotdtoastsize
-     AS toast, sotdadditionalsize AS other
-     FROM hawq_toolkit.hawq_size_of_table_disk AS sotd, pg_class
-   WHERE sotd.sotdoid=pg_class.oid ORDER BY relname;
-```
-
-##### <a id="topic19"></a>Viewing Disk Space Usage for Indexes 
-
-The *hawq\_toolkit* administrative schema contains a number of views for checking index sizes. To see the total size of all index\(es\) on a table, use the *hawq\_size\_of\_all\_table\_indexes* view. To see the size of a particular index, use the *hawq\_size\_of\_index* view. The index sizing views list tables and indexes by object ID \(not by name\). To check the size of an index by name, you must look up the relation name \(`relname`\) in the *pg\_class* table. For example:
-
-```sql
-=> SELECT soisize, relname AS indexname
-     FROM pg_class, hawq_size_of_index
-   WHERE pg_class.oid=hawq_size_of_index.soioid
-     AND pg_class.relkind='i';
-```
-
-### <a id="topic24"></a>Viewing Metadata Information about Database Objects 
-
-HAWQ uses its system catalogs to track various metadata information about the objects stored in a database (tables, views, indexes and so on), as well as global objects including roles and tablespaces.
-
-#### <a id="topic25"></a>Viewing the Last Operation Performed 
-
-You can use the system views *pg\_stat\_operations* and *pg\_stat\_partition\_operations* to look up actions performed on a database object. For example, to view when the `cust` table was created and when it was last analyzed:
-
-```sql
-=> SELECT schemaname AS schema, objname AS table,
-     usename AS role, actionname AS action,
-     subtype AS type, statime AS time
-   FROM pg_stat_operations
-   WHERE objname='cust';
-```
-
-```
- schema | table | role | action  | type  | time
---------+-------+------+---------+-------+--------------------------
-  sales | cust  | main | CREATE  | TABLE | 2010-02-09 18:10:07.867977-08
-  sales | cust  | main | VACUUM  |       | 2010-02-10 13:32:39.068219-08
-  sales | cust  | main | ANALYZE |       | 2010-02-25 16:07:01.157168-08
-(3 rows)
-
-```
-
-#### <a id="topic26"></a>Viewing the Definition of an Object 
-
-You can use the `psql` `\d` meta-command to display the definition of an object, such as a table or view. For example, to see the definition of a table named `sales`:
-
-``` sql
-=> \d sales
-```
-
-```
-Append-Only Table "public.sales"
- Column |  Type   | Modifiers 
---------+---------+-----------
- id     | integer | 
- year   | integer | 
- qtr    | integer | 
- day    | integer | 
- region | text    | 
-Compression Type: None
-Compression Level: 0
-Block Size: 32768
-Checksum: f
-Distributed by: (id)
-```
-
-
-### <a id="topic27"></a>Viewing Query Workfile Usage Information 
-
-The HAWQ administrative schema *hawq\_toolkit* contains views that display information about HAWQ workfiles. HAWQ creates workfiles on disk if it does not have sufficient memory to execute the query in memory. This information can be used for troubleshooting and tuning queries. The information in the views can also be used to specify the values for the HAWQ configuration parameters `hawq_workfile_limit_per_query` and `hawq_workfile_limit_per_segment`.
-
-Views in the *hawq\_toolkit* schema include:
-
--   *hawq\_workfile\_entries* - one row for each operator currently using disk space for workfiles on a segment
--   *hawq\_workfile\_usage\_per\_query* - one row for each running query currently using disk space for workfiles on a segment
--   *hawq\_workfile\_usage\_per\_segment* - one row for each segment where each row displays the total amount of disk space currently in use for workfiles on the segment
-
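-For example, you can inspect current workfile usage from the command line (a sketch; the database name `testdb` is a placeholder):
-
-``` shell
-$ psql -d testdb -c 'SELECT * FROM hawq_toolkit.hawq_workfile_usage_per_segment;'
-```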
-
-## <a id="topic28"></a>Viewing the Database Server Log Files 
-
-Every database instance in HAWQ \(master and segments\) runs a PostgreSQL database server with its own server log file. Daily log files are created in the `pg_log` directory of the master  and each segment data directory.
-
-### <a id="topic29"></a>Log File Format 
-
-The server log files are written in comma-separated values \(CSV\) format. Log entries may not include values for all log fields. For example, only log entries associated with a query worker process will have the `slice_id` populated. You can identify related log entries of a particular query by the query's session identifier \(`gp_session_id`\) and command identifier \(`gp_command_count`\).
-
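-For example, to pull together all of the entries for one query from a daily master log file (a sketch; the file name, session identifier `con123`, and command identifier `cmd4` are placeholders):
-
-``` shell
-$ grep con123 /data/hawq/master/pg_log/hawq-2016-01-18_000000.csv | grep cmd4
-```
-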
-Log entries may include the following fields:
-
-<table>
-  <tr><th>#</th><th>Field Name</th><th>Data Type</th><th>Description</th></tr>
-  <tr><td>1</td><td>event_time</td><td>timestamp with time zone</td><td>Time that the log entry was written to the log</td></tr>
-  <tr><td>2</td><td>user_name</td><td>varchar(100)</td><td>The database user name</td></tr>
-  <tr><td>3</td><td>database_name</td><td>varchar(100)</td><td>The database name</td></tr>
-  <tr><td>4</td><td>process_id</td><td>varchar(10)</td><td>The system process ID (prefixed with "p")</td></tr>
-  <tr><td>5</td><td>thread_id</td><td>varchar(50)</td><td>The thread count (prefixed with "th")</td></tr>
-  <tr><td>6</td><td>remote_host</td><td>varchar(100)</td><td>On the master, the hostname/address of the client machine. On the segment, the hostname/address of the master.</td></tr>
-  <tr><td>7</td><td>remote_port</td><td>varchar(10)</td><td>The segment or master port number</td></tr>
-  <tr><td>8</td><td>session_start_time</td><td>timestamp with time zone</td><td>Time session connection was opened</td></tr>
-  <tr><td>9</td><td>transaction_id</td><td>int</td><td>Top-level transaction ID on the master. This ID is the parent of any subtransactions.</td></tr>
-  <tr><td>10</td><td>gp_session_id</td><td>text</td><td>Session identifier number (prefixed with "con")</td></tr>
-  <tr><td>11</td><td>gp_command_count</td><td>text</td><td>The command number within a session (prefixed with "cmd")</td></tr>
-  <tr><td>12</td><td>gp_segment</td><td>text</td><td>The segment content identifier. The master always has a content ID of -1.</td></tr>
-  <tr><td>13</td><td>slice_id</td><td>text</td><td>The slice ID (portion of the query plan being executed)</td></tr>
-  <tr><td>14</td><td>distr_tranx_id</td><td>text</td><td>Distributed transaction ID</td></tr>
-  <tr><td>15</td><td>local_tranx_id</td><td>text</td><td>Local transaction ID</td></tr>
-  <tr><td>16</td><td>sub_tranx_id</td><td>text</td><td>Subtransaction ID</td></tr>
-  <tr><td>17</td><td>event_severity</td><td>varchar(10)</td><td>Values include: LOG, ERROR, FATAL, PANIC, DEBUG1, DEBUG2</td></tr>
-  <tr><td>18</td><td>sql_state_code</td><td>varchar(10)</td><td>SQL state code associated with the log message</td></tr>
-  <tr><td>19</td><td>event_message</td><td>text</td><td>Log or error message text</td></tr>
-  <tr><td>20</td><td>event_detail</td><td>text</td><td>Detail message text associated with an error or warning message</td></tr>
-  <tr><td>21</td><td>event_hint</td><td>text</td><td>Hint message text associated with an error or warning message</td></tr>
-  <tr><td>22</td><td>internal_query</td><td>text</td><td>The internally-generated query text</td></tr>
-  <tr><td>23</td><td>internal_query_pos</td><td>int</td><td>The cursor index into the internally-generated query text</td></tr>
-  <tr><td>24</td><td>event_context</td><td>text</td><td>The context in which this message gets generated</td></tr>
-  <tr><td>25</td><td>debug_query_string</td><td>text</td><td>User-supplied query string with full detail for debugging. This string can be modified for internal use.</td></tr>
-  <tr><td>26</td><td>error_cursor_pos</td><td>int</td><td>The cursor index into the query string</td></tr>
-  <tr><td>27</td><td>func_name</td><td>text</td><td>The function in which this message is generated</td></tr>
-  <tr><td>28</td><td>file_name</td><td>text</td><td>The internal code file where the message originated</td></tr>
-  <tr><td>29</td><td>file_line</td><td>int</td><td>The line of the code file where the message originated</td></tr>
-  <tr><td>30</td><td>stack_trace</td><td>text</td><td>Stack trace text associated with this message</td></tr>
-</table>
-### <a id="topic30"></a>Searching the HAWQ Server Log Files 
-
-You can use the `gplogfilter` HAWQ utility to search through a HAWQ log file for entries matching specific criteria. By default, this utility searches through the HAWQ master log file in the default logging location. For example, to display the entries in the master log file that were logged after 2 pm on a certain date:
-
-``` shell
-$ gplogfilter -b '2016-01-18 14:00'
-```
-
-To search through all segment log files simultaneously, run `gplogfilter` through the `hawq ssh` utility. For example, specify a \<seg\_hosts\> file that includes all segment hosts of interest, then invoke `gplogfilter` to display the last three lines of each segment log file on each segment host (enter the commands at the `=>` prompt; do not type the `=>` itself):
-
-``` shell
-$ hawq ssh -f <seg_hosts>
-=> source /usr/local/hawq/greenplum_path.sh
-=> gplogfilter -n 3 /data/hawq/segment/pg_log/hawq*.csv
-```
-
-## <a id="topic_jx2_rqg_kp"></a>HAWQ Error Codes 
-
-The following section describes SQL error codes for certain database events.
-
-### <a id="topic_pyh_sqg_kp"></a>SQL Standard Error Codes 
-
-The following table lists all the defined error codes. Some are not used, but are defined by the SQL standard. The error classes are also shown. For each error class there is a standard error code having the last three characters 000. This code is used only for error conditions that fall within the class but do not have any more-specific code assigned.
-
-The PL/pgSQL condition name for each error code is the same as the phrase shown in the table, with underscores substituted for spaces. For example, code 22012, DIVISION BY ZERO, has condition name DIVISION\_BY\_ZERO. Condition names can be written in either upper or lower case.
-
-**Note:** PL/pgSQL recognizes error condition names only, not warning condition names; the warning classes are 00, 01, and 02.
-
-|Error Code|Meaning|Constant|
-|----------|-------|--------|
-|**Class 00** - Successful Completion|
-|00000|SUCCESSFUL COMPLETION|successful\_completion|
-|**Class 01** - Warning|
-|01000|WARNING|warning|
-|0100C|DYNAMIC RESULT SETS RETURNED|dynamic\_result\_sets\_returned|
-|01008|IMPLICIT ZERO BIT PADDING|implicit\_zero\_bit\_padding|
-|01003|NULL VALUE ELIMINATED IN SET FUNCTION|null\_value\_eliminated\_in\_set\_function|
-|01007|PRIVILEGE NOT GRANTED|privilege\_not\_granted|
-|01006|PRIVILEGE NOT REVOKED|privilege\_not\_revoked|
-|01004|STRING DATA RIGHT TRUNCATION|string\_data\_right\_truncation|
-|01P01|DEPRECATED FEATURE|deprecated\_feature|
-|**Class 02** - No Data \(this is also a warning class per the SQL standard\)|
-|02000|NO DATA|no\_data|
-|02001|NO ADDITIONAL DYNAMIC RESULT SETS RETURNED|no\_additional\_dynamic\_result\_sets\_returned|
-|**Class 03** - SQL Statement Not Yet Complete|
-|03000|SQL STATEMENT NOT YET COMPLETE|sql\_statement\_not\_yet\_complete|
-|**Class 08** - Connection Exception|
-|08000|CONNECTION EXCEPTION|connection\_exception|
-|08003|CONNECTION DOES NOT EXIST|connection\_does\_not\_exist|
-|08006|CONNECTION FAILURE|connection\_failure|
-|08001|SQLCLIENT UNABLE TO ESTABLISH SQLCONNECTION|sqlclient\_unable\_to\_establish\_sqlconnection|
-|08004|SQLSERVER REJECTED ESTABLISHMENT OF SQLCONNECTION|sqlserver\_rejected\_establishment\_of\_sqlconnection|
-|08007|TRANSACTION RESOLUTION UNKNOWN|transaction\_resolution\_unknown|
-|08P01|PROTOCOL VIOLATION|protocol\_violation|
-|**Class 09** - Triggered Action Exception|
-|09000|TRIGGERED ACTION EXCEPTION|triggered\_action\_exception|
-|**Class 0A** - Feature Not Supported|
-|0A000|FEATURE NOT SUPPORTED|feature\_not\_supported|
-|**Class 0B** - Invalid Transaction Initiation|
-|0B000|INVALID TRANSACTION INITIATION|invalid\_transaction\_initiation|
-|**Class 0F** - Locator Exception|
-|0F000|LOCATOR EXCEPTION|locator\_exception|
-|0F001|INVALID LOCATOR SPECIFICATION|invalid\_locator\_specification|
-|**Class 0L** - Invalid Grantor|
-|0L000|INVALID GRANTOR|invalid\_grantor|
-|0LP01|INVALID GRANT OPERATION|invalid\_grant\_operation|
-|**Class 0P** - Invalid Role Specification|
-|0P000|INVALID ROLE SPECIFICATION|invalid\_role\_specification|
-|**Class 21** - Cardinality Violation|
-|21000|CARDINALITY VIOLATION|cardinality\_violation|
-|**Class 22** - Data Exception|
-|22000|DATA EXCEPTION|data\_exception|
-|2202E|ARRAY SUBSCRIPT ERROR|array\_subscript\_error|
-|22021|CHARACTER NOT IN REPERTOIRE|character\_not\_in\_repertoire|
-|22008|DATETIME FIELD OVERFLOW|datetime\_field\_overflow|
-|22012|DIVISION BY ZERO|division\_by\_zero|
-|22005|ERROR IN ASSIGNMENT|error\_in\_assignment|
-|2200B|ESCAPE CHARACTER CONFLICT|escape\_character\_conflict|
-|22022|INDICATOR OVERFLOW|indicator\_overflow|
-|22015|INTERVAL FIELD OVERFLOW|interval\_field\_overflow|
-|2201E|INVALID ARGUMENT FOR LOGARITHM|invalid\_argument\_for\_logarithm|
-|2201F|INVALID ARGUMENT FOR POWER FUNCTION|invalid\_argument\_for\_power\_function|
-|2201G|INVALID ARGUMENT FOR WIDTH BUCKET FUNCTION|invalid\_argument\_for\_width\_bucket\_function|
-|22018|INVALID CHARACTER VALUE FOR CAST|invalid\_character\_value\_for\_cast|
-|22007|INVALID DATETIME FORMAT|invalid\_datetime\_format|
-|22019|INVALID ESCAPE CHARACTER|invalid\_escape\_character|
-|2200D|INVALID ESCAPE OCTET|invalid\_escape\_octet|
-|22025|INVALID ESCAPE SEQUENCE|invalid\_escape\_sequence|
-|22P06|NONSTANDARD USE OF ESCAPE CHARACTER|nonstandard\_use\_of\_escape\_character|
-|22010|INVALID INDICATOR PARAMETER VALUE|invalid\_indicator\_parameter\_value|
-|22020|INVALID LIMIT VALUE|invalid\_limit\_value|
-|22023|INVALID PARAMETER VALUE|invalid\_parameter\_value|
-|2201B|INVALID REGULAR EXPRESSION|invalid\_regular\_expression|
-|22009|INVALID TIME ZONE DISPLACEMENT VALUE|invalid\_time\_zone\_displacement\_value|
-|2200C|INVALID USE OF ESCAPE CHARACTER|invalid\_use\_of\_escape\_character|
-|2200G|MOST SPECIFIC TYPE MISMATCH|most\_specific\_type\_mismatch|
-|22004|NULL VALUE NOT ALLOWED|null\_value\_not\_allowed|
-|22002|NULL VALUE NO INDICATOR PARAMETER|null\_value\_no\_indicator\_parameter|
-|22003|NUMERIC VALUE OUT OF RANGE|numeric\_value\_out\_of\_range|
-|22026|STRING DATA LENGTH MISMATCH|string\_data\_length\_mismatch|
-|22001|STRING DATA RIGHT TRUNCATION|string\_data\_right\_truncation|
-|22011|SUBSTRING ERROR|substring\_error|
-|22027|TRIM ERROR|trim\_error|
-|22024|UNTERMINATED C STRING|unterminated\_c\_string|
-|2200F|ZERO LENGTH CHARACTER STRING|zero\_length\_character\_string|
-|22P01|FLOATING POINT EXCEPTION|floating\_point\_exception|
-|22P02|INVALID TEXT REPRESENTATION|invalid\_text\_representation|
-|22P03|INVALID BINARY REPRESENTATION|invalid\_binary\_representation|
-|22P04|BAD COPY FILE FORMAT|bad\_copy\_file\_format|
-|22P05|UNTRANSLATABLE CHARACTER|untranslatable\_character|
-|**Class 23** - Integrity Constraint Violation|
-|23000|INTEGRITY CONSTRAINT VIOLATION|integrity\_constraint\_violation|
-|23001|RESTRICT VIOLATION|restrict\_violation|
-|23502|NOT NULL VIOLATION|not\_null\_violation|
-|23503|FOREIGN KEY VIOLATION|foreign\_key\_violation|
-|23505|UNIQUE VIOLATION|unique\_violation|
-|23514|CHECK VIOLATION|check\_violation|
-|**Class 24** - Invalid Cursor State|
-|24000|INVALID CURSOR STATE|invalid\_cursor\_state|
-|**Class 25** - Invalid Transaction State|
-|25000|INVALID TRANSACTION STATE|invalid\_transaction\_state|
-|25001|ACTIVE SQL TRANSACTION|active\_sql\_transaction|
-|25002|BRANCH TRANSACTION ALREADY ACTIVE|branch\_transaction\_already\_active|
-|25008|HELD CURSOR REQUIRES SAME ISOLATION LEVEL|held\_cursor\_requires\_same\_isolation\_level|
-|25003|INAPPROPRIATE ACCESS MODE FOR BRANCH TRANSACTION|inappropriate\_access\_mode\_for\_branch\_transaction|
-|25004|INAPPROPRIATE ISOLATION LEVEL FOR BRANCH TRANSACTION|inappropriate\_isolation\_level\_for\_branch\_transaction|
-|25005|NO ACTIVE SQL TRANSACTION FOR BRANCH TRANSACTION|no\_active\_sql\_transaction\_for\_branch\_transaction|
-|25006|READ ONLY SQL TRANSACTION|read\_only\_sql\_transaction|
-|25007|SCHEMA AND DATA STATEMENT MIXING NOT SUPPORTED|schema\_and\_data\_statement\_mixing\_not\_supported|
-|25P01|NO ACTIVE SQL TRANSACTION|no\_active\_sql\_transaction|
-|25P02|IN FAILED SQL TRANSACTION|in\_failed\_sql\_transaction|
-|**Class 26** - Invalid SQL Statement Name|
-|26000|INVALID SQL STATEMENT NAME|invalid\_sql\_statement\_name|
-|**Class 27** - Triggered Data Change Violation|
-|27000|TRIGGERED DATA CHANGE VIOLATION|triggered\_data\_change\_violation|
-|**Class 28** - Invalid Authorization Specification|
-|28000|INVALID AUTHORIZATION SPECIFICATION|invalid\_authorization\_specification|
-|**Class 2B** - Dependent Privilege Descriptors Still Exist|
-|2B000|DEPENDENT PRIVILEGE DESCRIPTORS STILL EXIST|dependent\_privilege\_descriptors\_still\_exist|
-|2BP01|DEPENDENT OBJECTS STILL EXIST|dependent\_objects\_still\_exist|
-|**Class 2D** - Invalid Transaction Termination|
-|2D000|INVALID TRANSACTION TERMINATION|invalid\_transaction\_termination|
-|**Class 2F** - SQL Routine Exception|
-|2F000|SQL ROUTINE EXCEPTION|sql\_routine\_exception|
-|2F005|FUNCTION EXECUTED NO RETURN STATEMENT|function\_executed\_no\_return\_statement|
-|2F002|MODIFYING SQL DATA NOT PERMITTED|modifying\_sql\_data\_not\_permitted|
-|2F003|PROHIBITED SQL STATEMENT ATTEMPTED|prohibited\_sql\_statement\_attempted|
-|2F004|READING SQL DATA NOT PERMITTED|reading\_sql\_data\_not\_permitted|
-|**Class 34** - Invalid Cursor Name|
-|34000|INVALID CURSOR NAME|invalid\_cursor\_name|
-|**Class 38** - External Routine Exception|
-|38000|EXTERNAL ROUTINE EXCEPTION|external\_routine\_exception|
-|38001|CONTAINING SQL NOT PERMITTED|containing\_sql\_not\_permitted|
-|38002|MODIFYING SQL DATA NOT PERMITTED|modifying\_sql\_data\_not\_permitted|
-|38003|PROHIBITED SQL STATEMENT ATTEMPTED|prohibited\_sql\_statement\_attempted|
-|38004|READING SQL DATA NOT PERMITTED|reading\_sql\_data\_not\_permitted|
-|**Class 39** - External Routine Invocation Exception|
-|39000|EXTERNAL ROUTINE INVOCATION EXCEPTION|external\_routine\_invocation\_exception|
-|39001|INVALID SQLSTATE RETURNED|invalid\_sqlstate\_returned|
-|39004|NULL VALUE NOT ALLOWED|null\_value\_not\_allowed|
-|39P01|TRIGGER PROTOCOL VIOLATED|trigger\_protocol\_violated|
-|39P02|SRF PROTOCOL VIOLATED|srf\_protocol\_violated|
-|**Class 3B** - Savepoint Exception|
-|3B000|SAVEPOINT EXCEPTION|savepoint\_exception|
-|3B001|INVALID SAVEPOINT SPECIFICATION|invalid\_savepoint\_specification|
-|**Class 3D** - Invalid Catalog Name|
-|3D000|INVALID CATALOG NAME|invalid\_catalog\_name|
-|**Class 3F** - Invalid Schema Name|
-|3F000|INVALID SCHEMA NAME|invalid\_schema\_name|
-|**Class 40** - Transaction Rollback|
-|40000|TRANSACTION ROLLBACK|transaction\_rollback|
-|40002|TRANSACTION INTEGRITY CONSTRAINT VIOLATION|transaction\_integrity\_constraint\_violation|
-|40001|SERIALIZATION FAILURE|serialization\_failure|
-|40003|STATEMENT COMPLETION UNKNOWN|statement\_completion\_unknown|
-|40P01|DEADLOCK DETECTED|deadlock\_detected|
-|**Class 42** - Syntax Error or Access Rule Violation|
-|42000|SYNTAX ERROR OR ACCESS RULE VIOLATION|syntax\_error\_or\_access\_rule\_violation|
-|42601|SYNTAX ERROR|syntax\_error|
-|42501|INSUFFICIENT PRIVILEGE|insufficient\_privilege|
-|42846|CANNOT COERCE|cannot\_coerce|
-|42803|GROUPING ERROR|grouping\_error|
-|42830|INVALID FOREIGN KEY|invalid\_foreign\_key|
-|42602|INVALID NAME|invalid\_name|
-|42622|NAME TOO LONG|name\_too\_long|
-|42939|RESERVED NAME|reserved\_name|
-|42804|DATATYPE MISMATCH|datatype\_mismatch|
-|42P18|INDETERMINATE DATATYPE|indeterminate\_datatype|
-|42809|WRONG OBJECT TYPE|wrong\_object\_type|
-|42703|UNDEFINED COLUMN|undefined\_column|
-|42883|UNDEFINED FUNCTION|undefined\_function|
-|42P01|UNDEFINED TABLE|undefined\_table|
-|42P02|UNDEFINED PARAMETER|undefined\_parameter|
-|42704|UNDEFINED OBJECT|undefined\_object|
-|42701|DUPLICATE COLUMN|duplicate\_column|
-|42P03|DUPLICATE CURSOR|duplicate\_cursor|
-|42P04|DUPLICATE DATABASE|duplicate\_database|
-|42723|DUPLICATE FUNCTION|duplicate\_function|
-|42P05|DUPLICATE PREPARED STATEMENT|duplicate\_prepared\_statement|
-|42P06|DUPLICATE SCHEMA|duplicate\_schema|
-|42P07|DUPLICATE TABLE|duplicate\_table|
-|42712|DUPLICATE ALIAS|duplicate\_alias|
-|42710|DUPLICATE OBJECT|duplicate\_object|
-|42702|AMBIGUOUS COLUMN|ambiguous\_column|
-|42725|AMBIGUOUS FUNCTION|ambiguous\_function|
-|42P08|AMBIGUOUS PARAMETER|ambiguous\_parameter|
-|42P09|AMBIGUOUS ALIAS|ambiguous\_alias|
-|42P10|INVALID COLUMN REFERENCE|invalid\_column\_reference|
-|42611|INVALID COLUMN DEFINITION|invalid\_column\_definition|
-|42P11|INVALID CURSOR DEFINITION|invalid\_cursor\_definition|
-|42P12|INVALID DATABASE DEFINITION|invalid\_database\_definition|
-|42P13|INVALID FUNCTION DEFINITION|invalid\_function\_definition|
-|42P14|INVALID PREPARED STATEMENT DEFINITION|invalid\_prepared\_statement\_definition|
-|42P15|INVALID SCHEMA DEFINITION|invalid\_schema\_definition|
-|42P16|INVALID TABLE DEFINITION|invalid\_table\_definition|
-|42P17|INVALID OBJECT DEFINITION|invalid\_object\_definition|
-|**Class 44** - WITH CHECK OPTION Violation|
-|44000|WITH CHECK OPTION VIOLATION|with\_check\_option\_violation|
-|**Class 53** - Insufficient Resources|
-|53000|INSUFFICIENT RESOURCES|insufficient\_resources|
-|53100|DISK FULL|disk\_full|
-|53200|OUT OF MEMORY|out\_of\_memory|
-|53300|TOO MANY CONNECTIONS|too\_many\_connections|
-|**Class 54** - Program Limit Exceeded|
-|54000|PROGRAM LIMIT EXCEEDED|program\_limit\_exceeded|
-|54001|STATEMENT TOO COMPLEX|statement\_too\_complex|
-|54011|TOO MANY COLUMNS|too\_many\_columns|
-|54023|TOO MANY ARGUMENTS|too\_many\_arguments|
-|**Class 55** - Object Not In Prerequisite State|
-|55000|OBJECT NOT IN PREREQUISITE STATE|object\_not\_in\_prerequisite\_state|
-|55006|OBJECT IN USE|object\_in\_use|
-|55P02|CANT CHANGE RUNTIME PARAM|cant\_change\_runtime\_param|
-|55P03|LOCK NOT AVAILABLE|lock\_not\_available|
-|**Class 57** - Operator Intervention|
-|57000|OPERATOR INTERVENTION|operator\_intervention|
-|57014|QUERY CANCELED|query\_canceled|
-|57P01|ADMIN SHUTDOWN|admin\_shutdown|
-|57P02|CRASH SHUTDOWN|crash\_shutdown|
-|57P03|CANNOT CONNECT NOW|cannot\_connect\_now|
-|**Class 58** - System Error \(errors external to HAWQ\)|
-|58030|IO ERROR|io\_error|
-|58P01|UNDEFINED FILE|undefined\_file|
-|58P02|DUPLICATE FILE|duplicate\_file|
-|**Class F0** - Configuration File Error|
-|F0000|CONFIG FILE ERROR|config\_file\_error|
-|F0001|LOCK FILE EXISTS|lock\_file\_exists|
-|**Class P0** - PL/pgSQL Error|
-|P0000|PLPGSQL ERROR|plpgsql\_error|
-|P0001|RAISE EXCEPTION|raise\_exception|
-|P0002|NO DATA FOUND|no\_data\_found|
-|P0003|TOO MANY ROWS|too\_many\_rows|
-|**Class XX** - Internal Error|
-|XX000|INTERNAL ERROR|internal\_error|
-|XX001|DATA CORRUPTED|data\_corrupted|
-|XX002|INDEX CORRUPTED|index\_corrupted|

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/setuphawqopenv.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/setuphawqopenv.html.md.erb b/admin/setuphawqopenv.html.md.erb
deleted file mode 100644
index 9d9b731..0000000
--- a/admin/setuphawqopenv.html.md.erb
+++ /dev/null
@@ -1,81 +0,0 @@
----
-title: Introducing the HAWQ Operating Environment
----
-
-Before invoking operations on a HAWQ cluster, you must set up your HAWQ environment. This setup is required for both administrative and non-administrative HAWQ users.
-
-## <a id="hawq_setupenv"></a>Procedure: Setting Up Your HAWQ Operating Environment
-
-HAWQ installs a script that you can use to set up your HAWQ cluster environment. The `greenplum_path.sh` script, located in your HAWQ root install directory, sets `$PATH` and other environment variables to find HAWQ files.  Most importantly, `greenplum_path.sh` sets the `$GPHOME` environment variable to point to the root directory of the HAWQ installation.  If you installed HAWQ from a product distribution, the HAWQ root is typically `/usr/local/hawq`. If you built HAWQ from source or downloaded the tarball, you will have selected an install root directory on your own.
-
-Perform the following steps to set up your HAWQ operating environment:
-
-1. Log in to the HAWQ node as the desired user.  For example:
-
-    ``` shell
-    $ ssh gpadmin@<master>
-    gpadmin@master$ 
-    ```
-
-    Or, if you are already logged in to the HAWQ node as a different user, switch to the desired user. For example:
-    
-    ``` shell
-    gpadmin@master$ su - <hawq-user>
-    Password:
-    hawq-user@master$ 
-    ```
-
-2. Set up your HAWQ operating environment by sourcing the `greenplum_path.sh` file:
-
-    ``` shell
-    hawq-node$ source /usr/local/hawq/greenplum_path.sh
-    ```
-
-    If you built HAWQ from source or downloaded the tarball, substitute the path to the installed or extracted `greenplum_path.sh` file \(for example `/opt/hawq-2.1.0.0/greenplum_path.sh`\).
-
-
-3. Edit your `.bash_profile` or other shell initialization file to source `greenplum_path.sh` on login.  For example, add:
-
-    ``` shell
-    source /usr/local/hawq/greenplum_path.sh
-    ```
-    
-4. Set HAWQ-specific environment variables relevant to your deployment in your shell initialization file. These include `PGAPPNAME`, `PGDATABASE`, `PGHOST`, `PGPORT`, and `PGUSER`. For example:
-
-    1.  If you use a custom HAWQ master port number, make this port number the default by setting the `PGPORT` environment variable in your shell initialization file; add:
-
-        ``` shell
-        export PGPORT=10432
-        ```
-    
-        Setting `PGPORT` simplifies `psql` invocation by providing a default for the `-p` (port) option.
-
-    1.  If you will routinely operate on a specific database, make this database the default by setting the `PGDATABASE` environment variable in your shell initialization file:
-
-        ``` shell
-        export PGDATABASE=<database-name>
-        ```
-    
-        Setting `PGDATABASE` simplifies `psql` invocation by providing a default for the `-d` (database) option.
-
-    You may choose to set additional HAWQ deployment-specific environment variables. See [Environment Variables](../reference/HAWQEnvironmentVariables.html#optionalenvironmentvariables).
-
-## <a id="hawq_env_files_and_dirs"></a>HAWQ Files and Directories
-
-The following table identifies some files and directories of interest in a default HAWQ installation.  Unless otherwise specified, the table entries are relative to `$GPHOME`.
-
-|File/Directory                   | Contents           |
-|---------------------------------|---------------------|
-| $HOME/hawqAdminLogs/            | Default HAWQ management utility log file directory |
-| greenplum_path.sh      | HAWQ environment set-up script |
-| bin/      | HAWQ admin, client, database, and administration utilities |
-| etc/              | HAWQ configuration files, including `hawq-site.xml` |
-| include/          | HDFS, PostgreSQL, `libpq` header files  |
-| lib/              | HAWQ libraries |
-| lib/postgresql/   | PostgreSQL shared libraries and JAR files |
-| share/postgresql/ | PostgreSQL and procedural languages samples and scripts    |
-| /data/hawq/[master&#124;segment]/ | Default location of HAWQ master and segment data directories |
-| /data/hawq/[master&#124;segment]/pg_log/ | Default location of HAWQ master and segment log file directories |
-| /etc/pxf/conf/               | PXF service and configuration files |
-| /usr/lib/pxf/                | PXF service and plug-in shared libraries  |
-| /usr/hdp/current/            | HDP runtime and configuration files |

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/startstop.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/startstop.html.md.erb b/admin/startstop.html.md.erb
deleted file mode 100644
index 7aac723..0000000
--- a/admin/startstop.html.md.erb
+++ /dev/null
@@ -1,146 +0,0 @@
----
-title: Starting and Stopping HAWQ
----
-
-In a HAWQ DBMS, the database server instances \(the master and all segments\) are started or stopped across all of the hosts in the system in such a way that they can work together as a unified DBMS.
-
-Because a HAWQ system is distributed across many machines, the process for starting and stopping a HAWQ system is different than the process for starting and stopping a regular PostgreSQL DBMS.
-
-Use the `hawq start <object>` and `hawq stop <object>` commands to start and stop HAWQ, respectively. These management tools are located in the `$GPHOME/bin` directory on your HAWQ master host.
-
-Initializing a HAWQ system also starts the system.
-
-**Important:**
-
-Do not issue a `kill` command to end any Postgres process. Instead, use the database function `pg_cancel_backend()`.
-
-For information about [hawq start](../reference/cli/admin_utilities/hawqstart.html) and [hawq stop](../reference/cli/admin_utilities/hawqstop.html), see the appropriate pages in the HAWQ Management Utility Reference or enter `hawq start -h` or `hawq stop -h` on the command line.
-
-
-## <a id="task_hkd_gzv_fp"></a>Starting HAWQ 
-
-When a HAWQ system is first initialized, it is also started. For more information about initializing HAWQ, see [hawq init](../reference/cli/admin_utilities/hawqinit.html). 
-
-To start a stopped HAWQ system that was previously initialized, run the `hawq start` command on the master instance.
-
-You can also use the `hawq start master` command to start only the HAWQ master, without segment nodes, and then add the segments later using `hawq start segment`. If you want HAWQ to ignore hosts that fail ssh validation, use the `--ignore-bad-hosts` option of `hawq start`.
-
-Use the `hawq start cluster` command to start a HAWQ system that has already been initialized by the `hawq init cluster` command, but has been stopped by the `hawq stop cluster` command. The `hawq start cluster` command starts a HAWQ system on the master host and starts all its segments. The command orchestrates this process and performs the process in parallel.
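-
-For example (a sketch; run `hawq start segment` on the segment host itself, and the other commands on the master host):
-
-```shell
-$ hawq start cluster                     # start the master and all segments
-$ hawq start master                      # start only the master
-$ hawq start segment                     # on a segment host: start the local segment
-$ hawq start cluster --ignore-bad-hosts  # skip hosts that fail ssh validation
-```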
-
-
-## <a id="task_gpdb_restart"></a>Restarting HAWQ 
-
-Stop the HAWQ system and then restart it.
-
-The `hawq restart` command with the appropriate `cluster` or node-type option will stop and then restart HAWQ after the shutdown completes. If the master or segments are already stopped, restart will have no effect.
-
--   To restart a HAWQ cluster, enter the following command on the master host:
-
-    ```shell
-    $ hawq restart cluster
-    ```
-
-
-## <a id="task_upload_config"></a>Reloading Configuration File Changes Only 
-
-Reload changes to the HAWQ configuration files without interrupting the system.
-
-The `hawq stop` command can reload changes to the `pg_hba.conf` configuration file and to *runtime* parameters in the `hawq-site.xml` and `pg_hba.conf` files without service interruption. Active sessions pick up changes when they reconnect to the database. Many server configuration parameters require a full system restart \(`hawq restart cluster`\) to activate. For information about server configuration parameters, see the [Server Configuration Parameter Reference](../reference/guc/guc_config.html).
-
--   Reload configuration file changes without shutting down the system using the `hawq stop` command:
-
-    ```shell
-    $ hawq stop cluster --reload
-    ```
-    
-    Or:
-
-    ```shell
-    $ hawq stop cluster -u
-    ```
-    
-
-## <a id="task_maint_mode"></a>Starting the Master in Maintenance Mode 
-
-Start only the master to perform maintenance or administrative tasks without affecting data on the segments.
-
-Maintenance mode is a superuser-only mode that should only be used when required for a particular maintenance task. For example, you can connect to a database only on the master instance in maintenance mode and edit system catalog settings.
-
-1.  Run `hawq start` on the `master` using the `-m` option:
-
-    ```shell
-    $ hawq start master -m
-    ```
-
-2.  Connect to the master in maintenance mode to do catalog maintenance. For example:
-
-    ```shell
-    $ PGOPTIONS='-c gp_session_role=utility' psql template1
-    ```
-3.  After completing your administrative tasks, restart the master in production mode. 
-
-    ```shell
-    $ hawq restart master 
-    ```
-
-    **Warning:**
-
-    Incorrect use of maintenance mode connections can result in an inconsistent HAWQ system state. Only expert users should perform this operation.
-
-
-## <a id="task_gpdb_stop"></a>Stopping HAWQ 
-
-The `hawq stop cluster` command stops or restarts your HAWQ system and always runs on the master host. When activated, `hawq stop cluster` stops all `postgres` processes in the system, including the master and all segment instances. The `hawq stop cluster` command uses a default of up to 64 parallel worker threads to bring down the segments that make up the HAWQ cluster. The system waits for any active transactions to finish before shutting down. To stop HAWQ immediately, use fast mode. The commands `hawq stop master`, `hawq stop segment`, `hawq stop standby`, or `hawq stop allsegments` can be used to stop the master, the local segment node, standby, or all segments in the cluster. Stopping the master will stop only the master segment, and will not shut down a cluster.
-
--   To stop HAWQ:
-
-    ```shell
-    $ hawq stop cluster
-    ```
-
--   To stop HAWQ in fast mode:
-
-    ```shell
-    $ hawq stop cluster -M fast
-    ```
-
-
-## <a id="task_tx4_bl3_h5"></a>Best Practices to Start/Stop HAWQ Cluster Members 
-
-For best results in using `hawq start` and `hawq stop` to manage your HAWQ system, the following best practices are recommended.
-
--   Issue the `CHECKPOINT` command to update and flush all data files to disk and update the log file before stopping the cluster. A checkpoint ensures that, in the event of a crash, files can be restored from the checkpoint snapshot.
-
--   Stop the entire HAWQ system by stopping the cluster on the master host. 
-
-    ```shell
-    $ hawq stop cluster
-    ```
-
--   To stop segments and kill any running queries without causing data loss or inconsistency issues, use `fast` or `immediate` mode on the cluster:
-
-    ```shell
-    $ hawq stop cluster -M fast
-    $ hawq stop cluster -M immediate
-    ```
-
--   Use `hawq stop master` to stop the master only. If you cannot stop the master due to running transactions, try using `fast` shutdown. If `fast` shutdown does not work, use `immediate` shutdown. Use `immediate` shutdown with caution, as it will result in a crash-recovery run when the system is restarted.
-
-	```shell
-    $ hawq stop master -M fast
-    $ hawq stop master -M immediate
-    ```
--   If you have changed or want to reload server parameter settings on a HAWQ database where there are active connections, use the command:
-
-
-	```shell
-    $ hawq stop master -u -M fast 
-    ```   
-
--   When stopping a segment or all segments, use `smart` mode, which is the default. Using `fast` or `immediate` mode on segments will have no effect since segments are stateless.
-
-    ```shell
-    $ hawq stop segment
-    $ hawq stop allsegments
-    ```
--   You should typically use `hawq start cluster` or `hawq restart cluster` to start the cluster, as sketched below. If you do end up starting nodes individually with `hawq start standby|master|segment`, make sure to always start the standby *before* the active master. Otherwise, the standby can become unsynchronized with the active master.
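-
-    A minimal sketch of that ordering:
-
-    ```shell
-    $ hawq start standby
-    $ hawq start master
-    ```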

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/bestpractices/HAWQBestPracticesOverview.html.md.erb
----------------------------------------------------------------------
diff --git a/bestpractices/HAWQBestPracticesOverview.html.md.erb b/bestpractices/HAWQBestPracticesOverview.html.md.erb
deleted file mode 100644
index 13b4dca..0000000
--- a/bestpractices/HAWQBestPracticesOverview.html.md.erb
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: Best Practices
----
-
-This chapter provides best practices on using the components and features that are part of a HAWQ system.
-
-
--   **[Best Practices for Operating HAWQ](../bestpractices/operating_hawq_bestpractices.html)**
-
-    This topic provides best practices for operating HAWQ, including recommendations for stopping, starting and monitoring HAWQ.
-
--   **[Best Practices for Securing HAWQ](../bestpractices/secure_bestpractices.html)**
-
-    To secure your HAWQ deployment, review the recommendations listed in this topic.
-
--   **[Best Practices for Managing Resources](../bestpractices/managing_resources_bestpractices.html)**
-
-    This topic describes best practices for managing resources in HAWQ.
-
--   **[Best Practices for Managing Data](../bestpractices/managing_data_bestpractices.html)**
-
-    This topic describes best practices for creating databases, loading data, partitioning data, and recovering data in HAWQ.
-
--   **[Best Practices for Querying Data](../bestpractices/querying_data_bestpractices.html)**
-
-    To obtain the best results when querying data in HAWQ, review the best practices described in this topic.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/bestpractices/general_bestpractices.html.md.erb
----------------------------------------------------------------------
diff --git a/bestpractices/general_bestpractices.html.md.erb b/bestpractices/general_bestpractices.html.md.erb
deleted file mode 100644
index 503887b..0000000
--- a/bestpractices/general_bestpractices.html.md.erb
+++ /dev/null
@@ -1,26 +0,0 @@
----
-title: HAWQ Best Practices
----
-
-This topic addresses general best practices for users who are new to HAWQ.
-
-When using HAWQ, adhere to the following guidelines for best results:
-
--   **Use a consistent `hawq-site.xml` file to configure your entire cluster**:
-
-    HAWQ configuration parameters (GUCs) are located in `$GPHOME/etc/hawq-site.xml`. This configuration file resides on all HAWQ instances and can be modified by using the `hawq config` utility (see the example following this list). You can use the same configuration file cluster-wide across both master and segments.
-    
-    If you install and manage HAWQ using Ambari, do not use `hawq config` to set or change HAWQ configuration properties. Use the Ambari interface for all configuration changes. Configuration changes to `hawq-site.xml` made outside the Ambari interface will be overwritten when you restart or reconfigure  HAWQ using Ambari.
-
-    **Note:** While `postgresql.conf` still exists in HAWQ, any parameters defined in `hawq-site.xml` will overwrite configurations in `postgresql.conf`. For this reason, we recommend that you only use `hawq-site.xml` to configure your HAWQ cluster.
-
--   **Keep in mind the factors that impact the number of virtual segments used for queries. The number of virtual segments used directly impacts the query's performance.** The degree of parallelism achieved by a query is determined by multiple factors, including the following:
-    -   **Cost of the query**. Small queries use fewer segments and larger queries use more segments. Note that there are some techniques you can use when defining resource queues to influence the number of virtual segments and general resources that are allocated to queries. See [Best Practices for Using Resource Queues](managing_resources_bestpractices.html#topic_hvd_pls_wv).
-    -   **Available resources**. Resources available at query time. If more resources are available in the resource queue, the resources will be used.
-    -   **Hash table and bucket number**. If the query involves hash-distributed tables, the bucket number (`bucketnum`) configured for all of the hash tables is the same, and the size of any randomly distributed table in the query is no more than 1.5 times larger than the size of the hash-distributed tables, then the query's parallelism is fixed (equal to the hash table bucket number). Otherwise, the number of virtual segments depends on the query's cost, and hash-distributed table queries will behave like queries on randomly distributed tables.
-    -   **Query Type**: For queries that include user-defined functions, or for external tables where calculating resource costs is difficult, the number of virtual segments is controlled by the `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit` parameters, as well as by the ON clause and the location list of external tables. If the query writes to a hash-distributed result table (for example, `INSERT INTO hash_table`), the number of virtual segments must be equal to the bucket number of the resulting hash table. If the query is performed in utility mode, such as for `COPY` and `ANALYZE` operations, the virtual segment number is calculated by different policies.
-    -   **PXF**: PXF external tables use the `default_hash_table_bucket_number` parameter, not the `hawq_rm_nvseg_perquery_perseg_limit` parameter, to control the number of virtual segments. 
-
-    See [Query Performance](../query/query-performance.html#topic38) for more details.
-
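-If you manage HAWQ from the command line rather than through Ambari, a minimal sketch of inspecting and changing a property with `hawq config` (the property name and value shown are examples only):
-
-``` shell
-$ hawq config -s hawq_rm_nvseg_perquery_perseg_limit       # show the current value
-$ hawq config -c hawq_rm_nvseg_perquery_perseg_limit -v 8  # change the value
-$ hawq stop cluster -u                                     # reload runtime parameters; some changes require a full restart
-```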
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/bestpractices/managing_data_bestpractices.html.md.erb
----------------------------------------------------------------------
diff --git a/bestpractices/managing_data_bestpractices.html.md.erb b/bestpractices/managing_data_bestpractices.html.md.erb
deleted file mode 100644
index 11d6e02..0000000
--- a/bestpractices/managing_data_bestpractices.html.md.erb
+++ /dev/null
@@ -1,47 +0,0 @@
----
-title: Best Practices for Managing Data
----
-
-This topic describes best practices for creating databases, loading data, partitioning data, and recovering data in HAWQ.
-
-## <a id="topic_xhy_v2j_1v"></a>Best Practices for Loading Data
-
-Loading data into HDFS is challenging due to the limit on the number of files that can be opened concurrently for write on both NameNodes and DataNodes.
-
-To obtain the best performance during data loading, observe the following best practices:
-
--   Typically the number of concurrent connections to a NameNode should not exceed 50,000, and the number of open files per DataNode should not exceed 10,000. If you exceed these limits, NameNode and DataNode may become overloaded and slow.
--   If the number of partitions in a table is large, the recommended way to load data into the partitioned table is to load the data partition by partition. For example, you can use a query such as the following to load data into only one partition:
-
-    ```sql
-    INSERT INTO target_partitioned_table_part1 SELECT * FROM source_table WHERE filter
-    ```
-
-    where *filter* selects only the data in the target partition.
-
--   To alleviate the load on the NameNode, you can reduce the number of virtual segments used per node. You can do this at the statement level or at the resource queue level. See [Configuring the Maximum Number of Virtual Segments](../resourcemgmt/ConfigureResourceManagement.html#topic_tl5_wq1_f5) for more information.
--   Use resource queues to limit load query and read query concurrency.
-
-The best practice for loading data into partitioned tables is to create an intermediate staging table, load it, and then exchange it into your partition design. See [Exchanging a Partition](../ddl/ddl-partition.html#topic83).
-
-## <a id="topic_s23_52j_1v"></a>Best Practices for Partitioning Data
-
-### <a id="topic65"></a>Deciding on a Table Partitioning Strategy
-
-Not all tables are good candidates for partitioning. If the answer is *yes* to all or most of the following questions, table partitioning is a viable database design strategy for improving query performance. If the answer is *no* to most of the following questions, table partitioning is not the right solution for that table. Test your design strategy to ensure that query performance improves as expected.
-
--   **Is the table large enough?** Large fact tables are good candidates for table partitioning. If you have millions or billions of records in a table, you may see performance benefits from logically breaking that data up into smaller chunks. For smaller tables with only a few thousand rows or less, the administrative overhead of maintaining the partitions will outweigh any performance benefits you might see.
--   **Are you experiencing unsatisfactory performance?** As with any performance tuning initiative, a table should be partitioned only if queries against that table are producing slower response times than desired.
--   **Do your query predicates have identifiable access patterns?** Examine the `WHERE` clauses of your query workload and look for table columns that are consistently used to access data. For example, if most of your queries tend to look up records by date, then a monthly or weekly date-partitioning design might be beneficial. Or if you tend to access records by region, consider a list-partitioning design to divide the table by region.
--   **Does your data warehouse maintain a window of historical data?** Another consideration for partition design is your organization's business requirements for maintaining historical data. For example, your data warehouse may require that you keep data for the past twelve months. If the data is partitioned by month, you can easily drop the oldest monthly partition from the warehouse and load current data into the most recent monthly partition.
--   **Can the data be divided into somewhat equal parts based on some defining criteria?** Choose partitioning criteria that will divide your data as evenly as possible. If the partitions contain a relatively equal number of records, query performance improves based on the number of partitions created. For example, by dividing a large table into 10 partitions, a query can execute up to 10 times faster than it would against the unpartitioned table, provided that the partitions are designed to support the query's criteria.
-
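-A minimal sketch of a monthly date-partitioning design (the table and column names are illustrative):
-
-```sql
-CREATE TABLE sales (id int, sale_date date, amt decimal(10,2))
-DISTRIBUTED RANDOMLY
-PARTITION BY RANGE (sale_date)
-( START (date '2016-01-01') INCLUSIVE
-  END (date '2017-01-01') EXCLUSIVE
-  EVERY (INTERVAL '1 month') );
-```
-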
-Do not create more partitions than are needed. Creating too many partitions can slow down management and maintenance jobs, such as vacuuming, recovering segments, expanding the cluster, checking disk usage, and others.
-
-Partitioning does not improve query performance unless the query optimizer can eliminate partitions based on the query predicates. Queries that scan every partition run slower than if the table were not partitioned, so avoid partitioning if few of your queries achieve partition elimination. Check the explain plan for queries to make sure that partitions are eliminated. See [Query Profiling](../query/query-profiling.html#topic39) for more about partition elimination.
-
-Be very careful with multi-level partitioning because the number of partition files can grow very quickly. For example, if a table is partitioned by both day and city, and there are 1,000 days of data and 1,000 cities, the total number of partitions is one million. Column-oriented tables store each column in a physical table, so if this table has 100 columns, the system would be required to manage 100 million files for the table.
-
-Before settling on a multi-level partitioning strategy, consider a single level partition with bitmap indexes. Indexes slow down data loads, so consider performance testing with your data and schema to decide on the best strategy.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/bestpractices/managing_resources_bestpractices.html.md.erb
----------------------------------------------------------------------
diff --git a/bestpractices/managing_resources_bestpractices.html.md.erb b/bestpractices/managing_resources_bestpractices.html.md.erb
deleted file mode 100644
index f770611..0000000
--- a/bestpractices/managing_resources_bestpractices.html.md.erb
+++ /dev/null
@@ -1,144 +0,0 @@
----
-title: Best Practices for Managing Resources
----
-
-This topic describes best practices for managing resources in HAWQ.
-
-## <a id="topic_ikz_ndx_15"></a>Best Practices for Configuring Resource Management
-
-When configuring resource management, you can apply certain best practices to ensure that resources are managed both efficiently and for best system performance.
-
-The following is a list of high-level best practices for optimal resource management:
-
--   Make sure segments do not have identical IP addresses. See [Segments Do Not Appear in gp\_segment\_configuration](../troubleshooting/Troubleshooting.html#topic_hlj_zxx_15) for an explanation of this problem.
--   Configure all segments to have the same resource capacity. See [Configuring Segment Resource Capacity](../resourcemgmt/ConfigureResourceManagement.html#topic_htk_fxh_15).
--   To prevent resource fragmentation, ensure that your deployment's segment resource capacity (standalone mode) or YARN node resource capacity (YARN mode) is a multiple of all virtual segment resource quotas. See [Configuring Segment Resource Capacity](../resourcemgmt/ConfigureResourceManagement.html#topic_htk_fxh_15) (HAWQ standalone mode) and [Setting HAWQ Segment Resource Capacity in YARN](../resourcemgmt/YARNIntegration.html#topic_pzf_kqn_c5).
--   Ensure that enough registered segments are available and usable for query resource requests. If the number of unavailable or unregistered segments is higher than a set limit, then query resource requests are rejected. Also ensure that the variance of dispatched virtual segments across physical segments is not greater than the configured limit. See [Rejection of Query Resource Requests](../troubleshooting/Troubleshooting.html#topic_vm5_znx_15).
--   Use multiple master and segment temporary directories on separate, large disks (2TB or greater) to load balance writes to temporary files (for example, `/disk1/tmp /disk2/tmp`). For a given query, HAWQ will use a separate temp directory (if available) for each virtual segment to store spill files. Multiple HAWQ sessions will also use separate temp directories where available to avoid disk contention. If you configure too few temp directories, or you place multiple temp directories on the same disk, you increase the risk of disk contention or running out of disk space when multiple virtual segments target the same disk.
--   Configure minimum resource levels in YARN, and tune the timeout of when idle resources are returned to YARN. See [Tune HAWQ Resource Negotiations with YARN](../resourcemgmt/YARNIntegration.html#topic_wp3_4bx_15).
--   Make sure that the property `yarn.scheduler.minimum-allocation-mb` in `yarn-site.xml` divides evenly into 1 GB, for example 1024 or 512.
-
-## <a id="topic_hvd_pls_wv"></a>Best Practices for Using Resource Queues
-
-Design and configure your resource queues depending on the operational needs of your deployment. This topic describes the best practices for creating and modifying resource queues within the context of different operational scenarios.
-
-### Modifying Resource Queues for Overloaded HDFS
-
-A high number of concurrent HAWQ queries can cause HDFS to overload, especially when querying partitioned tables. Use the `ACTIVE_STATEMENTS` attribute to restrict statement concurrency in a resource queue. For example, if an external application is executing more than 100 concurrent queries, then limiting the number of active statements in your resource queues will instruct the HAWQ resource manager to restrict actual statement concurrency within HAWQ. You might want to modify an existing resource queue as follows:
-
-```sql
-ALTER RESOURCE QUEUE sampleque1 WITH (ACTIVE_STATEMENTS=20);
-```
-
-In this case, when this DDL is applied to queue `sampleque1`, the roles using this queue will have to wait until no more than 20 statements are running to execute their queries. Therefore, 80 queries will be waiting in the queue for later execution. Restricting the number of active query statements helps limit the usage of HDFS resources and protects HDFS. You can alter concurrency even when the resource queue is busy. For example, if a queue already has 40 concurrent statements running, and you apply a DDL statement that specifies `ACTIVE_STATEMENTS=20`, then the resource queue pauses the allocation of resources to queries until more than 20 statements have returned their resources.
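-
-To observe the effect of such a change on a busy queue, you can query the resource queue status view; this is a sketch and assumes the `pg_resqueue_status` view is available in your release:
-
-```sql
--- Inspect current queue usage and waiting statements
-SELECT * FROM pg_resqueue_status;
-```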
-
-### Isolating and Protecting Production Workloads
-
-Another best practice is using resource queues to isolate your workloads. Workload isolation prevents your production workload from being starved of resources. To create this isolation, divide your workload by creating roles for specific purposes. For example, you could create one role for production online verification and another role for the regular running of production processes.
-
-In this scenario, let us assign `role1` for the production workload and `role2` for production software verification. We can define the following resource queues under the same parent queue `dept1que`, which is the resource queue defined for the entire department.
-
-```sql
-CREATE RESOURCE QUEUE dept1product
-   WITH (PARENT='dept1que', MEMORY_LIMIT_CLUSTER=90%, CORE_LIMIT_CLUSTER=90%, RESOURCE_OVERCOMMIT_FACTOR=2);
-
-CREATE RESOURCE QUEUE dept1verification 
-   WITH (PARENT='dept1que', MEMORY_LIMIT_CLUSTER=10%, CORE_LIMIT_CLUSTER=10%, RESOURCE_OVERCOMMIT_FACTOR=10);
-
-ALTER ROLE role1 RESOURCE QUEUE dept1product;
-
-ALTER ROLE role2 RESOURCE QUEUE dept1verification;
-```
-
-With these resource queues defined, workload is spread across the resource queues as follows:
-
--   When both `role1` and `role2` have workloads, the test verification workload gets only 10% of the total available `dept1que` resources, leaving 90% of the `dept1que` resources available for running the production workload.
--   When `role1` has a workload but `role2` is idle, then 100% of all `dept1que` resources can be consumed by the production workload.
--   When only `role2` has a workload (for example, during a scheduled testing window), then 100% of all `dept1que` resources can also be utilized for testing.
-
-Even when the resource queues are busy, you can alter the resource queue's memory and core limits to change resource allocation policies before switching workloads.
-
-In addition, you can use resource queues to isolate workloads for different departments or different applications. For example, we can use the following DDL statements to define 3 departments, and an administrator can arbitrarily redistribute resource allocations among the departments according to usage requirements.
-
-```sql
-ALTER RESOURCE QUEUE pg_default 
-   WITH (MEMORY_LIMIT_CLUSTER=10%, CORE_LIMIT_CLUSTER=10%);
-
-CREATE RESOURCE QUEUE dept1 
-   WITH (PARENT='pg_root', MEMORY_LIMIT_CLUSTER=30%, CORE_LIMIT_CLUSTER=30%);
-
-CREATE RESOURCE QUEUE dept2 
-   WITH (PARENT='pg_root', MEMORY_LIMIT_CLUSTER=30%, CORE_LIMIT_CLUSTER=30%);
-
-CREATE RESOURCE QUEUE dept3 
-   WITH (PARENT='pg_root', MEMORY_LIMIT_CLUSTER=30%, CORE_LIMIT_CLUSTER=30%);
-
-CREATE RESOURCE QUEUE dept11
-   WITH (PARENT='dept1', MEMORY_LIMIT_CLUSTER=50%,CORE_LIMIT_CLUSTER=50%);
-
-CREATE RESOURCE QUEUE dept12
-   WITH (PARENT='dept1', MEMORY_LIMIT_CLUSTER=50%, CORE_LIMIT_CLUSTER=50%);
-```
-
-### Querying Parquet Tables with Large Table Size
-
-You can use resource queues to improve query performance on Parquet tables with a large page size. This type of query requires a large memory quota for virtual segments. Therefore, if one role mostly queries Parquet tables with a large page size, alter the resource queue associated with the role to increase its virtual segment resource quota. For example:
-
-```sql
-ALTER RESOURCE QUEUE queue1 WITH (VSEG_RESOURCE_QUOTA='mem:2gb');
-```
-
-If there are only occasional queries on Parquet tables with a large page size, use a statement-level specification instead of altering the resource queue. For example:
-
-```sql
-SET HAWQ_RM_STMT_NVSEG=10;
-SET HAWQ_RM_STMT_VSEG_MEMORY='2gb';
-query1;
-SET HAWQ_RM_STMT_NVSEG=0;
-```
-
-### Restricting Resource Consumption for Specific Queries
-
-In general, the HAWQ resource manager attempts to provide as many resources as possible to the current query to achieve high query performance. When a query is complex and large, however, the associated resource queue can use up many virtual segments, causing other resource queues (and their queries) to starve. Under these circumstances, you should enable virtual segment (nvseg) limits on the resource queue associated with the large query. For example, you can specify that all queries can use no more than 200 virtual segments. To achieve this limit, alter the resource queue as follows:
-
-``` sql
-ALTER RESOURCE QUEUE queue1 WITH (NVSEG_UPPER_LIMIT=200);
-```
-
-If you want this limit to vary with the cluster size, use the following statement instead:
-
-```sql
-ALTER RESOURCE QUEUE queue1 WITH (NVSEG_UPPER_LIMIT_PERSEG=10);
-```
-
-After setting the limit in the above example, the actual limit will be 100 if you have a 10-node cluster. If the cluster is expanded to 20 nodes, then the limit increases automatically to 200.
-
-### Guaranteeing Resource Allocations for Individual Statements
-
-In general, the minimum number of virtual segments allocated to a statement is decided by the resource queue's actual capacity and its concurrency setting. For example, if there are 10 nodes in a cluster and the total resource capacity of the cluster is 640GB and 160 cores, then a resource queue having 20% capacity has a capacity of 128GB (640GB \* .20) and 32 cores (160 \*.20). If the virtual segment quota is set to 256MB, then this queue has 512 virtual segments allocated (128GB/256MB=512). If the `ACTIVE_STATEMENTS` concurrency setting for the resource queue is 20, then the minimum number of allocated virtual segments for each query is **25** (*trunc*(512/20)=25). However, this minimum number of virtual segments is a soft restriction. If a query statement requires only 5 virtual segments, then this minimum number of 25 is ignored since it is not necessary to allocate 25 for this statement.
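-
-The arithmetic from this example can be reproduced directly; the figures below are the hypothetical 640GB cluster, 20% queue capacity, 256MB virtual segment quota, and 20 active statements used above:
-
-```sql
-SELECT trunc(((640 * 1024 * 0.20) / 256) / 20);  -- returns 25
-```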
-
-To raise the minimum number of virtual segments available for a query statement, you have two options:
-
--   *Option 1*: Alter the resource queue to reduce concurrency. This is the recommended way to achieve the goal. For example:
-
-    ```sql
-    ALTER RESOURCE QUEUE queue1 WITH (ACTIVE_STATEMENTS=10);
-    ```
-
-    If the original concurrency setting is 20, then the minimum number of virtual segments is doubled.
-
--   *Option 2*: Alter the nvseg limits of the resource queue. For example:
-
-    ```sql
-    ALTER RESOURCE QUEUE queue1 WITH (NVSEG_LOWER_LIMIT=50);
-    ```
-
-    or, alternately:
-
-    ```sql
-    ALTER RESOURCE QUEUE queue1 WITH (NVSEG_LOWER_LIMIT_PERSEG=5);
-    ```
-
-    In the second DDL, if there are 10 nodes in the cluster, the actual minimum number of virtual segments is 50 (5 \* 10 = 50).
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/bestpractices/operating_hawq_bestpractices.html.md.erb
----------------------------------------------------------------------
diff --git a/bestpractices/operating_hawq_bestpractices.html.md.erb b/bestpractices/operating_hawq_bestpractices.html.md.erb
deleted file mode 100644
index 9dc56e9..0000000
--- a/bestpractices/operating_hawq_bestpractices.html.md.erb
+++ /dev/null
@@ -1,298 +0,0 @@
----
-title: Best Practices for Operating HAWQ
----
-
-This topic provides best practices for operating HAWQ, including recommendations for stopping, starting and monitoring HAWQ.
-
-## <a id="best_practice_config"></a>Best Practices for Configuring HAWQ Parameters
-
-The HAWQ configuration parameters (GUCs) are located in `$GPHOME/etc/hawq-site.xml`. This configuration file resides on all HAWQ instances and can be modified either through the Ambari interface or from the command line. 
-
-If you install and manage HAWQ using Ambari, use the Ambari interface for all configuration changes. Do not use command line utilities such as `hawq config` to set or change HAWQ configuration properties for Ambari-managed clusters. Configuration changes to `hawq-site.xml` made outside the Ambari interface will be overwritten when you restart or reconfigure HAWQ using Ambari.
-
-If you manage your cluster using command line tools instead of Ambari, use a consistent `hawq-site.xml` file to configure your entire cluster. 
-
-**Note:** While `postgresql.conf` still exists in HAWQ, any parameters defined in `hawq-site.xml` will overwrite configurations in `postgresql.conf`. For this reason, we recommend that you only use `hawq-site.xml` to configure your HAWQ cluster. For Ambari clusters, always use Ambari for configuring `hawq-site.xml` parameters.
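-
-Regardless of which file supplies a setting, you can check the value in effect for your session from `psql` with the standard `SHOW` command, for example:
-
-```sql
-SHOW default_hash_table_bucket_number;
-```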
-
-## <a id="task_qgk_bz3_1v"></a>Best Practices to Start/Stop HAWQ Cluster Members
-
-For best results in using `hawq start` and `hawq stop` to manage your HAWQ system, the following best practices are recommended.
-
--   Issue the `CHECKPOINT` command to update and flush all data files to disk and update the log file before stopping the cluster. A checkpoint ensures that, in the event of a crash, files can be restored from the checkpoint snapshot.
--   Stop the entire HAWQ system by stopping the cluster on the master host:
-    ```shell
-    $ hawq stop cluster
-    ```
-
--   To stop segments and kill any running queries without causing data loss or inconsistency issues, use `fast` or `immediate` mode on the cluster:
-
-    ```shell
-    $ hawq stop cluster -M fast
-    ```
-    ```shell
-    $ hawq stop cluster -M immediate
-    ```
-
--   Use `hawq stop master` to stop the master only. If you cannot stop the master due to running transactions, try using fast shutdown. If fast shutdown does not work, use immediate shutdown. Use immediate shutdown with caution, as it will result in a crash-recovery run when the system is restarted. 
-
-    ```shell
-    $ hawq stop master -M fast
-    ```
-    ```shell
-    $ hawq stop master -M immediate
-    ```
-
--   When stopping a segment or all segments, you can use the default smart mode. Using fast or immediate mode on segments has no effect because segments are stateless.
-
-    ```shell
-    $ hawq stop segment
-    ```
-    ```shell
-    $ hawq stop allsegments
-    ```
-
--   Typically, you should use `hawq start cluster` or `hawq restart cluster` to start the cluster. If you do end up using `hawq start standby|master|segment` to start nodes individually, make sure you always start the standby before the active master. Otherwise, the standby can become unsynchronized with the active master.
-
-## <a id="id_trr_m1j_1v"></a>Guidelines for Cluster Expansion
-
-This topic provides some guidelines around expanding your HAWQ cluster.
-
-There are several recommendations to keep in mind when modifying the size of your running HAWQ cluster:
-
--   When you add a new node, install both a DataNode and a physical segment on the new node.
--   After adding a new node, you should always rebalance HDFS data to maintain cluster performance.
--   Adding or removing a node also necessitates an update to the HDFS metadata cache. This update will happen eventually, but can take some time. To speed the update of the metadata cache, execute **`select gp_metadata_cache_clear();`**.
--   Note that for hash distributed tables, expanding the cluster will not immediately improve performance, since hash distributed tables use a fixed number of virtual segments. To obtain better performance with hash distributed tables, you must redistribute the table across the updated cluster using either the [ALTER TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE AS](../reference/sql/CREATE-TABLE-AS.html#topic1) command; a sketch follows this list.
--   If you are using hash tables, consider updating the `default_hash_table_bucket_number` server configuration parameter to a larger value after expanding the cluster but before redistributing the hash tables.
-
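-A sketch of redistributing a hash-distributed table with `CREATE TABLE AS` after an expansion; the table name, distribution column, and `bucketnum` value are illustrative, and the bucket number should be chosen to suit the expanded cluster:
-
-```sql
-CREATE TABLE sales_redistributed WITH (bucketnum=32) AS
-  SELECT * FROM sales
-  DISTRIBUTED BY (id);
-```
-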
-## <a id="id_o5n_p1j_1v"></a>Database State Monitoring Activities
-
-<a id="id_o5n_p1j_1v__d112e31"></a>
-
-<table>
-<caption><span class="tablecap">Table 1. Database State Monitoring Activities</span></caption>
-<colgroup>
-<col width="33%" />
-<col width="33%" />
-<col width="33%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th>Activity</th>
-<th>Procedure</th>
-<th>Corrective Actions</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td>List segments that are currently down. If any rows are returned, this should generate a warning or alert.
-<p>Recommended frequency: run every 5 to 10 minutes</p>
-<p>Severity: IMPORTANT</p></td>
-<td>Run the following query in the <code class="ph codeph">postgres</code> database:
-<pre class="pre codeblock"><code>SELECT * FROM gp_segment_configuration
-WHERE status &lt;&gt; &#39;u&#39;;</code></pre></td>
-<td>If the query returns any rows, follow these steps to correct the problem:
-<ol>
-<li>Verify that the hosts with down segments are responsive.</li>
-<li>If hosts are OK, check the <span class="ph filepath">pg_log</span> files for the down segments to discover the root cause of the segments going down.</li>
-</ol></td>
-</tr>
-</tbody>
-</table>
-
-
-## <a id="id_d3w_p1j_1v"></a>Hardware and Operating System Monitoring
-
-<a id="id_d3w_p1j_1v__d112e111"></a>
-
-<table>
-<caption><span class="tablecap">Table 2. Hardware and Operating System Monitoring Activities</span></caption>
-<colgroup>
-<col width="33%" />
-<col width="33%" />
-<col width="33%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th>Activity</th>
-<th>Procedure</th>
-<th>Corrective Actions</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td>Check the underlying platform for required maintenance or for hardware that is down.
-<p>Recommended frequency: real-time, if possible, or every 15 minutes</p>
-<p>Severity: CRITICAL</p></td>
-<td>Set up system check for hardware and OS errors.</td>
-<td>If required, remove a machine from the HAWQ cluster to resolve hardware and OS issues, then add it back to the cluster after the issues are resolved.</td>
-</tr>
-<tr class="even">
-<td>Check disk space usage on volumes used for HAWQ data storage and the OS.
-<p>Recommended frequency: every 5 to 30 minutes</p>
-<p>Severity: CRITICAL</p></td>
-<td><div class="p">
-Set up a disk space check.
-<ul>
-<li>Set a threshold to raise an alert when a disk reaches a percentage of capacity. The recommended threshold is 75% full.</li>
-<li>It is not recommended to run the system with capacities approaching 100%.</li>
-</ul>
-</div></td>
-<td>Free space on the system by removing some data or files.</td>
-</tr>
-<tr class="odd">
-<td>Check for errors or dropped packets on the network interfaces.
-<p>Recommended frequency: hourly</p>
-<p>Severity: IMPORTANT</p></td>
-<td>Set up network interface checks.</td>
-<td><p>Work with network and OS teams to resolve errors.</p></td>
-</tr>
-<tr class="even">
-<td>Check for RAID errors or degraded RAID performance.
-<p>Recommended frequency: every 5 minutes</p>
-<p>Severity: CRITICAL</p></td>
-<td>Set up a RAID check.</td>
-<td><ul>
-<li>Replace failed disks as soon as possible.</li>
-<li>Work with system administration team to resolve other RAID or controller errors as soon as possible.</li>
-</ul></td>
-</tr>
-<tr class="odd">
-<td>Check for adequate I/O bandwidth and I/O skew.
-<p>Recommended frequency: when creating a cluster or when hardware issues are suspected.</p></td>
-<td>Run the <code class="ph codeph">hawq checkperf</code> utility.</td>
-<td><div class="p">
-The cluster may be under-specified if data transfer rates are not similar to the following:
-<ul>
-<li>2GB per second disk read</li>
-<li>1 GB per second disk write</li>
-<li>10 Gigabit per second network read and write</li>
-</ul>
-If transfer rates are lower than expected, consult with your data architect regarding performance expectations.
-</div>
-<p>If the machines on the cluster display an uneven performance profile, work with the system administration team to fix faulty machines.</p></td>
-</tr>
-</tbody>
-</table>
-
-
-## <a id="id_khd_q1j_1v"></a>Data Maintenance
-
-<a id="id_khd_q1j_1v__d112e279"></a>
-
-<table>
-<caption><span class="tablecap">Table 3. Data Maintenance Activities</span></caption>
-<colgroup>
-<col width="33%" />
-<col width="33%" />
-<col width="33%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th>Activity</th>
-<th>Procedure</th>
-<th>Corrective Actions</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td>Check for missing statistics on tables.</td>
-<td>Check the <code class="ph codeph">hawq_stats_missing</code> view in each database:
-<pre class="pre codeblock"><code>SELECT * FROM hawq_toolkit.hawq_stats_missing;</code></pre></td>
-<td>Run <code class="ph codeph">ANALYZE</code> on tables that are missing statistics.</td>
-</tr>
-</tbody>
-</table>
-
-
-## <a id="id_lx4_q1j_1v"></a>Database Maintenance
-
-<a id="id_lx4_q1j_1v__d112e343"></a>
-
-<table>
-<caption><span class="tablecap">Table 4. Database Maintenance Activities</span></caption>
-<colgroup>
-<col width="33%" />
-<col width="33%" />
-<col width="33%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th>Activity</th>
-<th>Procedure</th>
-<th>Corrective Actions</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td>Mark deleted rows in HAWQ system catalogs (tables in the <code class="ph codeph">pg_catalog</code> schema) so that the space they occupy can be reused.
-<p>Recommended frequency: daily</p>
-<p>Severity: CRITICAL</p></td>
-<td>Vacuum each system catalog:
-<pre class="pre codeblock"><code>VACUUM &lt;table&gt;;</code></pre></td>
-<td>Vacuum system catalogs regularly to prevent bloating.</td>
-</tr>
-<tr class="even">
-<td>Update table statistics.
-<p>Recommended frequency: after loading data and before executing queries</p>
-<p>Severity: CRITICAL</p></td>
-<td>Analyze user tables:
-<pre class="pre codeblock"><code>ANALYZEDB -d &lt;database&gt; -a</code></pre></td>
-<td>Analyze updated tables regularly so that the optimizer can produce efficient query execution plans.</td>
-</tr>
-<tr class="odd">
-<td>Backup the database data.
-<p>Recommended frequency: daily, or as required by your backup plan</p>
-<p>Severity: CRITICAL</p></td>
-<td>See <a href="../admin/BackingUpandRestoringHAWQDatabases.html">Backing up and Restoring HAWQ Databases</a> for a discussion of backup procedures</td>
-<td>Best practice is to have a current backup ready in case the database must be restored.</td>
-</tr>
-<tr class="even">
-<td>Reindex system catalogs (tables in the <code class="ph codeph">pg_catalog</code> schema) to maintain an efficient catalog.
-<p>Recommended frequency: weekly, or more often if database objects are created and dropped frequently</p></td>
-<td>Run <code class="ph codeph">REINDEX SYSTEM</code> in each database.
-<pre class="pre codeblock"><code>REINDEXDB -s</code></pre></td>
-<td>The optimizer retrieves information from the system tables to create query plans. If system tables and indexes are allowed to become bloated over time, scanning the system tables increases query execution time.</td>
-</tr>
-</tbody>
-</table>
-
-
-## <a id="id_blv_q1j_1v"></a>Patching and Upgrading
-
-<a id="id_blv_q1j_1v__d112e472"></a>
-
-<table>
-<caption><span class="tablecap">Table 5. Patch and Upgrade Activities</span></caption>
-<colgroup>
-<col width="33%" />
-<col width="33%" />
-<col width="33%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th>Activity</th>
-<th>Procedure</th>
-<th>Corrective Actions</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td>Ensure any bug fixes or enhancements are applied to the kernel.
-<p>Recommended frequency: at least every 6 months</p>
-<p>Severity: IMPORTANT</p></td>
-<td>Follow the vendor's instructions to update the Linux kernel.</td>
-<td>Keep the kernel current to include bug fixes and security fixes, and to avoid difficult future upgrades.</td>
-</tr>
-<tr class="even">
-<td>Install HAWQ minor releases.
-<p>Recommended frequency: quarterly</p>
-<p>Severity: IMPORTANT</p></td>
-<td>Always upgrade to the latest in the series.</td>
-<td>Keep the HAWQ software current to incorporate bug fixes, performance enhancements, and feature enhancements into your HAWQ cluster.</td>
-</tr>
-</tbody>
-</table>
-
-
-



[39/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/clientaccess/kerberos.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/clientaccess/kerberos.html.md.erb b/markdown/clientaccess/kerberos.html.md.erb
new file mode 100644
index 0000000..2e7cfe5
--- /dev/null
+++ b/markdown/clientaccess/kerberos.html.md.erb
@@ -0,0 +1,308 @@
+---
+title: Using Kerberos Authentication
+---
+
+**Note:** The following steps for enabling Kerberos *are not required* if you install HAWQ using Ambari.
+
+You can control access to HAWQ with a Kerberos authentication server.
+
+HAWQ supports the Generic Security Service Application Program Interface \(GSSAPI\) with Kerberos authentication. GSSAPI provides automatic authentication \(single sign-on\) for systems that support it. You specify the HAWQ users \(roles\) that require Kerberos authentication in the HAWQ configuration file `pg_hba.conf`. The login fails if Kerberos authentication is not available when a role attempts to log in to HAWQ.
+
+Kerberos provides a secure, encrypted authentication service. It does not encrypt data exchanged between the client and database and provides no authorization services. To encrypt data exchanged over the network, you must use an SSL connection. To manage authorization for access to HAWQ databases and objects such as schemas and tables, you use settings in the `pg_hba.conf` file and privileges given to HAWQ users and roles within the database. For information about managing authorization privileges, see [Managing Roles and Privileges](roles_privs.html).
+
+For more information about Kerberos, see [http://web.mit.edu/kerberos/](http://web.mit.edu/kerberos/).
+
+## <a id="kerberos_prereq"></a>Requirements for Using Kerberos with HAWQ 
+
+The following items are required for using Kerberos with HAWQ:
+
+-   Kerberos Key Distribution Center \(KDC\) server using the `krb5-server` library
+-   Kerberos version 5 `krb5-libs` and `krb5-workstation` packages installed on the HAWQ master host
+-   System time on the Kerberos server and HAWQ master host must be synchronized. \(Install Linux `ntp` package on both servers.\)
+-   Network connectivity between the Kerberos server and the HAWQ master
+-   Java 1.7.0\_17 or later is required to use Kerberos-authenticated JDBC on Red Hat Enterprise Linux 6.x
+-   Java 1.6.0\_21 or later is required to use Kerberos-authenticated JDBC on Red Hat Enterprise Linux 4.x or 5.x
+
+## <a id="nr166539"></a>Enabling Kerberos Authentication for HAWQ 
+
+Complete the following tasks to set up Kerberos authentication with HAWQ:
+
+1.  Verify your system satisfies the prerequisites for using Kerberos with HAWQ. See [Requirements for Using Kerberos with HAWQ](#kerberos_prereq).
+2.  Set up, or identify, a Kerberos Key Distribution Center \(KDC\) server to use for authentication. See [Install and Configure a Kerberos KDC Server](#task_setup_kdc).
+3.  Create and deploy principals for your HDFS cluster, and ensure that Kerberos authentication is enabled and functioning for all HDFS services. See your Hadoop documentation for additional details.
+4.  In a Kerberos database on the KDC server, set up a Kerberos realm and principals on the server. For HAWQ, a principal is a HAWQ role that uses Kerberos authentication. In the Kerberos database, a realm groups together Kerberos principals that are HAWQ roles.
+5.  Create Kerberos keytab files for HAWQ. To access HAWQ, you create a service key known only by Kerberos and HAWQ. On the Kerberos server, the service key is stored in the Kerberos database.
+
+    On the HAWQ master, the service key is stored in key tables, which are files known as keytabs. The service keys are usually stored in the keytab file `/etc/krb5.keytab`. This service key is the equivalent of the service's password, and must be kept secure. Data that is meant to be read-only by the service is encrypted using this key.
+
+6.  Install the Kerberos client packages and the keytab file on HAWQ master.
+7.  Create a Kerberos ticket for `gpadmin` on the HAWQ master node using the keytab file. The ticket contains the Kerberos authentication credentials that grant access to HAWQ.
+
+With Kerberos authentication configured for HAWQ, you can use Kerberos for PSQL and JDBC.
+
+[Set up HAWQ with Kerberos for PSQL](#topic6)
+
+[Set up HAWQ with Kerberos for JDBC](#topic9)
+
+## <a id="task_setup_kdc"></a>Install and Configure a Kerberos KDC Server 
+
+Steps to set up a Kerberos Key Distribution Center \(KDC\) server on a Red Hat Enterprise Linux host for use with HAWQ.
+
+Follow these steps to install and configure a Kerberos Key Distribution Center \(KDC\) server on a Red Hat Enterprise Linux host.
+
+1.  Install the Kerberos server packages:
+
+    ```
+    sudo yum install krb5-libs krb5-server krb5-workstation
+    ```
+
+2.  Edit the `/etc/krb5.conf` configuration file. The following example shows a Kerberos server with a default `KRB.EXAMPLE.COM` realm.
+
+    ```
+    [logging]
+     default = FILE:/var/log/krb5libs.log
+     kdc = FILE:/var/log/krb5kdc.log
+     admin_server = FILE:/var/log/kadmind.log
+
+    [libdefaults]
+     default_realm = KRB.EXAMPLE.COM
+     dns_lookup_realm = false
+     dns_lookup_kdc = false
+     ticket_lifetime = 24h
+     renew_lifetime = 7d
+     forwardable = true
+     default_tgs_enctypes = aes128-cts des3-hmac-sha1 des-cbc-crc des-cbc-md5
+     default_tkt_enctypes = aes128-cts des3-hmac-sha1 des-cbc-crc des-cbc-md5
+     permitted_enctypes = aes128-cts des3-hmac-sha1 des-cbc-crc des-cbc-md5
+
+    [realms]
+     KRB.EXAMPLE.COM = {
+      kdc = kerberos-gpdb:88
+      admin_server = kerberos-gpdb:749
+      default_domain = kerberos-gpdb
+     }
+
+    [domain_realm]
+     .kerberos-gpdb = KRB.EXAMPLE.COM
+     kerberos-gpdb = KRB.EXAMPLE.COM
+
+    [appdefaults]
+     pam = {
+        debug = false
+        ticket_lifetime = 36000
+        renew_lifetime = 36000
+        forwardable = true
+        krb4_convert = false
+       }
+    ```
+
+    The `kdc` and `admin_server` keys in the `[realms]` section specify the host \(`kerberos-gpdb`\) and port where the Kerberos server is running. IP numbers can be used in place of host names.
+
+    If your Kerberos server manages authentication for other realms, you would instead add the `KRB.EXAMPLE.COM` realm in the `[realms]` and `[domain_realm]` section of the `kdc.conf` file. See the [Kerberos documentation](http://web.mit.edu/kerberos/krb5-latest/doc/) for information about the `kdc.conf` file.
+
+3.  To create a Kerberos KDC database, run the `kdb5_util` utility:
+
+    ```
+    kdb5_util create -s
+    ```
+
+    The `kdb5_util` `create` option creates the database to store keys for the Kerberos realms that are managed by this KDC server. The `-s` option creates a stash file. Without the stash file, every time the KDC server starts it requests a password.
+
+4.  Add an administrative user to the KDC database with the `kadmin.local` utility. Because it does not itself depend on Kerberos authentication, the `kadmin.local` utility allows you to add an initial administrative user to the local Kerberos server. To add the user `gpadmin` as an administrative user to the KDC database, run the following command:
+
+    ```
+    kadmin.local -q "addprinc gpadmin/admin"
+    ```
+
+    Most users do not need administrative access to the Kerberos server. They can use `kadmin` to manage their own principals \(for example, to change their own password\). For information about `kadmin`, see the [Kerberos documentation](http://web.mit.edu/kerberos/krb5-latest/doc/).
+
+5.  If needed, edit the `/var/kerberos/krb5kdc/kadm5.acl` file to grant the appropriate permissions to `gpadmin`.
+6.  Start the Kerberos daemons:
+
+    ```
+    /sbin/service krb5kdc start
+    /sbin/service kadmin start
+    ```
+
+7.  To start Kerberos automatically upon restart:
+
+    ```
+    /sbin/chkconfig krb5kdc on
+    /sbin/chkconfig kadmin on
+    ```
+
+
+## <a id="task_m43_vwl_2p"></a>Create HAWQ Roles in the KDC Database 
+
+Add principals to the Kerberos realm for HAWQ.
+
+Start `kadmin.local` in interactive mode, then add two principals to the HAWQ Realm.
+
+1.  Start `kadmin.local` in interactive mode:
+
+    ```
+    kadmin.local
+    ```
+
+2.  Add principals:
+
+    ```
+    kadmin.local: addprinc gpadmin/kerberos-gpdb@KRB.EXAMPLE.COM
+    kadmin.local: addprinc postgres/master.test.com@KRB.EXAMPLE.COM
+    ```
+
+    The `addprinc` commands prompt for passwords for each principal. The first `addprinc` creates a HAWQ user as a principal, `gpadmin/kerberos-gpdb`. The second `addprinc` command creates the `postgres` process on the HAWQ master host as a principal in the Kerberos KDC. This principal is required when using Kerberos authentication with HAWQ.
+
+3.  Create a Kerberos keytab file with `kadmin.local`. The following example creates a keytab file `gpdb-kerberos.keytab` in the current directory with authentication information for the two principals.
+
+    ```
+    kadmin.local: xst -k gpdb-kerberos.keytab
+        gpadmin/kerberos-gpdb@KRB.EXAMPLE.COM
+        postgres/master.test.com@KRB.EXAMPLE.COM
+    ```
+
+    You will copy this file to the HAWQ master host.
+
+4.  Exit `kadmin.local` interactive mode with the `quit` command: `kadmin.local: quit`
+
+## <a id="topic6"></a>Install and Configure the Kerberos Client 
+
+Steps to install the Kerberos client on the HAWQ master host.
+
+Install the Kerberos client libraries on the HAWQ master and configure the Kerberos client.
+
+1.  Install the Kerberos packages on the HAWQ master.
+
+    ```
+    sudo yum install krb5-libs krb5-workstation
+    ```
+
+2.  Ensure that the `/etc/krb5.conf` file is the same as the one that is on the Kerberos server.
+3.  Copy the `gpdb-kerberos.keytab` file that was generated on the Kerberos server to the HAWQ master host.
+4.  Remove any existing tickets with the Kerberos utility `kdestroy`. Run the utility as root.
+
+    ```
+    sudo kdestroy
+    ```
+
+5.  Use the Kerberos utility `kinit` to request a ticket using the keytab file on the HAWQ master for `gpadmin/kerberos-gpdb@KRB.EXAMPLE.COM`. The `-t` option specifies the keytab file on the HAWQ master.
+
+    ```
+    # kinit -k -t gpdb-kerberos.keytab gpadmin/kerberos-gpdb@KRB.EXAMPLE.COM
+    ```
+
+6.  Use the Kerberos utility `klist` to display the contents of the Kerberos ticket cache on the HAWQ master. The following is an example:
+
+    ```screen
+    # klist
+    Ticket cache: FILE:/tmp/krb5cc_108061
+    Default principal: gpadmin/kerberos-gpdb@KRB.EXAMPLE.COM
+    Valid starting     Expires            Service principal
+    03/28/13 14:50:26  03/29/13 14:50:26  krbtgt/KRB.EXAMPLE.COM@KRB.EXAMPLE.COM
+        renew until 03/28/13 14:50:26
+    ```
+
+
+### <a id="topic7"></a>Set up HAWQ with Kerberos for PSQL 
+
+Configure HAWQ to use Kerberos.
+
+After you have set up Kerberos on the HAWQ master, you can configure HAWQ to use Kerberos. For information on setting up the HAWQ master, see [Install and Configure the Kerberos Client](#topic6).
+
+1.  Create a HAWQ administrator role in the database `template1` for the Kerberos principal that is used as the database administrator. The following example uses `gpadmin/kerberos-gpdb`.
+
+    ``` bash
+    $ psql template1 -c 'CREATE ROLE "gpadmin/kerberos-gpdb" LOGIN SUPERUSER;'
+
+    ```
+
+    The role you create in the database `template1` will be available in any new HAWQ database that you create.
+
+2.  Modify `hawq-site.xml` to specify the location of the keytab file. For example, adding this property to `hawq-site.xml` specifies the folder `/home/gpadmin` as the location of the keytab file `gpdb-kerberos.keytab`.
+
+    ``` xml
+      <property>
+          <name>krb_server_keyfile</name>
+          <value>/home/gpadmin/gpdb-kerberos.keytab</value>
+      </property>
+    ```
+
+3.  Modify the HAWQ file `pg_hba.conf` to enable Kerberos support. Then restart HAWQ \(`hawq restart -a`\). For example, adding the following line to `pg_hba.conf` adds GSSAPI and Kerberos support. The value for `krb_realm` is the Kerberos realm that is used for authentication to HAWQ.
+
+    ```
+    host all all 0.0.0.0/0 gss include_realm=0 krb_realm=KRB.EXAMPLE.COM
+    ```
+
+    For information about the `pg_hba.conf` file, see [The pg\_hba.conf file](http://www.postgresql.org/docs/9.0/static/auth-pg-hba-conf.html) in the Postgres documentation.
+
+4.  Create a ticket using `kinit` and show the tickets in the Kerberos ticket cache with `klist`.
+5.  As a test, log in to the database as the `gpadmin` role with the Kerberos credentials `gpadmin/kerberos-gpdb`:
+
+    ``` bash
+    $ psql -U "gpadmin/kerberos-gpdb" -h master.test template1
+    ```
+
+    A username map can be defined in the `pg_ident.conf` file and specified in the `pg_hba.conf` file to simplify logging in to HAWQ. For example, this `psql` command logs in to the default HAWQ database on `mdw.proddb` as the Kerberos principal `adminuser/mdw.proddb`:
+
+    ``` bash
+    $ psql -U "adminuser/mdw.proddb" -h mdw.proddb
+    ```
+
+    If the default user is `adminuser`, the `pg_ident.conf` file and the `pg_hba.conf` file can be configured so that the `adminuser` can log in to the database as the Kerberos principal `adminuser/mdw.proddb` without specifying the `-U` option:
+
+    ``` bash
+    $ psql -h mdw.proddb
+    ```
+
+    The `pg_ident.conf` file defines the username map. This file is located in the HAWQ master data directory (identified by the `hawq_master_directory` property value in `hawq-site.xml`):
+
+    ```
+    # MAPNAME   SYSTEM-USERNAME        GP-USERNAME
+    mymap       /^(.*)mdw\.proddb$     adminuser
+    ```
+
+    The map can be specified in the `pg_hba.conf` file as part of the line that enables Kerberos support:
+
+    ```
+    host all all 0.0.0.0/0 krb5 include_realm=0 krb_realm=proddb map=mymap
+    ```
+
+    For more information about specifying username maps see [Username maps](http://www.postgresql.org/docs/9.0/static/auth-username-maps.html) in the Postgres documentation.
+
+6.  If a Kerberos principal is not a HAWQ user, a message similar to the following is displayed from the `psql` command line when the user attempts to log in to the database:
+
+    ```
+    psql: krb5_sendauth: Bad response
+    ```
+
+    The principal must be added as a HAWQ user.
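+
+    For example, a minimal sketch of adding the principal as a HAWQ role (the principal name is illustrative):
+
+    ``` sql
+    =# CREATE ROLE "jsmith/kerberos-gpdb" LOGIN;
+    ```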
+
+
+### <a id="topic9"></a>Set up HAWQ with Kerberos for JDBC 
+
+Enable Kerberos-authenticated JDBC access to HAWQ.
+
+You can configure HAWQ to use Kerberos to run user-defined Java functions.
+
+1.  Ensure that Kerberos is installed and configured on the HAWQ master. See [Install and Configure the Kerberos Client](#topic6).
+2.  Create the file `.java.login.config` in the folder `/home/gpadmin` and add the following text to the file:
+
+    ```
+    pgjdbc {
+      com.sun.security.auth.module.Krb5LoginModule required
+      doNotPrompt=true
+      useTicketCache=true
+      debug=true
+      client=true;
+    };
+    ```
+
+3.  Create a Java application that connects to HAWQ using Kerberos authentication. The following example database connection URL uses a PostgreSQL JDBC driver and specifies parameters for Kerberos authentication:
+
+    ```
+    jdbc:postgresql://mdw:5432/mytest?kerberosServerName=postgres&jaasApplicationName=pgjdbc&user=gpadmin/kerberos-gpdb
+    ```
+
+    The parameter names and values specified depend on how the Java application performs Kerberos authentication.
+
+4.  Test the Kerberos login by running a sample Java application from HAWQ.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/clientaccess/ldap.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/clientaccess/ldap.html.md.erb b/markdown/clientaccess/ldap.html.md.erb
new file mode 100644
index 0000000..27b204f
--- /dev/null
+++ b/markdown/clientaccess/ldap.html.md.erb
@@ -0,0 +1,116 @@
+---
+title: Using LDAP Authentication with TLS/SSL
+---
+
+You can control access to HAWQ with an LDAP server and, optionally, secure the connection with encryption by adding parameters to pg\_hba.conf file entries.
+
+HAWQ supports LDAP authentication with the TLS/SSL protocol to encrypt communication with an LDAP server:
+
+-   LDAP authentication with STARTTLS and TLS protocol – STARTTLS starts with a clear text connection \(no encryption\) and upgrades it to a secure connection \(with encryption\).
+-   LDAP authentication with a secure connection and TLS/SSL \(LDAPS\) – HAWQ uses the TLS or SSL protocol based on the protocol that is used by the LDAP server.
+
+If no protocol is specified, HAWQ communicates with the LDAP server with a clear text connection.
+
+To use LDAP authentication, the HAWQ master host must be configured as an LDAP client. See your LDAP documentation for information about configuring LDAP clients.
+
+## Enabling LDAP Authentication with STARTTLS and TLS
+
+To enable STARTTLS with the TLS protocol, specify the `ldaptls` parameter with the value 1. The default port is 389. In this example, the authentication method parameters include the `ldaptls` parameter.
+
+```
+ldap ldapserver=ldap.example.com ldaptls=1 ldapprefix="uid=" ldapsuffix=",ou=People,dc=example,dc=com"
+```
+
+Specify a non-default port with the `ldapport` parameter. In this example, the authentication method includes the `ldaptls` parameter and the `ldapport` parameter to specify the port 550.
+
+```
+ldap ldapserver=ldap.example.com ldaptls=1 ldapport=550 ldapprefix="uid=" ldapsuffix=",ou=People,dc=example,dc=com"
+```
+
+## Enabling LDAP Authentication with a Secure Connection and TLS/SSL
+
+To enable a secure connection with TLS/SSL, add `ldaps://` as the prefix to the LDAP server name specified in the `ldapserver` parameter. The default port is 636.
+
+This example `ldapserver` parameter specifies a secure connection and the TLS/SSL protocol for the LDAP server `ldap.example.com`.
+
+```
+ldapserver=ldaps://ldap.example.com
+```
+
+To specify a non-default port, add a colon \(:\) and the port number after the LDAP server name. This example `ldapserver` parameter includes the `ldaps://` prefix and the non-default port 550.
+
+```
+ldapserver=ldaps://ldap.example.com:550
+```
+
+### Notes
+
+HAWQ logs an error if the following are specified in a pg\_hba.conf file entry:
+
+-   If both the `ldaps://` prefix and the `ldaptls=1` parameter are specified.
+-   If both the `ldaps://` prefix and the `ldapport` parameter are specified.
+
+Enabling encrypted communication for LDAP authentication only encrypts the communication between HAWQ and the LDAP server.
+
+## Configuring Authentication with a System-wide OpenLDAP System
+
+If you have a system-wide OpenLDAP system and logins are configured to use LDAP with TLS or SSL in the pg_hba.conf file, logins may fail with the following message:
+
+```shell
+could not start LDAP TLS session: error code '-11'
+```
+
+To use an existing OpenLDAP system for authentication, HAWQ must be set up to use the LDAP server's CA certificate to validate user certificates. Follow these steps on both the master and standby hosts to configure HAWQ:
+
+1. Copy the base64-encoded root CA chain file from the Active Directory or LDAP server to
+the HAWQ master and standby master hosts. This example uses the directory `/etc/pki/tls/certs`.
+
+2. Change to the directory where you copied the CA certificate file and, as the root user, generate the hash for OpenLDAP:
+
+    ```
+    # cd /etc/pki/tls/certs
+    # openssl x509 -noout -hash -in <ca-certificate-file>
+    # ln -s <ca-certificate-file> <ca-certificate-file>.0
+    ```
+
+3. Configure an OpenLDAP configuration file for HAWQ with the CA certificate directory and certificate file specified.
+
+    As the root user, edit the OpenLDAP configuration file `/etc/openldap/ldap.conf`:
+
+    ```
+    SASL_NOCANON on
+    URI ldaps://ldapA.example.priv ldaps://ldapB.example.priv ldaps://ldapC.example.priv
+    BASE dc=example,dc=priv
+    TLS_CACERTDIR /etc/pki/tls/certs
+    TLS_CACERT /etc/pki/tls/certs/<ca-certificate-file>
+    ```
+
+    **Note**: For certificate validation to succeed, the hostname in the certificate must match a hostname in the URI property. Otherwise, you must also add `TLS_REQCERT allow` to the file.
+
+4. As the gpadmin user, edit `/usr/local/hawq/greenplum_path.sh` and add the following line.
+
+    ```bash
+    export LDAPCONF=/etc/openldap/ldap.conf
+    ```
+
+## Examples
+
+These are example entries from a pg\_hba.conf file.
+
+This example specifies LDAP authentication with no encryption between HAWQ and the LDAP server.
+
+```
+host all plainuser 0.0.0.0/0 ldap ldapserver=ldap.example.com ldapprefix="uid=" ldapsuffix=",ou=People,dc=example,dc=com"
+```
+
+This example specifies LDAP authentication with the STARTTLS and TLS protocol between HAWQ and the LDAP server.
+
+```
+host all tlsuser 0.0.0.0/0 ldap ldapserver=ldap.example.com ldaptls=1 ldapprefix="uid=" ldapsuffix=",ou=People,dc=example,dc=com"
+```
+
+This example specifies LDAP authentication with a secure connection and TLS/SSL protocol between HAWQ and the LDAP server.
+
+```
+host all ldapsuser 0.0.0.0/0 ldap ldapserver=ldaps://ldap.example.com ldapprefix="uid=" ldapsuffix=",ou=People,dc=example,dc=com"
+```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/clientaccess/roles_privs.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/clientaccess/roles_privs.html.md.erb b/markdown/clientaccess/roles_privs.html.md.erb
new file mode 100644
index 0000000..4bdf3ee
--- /dev/null
+++ b/markdown/clientaccess/roles_privs.html.md.erb
@@ -0,0 +1,285 @@
+---
+title: Managing Roles and Privileges
+---
+
+The HAWQ authorization mechanism stores roles and permissions to access database objects in the database and is administered using SQL statements or command-line utilities.
+
+HAWQ manages database access permissions using *roles*. The concept of roles subsumes the concepts of *users* and *groups*. A role can be a database user, a group, or both. Roles can own database objects \(for example, tables\) and can assign privileges on those objects to other roles to control access to the objects. Roles can be members of other roles, thus a member role can inherit the object privileges of its parent role.
+
+Every HAWQ system contains a set of database roles \(users and groups\). Those roles are separate from the users and groups managed by the operating system on which the server runs. However, for convenience you may want to maintain a relationship between operating system user names and HAWQ role names, since many of the client applications use the current operating system user name as the default.
+
+In HAWQ, users log in and connect through the master instance, which then verifies their role and access privileges. The master then issues commands to the segment instances behind the scenes as the currently logged in role.
+
+Roles are defined at the system level, meaning they are valid for all databases in the system.
+
+In order to bootstrap the HAWQ system, a freshly initialized system always contains one predefined *superuser* role \(also referred to as the system user\). This role will have the same name as the operating system user that initialized the HAWQ system. Customarily, this role is named `gpadmin`. In order to create more roles you first have to connect as this initial role.
+
+## <a id="topic2"></a>Security Best Practices for Roles and Privileges 
+
+-   **Secure the gpadmin system user.** HAWQ requires a UNIX user id to install and initialize the HAWQ system. This system user is referred to as `gpadmin` in the HAWQ documentation. This `gpadmin` user is the default database superuser in HAWQ, as well as the file system owner of the HAWQ installation and its underlying data files. This default administrator account is fundamental to the design of HAWQ. The system cannot run without it, and there is no way to limit the access of this gpadmin user id. Use roles to manage who has access to the database for specific purposes. You should only use the `gpadmin` account for system maintenance tasks such as expansion and upgrade. Anyone who logs on to a HAWQ host as this user id can read, alter, or delete any data, including system catalog data and database access rights. Therefore, it is very important to secure the gpadmin user id and only provide access to essential system administrators. Administrators should only log in to HAWQ as `gpadmin` when performing certain system maintenance tasks \(such as upgrade or expansion\). Database users should never log on as `gpadmin`, and ETL or production workloads should never run as `gpadmin`.
+-   **Assign a distinct role to each user that logs in.** For logging and auditing purposes, each user that is allowed to log in to HAWQ should be given their own database role. For applications or web services, consider creating a distinct role for each application or service. See [Creating New Roles \(Users\)](#topic3).
+-   **Use groups to manage access privileges.** See [Role Membership](#topic5).
+-   **Limit users who have the SUPERUSER role attribute.** Roles that are superusers bypass all access privilege checks in HAWQ, as well as resource queuing. Only system administrators should be given superuser rights. See [Altering Role Attributes](#topic4).
+
+## <a id="topic3"></a>Creating New Roles \(Users\) 
+
+A user-level role is considered to be a database role that can log in to the database and initiate a database session. Therefore, when you create a new user-level role using the `CREATE ROLE` command, you must specify the `LOGIN` privilege. For example:
+
+``` sql
+=# CREATE ROLE jsmith WITH LOGIN;
+```
+
+A database role may have a number of attributes that define what sort of tasks that role can perform in the database. You can set these attributes when you create the role, or later using the `ALTER ROLE` command. See [Table 1](#iq139556) for a description of the role attributes you can set.
+
+### <a id="topic4"></a>Altering Role Attributes 
+
+A database role may have a number of attributes that define what sort of tasks that role can perform in the database.
+
+<a id="iq139556"></a>
+
+|Attributes|Description|
+|----------|-----------|
+|SUPERUSER &#124; NOSUPERUSER|Determines if the role is a superuser. You must yourself be a superuser to create a new superuser. NOSUPERUSER is the default.|
+|CREATEDB &#124; NOCREATEDB|Determines if the role is allowed to create databases. NOCREATEDB is the default.|
+|CREATEROLE &#124; NOCREATEROLE|Determines if the role is allowed to create and manage other roles. NOCREATEROLE is the default.|
+|INHERIT &#124; NOINHERIT|Determines whether a role inherits the privileges of roles it is a member of. A role with the INHERIT attribute can automatically use whatever database privileges have been granted to all roles it is directly or indirectly a member of. INHERIT is the default.|
+|LOGIN &#124; NOLOGIN|Determines whether a role is allowed to log in. A role having the LOGIN attribute can be thought of as a user. Roles without this attribute are useful for managing database privileges \(groups\). NOLOGIN is the default.|
+|CONNECTION LIMIT *connlimit*|If role can log in, this specifies how many concurrent connections the role can make. -1 \(the default\) means no limit.|
+|PASSWORD '*password*'|Sets the role's password. If you do not plan to use password authentication you can omit this option. If no password is specified, the password will be set to null and password authentication will always fail for that user. A null password can optionally be written explicitly as PASSWORD NULL.|
+|ENCRYPTED &#124; UNENCRYPTED|Controls whether the password is stored encrypted in the system catalogs. The default behavior is determined by the configuration parameter `password_encryption` \(currently `md5`; for SHA-256 encryption, change this setting to `password`\). If the presented password string is already in encrypted format, then it is stored encrypted as-is, regardless of whether ENCRYPTED or UNENCRYPTED is specified \(since the system cannot decrypt the specified encrypted password string\). This allows reloading of encrypted passwords during dump/restore.|
+|VALID UNTIL '*timestamp*'|Sets a date and time after which the role's password is no longer valid. If omitted the password will be valid for all time.|
+|RESOURCE QUEUE *queue\_name*|Assigns the role to the named resource queue for workload management. Any statement that role issues is then subject to the resource queue's limits. Note that the RESOURCE QUEUE attribute is not inherited; it must be set on each user-level \(LOGIN\) role.|
+|DENY \{deny\_interval &#124; deny\_point\}|Restricts access during an interval, specified by day or day and time. For more information see [Time-based Authentication](#topic13).|
+
+You can set these attributes when you create the role, or later using the `ALTER ROLE` command. For example:
+
+``` sql
+=# ALTER ROLE jsmith WITH PASSWORD 'passwd123';
+=# ALTER ROLE admin VALID UNTIL 'infinity';
+=# ALTER ROLE jsmith LOGIN;
+=# ALTER ROLE jsmith RESOURCE QUEUE adhoc;
+=# ALTER ROLE jsmith DENY DAY 'Sunday';
+```
+
+## <a id="topic5"></a>Role Membership 
+
+It is frequently convenient to group users together to ease management of object privileges: that way, privileges can be granted to, or revoked from, a group as a whole. In HAWQ this is done by creating a role that represents the group, and then granting membership in the group role to individual user roles.
+
+Use the `CREATE ROLE` SQL command to create a new group role. For example:
+
+``` sql
+=# CREATE ROLE admin CREATEROLE CREATEDB;
+```
+
+Once the group role exists, you can add and remove members \(user roles\) using the `GRANT` and `REVOKE` commands. For example:
+
+``` sql
+=# GRANT admin TO john, sally;
+=# REVOKE admin FROM bob;
+```
+
+For managing object privileges, you would then grant the appropriate permissions to the group-level role only \(see [Table 2](#iq139925)\). The member user roles then inherit the object privileges of the group role. For example:
+
+``` sql
+=# GRANT ALL ON TABLE mytable TO admin;
+=# GRANT ALL ON SCHEMA myschema TO admin;
+=# GRANT ALL ON DATABASE mydb TO admin;
+```
+
+The role attributes `LOGIN`, `SUPERUSER`, `CREATEDB`, and `CREATEROLE` are never inherited as ordinary privileges on database objects are. User members must actually `SET ROLE` to a specific role having one of these attributes in order to make use of the attribute. In the above example, we gave `CREATEDB` and `CREATEROLE` to the `admin` role. If `sally` is a member of `admin`, she could issue the following command to assume the role attributes of the parent role:
+
+``` sql
+=> SET ROLE admin;
+```
+
+## <a id="topic6"></a>Managing Object Privileges 
+
+When an object \(table, view, sequence, database, function, language, schema, or tablespace\) is created, it is assigned an owner. The owner is normally the role that executed the creation statement. For most kinds of objects, the initial state is that only the owner \(or a superuser\) can do anything with the object. To allow other roles to use it, privileges must be granted. HAWQ supports the following privileges for each object type:
+
+<a id="iq139925"></a>
+
+|Object Type|Privileges|
+|-----------|----------|
+|Tables, Views, Sequences|SELECT <br/> INSERT <br/> RULE <br/> ALL|
+|External Tables|SELECT <br/> RULE <br/> ALL|
+|Databases|CONNECT<br/>CREATE<br/>TEMPORARY &#124; TEMP <br/> ALL|
+|Functions|EXECUTE|
+|Procedural Languages|USAGE|
+|Schemas|CREATE <br/> USAGE <br/> ALL|
+|Custom Protocol|SELECT <br/> INSERT <br/> RULE <br/> ALL|
+
+**Note:** Privileges must be granted for each object individually. For example, granting ALL on a database does not grant full access to the objects within that database. It only grants all of the database-level privileges \(CONNECT, CREATE, TEMPORARY\) to the database itself.
+
+Use the `GRANT` SQL command to give a specified role privileges on an object. For example:
+
+``` sql
+=# GRANT INSERT ON mytable TO jsmith;
+```
+
+To revoke privileges, use the `REVOKE` command. For example:
+
+``` sql
+=# REVOKE ALL PRIVILEGES ON mytable FROM jsmith;
+```
+
+You can also use the `DROP OWNED` and `REASSIGN OWNED` commands for managing objects owned by deprecated roles \(Note: only an object's owner or a superuser can drop an object or reassign ownership\). For example:
+
+``` sql
+=# REASSIGN OWNED BY sally TO bob;
+=# DROP OWNED BY visitor;
+```
+
+### <a id="topic7"></a>Simulating Row and Column Level Access Control 
+
+Row-level or column-level access is not supported, nor is labeled security. Row-level and column-level access can be simulated using views to restrict the columns and/or rows that are selected. Row-level labels can be simulated by adding an extra column to the table to store sensitivity information, and then using views to control row-level access based on this column. Roles can then be granted access to the views rather than the base table.
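+
+A minimal sketch of this approach follows; the table, view, column, and role names are hypothetical:
+
+``` sql
+-- Base table with an extra column that stores a sensitivity label
+CREATE TABLE customer_data (
+    id          integer,
+    name        text,
+    ssn         text,
+    sensitivity text
+);
+
+-- View that exposes only non-sensitive columns and rows
+CREATE VIEW customer_public AS
+    SELECT id, name
+    FROM customer_data
+    WHERE sensitivity = 'public';
+
+-- Grant access to the view, not the base table
+GRANT SELECT ON customer_public TO reporting_role;
+```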
+
+## <a id="topic8"></a>Encrypting Data 
+
+PostgreSQL provides an optional package of encryption/decryption functions called `pgcrypto`, which can also be installed and used in HAWQ. The `pgcrypto` package is not installed by default with HAWQ. However, you can download a `pgcrypto` package from [Pivotal Network](https://network.pivotal.io). 
+
+If you are building HAWQ from source files, then you should enable `pgcrypto` support as an option when compiling HAWQ.
+
+The `pgcrypto` functions allow database administrators to store certain columns of data in encrypted form. This adds an extra layer of protection for sensitive data, as data stored in HAWQ in encrypted form cannot be read by users who do not have the encryption key, nor be read directly from the disks.
+
+**Note:** The `pgcrypto` functions run inside the database server, which means that all the data and passwords move between `pgcrypto` and the client application in clear-text. For optimal security, consider also using SSL connections between the client and the HAWQ master server.
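+
+The following sketch illustrates column-level encryption with the `pgp_sym_encrypt` and `pgp_sym_decrypt` functions; it assumes the `pgcrypto` package is installed, and the table, column, and key names are hypothetical:
+
+``` sql
+-- Store the sensitive column as encrypted bytea (assumes pgcrypto is installed)
+CREATE TABLE customers (
+    id   integer,
+    name text,
+    ssn  bytea
+);
+
+INSERT INTO customers
+    VALUES (1, 'Alice', pgp_sym_encrypt('123-45-6789', 'my_secret_key'));
+
+-- Only sessions that supply the key can read the clear-text value
+SELECT id, name, pgp_sym_decrypt(ssn, 'my_secret_key') AS ssn
+    FROM customers;
+```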
+
+## <a id="topic9"></a>Encrypting Passwords 
+
+This section describes how to use a server parameter to implement SHA-256 encrypted password storage. Note that in order to use SHA-256 encryption for storage, the client authentication method must be set to `password` rather than the default, `MD5`. \(See [Encrypting Client/Server Connections](client_auth.html) for more details.\) With the `password` method, the password is transmitted in clear text over the network; to avoid this, set up SSL to encrypt the client-server communication channel.
+
+### <a id="topic10"></a>Enabling SHA-256 Encryption 
+
+You can set your chosen encryption method system-wide or on a per-session basis. There are three encryption methods available: `SHA-256`, `SHA-256-FIPS`, and `MD5` \(for backward compatibility\). The `SHA-256-FIPS` method requires that FIPS compliant libraries are used.
+
+#### <a id="topic11"></a>System-wide 
+
+You will perform different procedures to set the encryption method (`password_hash_algorithm` server parameter) system-wide depending upon whether you manage your cluster from the command line or use Ambari. If you use Ambari to manage your HAWQ cluster, you must ensure that you update encryption method configuration parameters only via the Ambari Web UI. If you manage your HAWQ cluster from the command line, you will use the `hawq config` command line utility to set encryption method configuration parameters.
+
+If you use Ambari to manage your HAWQ cluster:
+
+1. Set the `password_hash_algorithm` configuration property via the HAWQ service **Configs > Advanced > Custom hawq-site** drop down. Valid values include `SHA-256` \(or `SHA-256-FIPS` to use the FIPS-compliant libraries for SHA-256\).
+2. Select **Service Actions > Restart All** to load the updated configuration.
+
+If you manage your HAWQ cluster from the command line:
+
+1.  Log in to the HAWQ master host as a HAWQ administrator and source the file `/usr/local/hawq/greenplum_path.sh`.
+
+    ``` shell
+    $ source /usr/local/hawq/greenplum_path.sh
+    ```
+
+1. Use the `hawq config` utility to set `password_hash_algorithm` to `SHA-256` \(or `SHA-256-FIPS` to use the FIPS-compliant libraries for SHA-256\):
+
+    ``` shell
+    $ hawq config -c password_hash_algorithm -v 'SHA-256'
+    ```
+        
+    Or:
+        
+    ``` shell
+    $ hawq config -c password_hash_algorithm -v 'SHA-256-FIPS'
+    ```
+
+2. Reload the HAWQ configuration:
+
+    ``` shell
+    $ hawq stop cluster -u
+    ```
+
+3.  Verify the setting:
+
+    ``` bash
+    $ hawq config -s password_hash_algorithm
+    ```
+
+#### <a id="topic12"></a>Individual Session 
+
+To set the `password_hash_algorithm` server parameter for an individual database session:
+
+1.  Log in to your HAWQ instance as a superuser.
+2.  Set the `password_hash_algorithm` to `SHA-256` \(or `SHA-256-FIPS` to use the FIPS-compliant libraries for SHA-256\):
+
+    ``` sql
+    =# SET password_hash_algorithm = 'SHA-256'
+    SET
+    ```
+
+    or:
+
+    ``` sql
+    =# SET password_hash_algorithm = 'SHA-256-FIPS'
+    SET
+    ```
+
+3.  Verify the setting:
+
+    ``` sql
+    =# SHOW password_hash_algorithm;
+    password_hash_algorithm
+    ```
+
+    You will see:
+
+    ```
+    SHA-256
+    ```
+
+    or:
+
+    ```
+    SHA-256-FIPS
+    ```
+
+    **Example**
+
+    Following is an example of how the new setting works:
+
+4.  Log in as a superuser and verify the password hash algorithm setting:
+
+    ``` sql
+    =# SHOW password_hash_algorithm;
+    password_hash_algorithm
+    -------------------------------
+    SHA-256-FIPS
+    ```
+
+5.  Create a new role with a password and login privileges.
+
+    ``` sql
+    =# CREATE ROLE testdb WITH PASSWORD 'testdb12345#' LOGIN;
+    ```
+
+6.  Change the client authentication method to allow for storage of SHA-256 encrypted passwords:
+
+    Open the `pg_hba.conf` file on the master and add the following line:
+
+    ```
+    host all testdb 0.0.0.0/0 password
+    ```
+
+7.  Restart the cluster.
+8.  Log in to the database as the newly created user, `testdb`.
+
+    ``` bash
+    $ psql -U testdb
+    ```
+
+9.  Enter the correct password at the prompt.
+10. Verify that the password is stored as a SHA-256 hash.
+
+    Note that password hashes are stored in `pg_authid.rolpassword`.
+
+    1.  Log in as a superuser.
+    2.  Execute the following:
+
+        ``` sql
+        =# SELECT rolpassword FROM pg_authid WHERE rolname = 'testdb';
+        rolpassword
+        -----------
+        sha256<64 hexadecimal characters>
+        ```
+
+
+## <a id="topic13"></a>Time-based Authentication 
+
+HAWQ enables the administrator to restrict access to certain times by role. Use the `CREATE ROLE` or `ALTER ROLE` commands to specify time-based constraints.
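+
+For example, the sketch below restricts when a role may log in; the role name and time intervals are hypothetical, and the full `DENY` syntax is documented in the `CREATE ROLE` and `ALTER ROLE` reference pages:
+
+``` sql
+-- Deny access all day Sunday
+CREATE ROLE night_etl WITH LOGIN DENY DAY 'Sunday';
+
+-- Deny access during a Monday maintenance window
+ALTER ROLE night_etl DENY BETWEEN DAY 'Monday' TIME '00:00' AND DAY 'Monday' TIME '04:00';
+
+-- Remove the Sunday restriction
+ALTER ROLE night_etl DROP DENY FOR DAY 'Sunday';
+```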

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/BasicDataOperations.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/BasicDataOperations.html.md.erb b/markdown/datamgmt/BasicDataOperations.html.md.erb
new file mode 100644
index 0000000..66328c7
--- /dev/null
+++ b/markdown/datamgmt/BasicDataOperations.html.md.erb
@@ -0,0 +1,64 @@
+---
+title: Basic Data Operations
+---
+
+This topic describes basic data operations that you perform in HAWQ.
+
+## <a id="topic3"></a>Inserting Rows
+
+Use the `INSERT` command to create rows in a table. This command requires the table name and a value for each column in the table; you may optionally specify the column names in any order. If you do not specify column names, list the data values in the order of the columns in the table, separated by commas.
+
+For example, to specify the column names and the values to insert:
+
+``` sql
+INSERT INTO products (name, price, product_no) VALUES ('Cheese', 9.99, 1);
+```
+
+To specify only the values to insert:
+
+``` sql
+INSERT INTO products VALUES (1, 'Cheese', 9.99);
+```
+
+Usually, the data values are literals (constants), but you can also use scalar expressions. For example:
+
+``` sql
+INSERT INTO films SELECT * FROM tmp_films WHERE date_prod < '2004-05-07';
+```
+
+You can insert multiple rows in a single command. For example:
+
+``` sql
+INSERT INTO products (product_no, name, price) VALUES
+    (1, 'Cheese', 9.99),
+    (2, 'Bread', 1.99),
+    (3, 'Milk', 2.99);
+```
+
+To insert data into a partitioned table, you specify the root partitioned table, the table created with the `CREATE TABLE` command. You also can specify a leaf child table of the partitioned table in an `INSERT` command. An error is returned if the data is not valid for the specified leaf child table. Specifying a child table that is not a leaf child table in the `INSERT` command is not supported.
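+
+For example, assuming a partitioned table named `sales` whose January leaf child is named `sales_1_prt_jan16` (both names are hypothetical):
+
+``` sql
+-- Insert through the root partitioned table
+INSERT INTO sales VALUES (1, '2016-01-15', 'usa', 100.00);
+
+-- Insert directly into a leaf child partition; the row must satisfy
+-- that partition's constraints or an error is returned
+INSERT INTO sales_1_prt_jan16 VALUES (2, '2016-01-20', 'usa', 250.00);
+```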
+
+To insert large amounts of data, use external tables or the `COPY` command. These load mechanisms are more efficient than `INSERT` for inserting large quantities of rows. See [Loading and Unloading Data](load/g-loading-and-unloading-data.html#topic1) for more information about bulk data loading.
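+
+A minimal `COPY` invocation that loads a CSV file readable by the HAWQ master (the table and file path are hypothetical):
+
+``` sql
+COPY products FROM '/tmp/products.csv' WITH CSV;
+```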
+
+## <a id="topic9"></a>Vacuuming the System Catalog Tables
+
+Only the HAWQ system catalog tables use multiversion concurrency control (MVCC). Deleted or updated data rows in the catalog tables occupy physical space on disk even though new transactions cannot see them. Periodically running the `VACUUM` command removes these expired rows.
+
+The `VACUUM` command also collects table-level statistics such as the number of rows and pages.
+
+For example:
+
+``` sql
+VACUUM pg_class;
+```
+
+### <a id="topic10"></a>Configuring the Free Space Map
+
+Expired rows are held in the *free space map*. The free space map must be sized large enough to hold all expired rows in your database. If not, a regular `VACUUM` command cannot reclaim space occupied by expired rows that overflow the free space map.
+
+**Note:** `VACUUM FULL` is not recommended with HAWQ because it is not safe for large tables and may take an unacceptably long time to complete. See [VACUUM](../reference/sql/VACUUM.html#topic1).
+
+Size the free space map with the following server configuration parameters:
+
+-   `max_fsm_pages`
+-   `max_fsm_relations`

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/ConcurrencyControl.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/ConcurrencyControl.html.md.erb b/markdown/datamgmt/ConcurrencyControl.html.md.erb
new file mode 100644
index 0000000..2ced135
--- /dev/null
+++ b/markdown/datamgmt/ConcurrencyControl.html.md.erb
@@ -0,0 +1,24 @@
+---
+title: Concurrency Control
+---
+
+This topic discusses the mechanisms used in HAWQ to provide concurrency control.
+
+HAWQ and PostgreSQL do not use locks for concurrency control. They maintain data consistency using a multiversion model, Multiversion Concurrency Control (MVCC). MVCC achieves transaction isolation for each database session, and each query transaction sees a snapshot of data. This ensures the transaction sees consistent data that is not affected by other concurrent transactions.
+
+Because MVCC does not use explicit locks for concurrency control, lock contention is minimized and HAWQ maintains reasonable performance in multiuser environments. Locks acquired for querying (reading) data do not conflict with locks acquired for writing data.
+
+HAWQ provides multiple lock modes to control concurrent access to data in tables. Most HAWQ SQL commands automatically acquire the appropriate locks to ensure that referenced tables are not dropped or modified in incompatible ways while a command executes. For applications that cannot adapt easily to MVCC behavior, you can use the `LOCK` command to acquire explicit locks. However, proper use of MVCC generally provides better performance.
+
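+For example, the following sketch takes an explicit lock for the duration of a transaction (the table name is hypothetical):
+
+``` sql
+BEGIN;
+-- Block concurrent writers while this transaction reads a consistent row count
+LOCK TABLE mytable IN EXCLUSIVE MODE;
+SELECT COUNT(*) FROM mytable;
+COMMIT;
+```
+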
+<caption><span class="tablecap">Table 1. Lock Modes in HAWQ</span></caption>
+
+<a id="topic_f5l_qnh_kr__ix140861"></a>
+
+| Lock Mode              | Associated SQL Commands                                                             | Conflicts With                                                                                                          |
+|------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|
+| ACCESS SHARE           | `SELECT`                                                                            | ACCESS EXCLUSIVE                                                                                                        |
+| ROW EXCLUSIVE          | `INSERT`, `COPY`                                                                    | SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE                                                                 |
+| SHARE UPDATE EXCLUSIVE | `VACUUM` (without `FULL`), `ANALYZE`                                                | SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE                                         |
+| SHARE                  | `CREATE INDEX`                                                                      | ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE                                 |
+| SHARE ROW EXCLUSIVE    |                                                                                     | ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE                          |
+| ACCESS EXCLUSIVE       | `ALTER TABLE`, `DROP TABLE`, `TRUNCATE`, `REINDEX`, `CLUSTER`, `VACUUM FULL`        | ACCESS SHARE, ROW SHARE, ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE |

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/HAWQInputFormatforMapReduce.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/HAWQInputFormatforMapReduce.html.md.erb b/markdown/datamgmt/HAWQInputFormatforMapReduce.html.md.erb
new file mode 100644
index 0000000..a6fcca2
--- /dev/null
+++ b/markdown/datamgmt/HAWQInputFormatforMapReduce.html.md.erb
@@ -0,0 +1,304 @@
+---
+title: HAWQ InputFormat for MapReduce
+---
+
+MapReduce is a programming model developed by Google for processing and generating large data sets on an array of commodity servers. You can use the HAWQ InputFormat class to enable MapReduce jobs to access HAWQ data stored in HDFS.
+
+To use HAWQ InputFormat, you need only to provide the URL of the database to connect to, along with the table name you want to access. HAWQ InputFormat fetches only the metadata of the database and table of interest, which is much less data than the table data itself. After getting the metadata, HAWQ InputFormat determines where and how the table data is stored in HDFS. It reads and parses those HDFS files and processes the parsed table tuples directly inside a Map task.
+
+This chapter describes the document format and schema for defining HAWQ MapReduce jobs.
+
+## <a id="supporteddatatypes"></a>Supported Data Types
+
+HAWQ InputFormat supports the following data types:
+
+| SQL/HAWQ                | JDBC/JAVA                                        | setXXX        | getXXX        |
+|-------------------------|--------------------------------------------------|---------------|---------------|
+| DECIMAL/NUMERIC         | java.math.BigDecimal                             | setBigDecimal | getBigDecimal |
+| FLOAT8/DOUBLE PRECISION | double                                           | setDouble     | getDouble     |
+| INT8/BIGINT             | long                                             | setLong       | getLong       |
+| INTEGER/INT4/INT        | int                                              | setInt        | getInt        |
+| FLOAT4/REAL             | float                                            | setFloat      | getFloat      |
+| SMALLINT/INT2           | short                                            | setShort      | getShort      |
+| BOOL/BOOLEAN            | boolean                                          | setBoolean    | getBoolean    |
+| VARCHAR/CHAR/TEXT       | String                                           | setString     | getString     |
+| DATE                    | java.sql.Date                                    | setDate       | getDate       |
+| TIME/TIMETZ             | java.sql.Time                                    | setTime       | getTime       |
+| TIMESTAMP/TIMESTAMPTZ   | java.sql.Timestamp                               | setTimestamp  | getTimestamp  |
+| ARRAY                   | java.sql.Array                                   | setArray      | getArray      |
+| BIT/VARBIT              | com.pivotal.hawq.mapreduce.datatype.             | setVarbit     | getVarbit     |
+| BYTEA                   | byte\[\]                                         | setByte       | getByte       |
+| INTERVAL                | com.pivotal.hawq.mapreduce.datatype.HAWQInterval | setInterval   | getInterval   |
+| POINT                   | com.pivotal.hawq.mapreduce.datatype.HAWQPoint    | setPoint      | getPoint      |
+| LSEG                    | com.pivotal.hawq.mapreduce.datatype.HAWQLseg     | setLseg       | getLseg       |
+| BOX                     | com.pivotal.hawq.mapreduce.datatype.HAWQBox      | setBox        | getBox        |
+| CIRCLE                  | com.pivotal.hawq.mapreduce.datatype.HAWQCircle   | setCircle     | getCircle     |
+| PATH                    | com.pivotal.hawq.mapreduce.datatype.HAWQPath     | setPath       | getPath       |
+| POLYGON                 | com.pivotal.hawq.mapreduce.datatype.HAWQPolygon  | setPolygon    | getPolygon    |
+| MACADDR                 | com.pivotal.hawq.mapreduce.datatype.HAWQMacaddr  | setMacaddr    | getMacaddr    |
+| INET                    | com.pivotal.hawq.mapreduce.datatype.HAWQInet     | setInet       | getInet       |
+| CIDR                    | com.pivotal.hawq.mapreduce.datatype.HAWQCIDR     | setCIDR       | getCIDR       |
+
+## <a id="hawqinputformatexample"></a>HAWQ InputFormat Example
+
+The following example shows how you can use the `HAWQInputFormat` class to access HAWQ table data from MapReduce jobs.
+
+``` java
+package com.mycompany.app;
+import com.pivotal.hawq.mapreduce.HAWQException;
+import com.pivotal.hawq.mapreduce.HAWQInputFormat;
+import com.pivotal.hawq.mapreduce.HAWQRecord;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.hadoop.io.IntWritable;
+
+import java.io.IOException;
+public class HAWQInputFormatDemoDriver extends Configured
+implements Tool {
+
+    // CREATE TABLE employees (
+    // id INTEGER NOT NULL, name VARCHAR(32) NOT NULL);
+    public static class DemoMapper extends
+        Mapper<Void, HAWQRecord, IntWritable, Text> {
+       int id = 0;
+       String name = null;
+       public void map(Void key, HAWQRecord value, Context context)
+        throws IOException, InterruptedException {
+        try {
+        id = value.getInt(1);
+        name = value.getString(2);
+        } catch (HAWQException hawqE) {
+        throw new IOException(hawqE.getMessage());
+        }
+        context.write(new IntWritable(id), new Text(name));
+       }
+    }
+    private static int printUsage() {
+       System.out.println("HAWQInputFormatDemoDriver "
+           + "<database_url> <table_name> <output_path> "
+           + "[username] [password]");
+       ToolRunner.printGenericCommandUsage(System.out);
+       return 2;
+    }
+ 
+    public int run(String[] args) throws Exception {
+       if (args.length < 3) {
+        return printUsage();
+       }
+       Job job = Job.getInstance(getConf());
+       job.setJobName("hawq-inputformat-demo");
+       job.setJarByClass(HAWQInputFormatDemoDriver.class);
+       job.setMapperClass(DemoMapper.class);
+       job.setMapOutputValueClass(Text.class);
+       job.setOutputValueClass(Text.class);
+       String db_url = args[0];
+       String table_name = args[1];
+       String output_path = args[2];
+       String user_name = null;
+       if (args.length > 3) {
+         user_name = args[3];
+       }
+       String password = null;
+       if (args.length > 4) {
+         password = args[4];
+       }
+       job.setInputFormatClass(HAWQInputFormat.class);
+       HAWQInputFormat.setInput(job.getConfiguration(), db_url,
+       user_name, password, table_name);
+       FileOutputFormat.setOutputPath(job, new
+       Path(output_path));
+       job.setNumReduceTasks(0);
+       int res = job.waitForCompletion(true) ? 0 : 1;
+       return res;
+    }
+    
+    public static void main(String[] args) throws Exception {
+       int res = ToolRunner.run(new Configuration(),
+         new HAWQInputFormatDemoDriver(), args);
+       System.exit(res);
+    }
+}
+```
+
+**To compile and run the example:**
+
+1.  Create a work directory:
+
+    ``` shell
+    $ mkdir mrwork
+    $ cd mrwork
+    ```
+ 
+2.  Copy and paste the Java code above into a `.java` file.
+
+    ``` shell
+    $ mkdir -p com/mycompany/app
+    $ cd com/mycompany/app
+    $ vi HAWQInputFormatDemoDriver.java
+    ```
+
+3.  Note the following dependencies required for compilation:
+    1.  `HAWQInputFormat` jars (located in the `$GPHOME/lib/postgresql/hawq-mr-io` directory):
+        -   `hawq-mapreduce-common.jar`
+        -   `hawq-mapreduce-ao.jar`
+        -   `hawq-mapreduce-parquet.jar`
+        -   `hawq-mapreduce-tool.jar`
+
+    2.  Required 3rd party jars (located in the `$GPHOME/lib/postgresql/hawq-mr-io/lib` directory):
+        -   `parquet-common-1.1.0.jar`
+        -   `parquet-format-1.1.0.jar`
+        -   `parquet-hadoop-1.1.0.jar`
+        -   `postgresql-n.n-n-jdbc4.jar`
+        -   `snakeyaml-n.n.jar`
+
+    3.  Hadoop MapReduce-related jars (located in the install directory of your Hadoop distribution).
+
+4.  Compile the Java program.  You may choose to use a different compilation command:
+
+    ``` shell
+    javac -classpath /usr/hdp/2.4.2.0-258/hadoop-mapreduce/*:/usr/local/hawq/lib/postgresql/hawq-mr-io/*:/usr/local/hawq/lib/postgresql/hawq-mr-io/lib/*:/usr/hdp/current/hadoop-client/* HAWQInputFormatDemoDriver.java
+    ```
+   
+5.  Build the JAR file.
+
+    ``` shell
+    $ cd ../../..
+    $ jar cf my-app.jar com
+    $ cp my-app.jar /tmp
+    ```
+    
+6.  Check that you have installed HAWQ and HDFS and your HAWQ cluster is running.
+
+7.  Create sample table:
+    1.  Log in to HAWQ:
+
+        ``` shell
+        $ psql -d postgres
+        ```
+
+    2.  Create the table:
+
+        ``` sql
+        CREATE TABLE employees (
+        id INTEGER NOT NULL,
+        name TEXT NOT NULL);
+        ```
+
+        Or a Parquet table:
+
+        ``` sql
+        CREATE TABLE employees ( id INTEGER NOT NULL, name TEXT NOT NULL) WITH (APPENDONLY=true, ORIENTATION=parquet);
+        ```
+
+    3.  Insert one tuple:
+
+        ``` sql
+        INSERT INTO employees VALUES (1, 'Paul');
+        \q
+        ```
+8.  Ensure the system `pg_hba.conf` configuration file is set up to allow `gpadmin` access to the `postgres` database.
+
+9.  Use the following shell script snippet to run the MapReduce job:
+
+    ``` shell
+    #!/bin/bash
+    
+    # set up environment variables
+    HAWQMRLIB=/usr/local/hawq/lib/postgresql/hawq-mr-io
+    export HADOOP_CLASSPATH=$HAWQMRLIB/hawq-mapreduce-ao.jar:$HAWQMRLIB/hawq-mapreduce-common.jar:$HAWQMRLIB/hawq-mapreduce-tool.jar:$HAWQMRLIB/hawq-mapreduce-parquet.jar:$HAWQMRLIB/lib/postgresql-9.2-1003-jdbc4.jar:$HAWQMRLIB/lib/snakeyaml-1.12.jar:$HAWQMRLIB/lib/parquet-hadoop-1.1.0.jar:$HAWQMRLIB/lib/parquet-common-1.1.0.jar:$HAWQMRLIB/lib/parquet-format-1.0.0.jar
+    export LIBJARS=$HAWQMRLIB/hawq-mapreduce-ao.jar,$HAWQMRLIB/hawq-mapreduce-common.jar,$HAWQMRLIB/hawq-mapreduce-tool.jar,$HAWQMRLIB/lib/postgresql-9.2-1003-jdbc4.jar,$HAWQMRLIB/lib/snakeyaml-1.12.jar,$HAWQMRLIB/hawq-mapreduce-parquet.jar,$HAWQMRLIB/lib/parquet-hadoop-1.1.0.jar,$HAWQMRLIB/lib/parquet-common-1.1.0.jar,$HAWQMRLIB/lib/parquet-format-1.0.0.jar
+    
+    # usage:  hadoop jar JARFILE CLASSNAME -libjars JARS <database_url> <table_name> <output_path_on_HDFS>
+    #   - writing output to HDFS, so run as hdfs user
+    #   - if not using the default postgres port, replace 5432 with port number for your HAWQ cluster
+    HADOOP_USER_NAME=hdfs hadoop jar /tmp/my-app.jar com.mycompany.app.HAWQInputFormatDemoDriver -libjars $LIBJARS localhost:5432/postgres employees /tmp/employees
+    ```
+    
+    The MapReduce job output is written to the `/tmp/employees` directory on the HDFS file system.
+
+10.  Use the following commands to check the result of the MapReduce job:
+
+    ``` shell
+    $ sudo -u hdfs hdfs dfs -ls /tmp/employees
+    $ sudo -u hdfs hdfs dfs -cat /tmp/employees/*
+    ```
+
+    The output will appear as follows:
+
+    ``` pre
+    1 Paul
+    ```
+        
+11.  If you choose to run the program again, delete the output file and directory:
+    
+    ``` shell
+    $ sudo -u hdfs hdfs dfs -rm /tmp/employees/*
+    $ sudo -u hdfs hdfs dfs -rmdir /tmp/employees
+    ```
+
+## <a id="accessinghawqdata"></a>Accessing HAWQ Data
+
+You can access HAWQ data using the `HAWQInputFormat.setInput()` interface.  You will use a different API signature depending on whether HAWQ is running or not.
+
+-   When HAWQ is running, use `HAWQInputFormat.setInput(Configuration conf, String db_url, String username, String password, String tableName)`.
+-   When HAWQ is not running, first extract the table metadata to a file with the Metadata Export Tool and then use `HAWQInputFormat.setInput(Configuration conf, String pathStr)`.
+
+### <a id="hawqinputformatsetinput"></a>HAWQ is Running
+
+``` java
+  /**
+    * Initializes the map-part of the job with the appropriate input settings
+    * through connecting to the database.
+    *
+    * @param conf
+    * The map-reduce job configuration
+    * @param db_url
+    * The database URL to connect to
+    * @param username
+    * The username for setting up a connection to the database
+    * @param password
+    * The password for setting up a connection to the database
+    * @param tableName
+    * The name of the table to access
+    * @throws Exception
+    */
+public static void setInput(Configuration conf, String db_url,
+    String username, String password, String tableName)
+throws Exception;
+```
+
+### <a id="metadataexporttool"></a>HAWQ is not Running
+
+Use the metadata export tool, `hawq extract`, to export the metadata of the target table into a local YAML file:
+
+``` shell
+$ hawq extract [-h hostname] [-p port] [-U username] [-d database] [-o output_file] [-W] <tablename>
+```
+
+Using the extracted metadata, access HAWQ data through the following interface.  Pass the complete path to the `.yaml` file in the `pathStr` argument.
+
+``` java
+ /**
+   * Initializes the map-part of the job with the appropriate input settings by reading a metadata file stored in the local filesystem.
+   *
+   * To generate the metadata file, run hawq extract first.
+   *
+   * @param conf
+   * The map-reduce job configuration
+   * @param pathStr
+   * The metadata file path in local filesystem. e.g.
+   * /home/gpadmin/metadata/postgres_test
+   * @throws Exception
+   */
+public static void setInput(Configuration conf, String pathStr)
+   throws Exception;
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/Transactions.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/Transactions.html.md.erb b/markdown/datamgmt/Transactions.html.md.erb
new file mode 100644
index 0000000..dfc9a5e
--- /dev/null
+++ b/markdown/datamgmt/Transactions.html.md.erb
@@ -0,0 +1,54 @@
+---
+title: Working with Transactions
+---
+
+This topic describes transaction support in HAWQ.
+
+Transactions allow you to bundle multiple SQL statements in one all-or-nothing operation.
+
+The following are the HAWQ SQL transaction commands; a combined example appears after the list:
+
+-   `BEGIN` or `START TRANSACTION` starts a transaction block.
+-   `END` or `COMMIT` commits the results of a transaction.
+-   `ROLLBACK` abandons a transaction without making any changes.
+-   `SAVEPOINT` marks a place in a transaction and enables partial rollback. You can roll back commands executed after a savepoint while maintaining commands executed before the savepoint.
+-   `ROLLBACK TO SAVEPOINT` rolls back a transaction to a savepoint.
+-   `RELEASE SAVEPOINT` destroys a savepoint within a transaction.
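+
+A combined sketch of these commands (the table is hypothetical):
+
+``` sql
+BEGIN;
+INSERT INTO orders VALUES (100, 'pending');
+SAVEPOINT first_row;
+INSERT INTO orders VALUES (101, 'pending');
+-- Undo only the second insert, then commit the first
+ROLLBACK TO SAVEPOINT first_row;
+COMMIT;
+```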
+
+## <a id="topic8"></a>Transaction Isolation Levels
+
+HAWQ accepts the standard SQL transaction levels as follows:
+
+-   *read uncommitted* and *read committed* behave like the standard *read committed*
+-   *serializable* and *repeatable read* behave like the standard *serializable*
+
+The following information describes the behavior of the HAWQ transaction levels:
+
+-   **read committed/read uncommitted** — Provides fast, simple, partial transaction isolation. With read committed and read uncommitted transaction isolation, `SELECT` transactions operate on a snapshot of the database taken when the query started.
+
+A `SELECT` query:
+
+-   Sees data committed before the query starts.
+-   Sees updates executed within the transaction.
+-   Does not see uncommitted data outside the transaction.
+-   Can possibly see changes that concurrent transactions made if the concurrent transaction is committed after the initial read in its own transaction.
+
+Successive `SELECT` queries in the same transaction can see different data if other concurrent transactions commit changes before the queries start.
+
+Read committed or read uncommitted transaction isolation may be inadequate for applications that perform complex queries and require a consistent view of the database.
+
+-   **serializable/repeatable read** — Provides strict transaction isolation in which transactions execute as if they run one after another rather than concurrently. Applications on the serializable or repeatable read level must be designed to retry transactions in case of serialization failures.
+
+A `SELECT` query:
+
+-   Sees a snapshot of the data as of the start of the transaction (not as of the start of the current query within the transaction).
+-   Sees only data committed before the query starts.
+-   Sees updates executed within the transaction.
+-   Does not see uncommitted data outside the transaction.
+-   Does not see changes that concurrent transactions made.
+
+    Successive `SELECT` commands within a single transaction always see the same data.
+
+The default transaction isolation level in HAWQ is *read committed*. To change the isolation level for a transaction, declare the isolation level when you `BEGIN` the transaction or use the `SET TRANSACTION` command after the transaction starts.
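+
+For example, either form below runs a transaction at the serializable level (the query is a placeholder):
+
+``` sql
+BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
+SELECT COUNT(*) FROM mytable;
+COMMIT;
+
+BEGIN;
+SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
+SELECT COUNT(*) FROM mytable;
+COMMIT;
+```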
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/about_statistics.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/about_statistics.html.md.erb b/markdown/datamgmt/about_statistics.html.md.erb
new file mode 100644
index 0000000..5e2184a
--- /dev/null
+++ b/markdown/datamgmt/about_statistics.html.md.erb
@@ -0,0 +1,209 @@
+---
+title: About Database Statistics
+---
+
+## <a id="overview"></a>Overview
+
+Statistics are metadata that describe the data stored in the database. The query optimizer needs up-to-date statistics to choose the best execution plan for a query. For example, if a query joins two tables and one of them must be broadcast to all segments, the optimizer can choose the smaller of the two tables to minimize network traffic.
+
+The statistics used by the optimizer are calculated and saved in the system catalog by the `ANALYZE` command. There are three ways to initiate an analyze operation:
+
+-   You can run the `ANALYZE` command directly.
+-   You can run the `analyzedb` management utility outside of the database, at the command line.
+-   An automatic analyze operation can be triggered when DML operations are performed on tables that have no statistics or when a DML operation modifies a number of rows greater than a specified threshold.
+
+These methods are described in the following sections.
+
+Calculating statistics consumes time and resources, so HAWQ produces estimates by calculating statistics on samples of large tables. In most cases, the default settings provide the information needed to generate correct execution plans for queries. If the statistics are not producing optimal query execution plans, the administrator can tune configuration parameters to produce more accurate statistics by increasing the sample size or the granularity of statistics saved in the system catalog. Producing more accurate statistics has CPU and storage costs and may not produce better plans, so it is important to view explain plans and test query performance to ensure that the additional statistics-related costs result in better query performance.
+
+## <a id="topic_oq3_qxj_3s"></a>System Statistics
+
+### <a id="tablesize"></a>Table Size
+
+The query planner seeks to minimize the disk I/O and network traffic required to execute a query, using estimates of the number of rows that must be processed and the number of disk pages the query must access. The data from which these estimates are derived are the `pg_class` system table columns `reltuples` and `relpages`, which contain the number of rows and pages at the time a `VACUUM` or `ANALYZE` command was last run. As rows are added, the numbers become less accurate. However, an accurate count of disk pages is always available from the operating system, so as long as the ratio of `reltuples` to `relpages` does not change significantly, the optimizer can produce an estimate of the number of rows that is sufficiently accurate to choose the correct query execution plan.
+
+In append-optimized tables, the number of tuples is kept up-to-date in the system catalogs, so the `reltuples` statistic is not an estimate. Non-visible tuples in the table are subtracted from the total. The `relpages` value is estimated from the append-optimized block sizes.
+
+When the `reltuples` column differs significantly from the row count returned by `SELECT COUNT(*)`, an analyze should be performed to update the statistics.
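+
+For example, to compare the stored estimates with an actual row count (the table name is hypothetical):
+
+``` sql
+SELECT relname, reltuples, relpages
+    FROM pg_class
+    WHERE relname = 'sales';
+
+SELECT COUNT(*) FROM sales;
+```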
+
+### <a id="views"></a>The pg\_statistic System Table and pg\_stats View
+
+The `pg_statistic` system table holds the results of the last `ANALYZE` operation on each database table. There is a row for each column of every table. It has the following columns:
+
+starelid  
+The object ID of the table or index the column belongs to.
+
+staatnum  
+The number of the described column, beginning with 1.
+
+stanullfrac  
+The fraction of the column's entries that are null.
+
+stawidth  
+The average stored width, in bytes, of non-null entries.
+
+stadistinct  
+The number of distinct nonnull data values in the column.
+
+stakind*N*  
+A code number indicating the kind of statistics stored in the *N*th slot of the `pg_statistic` row.
+
+staop*N*  
+An operator used to derive the statistics stored in the *N*th slot.
+
+stanumbers*N*  
+Numerical statistics of the appropriate kind for the *N*th slot, or NULL if the slot kind does not involve numerical values.
+
+stavalues*N*  
+Column data values of the appropriate kind for the *N*th slot, or NULL if the slot kind does not store any data values.
+
+The statistics collected for a column vary for different data types, so the `pg_statistic` table stores statistics that are appropriate for the data type in four *slots*, consisting of four columns per slot. For example, the first slot, which normally contains the most common values for a column, consists of the columns `stakind1`, `staop1`, `stanumbers1`, and `stavalues1`. Also see [pg\_statistic](../reference/catalog/pg_statistic.html#topic1).
+
+The `stakindN` columns each contain a numeric code to describe the type of statistics stored in their slot. The `stakind` code numbers from 1 to 99 are reserved for core PostgreSQL data types. HAWQ uses code numbers 1, 2, and 3. A value of 0 means the slot is unused. The following table describes the kinds of statistics stored for the three codes.
+
+<a id="topic_oq3_qxj_3s__table_upf_1yc_nt"></a>
+
+<table>
+<caption><span class="tablecap">Table 1. Contents of pg_statistic &quot;slots&quot;</span></caption>
+<colgroup>
+<col width="50%" />
+<col width="50%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>stakind Code</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>1</td>
+<td><em>Most Common Values (MCV) Slot</em>
+<ul>
+<li><code class="ph codeph">staop</code> contains the object ID of the &quot;=&quot; operator, used to decide whether values are the same or not.</li>
+<li><code class="ph codeph">stavalues</code> contains an array of the <em>K</em> most common non-null values appearing in the column.</li>
+<li><code class="ph codeph">stanumbers</code> contains the frequencies (fractions of total row count) of the values in the <code class="ph codeph">stavalues</code> array.</li>
+</ul>
+The values are ordered in decreasing frequency. Since the arrays are variable-size, <em>K</em> can be chosen by the statistics collector. Values must occur more than once to be added to the <code class="ph codeph">stavalues</code> array; a unique column has no MCV slot.</td>
+</tr>
+<tr class="even">
+<td>2</td>
+<td><em>Histogram Slot</em> – describes the distribution of scalar data.
+<ul>
+<li><code class="ph codeph">staop</code> is the object ID of the &quot;&lt;&quot; operator, which describes the sort ordering.</li>
+<li><code class="ph codeph">stavalues</code> contains <em>M</em> (where <em>M</em>&gt;=2) non-null values that divide the non-null column data values into <em>M</em>-1 bins of approximately equal population. The first <code class="ph codeph">stavalues</code> item is the minimum value and the last is the maximum value.</li>
+<li><code class="ph codeph">stanumbers</code> is not used and should be null.</li>
+</ul>
+<p>If a Most Common Values slot is also provided, then the histogram describes the data distribution after removing the values listed in the MCV array. (It is a <em>compressed histogram</em> in the technical parlance). This allows a more accurate representation of the distribution of a column with some very common values. In a column with only a few distinct values, it is possible that the MCV list describes the entire data population; in this case the histogram reduces to empty and should be omitted.</p></td>
+</tr>
+<tr class="odd">
+<td>3</td>
+<td><em>Correlation Slot</em> – describes the correlation between the physical order of table tuples and the ordering of data values of this column.
+<ul>
+<li><code class="ph codeph">staop</code> is the object ID of the &quot;&lt;&quot; operator. As with the histogram, more than one entry could theoretically appear.</li>
+<li><code class="ph codeph">stavalues</code> is not used and should be NULL.</li>
+<li><code class="ph codeph">stanumbers</code> contains a single entry, the correlation coefficient between the sequence of data values and the sequence of their actual tuple positions. The coefficient ranges from +1 to -1.</li>
+</ul></td>
+</tr>
+</tbody>
+</table>
+
+The `pg_stats` view presents the contents of `pg_statistic` in a friendlier format. For more information, see [pg\_stats](../reference/catalog/pg_stats.html#topic1).
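+
+For example, to inspect the collected statistics for one table's columns (the table name is hypothetical):
+
+``` sql
+SELECT attname, null_frac, n_distinct, most_common_vals
+    FROM pg_stats
+    WHERE schemaname = 'public' AND tablename = 'sales';
+```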
+
+Newly created tables and indexes have no statistics.
+
+### <a id="topic_oq3_qxj_3s__section_wsy_1rv_mt"></a>Sampling
+
+When calculating statistics for large tables, HAWQ creates a smaller table by sampling the base table. If the table is partitioned, samples are taken from all partitions.
+
+If a sample table is created, the number of rows in the sample is calculated to provide a maximum acceptable relative error. The amount of acceptable error is specified with the `gp_analyze_relative_error` system configuration parameter, which is set to .25 (25%) by default. This is usually sufficiently accurate to generate correct query plans. If `ANALYZE` is not producing good estimates for a table column, you can increase the sample size by setting the `gp_analyze_relative_error` configuration parameter to a lower value. Beware that setting this parameter to a low value can lead to a very large sample size and dramatically increase analyze time.
+
+### <a id="topic_oq3_qxj_3s__section_u5p_brv_mt"></a>Updating Statistics
+
+Running `ANALYZE` with no arguments updates statistics for all tables in the database. This could take a very long time, so it is better to analyze tables selectively after data has changed. You can also analyze a subset of the columns in a table, for example columns used in joins, `WHERE` clauses, `SORT` clauses, `GROUP BY` clauses, or `HAVING` clauses.
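+
+For example, to analyze one table, or only the join and filter columns of that table (names are hypothetical):
+
+``` sql
+ANALYZE sales;
+ANALYZE sales (trans_date, region_id);
+```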
+
+See the SQL Command Reference for details of running the `ANALYZE` command.
+
+Refer to the Management Utility Reference for details of running the `analyzedb` command.
+
+### <a id="topic_oq3_qxj_3s__section_cv2_crv_mt"></a>Analyzing Partitioned and Append-Optimized Tables
+
+When the `ANALYZE` command is run on a partitioned table, it analyzes each leaf-level subpartition, one at a time. To avoid analyzing partitions that have not changed, you can run `ANALYZE` on just the new or changed partitions.
+
+The `analyzedb` command-line utility skips unchanged partitions automatically. It also runs concurrent sessions so it can analyze several partitions concurrently. It runs five sessions by default, but the number of sessions can be set from 1 to 10 with the `-p` command-line option. Each time `analyzedb` runs, it saves state information for append-optimized tables and partitions in the `db_analyze` directory in the master data directory. The next time it runs, `analyzedb` compares the current state of each table with the saved state and skips analyzing a table or partition if it is unchanged. Heap tables are always analyzed.
+
+If the Pivotal Query Optimizer is enabled, you also need to run `ANALYZE ROOTPARTITION` to refresh the root partition statistics. The Pivotal Query Optimizer requires statistics at the root level for partitioned tables. The legacy optimizer does not use these statistics. Enable the Pivotal Query Optimizer by setting both the `optimizer` and `optimizer_analyze_root_partition` system configuration parameters to on. The root level statistics are then updated when you run `ANALYZE` or `ANALYZE ROOTPARTITION`. The time to run `ANALYZE ROOTPARTITION` is similar to the time to analyze a single partition, since `ANALYZE ROOTPARTITION` does not collect statistics on the leaf partitions; it only samples the data. The `analyzedb` utility updates root partition statistics by default.
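+
+For example, to refresh root-level statistics for a partitioned table named `sales` (hypothetical):
+
+``` sql
+ANALYZE ROOTPARTITION sales;
+```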
+
+## <a id="topic_gyb_qrd_2t"></a>Configuring Statistics
+
+There are several options for configuring HAWQ statistics collection.
+
+### <a id="statstarget"></a>Statistics Target
+
+The statistics target is the size of the `most_common_vals`, `most_common_freqs`, and `histogram_bounds` arrays for an individual column. By default, the target is 25. The default target can be changed by setting a server configuration parameter and the target can be set for any column using the `ALTER TABLE` command. Larger values increase the time needed to do `ANALYZE`, but may improve the quality of the legacy query optimizer (planner) estimates.
+
+Set the system default statistics target to a different value by setting the `default_statistics_target` server configuration parameter. The default value is usually sufficient, and you should only raise or lower it if your tests demonstrate that query plans improve with the new target. 
+
+You will perform different procedures to set server configuration parameters for your whole HAWQ cluster depending upon whether you manage your cluster from the command line or use Ambari. If you use Ambari to manage your HAWQ cluster, you must ensure that you update server configuration parameters via the Ambari Web UI only. If you manage your HAWQ cluster from the command line, you will use the `hawq config` command line utility to set server configuration parameters.
+
+The following examples show how to raise the default statistics target from 25 to 50.
+
+If you use Ambari to manage your HAWQ cluster:
+
+1. Set the `default_statistics_target` configuration property to `50` via the HAWQ service **Configs > Advanced > Custom hawq-site** drop down.
+2. Select **Service Actions > Restart All** to load the updated configuration.
+
+If you manage your HAWQ cluster from the command line:
+
+1.  Log in to the HAWQ master host as a HAWQ administrator and source the file `/usr/local/hawq/greenplum_path.sh`.
+
+    ``` shell
+    $ source /usr/local/hawq/greenplum_path.sh
+    ```
+
+1. Use the `hawq config` utility to set `default_statistics_target`:
+
+    ``` shell
+    $ hawq config -c default_statistics_target -v 50
+    ```
+2. Reload the HAWQ configuration:
+
+    ``` shell
+    $ hawq stop cluster -u
+    ```
+
+The statistics target for individual columns can be set with the `ALTER TABLE` command. For example, some queries can be improved by increasing the target for certain columns, especially columns that have irregular distributions. You can set the target to zero for columns that never contribute to query optimization. When the target is 0, `ANALYZE` ignores the column. For example, the following `ALTER TABLE` command sets the statistics target for the `notes` column in the `emp` table to zero:
+
+``` sql
+ALTER TABLE emp ALTER COLUMN notes SET STATISTICS 0;
+```
+
+The statistics target can be set in the range 0 to 1000, or set to -1 to revert to the system default statistics target.
+
+Setting the statistics target on a parent partition table affects the child partitions. If you set statistics to 0 on some columns on the parent table, the statistics for the same columns are set to 0 for all children partitions. However, if you later add or exchange another child partition, the new child partition will use either the default statistics target or, in the case of an exchange, the previous statistics target. Therefore, if you add or exchange child partitions, you should set the statistics targets on the new child table.
+
+### <a id="topic_gyb_qrd_2t__section_j3p_drv_mt"></a>Automatic Statistics Collection
+
+HAWQ can be set to automatically run `ANALYZE` on a table that either has no statistics or has changed significantly when certain operations are performed on the table. For partitioned tables, automatic statistics collection is only triggered when the operation is run directly on a leaf table, and then only the leaf table is analyzed.
+
+Automatic statistics collection has three modes:
+
+-   `none` disables automatic statistics collection.
+-   `on_no_stats` triggers an analyze operation for a table with no existing statistics when any of the commands `CREATE TABLE AS SELECT`, `INSERT`, or `COPY` are executed on the table.
+-   `on_change` triggers an analyze operation when any of the commands `CREATE TABLE AS SELECT`, `INSERT`, or `COPY` are executed on the table and the number of rows affected exceeds the threshold defined by the `gp_autostats_on_change_threshold` configuration parameter.
+
+The automatic statistics collection mode is set separately for commands that occur within a procedural language function and commands that execute outside of a function:
+
+-   The `gp_autostats_mode` configuration parameter controls automatic statistics collection behavior outside of functions and is set to `on_no_stats` by default.
+
+With the `on_change` mode, `ANALYZE` is triggered only if the number of rows affected exceeds the threshold defined by the `gp_autostats_on_change_threshold` configuration parameter. The default value for this parameter is a very high value, 2147483647, which effectively disables automatic statistics collection; you must set the threshold to a lower number to enable it. The `on_change` mode could trigger large, unexpected analyze operations that could disrupt the system, so it is not recommended to set it globally. It could be useful in a session, for example to automatically analyze a table following a load.
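+
+For example, the following session-level sketch enables `on_change` collection before a large load (the threshold value is illustrative):
+
+``` sql
+SET gp_autostats_mode = 'on_change';
+SET gp_autostats_on_change_threshold = 100000;
+-- subsequent INSERT ... SELECT or COPY statements in this session that
+-- affect more than 100000 rows will trigger an automatic ANALYZE
+```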
+
+To disable automatic statistics collection outside of functions, set the `gp_autostats_mode` parameter to `none`. For a command-line-managed HAWQ cluster:
+
+``` shell
+$ hawq config -c gp_autostats_mode -v none
+```
+
+For an Ambari-managed cluster, set `gp_autostats_mode` via the Ambari Web UI.
+
+Set the `log_autostats` system configuration parameter to `on` if you want to log automatic statistics collection operations.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/dml.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/dml.html.md.erb b/markdown/datamgmt/dml.html.md.erb
new file mode 100644
index 0000000..681883a
--- /dev/null
+++ b/markdown/datamgmt/dml.html.md.erb
@@ -0,0 +1,35 @@
+---
+title: Managing Data with HAWQ
+---
+
+This chapter provides information about manipulating data and concurrent access in HAWQ.
+
+-   **[Basic Data Operations](../datamgmt/BasicDataOperations.html)**
+
+    This topic describes basic data operations that you perform in HAWQ.
+
+-   **[About Database Statistics](../datamgmt/about_statistics.html)**
+
+    An overview of statistics gathered by the `ANALYZE` command in HAWQ.
+
+-   **[Concurrency Control](../datamgmt/ConcurrencyControl.html)**
+
+    This topic discusses the mechanisms used in HAWQ to provide concurrency control.
+
+-   **[Working with Transactions](../datamgmt/Transactions.html)**
+
+    This topic describes transaction support in HAWQ.
+
+-   **[Loading and Unloading Data](../datamgmt/load/g-loading-and-unloading-data.html)**
+
+    The topics in this section describe methods for loading and writing data into and out of HAWQ, and how to format data files.
+
+-   **[Using PXF with Unmanaged Data](../pxf/HawqExtensionFrameworkPXF.html)**
+
+    HAWQ Extension Framework (PXF) is an extensible framework that allows HAWQ to query external system data.�
+
+-   **[HAWQ InputFormat for MapReduce](../datamgmt/HAWQInputFormatforMapReduce.html)**
+
+    MapReduce is a programming model developed by Google for processing and generating large data sets on an array of commodity servers. You can use the HAWQ InputFormat option to enable MapReduce jobs to access HAWQ data stored in HDFS.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/datamgmt/load/client-loadtools.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/datamgmt/load/client-loadtools.html.md.erb b/markdown/datamgmt/load/client-loadtools.html.md.erb
new file mode 100644
index 0000000..fe291d0
--- /dev/null
+++ b/markdown/datamgmt/load/client-loadtools.html.md.erb
@@ -0,0 +1,104 @@
+---
+title: Client-Based HAWQ Load Tools
+---
+HAWQ supports data loading from Red Hat Enterprise Linux 5, 6, and 7 and Windows XP client systems. HAWQ Load Tools include both a loader program and a parallel file distribution program.
+
+This topic presents the instructions to install the HAWQ Load Tools on your client machine. It also includes the information necessary to configure HAWQ databases to accept remote client connections.
+
+## <a id="installloadrunrhel"></a>RHEL Load Tools
+
+The RHEL Load Tools are provided in a HAWQ distribution. 
+
+
+### <a id="installloadrunux"></a>Installing the RHEL Loader
+
+1. Download a HAWQ installer package or build HAWQ from source.
+ 
+2. Refer to the HAWQ command line install instructions to set up your package repositories and install the HAWQ binary.
+
+3. Install the `libevent` and `libyaml` packages. These libraries are required by the HAWQ file server. You must have superuser privileges on the system.
+
+    ``` shell
+    $ sudo yum install -y libevent libyaml
+    ```
+
+### <a id="installrhelloadabout"></a>About the RHEL Loader Installation
+
+The files/directories of interest in a HAWQ RHEL Load Tools installation include:
+
+`bin/` — data loading command-line tools ([gpfdist](../../reference/cli/admin_utilities/gpfdist.html) and [hawq load](../../reference/cli/admin_utilities/hawqload.html))   
+`greenplum_path.sh` — environment set up file
+
+### <a id="installloadrhelcfgenv"></a>Configuring the RHEL Load Environment
+
+A `greenplum_path.sh` file is located in the HAWQ base install directory following installation. Source `greenplum_path.sh` before running the HAWQ RHEL Load Tools to set up your HAWQ environment:
+
+``` shell
+$ . /usr/local/hawq/greenplum_path.sh
+```
+
+Continue to [Using the HAWQ File Server (gpfdist)](g-using-the-hawq-file-server--gpfdist-.html) for specific information about using the HAWQ load tools.
+
+## <a id="installloadrunwin"></a>Windows Load Tools
+
+### <a id="installpythonwin"></a>Installing Python 2.5
+The HAWQ Load Tools for Windows require that the 32-bit version of Python 2.5 be installed on your system.
+
+**Note**: The 64-bit version of Python is **not** compatible with the HAWQ Load Tools for Windows.
+
+1. Download the [Python 2.5 installer for Windows](https://www.python.org/downloads/).  Make note of the directory to which it was downloaded.
+
+2. Double-click the downloaded Python 2.5 installer package to launch the installer.
+3. Select **Install for all users** and click **Next**.
+4. The default Python install location is `C:\Pythonxx`. Click **Up** or **New** to choose another location. Click **Next**.
+5. Click **Next** to install the selected Python components.
+6. Click **Finish** to complete the Python installation.
+
+
+### <a id="installloadrunwin"></a>Running the Windows Installer
+
+1. Download the `greenplum-loaders-4.3.x.x-build-n-WinXP-x86_32.msi` installer package from [Pivotal Network](https://network.pivotal.io/products/pivotal-gpdb). Make note of the directory to which it was downloaded.
+ 
+2. Double-click the `greenplum-loaders-4.3.x.x-build-n-WinXP-x86_32.msi` file to launch the installer.
+3. Click **Next** on the **Welcome** screen.
+4. Click **I Agree** on the **License Agreement** screen.
+5. The default install location for HAWQ Loader Tools for Windows is `C:\"Program Files (x86)"\Greenplum\greenplum-loaders-4.3.8.1-build-1`. Click **Browse** to choose another location.
+6. Click **Next**.
+7. Click **Install** to begin the installation.
+8. Click **Finish** to exit the installer.
+
+    
+### <a id="installloadabout"></a>About the Windows Loader Installation
+Your HAWQ Windows Load Tools installation includes the following files and directories:
+
+`bin/` — data loading command-line tools ([gpfdist](http://gpdb.docs.pivotal.io/4380/client_tool_guides/load/unix/gpfdist.html) and [gpload](http://gpdb.docs.pivotal.io/4380/client_tool_guides/load/unix/gpload.html))  
+`lib/` — data loading library files  
+`greenplum_loaders_path.bat` — environment set up file
+
+
+### <a id="installloadcfgenv"></a>Configuring the Windows Load Environment
+
+A `greenplum_loaders_path.bat` file is provided in your load tools base install directory following installation. This file sets the following environment variables:
+
+- `GPHOME_LOADERS` - base directory of loader installation
+- `PATH` - adds the loader and component program directories
+- `PYTHONPATH` - adds component library directories
+
+Execute `greenplum_loaders_path.bat` to set up your HAWQ environment before running the HAWQ Windows Load Tools.
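+
+For example, assuming the default install location shown above (adjust the path for your own installation), you might run the following from a Windows command prompt before invoking `gpfdist` or `gpload`:
+
+``` shell
+C:\> "C:\Program Files (x86)\Greenplum\greenplum-loaders-4.3.8.1-build-1\greenplum_loaders_path.bat"
+```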
+ 
+
+## <a id="installloadenableclientconn"></a>Enabling Remote Client Connections
+The HAWQ master database must be configured to accept remote client connections.  Specifically, you need to identify the client hosts and database users that will be connecting to the HAWQ database.
+
+1. Ensure that the HAWQ database master `pg_hba.conf` file is correctly configured to allow connections from the desired users operating on the desired database from the desired hosts, using the authentication method you choose. For details, see [Configuring Client Access](../../clientaccess/client_auth.html#topic2).
+
+    Make sure the authentication method you choose is supported by the client tool you are using.
+    
+2. If you edited the `pg_hba.conf` file, reload the server configuration. If you have any active database connections, you must include the `-M fast` option in the `hawq stop` command:
+
+    ``` shell
+    $ hawq stop cluster -u [-M fast]
+    ```
+   
+
+3. Verify that the databases and roles you are using to connect exist, and that the roles have the correct privileges on the relevant database objects.
\ No newline at end of file



http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/plext/using_plr.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/plext/using_plr.html.md.erb b/markdown/plext/using_plr.html.md.erb
new file mode 100644
index 0000000..367a1d0
--- /dev/null
+++ b/markdown/plext/using_plr.html.md.erb
@@ -0,0 +1,229 @@
+---
+title: Using PL/R in HAWQ
+---
+
+PL/R is a procedural language. With the HAWQ PL/R extension, you can write database functions in the R programming language and use R packages that contain R functions and data sets.
+
+**Note**: To use PL/R in HAWQ, R must be installed on each node in your HAWQ cluster. Additionally, you must install the PL/R package on an existing HAWQ deployment or have specified PL/R as a build option when compiling HAWQ.
+
+## <a id="plrexamples"></a>PL/R Examples 
+
+This section contains simple PL/R examples.
+
+### <a id="example1"></a>Example 1: Using PL/R for Single Row Operators 
+
+This function generates an array of numbers with a normal distribution using the R function `rnorm()`.
+
+```sql
+CREATE OR REPLACE FUNCTION r_norm(n integer, mean float8, 
+  std_dev float8) RETURNS float8[] AS
+$$
+  x<-rnorm(n,mean,std_dev)
+  return(x)
+$$
+LANGUAGE 'plr';
+```
+
+The following `CREATE TABLE` command uses the `r_norm` function to populate the table. The `r_norm` function creates an array of 10 numbers.
+
+```sql
+CREATE TABLE test_norm_var
+  AS SELECT id, r_norm(10,0,1) AS x
+  FROM (SELECT generate_series(1,30::bigint) AS id) foo
+  DISTRIBUTED BY (id);
+```
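+
+You can then query the generated table as usual. For example, this sketch returns the first three generated arrays:
+
+```sql
+SELECT id, x FROM test_norm_var ORDER BY id LIMIT 3;
+```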
+
+### <a id="example2"></a>Example 2: Returning PL/R data.frames in Tabular Form 
+
+If your PL/R function returns an R `data.frame` as its output \(and you do not want to use arrays of arrays\), some work is required for HAWQ to treat the PL/R `data.frame` as a simple SQL table:
+
+Create a TYPE in HAWQ with the same dimensions as your R `data.frame`:
+
+```sql
+CREATE TYPE t1 AS ...
+```
+
+Use this TYPE when defining your PL/R function:
+
+```sql
+... RETURNS SETOF t1 AS ...
+```
+
+Sample SQL for this situation is provided in the next example.
+
+### <a id="example3"></a>Example 3: Process Employee Information Using PL/R 
+
+The SQL below defines a TYPE and a function to process employee information with `data.frame` using PL/R:
+
+```sql
+-- Create type to store employee information
+DROP TYPE IF EXISTS emp_type CASCADE;
+CREATE TYPE emp_type AS (name text, age int, salary numeric(10,2));
+
+-- Create function to process employee information and return data.frame
+DROP FUNCTION IF EXISTS get_emps();
+CREATE OR REPLACE FUNCTION get_emps() RETURNS SETOF emp_type AS '
+    names <- c("Joe","Jim","Jon")
+    ages <- c(41,25,35)
+    salaries <- c(250000,120000,50000)
+    df <- data.frame(name = names, age = ages, salary = salaries)
+
+    return(df)
+' LANGUAGE 'plr';
+
+-- Call the function
+SELECT * FROM get_emps();
+```
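+
+Because `get_emps()` returns `SETOF emp_type`, HAWQ presents each `data.frame` row as a table row. The output should resemble the following (column formatting may vary):
+
+```pre
+ name | age |  salary
+------+-----+-----------
+ Joe  |  41 | 250000.00
+ Jim  |  25 | 120000.00
+ Jon  |  35 |  50000.00
+(3 rows)
+```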
+
+
+## <a id="downloadinstallplrlibraries"></a>Downloading and Installing R Packages 
+
+R packages are modules that contain R functions and data sets. You can install R packages to extend R and PL/R functionality in HAWQ.
+
+**Note**: If you expand HAWQ and add segment hosts, you must install the R packages in the R installation of *each* of the new hosts.
+
+1. For an R package, identify all dependent R packages and each package web URL. The information can be found by selecting the given package from the following navigation page:
+
+	[http://cran.r-project.org/web/packages/available_packages_by_name.html](http://cran.r-project.org/web/packages/available_packages_by_name.html)
+
+	As an example, the page for the R package `arm` indicates that the package requires the following R libraries: `Matrix`, `lattice`, `lme4`, `R2WinBUGS`, `coda`, `abind`, `foreign`, and `MASS`.
+	
+	You can also try installing the package with the `R CMD INSTALL` command to determine the dependent packages.
+	
+	For the R installation included with the HAWQ PL/R extension, the required R packages are installed with the PL/R extension. However, a newer version of the `Matrix` package is required.
+	
+1. From the command line, use the `wget` utility to download the tar.gz files for the `arm` package to the HAWQ master host:
+
+	```shell
+	$ wget http://cran.r-project.org/src/contrib/Archive/arm/arm_1.5-03.tar.gz
+	$ wget http://cran.r-project.org/src/contrib/Archive/Matrix/Matrix_0.9996875-1.tar.gz
+	```
+
+1. Use the `hawq scp` utility and the `hawq_hosts` file to copy the tar.gz files to the same directory on all nodes of the HAWQ cluster. The `hawq_hosts` file contains a list of all of the HAWQ segment hosts. You might require root access to do this.
+
+	```shell
+	$ hawq scp -f hawq_hosts Matrix_0.9996875-1.tar.gz =:/home/gpadmin 
+	$ hawq scp -f hawq_hosts arm_1.5-03.tar.gz =:/home/gpadmin
+	```
+
+1. Use the `hawq ssh` utility in interactive mode to log in to each HAWQ segment host (`hawq ssh -f hawq_hosts`). Install the packages from the command prompt using the `R CMD INSTALL` command; this may require root access. For example, this command installs the `arm` package and its `Matrix` dependency:
+
+	```shell
+	$ R CMD INSTALL Matrix_0.9996875-1.tar.gz arm_1.5-03.tar.gz
+	```
+	**Note**: Some packages require compilation. Refer to the package documentation for possible build requirements.
+
+1. Ensure that the R package was installed in the `/usr/lib64/R/library` directory on all the segments (you can use `hawq ssh` to check all hosts at once). For example, this `hawq ssh` command lists the contents of the R library directory:
+
+	```shell
+	$ hawq ssh -f hawq_hosts "ls /usr/lib64/R/library"
+	```
+	
+1. Verify the R package can be loaded.
+
+	This function performs a simple test to determine if an R package can be loaded:
+	
+	```sql
+	CREATE OR REPLACE FUNCTION R_test_require(fname text)
+	RETURNS boolean AS
+	$BODY$
+    	return(require(fname,character.only=T))
+	$BODY$
+	LANGUAGE 'plr';
+	```
+
+	This SQL command calls the previous function to determine if the R package `arm` can be loaded:
+	
+	```sql
+	SELECT R_test_require('arm');
+	```
+
+## <a id="rlibrarydisplay"></a>Displaying R Library Information 
+
+You can use the R command line to display information about the installed libraries and functions on the HAWQ host. You can also add and remove libraries from the R installation. To start the R command line on the host, log in to the host as the `gpadmin` user and run `R`:
+
+``` shell
+$ R
+```
+
+This R function lists the available R packages from the R command line:
+
+```r
+> library()
+```
+
+Display the documentation for a particular R package:
+
+```r
+> library(help="package_name")
+> help(package="package_name")
+```
+
+Display the help file for an R function:
+
+```r
+> help("function_name")
+> ?function_name
+```
+
+To see which packages are installed, use the R command `installed.packages()`. This returns a matrix with a row for each installed package.
+
+```r
+> installed.packages()
+```
+
+Any package that does not appear in the installed packages matrix must be installed and loaded before its functions can be used.
+
+An R package can be installed with `install.packages()`:
+
+```r
+> install.packages("package_name") 
+> install.packages("mypkg", dependencies = TRUE, type="source")
+```
+
+Load a package from the R command line:
+
+```r
+> library("package_name") 
+```
+An R package can be removed with `remove.packages()`:
+
+```r
+> remove.packages("package_name")
+```
+
+You can use the R `-e` command-line option to run functions from the command line. For example, this command displays help on the R package named `MASS`:
+
+```shell
+$ R -e 'help("MASS")'
+```
+
+## <a id="plrreferences"></a>References 
+
+[http://www.r-project.org/](http://www.r-project.org/) - The R Project home page
+
+[https://github.com/pivotalsoftware/gp-r](https://github.com/pivotalsoftware/gp-r) - GitHub repository that contains information about using R.
+
+[https://github.com/pivotalsoftware/PivotalR](https://github.com/pivotalsoftware/PivotalR) - GitHub repository for PivotalR, a package that provides an R interface to operate on HAWQ tables and views that is similar to the R `data.frame`. PivotalR also supports using the machine learning package MADlib directly from R.
+
+R documentation is installed with the R package:
+
+```shell
+/usr/share/doc/R-N.N.N
+```
+
+where N.N.N corresponds to the version of R installed.
+
+### <a id="rfunctions"></a>R Functions and Arguments 
+
+See [http://www.joeconway.com/plr/doc/plr-funcs.html](http://www.joeconway.com/plr/doc/plr-funcs.html).
+
+### <a id="passdatavalues"></a>Passing Data Values in R 
+
+See [http://www.joeconway.com/plr/doc/plr-data.html](http://www.joeconway.com/plr/doc/plr-data.html).
+
+### <a id="aggregatefunctions"></a>Aggregate Functions in R 
+
+See [http://www.joeconway.com/plr/doc/plr-aggregate-funcs.html](http://www.joeconway.com/plr/doc/plr-aggregate-funcs.html).
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/pxf/ConfigurePXF.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/pxf/ConfigurePXF.html.md.erb b/markdown/pxf/ConfigurePXF.html.md.erb
new file mode 100644
index 0000000..fec6b27
--- /dev/null
+++ b/markdown/pxf/ConfigurePXF.html.md.erb
@@ -0,0 +1,69 @@
+---
+title: Configuring PXF
+---
+
+This topic describes how to configure the PXF service.
+
+**Note:** After you make any changes to a PXF configuration file (such as `pxf-profiles.xml` for adding custom profiles), propagate the changes to all nodes with PXF installed, and then restart the PXF service on all nodes.
+
+## <a id="settingupthejavaclasspath"></a>Setting up the Java Classpath
+
+The classpath for the PXF service is set during the plug-in installation process. Administrators should only modify it when adding new PXF connectors. The classpath is defined in two files:
+
+1.  `/etc/pxf/conf/pxf-private.classpath` - contains all the required resources to run the PXF service, including the pxf-hdfs, pxf-hbase, and pxf-hive plug-ins. This file must not be edited or removed.
+2.  `/etc/pxf/conf/pxf-public.classpath` - add plug-in JAR files and any dependent JAR files for custom plug-ins and custom profiles here. Define one classpath resource per line. Wildcard characters can be used in the name of the resource, but not in the full path. See [Adding and Updating Profiles](ReadWritePXF.html#addingandupdatingprofiles) for information on adding custom profiles.
+
+After changing the classpath files, the PXF service must be restarted.
+
+## <a id="settingupthejvmcommandlineoptionsforpxfservice"></a>Setting up the JVM Command Line Options for the PXF Service
+
+The PXF service JVM command line options can be added or modified for each PXF service instance in the `/var/pxf/pxf-service/bin/setenv.sh` file.
+
+Currently, the `JVM_OPTS` parameter is set with the following values for maximum Java heap size and thread stack size:
+
+``` shell
+JVM_OPTS="-Xmx512M -Xss256K"
+```
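+
+For example, to give the PXF service a larger heap, you might change the line as follows; the value here is illustrative and should be sized for your own workload:
+
+``` shell
+JVM_OPTS="-Xmx1024M -Xss256K"
+```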
+
+After adding or modifying the JVM command line options, the PXF service must be restarted.
+
+(Refer to [Addressing PXF Memory Issues](TroubleshootingPXF.html#pxf-memcfg) for a related discussion of the configuration options available to address memory issues in your PXF deployment.)
+
+## <a id="topic_i3f_hvm_ss"></a>Using PXF on a Secure HDFS Cluster
+
+You can use PXF on a secure HDFS cluster. Read, write, and analyze operations for PXF tables on HDFS files are enabled. No changes to preexisting PXF tables from a previous version are required.
+
+### <a id="requirements"></a>Requirements
+
+-   Both HDFS and YARN principals are created and are properly configured.
+-   HAWQ is correctly configured to work in secure mode.
+
+Please refer to [Troubleshooting PXF](TroubleshootingPXF.html) for common errors related to PXF security and their meaning.
+
+## <a id="credentialsforremoteservices"></a>Credentials for Remote Services
+
+The remote service credentials feature allows a PXF plug-in to access a remote service that requires credentials.
+
+### <a id="inhawq"></a>In HAWQ
+
+Two parameters for credentials are implemented in HAWQ:
+
+-   `pxf_remote_service_login` - a string containing the login information (for example, a user name) for the remote service.
+-   `pxf_remote_service_secret` - a string containing the secret information (for example, a password) for the remote service.
+
+Currently, the contents of the two parameters are stored in memory for the duration of the session, without any additional security. The contents are discarded when the session ends.
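+
+A minimal sketch of supplying these credentials for the current session, assuming the parameters are set like other HAWQ session-level configuration parameters (the values are placeholders):
+
+``` sql
+SET pxf_remote_service_login = 'remote_user';
+SET pxf_remote_service_secret = 'remote_password';
+```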
+
+**Important:** These parameters are temporary and could soon be deprecated, in favor of a complete solution for managing credentials for remote services in PXF.
+
+### <a id="inapxfplugin"></a>In a PXF Plug-in
+
+In a PXF plug-in, the contents of the two credentials parameters are available through the following `InputData` API functions:
+
+``` java
+String getLogin()
+String getSecret()
+```
+
+Both functions return `null` if the corresponding HAWQ parameter is set to an empty string or is not set at all.
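+
+For example, a plug-in that holds a reference to its `InputData` instance (named `inputData` here for illustration) could read the credentials as follows:
+
+``` java
+// Read the session credentials passed from HAWQ; either value may be
+// null if the corresponding parameter was empty or not set.
+String login = inputData.getLogin();
+String secret = inputData.getSecret();
+```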
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/pxf/HBasePXF.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/pxf/HBasePXF.html.md.erb b/markdown/pxf/HBasePXF.html.md.erb
new file mode 100644
index 0000000..8b89730
--- /dev/null
+++ b/markdown/pxf/HBasePXF.html.md.erb
@@ -0,0 +1,105 @@
+---
+title: Accessing HBase Data
+---
+
+## <a id="installingthepxfhbaseplugin"></a>Prerequisites
+
+Before trying to access HBase data with PXF, verify the following:
+
+-   The `/etc/hbase/conf/hbase-env.sh` configuration file must reference the `pxf-hbase.jar`. For example, `/etc/hbase/conf/hbase-env.sh` should include the line:
+
+    ``` bash
+    export HBASE_CLASSPATH=${HBASE_CLASSPATH}:/usr/lib/pxf/pxf-hbase.jar
+    ```
+
+    **Note:** You must restart HBase after making any changes to the HBase configuration.
+
+-   The PXF HBase plug-in is installed on all cluster nodes.
+-   The HBase and ZooKeeper JAR files are installed on all cluster nodes.
+
+## <a id="syntax3"></a>Syntax
+
+To create an external HBase table, use the following syntax:
+
+``` sql
+CREATE [READABLE|WRITABLE] EXTERNAL TABLE table_name 
+    ( column_name data_type [, ...] | LIKE other_table )
+LOCATION ('pxf://namenode[:port]/hbase-table-name?Profile=HBase')
+FORMAT 'CUSTOM' (Formatter='pxfwritable_import');
+```
+
+The HBase profile is equivalent to the following PXF parameters:
+
+-   Fragmenter=org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter
+-   Accessor=org.apache.hawq.pxf.plugins.hbase.HBaseAccessor
+-   Resolver=org.apache.hawq.pxf.plugins.hbase.HBaseResolver
+
+## <a id="columnmapping"></a>Column Mapping
+
+Most HAWQ external tables (PXF or others) require that the HAWQ table attributes match the source data record layout, and include all the available attributes. With HAWQ, however, you use the PXF HBase plug-in to specify the subset of HBase qualifiers that define the HAWQ PXF table. To set up a clear mapping between each attribute in the PXF table and a specific qualifier in the HBase table, you can use either direct mapping or indirect mapping. In addition, the HBase row key is handled in a special way.
+
+### <a id="rowkey"></a>Row Key
+
+You can use the HBase table row key in several ways. For example, you can include row key values in query results, or you can filter on a range of row key values in a WHERE clause. To use the row key in a HAWQ query, define the HAWQ table with the reserved PXF attribute `recordkey`. This attribute name tells PXF to return the record key of the underlying key-value store, in this case HBase.
+
+**Note:** Because HBase is byte-based rather than character-based, you should define the `recordkey` as type `bytea`. Doing so can improve the ability to filter data and increase performance.
+
+``` sql
+CREATE EXTERNAL TABLE <tname> (recordkey bytea, ... ) LOCATION ('pxf:// ...')
+```
+
+### <a id="directmapping"></a>Direct Mapping
+
+Use Direct Mapping to map HAWQ table attributes to HBase qualifiers. You can specify the HBase qualifier names of interest, with column family names included, as quoted values.
+
+For example, suppose you have defined an HBase table named `hbase_sales` with multiple column families and many qualifiers. To create a HAWQ table with these attributes:
+
+-   `rowkey`
+-   qualifier `saleid` in the column family `cf1`
+-   qualifier `comments` in the column family `cf8`
+
+use the following `CREATE EXTERNAL TABLE` syntax:
+
+``` sql
+CREATE EXTERNAL TABLE hbase_sales (
+  recordkey bytea,
+  "cf1:saleid" int,
+  "cf8:comments" varchar
+) ...
+```
+
+The PXF HBase plug-in uses these attribute names as-is and returns the values of these HBase qualifiers.
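+
+A complete table definition following the syntax described earlier might look like the following; the NameNode host and port are illustrative:
+
+``` sql
+CREATE EXTERNAL TABLE hbase_sales (
+  recordkey bytea,
+  "cf1:saleid" int,
+  "cf8:comments" varchar
+)
+LOCATION ('pxf://namenode:51200/hbase_sales?PROFILE=HBase')
+FORMAT 'CUSTOM' (formatter='pxfwritable_import');
+```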
+
+### <a id="indirectmappingvialookuptable"></a>Indirect Mapping (via Lookup Table)
+
+The direct mapping method is fast and intuitive, but using indirect mapping helps to reconcile HBase qualifier names with HAWQ behavior:
+
+-   HBase qualifier names may be longer than 32 characters. HAWQ has a 32-character limit on attribute name size.
+-   HBase qualifier names can be binary or non-printable. HAWQ attribute names are character based.
+
+In either case, Indirect Mapping uses a lookup table on HBase. You can create the lookup table to store all necessary lookup information. This works as a template for any future queries. The name of the lookup table must be `pxflookup` and must include the column family named `mapping`.
+
+Continuing the sales example from Direct Mapping, the lookup table row key represents the HBase table name, and the `mapping` column family holds the actual attribute mapping in the key/value form `<hawq attr name>=<hbase cf:qualifier>`.
+
+#### <a id="example5"></a>Example
+
+This example maps the `saleid` qualifier in the `cf1` column family to the HAWQ `id` column and the `comments` qualifier in the `cf8` family to the HAWQ `cmts` column.
+
+| (row key) | mapping           |
+|-----------|-------------------|
+| sales     | id=cf1:saleid     |
+| sales     | cmts=cf8:comments |
+
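+As a sketch, you could create and populate these `pxflookup` entries from the HBase shell (started with the `hbase shell` command):
+
+``` shell
+create 'pxflookup', 'mapping'
+put 'pxflookup', 'sales', 'mapping:id', 'cf1:saleid'
+put 'pxflookup', 'sales', 'mapping:cmts', 'cf8:comments'
+```
+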
+The mapping assigns a new name to each qualifier. You can use these new names in your HAWQ table definition:
+
+``` sql
+CREATE EXTERNAL TABLE hbase_sales (
+  recordkey bytea,
+  id int,
+  cmts varchar
+) ...
+```
+
+PXF automatically matches HAWQ to HBase column names when a `pxflookup` table exists in HBase.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/pxf/HDFSFileDataPXF.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/pxf/HDFSFileDataPXF.html.md.erb b/markdown/pxf/HDFSFileDataPXF.html.md.erb
new file mode 100644
index 0000000..2021565
--- /dev/null
+++ b/markdown/pxf/HDFSFileDataPXF.html.md.erb
@@ -0,0 +1,452 @@
+---
+title: Accessing HDFS File Data
+---
+
+HDFS is the primary distributed storage mechanism used by Apache Hadoop applications. The PXF HDFS plug-in reads file data stored in HDFS.  The plug-in supports plain delimited and comma-separated-value format text files.  The HDFS plug-in also supports the Avro binary format.
+
+This section describes how to use PXF to access HDFS data, including how to create and query an external table from files in the HDFS data store.
+
+## <a id="hdfsplugin_prereq"></a>Prerequisites
+
+Before working with HDFS file data using HAWQ and PXF, ensure that:
+
+-   The HDFS plug-in is installed on all cluster nodes. See [Installing PXF Plug-ins](InstallPXFPlugins.html) for PXF plug-in installation information.
+-   All HDFS users have read permission to the HDFS services, and write permission is restricted to specific users.
+
+## <a id="hdfsplugin_fileformats"></a>HDFS File Formats
+
+The PXF HDFS plug-in supports reading the following file formats:
+
+- Text File - comma-separated value (.csv) or delimited format plain text file
+- Avro - JSON-defined, schema-based data serialization format
+
+The PXF HDFS plug-in includes the following profiles to support the file formats listed above:
+
+- `HdfsTextSimple` - text files
+- `HdfsTextMulti` - text files with embedded line feeds
+- `Avro` - Avro files
+
+If you find that the pre-defined PXF HDFS profiles do not meet your needs, you may choose to create a custom HDFS profile from the existing HDFS serialization and deserialization classes. Refer to [Adding and Updating Profiles](ReadWritePXF.html#addingandupdatingprofiles) for information on creating a custom profile.
+
+## <a id="hdfsplugin_cmdline"></a>HDFS Shell Commands
+Hadoop includes command-line tools that interact directly with HDFS.  These tools support typical file system operations including copying and listing files, changing file permissions, and so forth.
+
+The HDFS file system command syntax is `hdfs dfs <options> [<file>]`. Invoked with no options, `hdfs dfs` lists the file system options supported by the tool.
+
+The user invoking the `hdfs dfs` command must have sufficient privileges to the HDFS data store to perform HDFS file system operations. Specifically, the user must have write permission to HDFS to create directories and files.
+
+`hdfs dfs` options used in this topic are:
+
+| Option  | Description |
+|-------|-------------------------------------|
+| `-cat`    | Display file contents. |
+| `-mkdir`    | Create directory in HDFS. |
+| `-put`    | Copy file from local file system to HDFS. |
+
+Examples:
+
+Create a directory in HDFS:
+
+``` shell
+$ hdfs dfs -mkdir -p /data/exampledir
+```
+
+Copy a text file to HDFS:
+
+``` shell
+$ hdfs dfs -put /tmp/example.txt /data/exampledir/
+```
+
+Display the contents of a text file in HDFS:
+
+``` shell
+$ hdfs dfs -cat /data/exampledir/example.txt
+```
+
+
+## <a id="hdfsplugin_queryextdata"></a>Querying External HDFS Data
+The PXF HDFS plug-in supports the `HdfsTextSimple`, `HdfsTextMulti`, and `Avro` profiles.
+
+Use the following syntax to create a HAWQ external table representing HDFS data:
+
+``` sql
+CREATE EXTERNAL TABLE <table_name> 
+    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+LOCATION ('pxf://<host>[:<port>]/<path-to-hdfs-file>
+    ?PROFILE=HdfsTextSimple|HdfsTextMulti|Avro[&<custom-option>=<value>[...]]')
+FORMAT '[TEXT|CSV|CUSTOM]' (<formatting-properties>);
+```
+
+HDFS-plug-in-specific keywords and values used in the [CREATE EXTERNAL TABLE](../reference/sql/CREATE-EXTERNAL-TABLE.html) call are described in the table below.
+
+| Keyword  | Value |
+|-------|-------------------------------------|
+| \<host\>[:\<port\>]    | The HDFS NameNode and port. |
+| \<path-to-hdfs-file\>    | The path to the file in the HDFS data store. |
+| PROFILE    | The `PROFILE` keyword must specify one of the values `HdfsTextSimple`, `HdfsTextMulti`, or `Avro`. |
+| \<custom-option\>  | \<custom-option\> is profile-specific. Profile-specific options are discussed in the relevant profile topic later in this section.|
+| FORMAT 'TEXT' | Use '`TEXT`' `FORMAT` with the `HdfsTextSimple` profile when \<path-to-hdfs-file\> references a plain text delimited file.  |
+| FORMAT 'CSV' | Use '`CSV`' `FORMAT` with `HdfsTextSimple` and `HdfsTextMulti` profiles when \<path-to-hdfs-file\> references a comma-separated value file.  |
+| FORMAT 'CUSTOM' | Use the `CUSTOM` `FORMAT` with the `Avro` profile. The `Avro` '`CUSTOM`' `FORMAT` supports only the built-in `(formatter='pxfwritable_import')` \<formatting-property\>. |
+| \<formatting-properties\>    | \<formatting-properties\> are profile-specific. Profile-specific formatting options are discussed in the relevant profile topic later in this section. |
+
+*Note*: When creating PXF external tables, you cannot use the `HEADER` option in your `FORMAT` specification.
+
+## <a id="profile_hdfstextsimple"></a>HdfsTextSimple Profile
+
+Use the `HdfsTextSimple` profile when reading plain text delimited or .csv files where each row is a single record.
+
+\<formatting-properties\> supported by the `HdfsTextSimple` profile include:
+
+| Keyword  | Value |
+|-------|-------------------------------------|
+| delimiter    | The delimiter character in the file. Default value is a comma `,`.|
+
+### <a id="profile_hdfstextsimple_query"></a>Example: Using the HdfsTextSimple Profile
+
+Perform the following steps to create a sample data file, copy the file to HDFS, and use the `HdfsTextSimple` profile to create PXF external tables to query the data:
+
+1. Create an HDFS directory for PXF example data files:
+
+    ``` shell
+    $ hdfs dfs -mkdir -p /data/pxf_examples
+    ```
+
+2. Create a delimited plain text data file named `pxf_hdfs_simple.txt`:
+
+    ``` shell
+    $ echo 'Prague,Jan,101,4875.33
+Rome,Mar,87,1557.39
+Bangalore,May,317,8936.99
+Beijing,Jul,411,11600.67' > /tmp/pxf_hdfs_simple.txt
+    ```
+
+    Note the use of the comma `,` to separate the four data fields.
+
+4. Add the data file to HDFS:
+
+    ``` shell
+    $ hdfs dfs -put /tmp/pxf_hdfs_simple.txt /data/pxf_examples/
+    ```
+
+5. Display the contents of the `pxf_hdfs_simple.txt` file stored in HDFS:
+
+    ``` shell
+    $ hdfs dfs -cat /data/pxf_examples/pxf_hdfs_simple.txt
+    ```
+
+1. Use the `HdfsTextSimple` profile to create a queryable HAWQ external table from the `pxf_hdfs_simple.txt` file you previously created and added to HDFS:
+
+    ``` sql
+    gpadmin=# CREATE EXTERNAL TABLE pxf_hdfs_textsimple(location text, month text, num_orders int, total_sales float8)
+                LOCATION ('pxf://namenode:51200/data/pxf_examples/pxf_hdfs_simple.txt?PROFILE=HdfsTextSimple')
+              FORMAT 'TEXT' (delimiter=E',');
+    gpadmin=# SELECT * FROM pxf_hdfs_textsimple;          
+    ```
+
+    ``` pre
+       location    | month | num_orders | total_sales 
+    ---------------+-------+------------+-------------
+     Prague        | Jan   |        101 |     4875.33
+     Rome          | Mar   |         87 |     1557.39
+     Bangalore     | May   |        317 |     8936.99
+     Beijing       | Jul   |        411 |    11600.67
+    (4 rows)
+    ```
+
+2. Create a second external table from `pxf_hdfs_simple.txt`, this time using the `CSV` `FORMAT`:
+
+    ``` sql
+    gpadmin=# CREATE EXTERNAL TABLE pxf_hdfs_textsimple_csv(location text, month text, num_orders int, total_sales float8)
+                LOCATION ('pxf://namenode:51200/data/pxf_examples/pxf_hdfs_simple.txt?PROFILE=HdfsTextSimple')
+              FORMAT 'CSV';
+    gpadmin=# SELECT * FROM pxf_hdfs_textsimple_csv;          
+    ```
+
+    When specifying `FORMAT 'CSV'` for a comma-separated value file, no `delimiter` formatter option is required, as comma is the default.
+
+## <a id="profile_hdfstextmulti"></a>HdfsTextMulti Profile
+
+Use the `HdfsTextMulti` profile when reading plain text files with delimited single- or multi-line records that include embedded (quoted) linefeed characters.
+
+\<formatting-properties\> supported by the `HdfsTextMulti` profile include:
+
+| Keyword  | Value |
+|-------|-------------------------------------|
+| delimiter    | The delimiter character in the file. |
+
+### <a id="profile_hdfstextmulti_query"></a>Example: Using the HdfsTextMulti Profile
+
+Perform the following steps to create a sample data file, copy the file to HDFS, and use the `HdfsTextMulti` profile to create a PXF external table to query the data:
+
+1. Create a second delimited plain text file:
+
+    ``` shell
+    $ vi /tmp/pxf_hdfs_multi.txt
+    ```
+
+2. Copy/paste the following data into `pxf_hdfs_multi.txt`:
+
+    ``` pre
+    "4627 Star Rd.
+    San Francisco, CA  94107":Sept:2017
+    "113 Moon St.
+    San Diego, CA  92093":Jan:2018
+    "51 Belt Ct.
+    Denver, CO  90123":Dec:2016
+    "93114 Radial Rd.
+    Chicago, IL  60605":Jul:2017
+    "7301 Brookview Ave.
+    Columbus, OH  43213":Dec:2018
+    ```
+
+    Notice the use of the colon `:` to separate the three fields. Also notice the quotes around the first (address) field. This field includes an embedded line feed separating the street address from the city and state.
+
+3. Add the data file to HDFS:
+
+    ``` shell
+    $ hdfs dfs -put /tmp/pxf_hdfs_multi.txt /data/pxf_examples/
+    ```
+
+4. Use the `HdfsTextMulti` profile to create a queryable external table from the `pxf_hdfs_multi.txt` HDFS file, making sure to identify the `:` as the field separator:
+
+    ``` sql
+    gpadmin=# CREATE EXTERNAL TABLE pxf_hdfs_textmulti(address text, month text, year int)
+                LOCATION ('pxf://namenode:51200/data/pxf_examples/pxf_hdfs_multi.txt?PROFILE=HdfsTextMulti')
+              FORMAT 'CSV' (delimiter=E':');
+    ```
+    
+2. Query the `pxf_hdfs_textmulti` table:
+
+    ``` sql
+    gpadmin=# SELECT * FROM pxf_hdfs_textmulti;
+    ```
+
+    ``` pre
+             address          | month | year 
+    --------------------------+-------+------
+     4627 Star Rd.            | Sept  | 2017
+     San Francisco, CA  94107           
+     113 Moon St.             | Jan   | 2018
+     San Diego, CA  92093               
+     51 Belt Ct.              | Dec   | 2016
+     Denver, CO  90123                  
+     93114 Radial Rd.         | Jul   | 2017
+     Chicago, IL  60605                 
+     7301 Brookview Ave.      | Dec   | 2018
+     Columbus, OH  43213                
+    (5 rows)
+    ```
+
+## <a id="profile_hdfsavro"></a>Avro Profile
+
+Apache Avro is a data serialization framework where the data is serialized in a compact binary format. 
+
+Avro specifies that data types be defined in JSON. Avro format files have an independent schema, also defined in JSON. An Avro schema, together with its data, is fully self-describing.
+
+### <a id="profile_hdfsavrodatamap"></a>Data Type Mapping
+
+Avro supports both primitive and complex data types. 
+
+To represent Avro primitive data types in HAWQ, map data values to HAWQ columns of the same type. 
+
+Avro supports complex data types including arrays, maps, records, enumerations, and fixed types. Map top-level fields of these complex data types to the HAWQ `TEXT` type. While HAWQ does not natively support these types, you can create HAWQ functions or application code to extract or further process subcomponents of these complex data types.
+
+The following table summarizes external mapping rules for Avro data.
+
+<a id="topic_oy3_qwm_ss__table_j4s_h1n_ss"></a>
+
+| Avro Data Type                                                    | PXF/HAWQ Data Type                                                                                                                                                                                            |
+|-------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Primitive type (int, double, float, long, string, bytes, boolean) | Use the corresponding HAWQ built-in data type; see [Data Types](../reference/HAWQDataTypes.html). |
+| Complex type: Array, Map, Record, or Enum                         | TEXT, with delimiters inserted between collection items, mapped key-value pairs, and record data.                                                                                           |
+| Complex type: Fixed                                               | BYTEA                                                                                                                                                                                               |
+| Union                                                             | Follows the above conventions for primitive or complex data types, depending on the union; supports Null values.                                                                     |
+
+### <a id="profile_hdfsavroptipns"></a>Avro-Specific Custom Options
+
+For complex types, the PXF `Avro` profile inserts default delimiters between collection items and values. You can use non-default delimiter characters by identifying values for specific `Avro` custom options in the `CREATE EXTERNAL TABLE` call. 
+
+The `Avro` profile supports the following \<custom-options\>:
+
+| Option Name   | Description       
+|---------------|--------------------|                                                                                        
+| COLLECTION_DELIM | The delimiter character(s) to place between entries in a top-level array, map, or record field when PXF maps an Avro complex data type to a text column. The default is the comma `,` character. |
+| MAPKEY_DELIM | The delimiter character(s) to place between the key and value of a map entry when PXF maps an Avro complex data type to a text column. The default is the colon `:` character. |
+| RECORDKEY_DELIM | The delimiter character(s) to place between the field name and value of a record entry when PXF maps an Avro complex data type to a text column. The default is the colon `:` character. |
+
+
+### <a id="topic_tr3_dpg_ts__section_m2p_ztg_ts"></a>Avro Schemas and Data
+
+Avro schemas are defined using JSON, and composed of the same primitive and complex types identified in the data mapping section above. Avro schema files typically have a `.avsc` suffix.
+
+Fields in an Avro schema file are defined via an array of objects, each of which is specified by a name and a type.
+
+
+### <a id="topic_tr3_dpg_ts_example"></a>Example: Using the Avro Profile
+
+The examples in this section will operate on Avro data with the following record schema:
+
+- id - long
+- username - string
+- followers - array of string
+- fmap - map of long
+- address - record comprised of street number (int), street name (string), and city (string)
+- relationship - enumerated type
+
+
+#### <a id="topic_tr3_dpg_ts__section_m2p_ztg_ts_99"></a>Create Schema
+
+Perform the following operations to create an Avro schema to represent the example schema described above.
+
+1. Create a file named `avro_schema.avsc`:
+
+    ``` shell
+    $ vi /tmp/avro_schema.avsc
+    ```
+
+2. Copy and paste the following text into `avro_schema.avsc`:
+
+    ``` json
+    {
+    "type" : "record",
+      "name" : "example_schema",
+      "namespace" : "com.example",
+      "fields" : [ {
+        "name" : "id",
+        "type" : "long",
+        "doc" : "Id of the user account"
+      }, {
+        "name" : "username",
+        "type" : "string",
+        "doc" : "Name of the user account"
+      }, {
+        "name" : "followers",
+        "type" : {"type": "array", "items": "string"},
+        "doc" : "Users followers"
+      }, {
+        "name": "fmap",
+        "type": {"type": "map", "values": "long"}
+      }, {
+        "name": "relationship",
+        "type": {
+            "type": "enum",
+            "name": "relationshipEnum",
+            "symbols": ["MARRIED","LOVE","FRIEND","COLLEAGUE","STRANGER","ENEMY"]
+        }
+      }, {
+        "name": "address",
+        "type": {
+            "type": "record",
+            "name": "addressRecord",
+            "fields": [
+                {"name":"number", "type":"int"},
+                {"name":"street", "type":"string"},
+                {"name":"city", "type":"string"}]
+        }
+      } ],
+      "doc" : "A basic schema for storing messages"
+    }
+    ```
+
+#### <a id="topic_tr3_dpgspk_15g_tsdata"></a>Create Avro Data File (JSON)
+
+Perform the following steps to create a sample Avro data file conforming to the above schema.
+
+1.  Create a text file named `pxf_hdfs_avro.txt`:
+
+    ``` shell
+    $ vi /tmp/pxf_hdfs_avro.txt
+    ```
+
+2. Enter the following data into `pxf_hdfs_avro.txt`:
+
+    ``` pre
+    {"id":1, "username":"john","followers":["kate", "santosh"], "relationship": "FRIEND", "fmap": {"kate":10,"santosh":4}, "address":{"number":1, "street":"renaissance drive", "city":"san jose"}}
+    
+    {"id":2, "username":"jim","followers":["john", "pam"], "relationship": "COLLEAGUE", "fmap": {"john":3,"pam":3}, "address":{"number":9, "street":"deer creek", "city":"palo alto"}}
+    ```
+
+    The sample data uses a comma `,` to separate top level records and a colon `:` to separate map/key values and record field name/values.
+
+3. Convert the text file to Avro format. There are various ways to perform the conversion, both programmatically and via the command line. In this example, we use the [Java Avro tools](http://avro.apache.org/releases.html); the jar file resides in the current directory:
+
+    ``` shell
+    $ java -jar ./avro-tools-1.8.1.jar fromjson --schema-file /tmp/avro_schema.avsc /tmp/pxf_hdfs_avro.txt > /tmp/pxf_hdfs_avro.avro
+    ```
+
+    The generated Avro binary data file is written to `/tmp/pxf_hdfs_avro.avro`. 
+    
+4. Copy the generated Avro file to HDFS:
+
+    ``` shell
+    $ hdfs dfs -put /tmp/pxf_hdfs_avro.avro /data/pxf_examples/
+    ```
+    
+#### <a id="topic_avro_querydata"></a>Query With Avro Profile
+
+Perform the following steps to create and query an external table accessing the `pxf_hdfs_avro.avro` file you added to HDFS in the previous section. When creating the table:
+
+-  Map the top-level primitive fields, `id` (type long) and `username` (type string), to their equivalent HAWQ types (bigint and text). 
+-  Map the remaining complex fields to type text.
+-  Explicitly set the record, map, and collection delimiters using the Avro profile custom options.
+
+
+1. Use the `Avro` profile to create a queryable external table from the `pxf_hdfs_avro.avro` file:
+
+    ``` sql
+    gpadmin=# CREATE EXTERNAL TABLE pxf_hdfs_avro(id bigint, username text, followers text, fmap text, relationship text, address text)
+                LOCATION ('pxf://namenode:51200/data/pxf_examples/pxf_hdfs_avro.avro?PROFILE=Avro&COLLECTION_DELIM=,&MAPKEY_DELIM=:&RECORDKEY_DELIM=:')
+              FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
+    ```
+
+2. Perform a simple query of the `pxf_hdfs_avro` table:
+
+    ``` sql
+    gpadmin=# SELECT * FROM pxf_hdfs_avro;
+    ```
+
+    ``` pre
+     id | username |   followers    |        fmap         | relationship |                      address                      
+    ----+----------+----------------+--------------------+--------------+---------------------------------------------------
+      1 | john     | [kate,santosh] | {kate:10,santosh:4} | FRIEND       | {number:1,street:renaissance drive,city:san jose}
+      2 | jim      | [john,pam]     | {pam:3,john:3}      | COLLEAGUE    | {number:9,street:deer creek,city:palo alto}
+    (2 rows)
+    ```
+
+    The simple query of the external table shows the components of the complex type data separated with the delimiters identified in the `CREATE EXTERNAL TABLE` call.
+
+
+3. Process the delimited components in the text columns as necessary for your application. For example, the following command uses the HAWQ internal `string_to_array` function to convert entries in the `followers` field to a text array column in a new view.
+
+    ``` sql
+    gpadmin=# CREATE VIEW followers_view AS
+                SELECT username, address,
+                       string_to_array(substring(followers FROM 2 FOR (char_length(followers) - 2)), ',')::text[] AS followers
+                FROM pxf_hdfs_avro;
+    ```
+
+4. Query the view to filter rows based on whether a particular follower appears in the array:
+
+    ``` sql
+    gpadmin=# SELECT username, address FROM followers_view WHERE followers @> '{john}';
+    ```
+
+    ``` pre
+     username |                   address                   
+    ----------+---------------------------------------------
+     jim      | {number:9,street:deer creek,city:palo alto}
+    ```
+
+## <a id="accessdataonahavhdfscluster"></a>Accessing HDFS Data in a High Availability HDFS Cluster
+
+To access external HDFS data in a High Availability HDFS cluster, change the `CREATE EXTERNAL TABLE` `LOCATION` clause to use \<HA-nameservice\> rather than \<host\>[:\<port\>].
+
+``` sql
+gpadmin=# CREATE EXTERNAL TABLE <table_name> ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+            LOCATION ('pxf://<HA-nameservice>/<path-to-hdfs-file>?PROFILE=HdfsTextSimple|HdfsTextMulti|Avro[&<custom-option>=<value>[...]]')
+         FORMAT '[TEXT|CSV|CUSTOM]' (<formatting-properties>);
+```
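+
+For example, if the HDFS nameservice were named `mycluster` (a hypothetical name), the first `HdfsTextSimple` example above could be written as:
+
+``` sql
+gpadmin=# CREATE EXTERNAL TABLE pxf_hdfs_textsimple_ha(location text, month text, num_orders int, total_sales float8)
+            LOCATION ('pxf://mycluster/data/pxf_examples/pxf_hdfs_simple.txt?PROFILE=HdfsTextSimple')
+          FORMAT 'TEXT' (delimiter=E',');
+```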
+
+The reverse is true when a highly available HDFS cluster is reverted to a single NameNode configuration. In that case, any table definition that specified \<HA-nameservice\> should be changed to use the \<host\>[:\<port\>] syntax.
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/pxf/HawqExtensionFrameworkPXF.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/pxf/HawqExtensionFrameworkPXF.html.md.erb b/markdown/pxf/HawqExtensionFrameworkPXF.html.md.erb
new file mode 100644
index 0000000..578d13f
--- /dev/null
+++ b/markdown/pxf/HawqExtensionFrameworkPXF.html.md.erb
@@ -0,0 +1,45 @@
+---
+title: Using PXF with Unmanaged Data
+---
+
+HAWQ Extension Framework (PXF) is an extensible framework that allows HAWQ to query external system data.
+
+PXF includes built-in connectors for accessing data inside HDFS files, Hive tables, and HBase tables. PXF also integrates with HCatalog to query Hive tables directly.
+
+PXF allows users to create custom connectors to access other parallel data stores or processing engines. To create these connectors using Java plug-ins, see the [PXF External Tables and API](PXFExternalTableandAPIReference.html).
+
+-   **[Installing PXF Plug-ins](../pxf/InstallPXFPlugins.html)**
+
+    This topic describes how to install the built-in PXF service plug-ins that are required to connect PXF to HDFS, Hive, and HBase. You should install the appropriate RPMs on each node in your cluster.
+
+-   **[Configuring PXF](../pxf/ConfigurePXF.html)**
+
+    This topic describes how to configure the PXF service.
+
+-   **[Accessing HDFS File Data](../pxf/HDFSFileDataPXF.html)**
+
+    This topic describes how to access HDFS file data using PXF.
+
+-   **[Accessing Hive Data](../pxf/HivePXF.html)**
+
+    This topic describes how to access Hive data using PXF. You have several options for querying data stored in Hive. You can create external tables in PXF and then query those tables, or you can easily query Hive tables by using HAWQ and PXF's integration with HCatalog. HAWQ accesses Hive table metadata stored in HCatalog.
+
+-   **[Accessing HBase Data](../pxf/HBasePXF.html)**
+
+    This topic describes how to access HBase data using PXF.
+
+-   **[Accessing JSON Data](../pxf/JsonPXF.html)**
+
+    This topic describes how to access JSON data using PXF.
+
+-   **[Using Profiles to Read and Write Data](../pxf/ReadWritePXF.html)**
+
+    PXF profiles are collections of common metadata attributes that can be used to simplify the reading and writing of data. You can use any of the built-in profiles that come with PXF or you can create your own.
+
+-   **[PXF External Tables and API](../pxf/PXFExternalTableandAPIReference.html)**
+
+    You can use the PXF API to create your own connectors to access any other type of parallel data store or processing engine.
+
+-   **[Troubleshooting PXF](../pxf/TroubleshootingPXF.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/pxf/HivePXF.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/pxf/HivePXF.html.md.erb b/markdown/pxf/HivePXF.html.md.erb
new file mode 100644
index 0000000..199c7a1
--- /dev/null
+++ b/markdown/pxf/HivePXF.html.md.erb
@@ -0,0 +1,700 @@
+---
+title: Accessing Hive Data
+---
+
+Apache Hive is a distributed data warehousing infrastructure. Hive facilitates managing large data sets and supports multiple data formats, including comma-separated value (.csv), RC, ORC, and Parquet. The PXF Hive plug-in reads data stored in Hive, as well as in HDFS or HBase.
+
+This section describes how to use PXF to access Hive data. Options for querying data stored in Hive include:
+
+-  Creating an external table in PXF and querying that table
+-  Querying Hive tables via PXF's integration with HCatalog
+
+## <a id="installingthepxfhiveplugin"></a>Prerequisites
+
+Before accessing Hive data with HAWQ and PXF, ensure that:
+
+-   The PXF HDFS plug-in is installed on all cluster nodes. See [Installing PXF Plug-ins](InstallPXFPlugins.html) for PXF plug-in installation information.
+-   The PXF Hive plug-in is installed on all cluster nodes.
+-   The Hive JAR files and conf directory are installed on all cluster nodes.
+-   You have tested PXF on HDFS.
+-   You are running the Hive Metastore service on a machine in your cluster.
+-   You have set the `hive.metastore.uris` property in the `hive-site.xml` on the NameNode.
+
+## <a id="topic_p2s_lvl_25"></a>Hive File Formats
+
+The PXF Hive plug-in supports several file formats and profiles for accessing these formats:
+
+| File Format  | Description | Profile |
+|-------|---------------------------|-------|
+| TextFile | Flat file with data in comma-, tab-, or space-separated value format or JSON notation. | Hive, HiveText |
+| SequenceFile | Flat file consisting of binary key/value pairs. | Hive |
+| RCFile | Record columnar data consisting of binary key/value pairs; high row compression rate. | Hive, HiveRC |
+| ORCFile | Optimized row columnar data with stripe, footer, and postscript sections; reduces data size. | Hive |
+| Parquet | Compressed columnar data representation. | Hive |
+| Avro | JSON-defined, schema-based data serialization format. | Hive |
+
+Refer to [File Formats](https://cwiki.apache.org/confluence/display/Hive/FileFormats) for detailed information about the file formats supported by Hive.
+
+## <a id="topic_p2s_lvl_29"></a>Data Type Mapping
+
+### <a id="hive_primdatatypes"></a>Primitive Data Types
+
+To represent Hive data in HAWQ, map data values that use a primitive data type to HAWQ columns of the same type.
+
+The following table summarizes external mapping rules for Hive primitive types.
+
+| Hive Data Type  | HAWQ Data Type |
+|-------|---------------------------|
+| boolean    | bool |
+| int   | int4 |
+| smallint   | int2 |
+| tinyint   | int2 |
+| bigint   | int8 |
+| float   | float4 |
+| double   | float8 |
+| string   | text |
+| binary   | bytea |
+| timestamp   | timestamp |
+
+
+### <a id="topic_b4v_g3n_25"></a>Complex Data Types
+
+Hive supports complex data types including array, struct, map, and union. PXF maps each of these complex types to `text`.  While HAWQ does not natively support these types, you can create HAWQ functions or application code to extract subcomponents of these complex data types.
+
+An example using complex data types is provided later in this topic.
+
+
+## <a id="hive_sampledataset"></a>Sample Data Set
+
+Examples used in this topic will operate on a common data set. This simple data set models a retail sales operation and includes fields with the following names and data types:
+
+| Field Name  | Data Type |
+|-------|---------------------------|
+| location | text |
+| month | text |
+| number\_of\_orders | integer |
+| total\_sales | double |
+
+Prepare the sample data set for use:
+
+1. First, create a text file:
+
+    ```
+    $ vi /tmp/pxf_hive_datafile.txt
+    ```
+
+2. Add the following data to `pxf_hive_datafile.txt`; notice the use of the comma `,` to separate the four field values:
+
+    ```
+    Prague,Jan,101,4875.33
+    Rome,Mar,87,1557.39
+    Bangalore,May,317,8936.99
+    Beijing,Jul,411,11600.67
+    San Francisco,Sept,156,6846.34
+    Paris,Nov,159,7134.56
+    San Francisco,Jan,113,5397.89
+    Prague,Dec,333,9894.77
+    Bangalore,Jul,271,8320.55
+    Beijing,Dec,100,4248.41
+    ```
+
+Make note of the path to `pxf_hive_datafile.txt`; you will use it in later exercises.
+
+
+## <a id="hivecommandline"></a>Hive Command Line
+
+The Hive command line is an interactive client similar to `psql`. To start the Hive command line:
+
+``` shell
+$ HADOOP_USER_NAME=hdfs hive
+```
+
+The default Hive database is named `default`. 
+
+### <a id="hivecommandline_createdb"></a>Example: Create a Hive Table 
+
+Create a Hive table to expose our sample data set.
+
+1. Create a Hive table named `sales_info` in the `default` database:
+
+    ``` sql
+    hive> CREATE TABLE sales_info (location string, month string,
+            number_of_orders int, total_sales double)
+            ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
+            STORED AS textfile;
+    ```
+
+    Notice that:
+    - The `STORED AS textfile` subclause instructs Hive to create the table in Textfile (the default) format.  Hive Textfile format supports comma-, tab-, and space-separated values, as well as data specified in JSON notation.
+    - The `DELIMITED FIELDS TERMINATED BY` subclause identifies the field delimiter within a data record (line). The `sales_info` table field delimiter is a comma (`,`).
+
+2. Load the `pxf_hive_datafile.txt` sample data file into the `sales_info` table you just created:
+
+    ``` sql
+    hive> LOAD DATA LOCAL INPATH '/tmp/pxf_hive_datafile.txt'
+            INTO TABLE sales_info;
+    ```
+
+3. Perform a query on `sales_info` to verify that the data was loaded successfully:
+
+    ``` sql
+    hive> SELECT * FROM sales_info;
+    ```
+
+In examples later in this section, you will access the `sales_info` Hive table directly via PXF. You will also insert `sales_info` data into tables of other Hive file format types, and use PXF to access those directly as well.
+
+## <a id="topic_p2s_lvl_28"></a>Querying External Hive Data
+
+The PXF Hive plug-in supports several Hive-related profiles. These include `Hive`, `HiveText`, and `HiveRC`.
+
+Use the following syntax to create a HAWQ external table representing Hive data:
+
+``` sql
+CREATE EXTERNAL TABLE <table_name>
+    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+LOCATION ('pxf://<host>[:<port>]/<hive-db-name>.<hive-table-name>
+    ?PROFILE=Hive|HiveText|HiveRC[&DELIMITER=<delim>]')
+FORMAT 'CUSTOM|TEXT' (formatter='pxfwritable_import' | delimiter='<delim>')
+```
+
+Hive-plug-in-specific keywords and values used in the [CREATE EXTERNAL TABLE](../reference/sql/CREATE-EXTERNAL-TABLE.html) call are described below.
+
+| Keyword  | Value |
+|-------|-------------------------------------|
+| \<host\>[:\<port\>]    | The HDFS NameNode and port. |
+| \<hive-db-name\>    | The name of the Hive database. If omitted, defaults to the Hive database named `default`. |
+| \<hive-table-name\>    | The name of the Hive table. |
+| PROFILE    | The `PROFILE` keyword must specify one of the values `Hive`, `HiveText`, or `HiveRC`. |
+| DELIMITER    | The `DELIMITER` clause is required for both the `HiveText` and `HiveRC` profiles and identifies the field delimiter used in the Hive data set. \<delim\> must be a single ASCII character or specified in hexadecimal representation. |
+| FORMAT (`Hive` profile)   | The `FORMAT` clause must specify `CUSTOM`. The `CUSTOM` format supports only the built-in `pxfwritable_import` `formatter`.   |
+| FORMAT (`HiveText` and `HiveRC` profiles) | The `FORMAT` clause must specify `TEXT`. The `delimiter` must be specified a second time in '\<delim\>'. |
+
+
+## <a id="profile_hive"></a>Hive Profile
+
+The `Hive` profile works with any Hive file format. It can access heterogeneous format data in a single table where each partition may be stored as a different file format.
+
+While you can use the `Hive` profile to access any file format, the more specific profiles perform better for those single file format types.
+
+
+### <a id="profile_hive_using"></a>Example: Using the Hive Profile
+
+Use the `Hive` profile to create a queryable HAWQ external table from the Hive `sales_info` textfile format table created earlier.
+
+1. Create a queryable HAWQ external table from the Hive `sales_info` textfile format table created earlier:
+
+    ``` sql
+    postgres=# CREATE EXTERNAL TABLE salesinfo_hiveprofile(location text, month text, num_orders int, total_sales float8)
+                LOCATION ('pxf://namenode:51200/default.sales_info?PROFILE=Hive')
+              FORMAT 'custom' (formatter='pxfwritable_import');
+    ```
+
+2. Query the table:
+
+    ``` sql
+    postgres=# SELECT * FROM salesinfo_hiveprofile;
+    ```
+
+    ``` shell
+       location    | month | num_orders | total_sales
+    ---------------+-------+------------+-------------
+     Prague        | Jan   |        101 |     4875.33
+     Rome          | Mar   |         87 |     1557.39
+     Bangalore     | May   |        317 |     8936.99
+     ...
+
+    ```
+
+## <a id="profile_hivetext"></a>HiveText Profile
+
+Use the `HiveText` profile to query text format files. The `HiveText` profile is more performant than the `Hive` profile.
+
+**Note**: When using the `HiveText` profile, you *must* specify a delimiter option in *both* the `LOCATION` and `FORMAT` clauses.
+
+### <a id="profile_hivetext_using"></a>Example: Using the HiveText Profile
+
+Use the PXF `HiveText` profile to create a queryable HAWQ external table from the Hive `sales_info` textfile format table created earlier.
+
+1. Create the external table:
+
+    ``` sql
+    postgres=# CREATE EXTERNAL TABLE salesinfo_hivetextprofile(location text, month text, num_orders int, total_sales float8)
+                 LOCATION ('pxf://namenode:51200/default.sales_info?PROFILE=HiveText&DELIMITER=\x2c')
+               FORMAT 'TEXT' (delimiter=E',');
+    ```
+
+    (You can safely ignore the "nonstandard use of escape in a string literal" warning and related messages.)
+
+    Notice that:
+    - The `LOCATION` subclause `DELIMITER` value is specified in hexadecimal format. `\x` is a prefix that instructs PXF to interpret the following characters as hexadecimal. `2c` is the hex value for the comma character.
+    - The `FORMAT` subclause `delimiter` value is specified as the single ASCII comma character `','`. The `E` prefix indicates an escape string literal.
+
+2. Query the external table:
+
+    ``` sql
+    postgres=# SELECT * FROM salesinfo_hivetextprofile WHERE location='Beijing';
+    ```
+
+    ``` shell
+     location | month | num_orders | total_sales
+    ----------+-------+------------+-------------
+     Beijing  | Jul   |        411 |    11600.67
+     Beijing  | Dec   |        100 |     4248.41
+    (2 rows)
+    ```
+
+## <a id="profile_hiverc"></a>HiveRC Profile
+
+The RCFile Hive format is used for row columnar formatted data. The `HiveRC` profile provides access to RCFile data.
+
+### <a id="profile_hiverc_rcfiletbl_using"></a>Example: Using the HiveRC Profile
+
+Use the `HiveRC` profile to query RCFile-formatted data in Hive tables. The `HiveRC` profile is more performant than the `Hive` profile for this file format type.
+
+1. Create a Hive table with RCFile format:
+
+    ``` shell
+    $ HADOOP_USER_NAME=hdfs hive
+    ```
+
+    ``` sql
+    hive> CREATE TABLE sales_info_rcfile (location string, month string,
+            number_of_orders int, total_sales double)
+          ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
+          STORED AS rcfile;
+    ```
+
+2. Insert the data from the `sales_info` table into `sales_info_rcfile`:
+
+    ``` sql
+    hive> INSERT INTO TABLE sales_info_rcfile SELECT * FROM sales_info;
+    ```
+
+    A copy of the sample data set is now stored in RCFile format in `sales_info_rcfile`. 
+    
+3. Perform a Hive query on `sales_info_rcfile` to verify that the data was loaded successfully:
+
+    ``` sql
+    hive> SELECT * FROM sales_info_rcfile;
+    ```
+
+4. Use the PXF `HiveRC` profile to create a queryable HAWQ external table from the Hive `sales_info_rcfile` table created in the previous step. When using the `HiveRC` profile, you **must** specify a delimiter option in *both* the `LOCATION` and `FORMAT` clauses:
+
+    ``` sql
+    postgres=# CREATE EXTERNAL TABLE salesinfo_hivercprofile(location text, month text, num_orders int, total_sales float8)
+                 LOCATION ('pxf://namenode:51200/default.sales_info_rcfile?PROFILE=HiveRC&DELIMITER=\x2c')
+               FORMAT 'TEXT' (delimiter=E',');
+    ```
+
+    (Again, you can safely ignore the "nonstandard use of escape in a string literal" warning and related messages.)
+
+5. Query the external table:
+
+    ``` sql
+    postgres=# SELECT location, total_sales FROM salesinfo_hivercprofile;
+    ```
+
+    ``` shell
+       location    | total_sales
+    ---------------+-------------
+     Prague        |     4875.33
+     Rome          |     1557.39
+     Bangalore     |     8936.99
+     Beijing       |    11600.67
+     ...
+    ```
+
+## <a id="topic_dbb_nz3_ts"></a>Accessing Parquet-Format Hive Tables
+
+The PXF `Hive` profile supports both non-partitioned and partitioned Hive tables that use the Parquet storage format in HDFS. Simply map the table columns using equivalent HAWQ data types. For example, if a Hive table is created using:
+
+``` sql
+hive> CREATE TABLE hive_parquet_table (fname string, lname string, custid int, acctbalance double)
+        STORED AS parquet;
+```
+
+Define the HAWQ external table using:
+
+``` sql
+postgres=# CREATE EXTERNAL TABLE pxf_parquet_table (fname text, lname text, custid int, acctbalance double precision)
+    LOCATION ('pxf://namenode:51200/hive-db-name.hive_parquet_table?profile=Hive')
+    FORMAT 'CUSTOM' (formatter='pxfwritable_import');
+```
+
+And query the HAWQ external table using:
+
+``` sql
+postgres=# SELECT fname,lname FROM pxf_parquet_table;
+```
+
+
+## <a id="profileperf"></a>Profile Performance Considerations
+
+The `HiveRC` and `HiveText` profiles are faster than the generic `Hive` profile.
+
+
+## <a id="complex_dt_example"></a>Complex Data Type Example
+
+This example will employ the array and map complex types, specifically an array of integers and a string key/value pair map.
+
+The data schema for this example includes fields with the following names and data types:
+
+| Field Name  | Data Type |
+|-------|---------------------------|
+| index | int |
+| name | string |
+| intarray | array of integers |
+| propmap | map of string key and value pairs |
+
+When specifying an array field in a Hive table, you must identify the terminator for each item in the collection. Similarly, the map key termination character must also be specified.
+
+1. Create a text file from which you will load the data set:
+
+    ```
+    $ vi /tmp/pxf_hive_complex.txt
+    ```
+
+2. Add the following data to `pxf_hive_complex.txt`.  The data uses a comma `,` to separate field values, the percent symbol `%` to separate collection items, and a `:` to terminate map key values:
+
+    ```
+    3,Prague,1%2%3,zone:euro%status:up
+    89,Rome,4%5%6,zone:euro
+    400,Bangalore,7%8%9,zone:apac%status:pending
+    183,Beijing,0%1%2,zone:apac
+    94,Sacramento,3%4%5,zone:noam%status:down
+    101,Paris,6%7%8,zone:euro%status:up
+    56,Frankfurt,9%0%1,zone:euro
+    202,Jakarta,2%3%4,zone:apac%status:up
+    313,Sydney,5%6%7,zone:apac%status:pending
+    76,Atlanta,8%9%0,zone:noam%status:down
+    ```
+
+3. Create a Hive table to represent this data:
+
+    ``` shell
+    $ HADOOP_USER_NAME=hdfs hive
+    ```
+
+    ``` sql
+    hive> CREATE TABLE table_complextypes( index int, name string, intarray ARRAY<int>, propmap MAP<string, string>)
+             ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
+             COLLECTION ITEMS TERMINATED BY '%'
+             MAP KEYS TERMINATED BY ':'
+             STORED AS TEXTFILE;
+    ```
+
+    Notice that:
+    - `FIELDS TERMINATED BY` identifies a comma as the field terminator.
+    - The `COLLECTION ITEMS TERMINATED BY` subclause specifies the percent sign as the collection items (array item, map key/value pair) terminator.
+    - `MAP KEYS TERMINATED BY` identifies a colon as the terminator for map keys.
+
+4. Load the `pxf_hive_complex.txt` sample data file into the `table_complextypes` table you just created:
+
+    ``` sql
+    hive> LOAD DATA LOCAL INPATH '/tmp/pxf_hive_complex.txt' INTO TABLE table_complextypes;
+    ```
+
+5. Perform a query on Hive table `table_complextypes` to verify that the data was loaded successfully:
+
+    ``` sql
+    hive> SELECT * FROM table_complextypes;
+    ```
+
+    ``` shell
+    3	Prague	[1,2,3]	{"zone":"euro","status":"up"}
+    89	Rome	[4,5,6]	{"zone":"euro"}
+    400	Bangalore	[7,8,9]	{"zone":"apac","status":"pending"}
+    ...
+    ```
+
+6. Use the PXF `Hive` profile to create a queryable HAWQ external table representing the Hive `table_complextypes`:
+
+    ``` sql
+    postgres=# CREATE EXTERNAL TABLE complextypes_hiveprofile(index int, name text, intarray text, propmap text)
+                 LOCATION ('pxf://namenode:51200/table_complextypes?PROFILE=Hive')
+               FORMAT 'CUSTOM' (formatter='pxfwritable_import');
+    ```
+
+    Notice that the integer array and map complex types are mapped to type text.
+
+7. Query the external table:
+
+    ``` sql
+    postgres=# SELECT * FROM complextypes_hiveprofile;
+    ```
+
+    ``` shell     
+     index |    name    | intarray |              propmap
+    -------+------------+----------+------------------------------------
+         3 | Prague     | [1,2,3]  | {"zone":"euro","status":"up"}
+        89 | Rome       | [4,5,6]  | {"zone":"euro"}
+       400 | Bangalore  | [7,8,9]  | {"zone":"apac","status":"pending"}
+       183 | Beijing    | [0,1,2]  | {"zone":"apac"}
+        94 | Sacramento | [3,4,5]  | {"zone":"noam","status":"down"}
+       101 | Paris      | [6,7,8]  | {"zone":"euro","status":"up"}
+        56 | Frankfurt  | [9,0,1]  | {"zone":"euro"}
+       202 | Jakarta    | [2,3,4]  | {"zone":"apac","status":"up"}
+       313 | Sydney     | [5,6,7]  | {"zone":"apac","status":"pending"}
+        76 | Atlanta    | [8,9,0]  | {"zone":"noam","status":"down"}
+    (10 rows)
+    ```
+
+    `intarray` and `propmap` are each text strings.
+
+## <a id="hcatalog"></a>Using PXF and HCatalog to Query Hive
+
+Hive tables can be queried directly through HCatalog integration with HAWQ and PXF, regardless of the underlying file storage format.
+
+In previous sections, you created an external table in PXF that described the target table's Hive metadata. Another option for querying Hive tables is to take advantage of HAWQ's integration with HCatalog. This integration allows HAWQ to directly use table metadata stored in HCatalog.
+
+HCatalog is built on top of the Hive metastore and incorporates Hive's DDL. This provides several advantages:
+
+-   You do not need to know the table schema of your Hive tables
+-   You do not need to manually enter information about Hive table location or format
+-   If Hive table metadata changes, HCatalog provides updated metadata. This is in contrast to the use of static external PXF tables to define Hive table metadata for HAWQ.
+
+The following diagram depicts how HAWQ integrates with HCatalog to query Hive tables:
+
+<img src="../images/hawq_hcatalog.png" id="hcatalog__image_ukw_h2v_c5" class="image" width="672" />
+
+1.  HAWQ retrieves table metadata from HCatalog using PXF.
+2.  HAWQ creates in-memory catalog tables from the retrieved metadata. If a table is referenced multiple times in a transaction, HAWQ uses its in-memory metadata to reduce external calls to HCatalog.
+3.  PXF queries Hive using table metadata that is stored in the HAWQ in-memory catalog tables. Table metadata is dropped at the end of the transaction.
+
+
+### <a id="topic_j1l_enabling"></a>Enabling HCatalog Integration
+
+To enable HCatalog query integration in HAWQ, perform the following steps:
+
+1.  Make sure your deployment meets the requirements listed in [Prerequisites](#installingthepxfhiveplugin).
+2.  If necessary, set the `pxf_service_address` global configuration property to the hostname or IP address and port where you have installed the PXF Hive plug-in. By default, the value is set to `localhost:51200`.
+
+    ``` sql
+    postgres=# SET pxf_service_address TO '<hivenode>:51200';
+    ```
+
+3.  HCatalog internally uses the `pxf` protocol to query.  Grant this protocol privilege to all roles requiring access:
+
+    ``` sql
+    postgres=# GRANT ALL ON PROTOCOL pxf TO <role>;
+    ```
+
+4. Creating a HAWQ table using the `WITH (OIDS)` clause is not recommended. If any user tables were created using `WITH (OIDS)`, additional operations are required to enable HCatalog integration: to access a Hive table via HCatalog, HAWQ users must have `SELECT` permission on every user table in the same schema that was created using the `WITH (OIDS)` clause.
+
+    1. Determine which user tables were created using the `WITH (OIDS)` clause:
+
+        ``` sql
+        postgres=# SELECT oid, relname FROM pg_class 
+                     WHERE relhasoids = true 
+                       AND relnamespace <> (SELECT oid FROM pg_namespace WHERE nspname = 'pg_catalog');
+        ```
+
+    2. Grant `SELECT` privilege on all returned tables to all roles to which you chose to provide HCatalog query access. For example:
+
+        ``` sql
+        postgres=# GRANT SELECT ON <table-created-WITH-OIDS> TO <role>;
+        ``` 
+
+### <a id="topic_j1l_y55_c5"></a>Usage    
+
+To query a Hive table with HCatalog integration, query HCatalog directly from HAWQ. The query syntax is:
+
+``` sql
+postgres=# SELECT * FROM hcatalog.hive-db-name.hive-table-name;
+```
+
+For example:
+
+``` sql
+postgres=# SELECT * FROM hcatalog.default.sales_info;
+```
+
+To obtain a description of a Hive table with HCatalog integration, you can use the `psql` client interface.
+
+-   Within HAWQ, use either the `\d hcatalog.hive-db-name.hive-table-name` or `\d+ hcatalog.hive-db-name.hive-table-name` command to describe a single table. `\d` displays only HAWQ's interpretation of the underlying source (Hive in this case) data type, while `\d+` displays both the HAWQ interpreted and Hive source data types. For example, from the `psql` client interface:
+
+    ``` shell
+    $ psql -d postgres
+    ```
+
+    ``` sql
+    postgres=# \d+ hcatalog.default.sales_info_rcfile;
+    ```
+
+    ``` shell
+    PXF Hive Table "default.sales_info_rcfile"
+          Column      |  Type  | Source type 
+    ------------------+--------+-------------
+     location         | text   | string
+     month            | text   | string
+     number_of_orders | int4   | int
+     total_sales      | float8 | double
+    ```
+-   Use `\d hcatalog.hive-db-name.*` to describe the whole database schema, i.e. all tables in `hive-db-name`.
+-   Use `\d hcatalog.*.*` to describe the whole schema, i.e. all databases and tables.
+
+When using `\d` or `\d+` commands in the `psql` HAWQ client, `hcatalog` will not be listed as a database. If you use other `psql` compatible clients, `hcatalog` will be listed as a database with a size value of `-1` since `hcatalog` is not a real database in HAWQ.
+
+Alternatively, you can use the `pxf_get_item_fields` user-defined function (UDF) to obtain Hive table descriptions from other client interfaces or third-party applications. The UDF takes a PXF profile and a table pattern string as its input parameters.  **Note:** The only supported input profile at this time is `'Hive'`.
+
+- The following statement returns a description of a specific table. The description includes path, itemname (table), fieldname, and fieldtype.
+
+    ``` sql
+    postgres=# SELECT * FROM pxf_get_item_fields('Hive','default.sales_info_rcfile');
+    ```
+
+    ``` pre
+      path   |     itemname      |    fieldname     | fieldtype
+    ---------+-------------------+------------------+-----------
+     default | sales_info_rcfile | location         | text
+     default | sales_info_rcfile | month            | text
+     default | sales_info_rcfile | number_of_orders | int4
+     default | sales_info_rcfile | total_sales      | float8
+    ```
+
+- The following statement returns table descriptions from the default database.
+
+    ``` sql
+    postgres=# SELECT * FROM pxf_get_item_fields('Hive','default.*');
+    ```
+
+- The following statement returns a description of the entire schema.
+
+    ``` sql
+    postgres=# SELECT * FROM pxf_get_item_fields('Hive', '*.*');
+    ```
+
+### <a id="topic_r5k_pst_25"></a>Limitations
+
+HCatalog integration has the following limitations:
+
+-   HCatalog integration queries and describe commands do not support complex types; only primitive types are supported. Use PXF external tables to query complex types in Hive. (See [Complex Types Example](#complex_dt_example).)
+-   Even for primitive types, HCatalog metadata descriptions produced by `\d` are HAWQ's interpretation of the underlying Hive data types. For example, the Hive type `tinyint` is converted to HAWQ type `int2`. (See [Data Type Mapping](#hive_primdatatypes).)
+-   HAWQ reserves the database name `hcatalog` for system use. You cannot connect to or alter the system `hcatalog` database.
+
+## <a id="partitionfiltering"></a>Partition Filtering
+
+The PXF Hive plug-in supports the Hive partitioning feature and directory structure. This enables partition exclusion on selected HDFS files comprising the Hive table. To use the partition filtering feature to reduce network traffic and I/O, run a PXF query using a `WHERE` clause that refers to a specific partition in the partitioned Hive table.
+
+To take advantage of PXF partition filtering push-down, the Hive and PXF partition field names should be the same. Otherwise, PXF ignores partition filtering and the filtering is performed on the HAWQ side, impacting performance.
+
+**Note:** The Hive plug-in filters only on partition columns, not on other table attributes.
+
+### <a id="partitionfiltering_pushdowncfg"></a>Configure Partition Filtering Push-Down
+
+PXF partition filtering push-down is enabled by default. To disable PXF partition filtering push-down, set the `pxf_enable_filter_pushdown` HAWQ server configuration parameter to `off`:
+
+``` sql
+postgres=# SHOW pxf_enable_filter_pushdown;
+ pxf_enable_filter_pushdown
+-----------------------------
+ on
+(1 row)
+postgres=# SET pxf_enable_filter_pushdown=off;
+```
+
+### <a id="example2"></a>Create Partitioned Hive Table
+
+Create a Hive table `sales_part` with two partition columns, `delivery_state` and `delivery_city`:
+
+``` sql
+hive> CREATE TABLE sales_part (name string, type string, supplier_key int, price double)
+        PARTITIONED BY (delivery_state string, delivery_city string)
+        ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
+```
+
+Load data into this Hive table and add some partitions:
+
+``` sql
+hive> INSERT INTO TABLE sales_part 
+        PARTITION(delivery_state = 'CALIFORNIA', delivery_city = 'Fresno') 
+        VALUES ('block', 'widget', 33, 15.17);
+hive> INSERT INTO TABLE sales_part 
+        PARTITION(delivery_state = 'CALIFORNIA', delivery_city = 'Sacramento') 
+        VALUES ('cube', 'widget', 11, 1.17);
+hive> INSERT INTO TABLE sales_part 
+        PARTITION(delivery_state = 'NEVADA', delivery_city = 'Reno') 
+        VALUES ('dowel', 'widget', 51, 31.82);
+hive> INSERT INTO TABLE sales_part 
+        PARTITION(delivery_state = 'NEVADA', delivery_city = 'Las Vegas') 
+        VALUES ('px49', 'pipe', 52, 99.82);
+```
+
+The Hive storage directory structure for the `sales_part` table appears as follows:
+
+``` pre
+$ sudo -u hdfs hdfs dfs -ls -R /apps/hive/warehouse/sales_part
+/apps/hive/warehouse/sales_part/delivery_state=CALIFORNIA/delivery_city=Fresno/
+/apps/hive/warehouse/sales_part/delivery_state=CALIFORNIA/delivery_city=Sacramento/
+/apps/hive/warehouse/sales_part/delivery_state=NEVADA/delivery_city=Reno/
+/apps/hive/warehouse/sales_part/delivery_state=NEVADA/delivery_city=Las Vegas/
+```
+
+To define a HAWQ PXF table that will read this Hive table and take advantage of partition filter push-down, define the fields corresponding to the Hive partition fields at the end of the `CREATE EXTERNAL TABLE` attribute list. In HiveQL, a `SELECT *` statement on a partitioned table shows the partition fields at the end of the record.
+
+``` sql
+postgres=# CREATE EXTERNAL TABLE pxf_sales_part(
+  item_name TEXT, 
+  item_type TEXT, 
+  supplier_key INTEGER, 
+  item_price DOUBLE PRECISION, 
+  delivery_state TEXT, 
+  delivery_city TEXT
+)
+LOCATION ('pxf://namenode:51200/sales_part?Profile=Hive')
+FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
+
+postgres=# SELECT * FROM pxf_sales_part;
+```
+
+### <a id="example3"></a>Query Without Pushdown
+
+In the following example, the HAWQ query filters on the `delivery_city` partition `Sacramento`. The filter on `item_name` is not pushed down, since it is not a partition column; it is applied on the HAWQ side after all of the data in the `Sacramento` partition is transferred for processing.
+
+``` sql
+postgres=# SELECT * FROM pxf_sales_part WHERE delivery_city = 'Sacramento' AND item_name = 'cube';
+```
+
+### <a id="example4"></a>Query With Pushdown
+
+The following HAWQ query reads all the data under the `delivery_state` partition `CALIFORNIA`, regardless of the city.
+
+``` sql
+postgres=# SET pxf_enable_filter_pushdown=on;
+postgres=# SELECT * FROM pxf_sales_part WHERE delivery_state = 'CALIFORNIA';
+```
+
+## <a id="topic_fdm_zrh_1s"></a>Using PXF with Hive Default Partitions
+
+This topic describes a difference in query results between Hive and PXF queries when Hive tables use a default partition. When dynamic partitioning is enabled in Hive, a partitioned table may store data in a default partition. Hive creates a default partition when the value of a partitioning column does not match the defined type of the column (for example, when a NULL value is used for any partitioning column). In Hive, any query that includes a filter on a partition column *excludes* any data that is stored in the table's default partition.
+
+Similar to Hive, PXF represents a table's partitioning columns as columns that are appended to the end of the table. However, PXF translates any column value in a default partition to a NULL value. This means that a HAWQ query that includes an IS NULL filter on a partitioning column can return different results than the same Hive query.
+
+Consider a Hive partitioned table that is created with the statement:
+
+``` sql
+hive> CREATE TABLE sales (order_id bigint, order_amount float) PARTITIONED BY (xdate date);
+```
+
+The table is loaded with five rows that contain the following data:
+
+``` pre
+1.0    1900-01-01
+2.2    1994-04-14
+3.3    2011-03-31
+4.5    NULL
+5.0    2013-12-06
+```
+
+The insertion of row 4 creates a Hive default partition, because the partition column `xdate` contains a null value.
+
+In Hive, any query that filters on the partition column omits data in the default partition. For example, the following query returns no rows:
+
+``` sql
+hive> SELECT * FROM sales WHERE xdate is null;
+```
+
+However, if you map this table as a PXF external table in HAWQ, all default partition values are translated into actual NULL values. In HAWQ, executing the same query against the PXF table returns row 4 as the result, because the filter matches the NULL value.
+
+Keep this behavior in mind when executing IS NULL queries on Hive partitioned tables.
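+
+As an illustrative sketch of the HAWQ side, the following statements assume the `namenode:51200` PXF location used in the earlier examples, a Hive `sales` table in the `default` database, and a mapping of the Hive `float` and `date` columns to the HAWQ `real` and `date` types; the external table name `pxf_sales` is hypothetical:
+
+``` sql
+postgres=# CREATE EXTERNAL TABLE pxf_sales(order_id bigint, order_amount real, xdate date)
+             LOCATION ('pxf://namenode:51200/default.sales?PROFILE=Hive')
+           FORMAT 'CUSTOM' (formatter='pxfwritable_import');
+
+-- Returns row 4, because PXF translates the Hive default partition value to NULL.
+postgres=# SELECT * FROM pxf_sales WHERE xdate IS NULL;
+```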
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/pxf/InstallPXFPlugins.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/pxf/InstallPXFPlugins.html.md.erb b/markdown/pxf/InstallPXFPlugins.html.md.erb
new file mode 100644
index 0000000..4ae4101
--- /dev/null
+++ b/markdown/pxf/InstallPXFPlugins.html.md.erb
@@ -0,0 +1,81 @@
+---
+title: Installing PXF Plug-ins
+---
+
+This topic describes how to install the built-in PXF service plug-ins that are required to connect PXF to HDFS, Hive, HBase, and JSON. 
+
+**Note:** PXF requires that you run Tomcat on the host machine. Tomcat reserves ports 8005, 8080, and 8009. If you have configured Oozie JMX reporting on a host that will run PXF, make sure that the reporting service uses a port other than 8005. This helps to prevent port conflict errors from occurring when you start the PXF service.
+
+## <a id="directories_and_logs"></a>PXF Installation and Log File Directories
+
+Installing PXF plug-ins, regardless of method, creates directories and log files on each node receiving the plug-in installation:
+
+| Directory                      | Description                                                                                                                                                                                                                                |
+|--------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `/usr/lib/pxf`                 | PXF library location                                                                                                                                                                                                                       |
+| `/etc/pxf/conf`                | PXF configuration directory. This directory contains the `pxf-public.classpath` and `pxf-private.classpath` configuration files. See [Setting up the Java Classpath](ConfigurePXF.html#settingupthejavaclasspath). |
+| `/var/pxf/pxf-service`         | PXF service instance location                                                                                                                                                                                                              |
+| `/var/log/pxf` | This directory includes `pxf-service.log` and all Tomcat-related logs including `catalina.out`. Logs are owned by user:group `pxf`:`pxf`. Other users have read access.                                                                          |
+| `/var/run/pxf/catalina.pid`    | PXF Tomcat container PID location                                                                                                                                                                                                          |
+
+
+## <a id="install_pxf_plug_ambari"></a>Installing PXF Using Ambari
+
+If you are using Ambari to install and manage your HAWQ cluster, you do *not* need to follow the manual installation steps in this topic. Installing using the Ambari web interface installs all of the necessary PXF plug-in components.
+
+## <a id="install_pxf_plug_cmdline"></a>Installing PXF from the Command Line
+
+Each PXF service plug-in resides in its own RPM.  You may have built these RPMs in the Apache HAWQ open source project repository (see [PXF Build Instructions](https://github.com/apache/incubator-hawq/blob/master/pxf/README.md)), or these RPMs may have been included in a commercial product download package.
+
+Perform the following steps on **_each_** node in your cluster to install PXF:
+
+1. Install the PXF software, including Apache Tomcat, the PXF service, and all PXF plug-ins (HDFS, HBase, Hive, JSON):
+
+    ```shell
+    $ sudo yum install -y pxf
+    ```
+
+    Installing PXF in this manner:
+    * installs the required version of `apache-tomcat`
+    * creates a `/etc/pxf/pxf-n.n.n` directory, adding a softlink from `/etc/pxf` to this directory
+    * sets up the PXF service configuration files in `/etc/pxf`
+    * creates a `/usr/lib/pxf-n.n.n` directory, adding a softlink from `/usr/lib/pxf` to this directory
+    * copies the PXF service JAR file `pxf-service-n.n.n.jar` to `/usr/lib/pxf-n.n.n/`
+    * copies JAR files for each of the PXF plug-ins to `/usr/lib/pxf-n.n.n/`
+    * creates `pxf-xxx.jar` softlinks in `/usr/lib/pxf-n.n.n/`
+
+2. Initialize the PXF service:
+
+    ```shell
+    $ sudo service pxf-service init
+    ```
+
+3. Start the PXF service:
+
+    ```shell
+    $ sudo service pxf-service start
+    ```
+    
+    Additional `pxf-service` command options include `stop`, `restart`, and `status`.
+
+4. If you choose to use the HBase plug-in, perform the following configuration:
+
+    1. Add the PXF HBase plug-in JAR file to the HBase `CLASSPATH` by updating the `HBASE_CLASSPATH` environment variable setting in the HBase environment file `/etc/hbase/conf/hbase-env.sh`:
+
+        ``` shell
+        export HBASE_CLASSPATH=${HBASE_CLASSPATH}:/usr/lib/pxf/pxf-hbase.jar
+        ```
+
+    2. Restart the HBase service after making this update to HBase configuration.
+
+        On the HBase Master node:
+
+        ``` shell
+        $ su -l hbase -c "/usr/hdp/current/hbase-master/bin/hbase-daemon.sh restart master; sleep 25"
+       ```
+
+        On an HBase Region Server node:
+
+        ```shell
+        $ su -l hbase -c "/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh restart regionserver"
+        ```


[31/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/overview/TableDistributionStorage.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/overview/TableDistributionStorage.html.md.erb b/markdown/overview/TableDistributionStorage.html.md.erb
new file mode 100755
index 0000000..ec1d8b5
--- /dev/null
+++ b/markdown/overview/TableDistributionStorage.html.md.erb
@@ -0,0 +1,41 @@
+---
+title: Table Distribution and Storage
+---
+
+HAWQ stores all table data, except system catalog tables, in HDFS. When a user creates a table, the metadata is stored on the master's local file system and the table content is stored in HDFS.
+
+In order to simplify table data management, all the data of one relation are saved under one HDFS folder.
+
+For all HAWQ table storage formats, AO \(Append-Only\) and Parquet, the data files are splittable, so that HAWQ can assign multiple virtual segments to consume one data file concurrently. This increases the degree of query parallelism.
+
+## Table Distribution Policy
+
+The default table distribution policy in HAWQ is random.
+
+Randomly distributed tables have some benefits over hash distributed tables. For example, after cluster expansion, HAWQ can use more resources automatically without redistributing the data. For huge tables, redistribution is very expensive, and data locality for randomly distributed tables is better after the underlying HDFS redistributes its data during rebalance or DataNode failures. This is quite common when the cluster is large.
+
+On the other hand, for some queries, hash distributed tables are faster than randomly distributed tables. For example, hash distributed tables have some performance benefits for some TPC-H queries. You should choose the distribution policy that is best suited for your application's scenario.
+
+See [Choosing the Table Distribution Policy](../ddl/ddl-table.html) for more details.
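+
+For illustration only, the following minimal sketch shows both policies; the table and column names are hypothetical:
+
+``` sql
+-- Random distribution (the HAWQ default).
+CREATE TABLE clicks (user_id int, url text) DISTRIBUTED RANDOMLY;
+
+-- Hash distribution on a chosen column.
+CREATE TABLE users (user_id int, user_name text) DISTRIBUTED BY (user_id);
+```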
+
+## Data Locality
+
+Data is distributed across HDFS DataNodes. Since remote read involves network I/O, a data locality algorithm improves the local read ratio. HAWQ considers three aspects when allocating data blocks to virtual segments:
+
+-   Ratio of local read
+-   Continuity of file read
+-   Data balance among virtual segments
+
+## External Data Access
+
+HAWQ can access data in external files using the HAWQ Extension Framework (PXF).
+PXF is an extensible framework that allows HAWQ to access data in external
+sources as readable or writable HAWQ tables. PXF has built-in connectors for
+accessing data inside HDFS files, Hive tables, and HBase tables. PXF also
+integrates with HCatalog to query Hive tables directly. See [Using PXF
+with Unmanaged Data](../pxf/HawqExtensionFrameworkPXF.html) for more
+details.
+
+Users can create custom PXF connectors to access other parallel data stores or
+processing engines. Connectors are Java plug-ins that use the PXF API. For more
+information see [PXF External Tables and API](../pxf/PXFExternalTableandAPIReference.html).

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/overview/system-overview.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/overview/system-overview.html.md.erb b/markdown/overview/system-overview.html.md.erb
new file mode 100644
index 0000000..9fc1c53
--- /dev/null
+++ b/markdown/overview/system-overview.html.md.erb
@@ -0,0 +1,11 @@
+---
+title: Apache HAWQ (Incubating) System Overview
+---
+* <a href="./HAWQOverview.html" class="subnav">What is HAWQ?</a>
+* <a href="./HAWQArchitecture.html" class="subnav">HAWQ Architecture</a>
+* <a href="./TableDistributionStorage.html" class="subnav">Table Distribution and Storage</a>
+* <a href="./ElasticSegments.html" class="subnav">Elastic Virtual Segment Allocation</a>
+* <a href="./ResourceManagement.html" class="subnav">Resource Management</a>
+* <a href="./HDFSCatalogCache.html" class="subnav">HDFS Catalog Cache</a>
+* <a href="./ManagementTools.html" class="subnav">Management Tools</a>
+* <a href="./RedundancyFailover.html" class="subnav">Redundancy and Fault Tolerance</a>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/plext/UsingProceduralLanguages.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/plext/UsingProceduralLanguages.html.md.erb b/markdown/plext/UsingProceduralLanguages.html.md.erb
new file mode 100644
index 0000000..bef1b93
--- /dev/null
+++ b/markdown/plext/UsingProceduralLanguages.html.md.erb
@@ -0,0 +1,23 @@
+---
+title: Using Languages and Extensions in HAWQ
+---
+
+HAWQ supports user-defined functions that are created with the SQL and C built-in languages, and also supports user-defined aliases for internal functions.
+
+HAWQ also supports user-defined functions written in languages other than SQL and C. These other languages are generically called *procedural languages* (PLs) and are extensions to the core HAWQ functionality. HAWQ specifically supports the PL/Java, PL/Perl, PL/pgSQL, PL/Python, and PL/R procedural languages. 
+
+HAWQ additionally provides the `pgcrypto` extension for password hashing and data encryption.
+
+This chapter describes these languages and extensions:
+
+-   <a href="builtin_langs.html">Using HAWQ Built-In Languages</a>
+-   <a href="using_pljava.html">Using PL/Java</a>
+-   <a href="using_plperl.html">Using PL/Perl</a>
+-   <a href="using_plpgsql.html">Using PL/pgSQL</a>
+-   <a href="using_plpython.html">Using PL/Python</a>
+-   <a href="using_plr.html">Using PL/R</a>
+-   <a href="using_pgcrypto.html">Using pgcrypto</a>
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/plext/builtin_langs.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/plext/builtin_langs.html.md.erb b/markdown/plext/builtin_langs.html.md.erb
new file mode 100644
index 0000000..01891e8
--- /dev/null
+++ b/markdown/plext/builtin_langs.html.md.erb
@@ -0,0 +1,110 @@
+---
+title: Using HAWQ Built-In Languages
+---
+
+This section provides an introduction to using the HAWQ built-in languages.
+
+HAWQ supports user-defined functions created with the SQL and C built-in languages. HAWQ also supports user-defined aliases for internal functions.
+
+
+## <a id="enablebuiltin"></a>Enabling Built-in Language Support
+
+Support for SQL and C language user-defined functions and aliasing of internal functions is enabled by default for all HAWQ databases.
+
+## <a id="builtinsql"></a>Defining SQL Functions
+
+SQL functions execute an arbitrary list of SQL statements. The SQL statements in the body of a SQL function must be separated by semicolons. The final statement in a non-void-returning SQL function must be a [SELECT](../reference/sql/SELECT.html) that returns data of the type specified by the function's return type. The function returns a single row or a set of rows corresponding to this last SQL query.
+
+The following example creates and calls a SQL function to count the number of rows of the table named `orders`:
+
+``` sql
+gpadmin=# CREATE FUNCTION count_orders() RETURNS bigint AS $$
+ SELECT count(*) FROM orders;
+$$ LANGUAGE SQL;
+CREATE FUNCTION
+gpadmin=# SELECT count_orders();
+ count_orders 
+--------------
+       830513
+(1 row)
+```
+
+For additional information about creating SQL functions, refer to [Query Language (SQL) Functions](https://www.postgresql.org/docs/8.2/static/xfunc-sql.html) in the PostgreSQL documentation.
+
+## <a id="builtininternal"></a>Aliasing Internal Functions
+
+Many HAWQ internal functions are written in C. These functions are declared during initialization of the database cluster and statically linked to the HAWQ server. See [Built-in Functions and Operators](../query/functions-operators.html#topic29) for detailed information about HAWQ internal functions.
+
+You cannot define new internal functions, but you can create aliases for existing internal functions.
+
+The following example creates a new function named `all_caps` that is an alias for the `upper` HAWQ internal function:
+
+
+``` sql
+gpadmin=# CREATE FUNCTION all_caps (text) RETURNS text AS 'upper'
+            LANGUAGE internal STRICT;
+CREATE FUNCTION
+gpadmin=# SELECT all_caps('change me');
+ all_caps  
+-----------
+ CHANGE ME
+(1 row)
+
+```
+
+For more information about aliasing internal functions, refer to [Internal Functions](https://www.postgresql.org/docs/8.2/static/xfunc-internal.html) in the PostgreSQL documentation.
+
+## <a id="builtinc_lang"></a>Defining C Functions
+
+You must compile user-defined functions written in C into shared libraries so that the HAWQ server can load them on demand. This dynamic loading distinguishes C language functions from internal functions that are written in C.
+
+The [CREATE FUNCTION](../reference/sql/CREATE-FUNCTION.html) call for a user-defined C function must include both the name of the shared library and the name of the function.
+
+If an absolute path to the shared library is not provided, an attempt is made to locate the library relative to the: 
+
+1. HAWQ PostgreSQL library directory (obtained via the `pg_config --pkglibdir` command)
+2. `dynamic_library_path` configuration value
+3. current working directory
+
+in that order. 
+
+Example:
+
+``` c
+#include "postgres.h"
+#include "fmgr.h"
+
+#ifdef PG_MODULE_MAGIC
+PG_MODULE_MAGIC;
+#endif
+
+PG_FUNCTION_INFO_V1(double_it);
+         
+Datum
+double_it(PG_FUNCTION_ARGS)
+{
+    int32   arg = PG_GETARG_INT32(0);
+
+    PG_RETURN_INT32(arg + arg);
+}
+```
+
+If the above function is compiled into a shared object named `libdoubleit.so` located in `/share/libs`, you would register and invoke the function with HAWQ as follows:
+
+``` sql
+gpadmin=# CREATE FUNCTION double_it_c(integer) RETURNS integer
+            AS '/share/libs/libdoubleit', 'double_it'
+            LANGUAGE C STRICT;
+CREATE FUNCTION
+gpadmin=# SELECT double_it_c(27);
+ double_it 
+-----------
+        54
+(1 row)
+
+```
+
+The shared library `.so` extension may be omitted.
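+
+Given the search order described above, you can also omit the path entirely if `libdoubleit.so` has been installed in one of the searched locations (for example, the directory reported by `pg_config --pkglibdir`). A sketch, using the illustrative function name `double_it_rel`:
+
+``` sql
+gpadmin=# CREATE FUNCTION double_it_rel(integer) RETURNS integer
+            AS 'libdoubleit', 'double_it'
+            LANGUAGE C STRICT;
+```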
+
+For additional information about using the C language to create functions, refer to [C-Language Functions](https://www.postgresql.org/docs/8.2/static/xfunc-c.html) in the PostgreSQL documentation.
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/plext/using_pgcrypto.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/plext/using_pgcrypto.html.md.erb b/markdown/plext/using_pgcrypto.html.md.erb
new file mode 100644
index 0000000..e3e9225
--- /dev/null
+++ b/markdown/plext/using_pgcrypto.html.md.erb
@@ -0,0 +1,32 @@
+---
+title: Enabling Cryptographic Functions for PostgreSQL (pgcrypto)
+---
+
+`pgcrypto` is a package extension included in your HAWQ distribution. You must explicitly enable the cryptographic functions to use this extension.
+
+## <a id="pgcryptoprereq"></a>Prerequisites 
+
+
+Before you enable the `pgcrypto` software package, make sure that your HAWQ database is running, you have sourced `greenplum_path.sh`, and that the `$GPHOME` environment variable is set.
+
+## <a id="enablepgcrypto"></a>Enable pgcrypto 
+
+On every database in which you want to enable `pgcrypto`, run the following command:
+
+``` shell
+$ psql -d <dbname> -f $GPHOME/share/postgresql/contrib/pgcrypto.sql
+```
+	
+Replace \<dbname\> with the name of the target database.
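+
+Once `pgcrypto` is enabled, its functions are available in that database. A brief sketch using two standard `pgcrypto` functions (the input values are illustrative):
+
+``` sql
+-- Hash a password with a generated salt.
+SELECT crypt('mypassword', gen_salt('md5'));
+
+-- Compute a SHA-256 digest of some data.
+SELECT digest('some data', 'sha256');
+```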
+	
+## <a id="uninstallpgcrypto"></a>Disable pgcrypto 
+
+The `uninstall_pgcrypto.sql` script removes `pgcrypto` objects from your database.  On each database in which you enabled `pgcrypto` support, execute the following:
+
+``` shell
+$ psql -d <dbname> -f $GPHOME/share/postgresql/contrib/uninstall_pgcrypto.sql
+```
+
+Replace \<dbname\> with the name of the target database.
+	
+**Note:**  This script does not remove dependent user-created objects.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/plext/using_pljava.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/plext/using_pljava.html.md.erb b/markdown/plext/using_pljava.html.md.erb
new file mode 100644
index 0000000..99b5767
--- /dev/null
+++ b/markdown/plext/using_pljava.html.md.erb
@@ -0,0 +1,709 @@
+---
+title: Using PL/Java
+---
+
+This section contains an overview of the HAWQ PL/Java language. 
+
+
+## <a id="aboutpljava"></a>About PL/Java 
+
+With the HAWQ PL/Java extension, you can write Java methods using your favorite Java IDE and install the JAR files that implement the methods in your HAWQ cluster.
+
+**Note**: If building HAWQ from source, you must specify PL/Java as a build option when compiling HAWQ. To use PL/Java in a HAWQ deployment, you must explicitly enable the PL/Java extension in all desired databases.  
+
+The HAWQ PL/Java package is based on the open source PL/Java 1.4.0. HAWQ PL/Java provides the following features.
+
+- Ability to execute PL/Java functions with Java 1.6 or 1.7.
+- Standardized utilities (modeled after the SQL 2003 proposal) to install and maintain Java code in the database.
+- Standardized mappings of parameters and result. Complex types as well as sets are supported.
+- An embedded, high performance, JDBC driver utilizing the internal HAWQ Database SPI routines.
+- Metadata support for the JDBC driver. Both `DatabaseMetaData` and `ResultSetMetaData` are included.
+- The ability to return a `ResultSet` from a query as an alternative to building a ResultSet row by row.
+- Full support for savepoints and exception handling.
+- The ability to use IN, INOUT, and OUT parameters.
+- Two separate HAWQ languages:
+	- pljava, TRUSTED PL/Java language
+	- pljavau, UNTRUSTED PL/Java language
+- Transaction and Savepoint listeners enabling code execution when a transaction or savepoint is committed or rolled back.
+- Integration with GNU GCJ on selected platforms.
+
+A function in SQL will appoint a static method in a Java class. In order for the function to execute, the appointed class must be available on the class path specified by the HAWQ server configuration parameter `pljava_classpath`. The PL/Java extension adds a set of functions that help to install and maintain the Java classes. Classes are stored in normal Java archives, JAR files. A JAR file can optionally contain a deployment descriptor that in turn contains SQL commands to be executed when the JAR is deployed or undeployed. The functions are modeled after the standards proposed for SQL 2003.
+
+PL/Java implements a standard way of passing parameters and return values. Complex types and sets are passed using the standard JDBC ResultSet class.
+
+A JDBC driver is included in PL/Java. This driver calls HAWQ internal SPI routines. The driver is essential since it is common for functions to make calls back to the database to fetch data. When PL/Java functions fetch data, they must use the same transactional boundaries that are used by the main function that entered PL/Java execution context.
+
+PL/Java is optimized for performance. The Java virtual machine executes within the same process as the backend to minimize call overhead. PL/Java is designed to bring the power of Java to the database itself, so that database-intensive business logic can execute as close to the actual data as possible.
+
+The standard Java Native Interface (JNI) is used when bridging calls between the backend and the Java VM.
+
+
+## <a id="abouthawqpljava"></a>About HAWQ PL/Java 
+
+There are a few key differences between the implementation of PL/Java in standard PostgreSQL and HAWQ.
+
+### <a id="pljavafunctions"></a>Functions 
+
+The following functions are not supported in HAWQ. The classpath is handled differently in a distributed HAWQ environment than in the PostgreSQL environment.
+
+- sqlj.install_jar
+- sqlj.replace_jar
+- sqlj.remove_jar
+- sqlj.get_classpath
+- sqlj.set_classpath
+
+HAWQ uses the `pljava_classpath` server configuration parameter in place of the `sqlj.set_classpath` function.
+
+### <a id="serverconfigparams"></a>Server Configuration Parameters 
+
+PL/Java uses server configuration parameters to configure classpath, Java VM, and other options. Refer to the [Server Configuration Parameter Reference](../reference/HAWQSiteConfig.html) for general information about HAWQ server configuration parameters.
+
+The following server configuration parameters are used by PL/Java in HAWQ. These parameters replace the `pljava.*` parameters that are used in the standard PostgreSQL PL/Java implementation.
+
+#### pljava\_classpath
+
+A colon (:) separated list of the jar files containing the Java classes used in any PL/Java functions. The jar files must be installed in the same locations on all HAWQ hosts. With the trusted PL/Java language handler, jar file paths must be relative to the `$GPHOME/lib/postgresql/java/` directory. With the untrusted language handler (javaU language tag), paths may be relative to `$GPHOME/lib/postgresql/java/` or absolute.
+
+#### pljava\_statement\_cache\_size
+
+Sets the size in KB of the Most Recently Used (MRU) cache for prepared statements.
+
+#### pljava\_release\_lingering\_savepoints
+
+If TRUE, lingering savepoints will be released on function exit. If FALSE, they will be rolled back.
+
+#### pljava\_vmoptions
+
+Defines the start up options for the Java VM.
+
+### <a id="setting_serverconfigparams"></a>Setting PL/Java Configuration Parameters 
+
+You can set PL/Java server configuration parameters at the session level, or globally across your whole cluster. Your HAWQ cluster configuration must be reloaded after setting a server configuration value globally.
+
+#### <a id="setsrvrcfg_global"></a>Cluster Level
+
+You will perform different procedures to set a PL/Java server configuration parameter for your whole HAWQ cluster depending upon whether you manage your cluster from the command line or use Ambari. If you use Ambari to manage your HAWQ cluster, you must ensure that you update server configuration parameters only via the Ambari Web UI. If you manage your HAWQ cluster from the command line, you will use the `hawq config` command line utility to set PL/Java server configuration parameters.
+
+The following examples add a JAR file named `myclasses.jar` to the `pljava_classpath` server configuration parameter for the entire HAWQ cluster.
+
+If you use Ambari to manage your HAWQ cluster:
+
+1. Set the `pljava_classpath` configuration property to include `myclasses.jar` via the HAWQ service **Configs > Advanced > Custom hawq-site** drop down. 
+2. Select **Service Actions > Restart All** to load the updated configuration.
+
+If you manage your HAWQ cluster from the command line:
+
+1.  Log in to the HAWQ master host as a HAWQ administrator and source the file `/usr/local/hawq/greenplum_path.sh`.
+
+    ``` shell
+    $ source /usr/local/hawq/greenplum_path.sh
+    ```
+
+1. Use the `hawq config` utility to set `pljava_classpath`:
+
+    ``` shell
+    $ hawq config -c pljava_classpath -v \'myclasses.jar\'
+    ```
+2. Reload the HAWQ configuration:
+
+    ``` shell
+    $ hawq stop cluster -u
+    ```
+
+#### <a id="setsrvrcfg_session"></a>Session Level 
+
+To set a PL/Java server configuration parameter for only the *current* database session, set the parameter within the `psql` subsystem. For example, to set `pljava_classpath`:
+	
+``` sql
+=> SET pljava_classpath='myclasses.jar';
+```
+
+
+## <a id="enablepljava"></a>Enabling and Removing PL/Java Support 
+
+The PL/Java extension must be explicitly enabled on each database in which it will be used.
+
+
+### <a id="pljavaprereq"></a>Prerequisites 
+
+Before you enable PL/Java:
+
+1. Ensure that you have installed a supported Java runtime environment and that the `$JAVA_HOME` variable is set to the same path on the master and all segment nodes.
+
+2. Perform the following step on all machines to set up `ldconfig` for the installed JDK:
+
+	``` shell
+	$ echo "$JAVA_HOME/jre/lib/amd64/server" > /etc/ld.so.conf.d/libjdk.conf
+	$ ldconfig
+	```
+3. Make sure that your HAWQ cluster is running, you have sourced `greenplum_path.sh` and that your `$GPHOME` environment variable is set.
+
+
+### <a id="enablepljava"></a>Enable PL/Java and Install JAR Files 
+
+To use PL/Java:
+
+1. Enable the language for each database.
+1. Install user-created JAR files on all HAWQ hosts.
+1. Add the names of the JAR files to the HAWQ `pljava_classpath` server configuration parameter. This parameter value should identify a list of the installed JAR files.
+
+#### <a id="enablepljava"></a>Enable PL/Java and Install JAR Files 
+
+Perform the following steps as the `gpadmin` user:
+
+1. Enable PL/Java by running the `$GPHOME/share/postgresql/pljava/install.sql` SQL script in the databases that will use PL/Java. The `install.sql` script registers both the trusted and untrusted PL/Java languages. For example, the following command enables PL/Java on a database named `testdb`:
+
+	``` shell
+	$ psql -d testdb -f $GPHOME/share/postgresql/pljava/install.sql
+	```
+	
+	To enable the PL/Java extension in all new HAWQ databases, run the script on the `template1` database: 
+
+    ``` shell
+    $ psql -d template1 -f $GPHOME/share/postgresql/pljava/install.sql
+    ```
+
+    Use this option *only* if you are certain you want to enable PL/Java in all new databases.
+	
+2. Copy your Java archives (JAR files) to `$GPHOME/lib/postgresql/java/` on all HAWQ hosts. This example uses the `hawq scp` utility to copy the `myclasses.jar` file located in the current directory:
+
+	``` shell
+	$ hawq scp -f hawq_hosts myclasses.jar =:$GPHOME/lib/postgresql/java/
+	```
+	The `hawq_hosts` file contains a list of the HAWQ hosts.
+
+3. Add the JAR files to the `pljava_classpath` configuration parameter. Refer to [Setting PL/Java Configuration Parameters](#setting_serverconfigparams) for the specific procedure.
+
+4. (Optional) Your HAWQ installation includes an `examples.sql` file.  This script contains sample PL/Java functions that you can use for testing. Run the commands in this file to create and run test functions that use the Java classes in `examples.jar`:
+
+	``` shell
+	$ psql -f $GPHOME/share/postgresql/pljava/examples.sql
+	```
+
+#### Configuring PL/Java VM Options
+
+PL/Java JVM options can be configured via the `pljava_vmoptions` server configuration parameter. For example, `pljava_vmoptions=-Xmx512M` sets the maximum heap size of the JVM. The default `-Xmx` value is `64M`.
+
+Refer to [Setting PL/Java Configuration Parameters](#setting_serverconfigparams) for the specific procedure to set PL/Java server configuration parameters.
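+
+For example, to set the JVM maximum heap size for the current database session only (a sketch; choose a value appropriate for your workload):
+
+``` sql
+=> SET pljava_vmoptions='-Xmx512M';
+```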
+
+	
+### <a id="uninstallpljava"></a>Disable PL/Java 
+
+To disable PL/Java, you should:
+
+1. Remove PL/Java support from each database in which it was added.
+2. Uninstall the Java JAR files.
+
+#### <a id="uninstallpljavasupport"></a>Remove PL/Java Support from Databases 
+
+For a database that no longer requires the PL/Java language, remove support for PL/Java by running the `uninstall.sql` script as the `gpadmin` user. For example, the following command disables the PL/Java language in the specified database:
+
+``` shell
+$ psql -d <dbname> -f $GPHOME/share/postgresql/pljava/uninstall.sql
+```
+
+Replace \<dbname\> with the name of the target database.
+
+
+#### <a id="uninstallpljavapackage"></a>Uninstall the Java JAR files 
+
+When no databases have PL/Java as a registered language, remove the Java JAR files.
+
+If you use Ambari to manage your cluster:
+
+1. Remove the `pljava_classpath` configuration property via the HAWQ service **Configs > Advanced > Custom hawq-site** drop down.
+
+2. Remove the JAR files from the `$GPHOME/lib/postgresql/java/` directory of each HAWQ host.
+
+3. Select **Service Actions > Restart All** to restart your HAWQ cluster.
+
+
+If you manage your cluster from the command line:
+
+1.  Log in to the HAWQ master host as a HAWQ administrator and source the file `/usr/local/hawq/greenplum_path.sh`.
+
+    ``` shell
+    $ source /usr/local/hawq/greenplum_path.sh
+    ```
+
+1. Use the `hawq config` utility to remove `pljava_classpath`:
+
+    ``` shell
+    $ hawq config -r pljava_classpath
+    ```
+    
+2. Remove the JAR files from the `$GPHOME/lib/postgresql/java/` directory of each HAWQ host.
+
+3. Restart your HAWQ cluster:
+
+    ``` shell
+    $ hawq restart cluster
+    ```
+
+
+## <a id="writingpljavafunc"></a>Writing PL/Java Functions 
+
+This section provides information about writing functions with PL/Java.
+
+- [SQL Declaration](#sqldeclaration)
+- [Type Mapping](#typemapping)
+- [NULL Handling](#nullhandling)
+- [Complex Types](#complextypes)
+- [Returning Complex Types](#returningcomplextypes)
+- [Functions That Return Sets](#functionreturnsets)
+- [Returning a SETOF \<scalar type\>](#returnsetofscalar)
+- [Returning a SETOF \<complex type\>](#returnsetofcomplex)
+
+
+### <a id="sqldeclaration"></a>SQL Declaration 
+
+A Java function is declared with the name of a class and a static method on that class. The class will be resolved using the classpath that has been defined for the schema where the function is declared. If no classpath has been defined for that schema, the public schema is used. If no classpath is found there either, the class is resolved using the system classloader.
+
+The following function can be declared to access the static method `getProperty` on the `java.lang.System` class:
+
+```sql
+=> CREATE FUNCTION getsysprop(VARCHAR)
+     RETURNS VARCHAR
+     AS 'java.lang.System.getProperty'
+   LANGUAGE java;
+```
+
+Run the following command to return the Java `user.home` property:
+
+```sql
+=> SELECT getsysprop('user.home');
+```
+
+### <a id="typemapping"></a>Type Mapping 
+
+Scalar types are mapped in a straightforward way. This table lists the current mappings.
+
+***Table 1: PL/Java data type mappings***
+
+| PostgreSQL | Java |
+|------------|------|
+| bool | boolean |
+| char | byte |
+| int2 | short |
+| int4 | int |
+| int8 | long |
+| varchar | java.lang.String |
+| text | java.lang.String |
+| bytea | byte[ ] |
+| date | java.sql.Date |
+| time | java.sql.Time (stored value treated as local time) |
+| timetz | java.sql.Time |
+| timestamp	| java.sql.Timestamp (stored value treated as local time) |
+| timestamptz | java.sql.Timestamp |
+| complex |	java.sql.ResultSet |
+| setof complex	| java.sql.ResultSet |
+
+All other types are mapped to `java.lang.String` and will utilize the standard textin/textout routines registered for the respective type.
+
+### <a id="nullhandling"></a>NULL Handling 
+
+The scalar types that map to Java primitives cannot be passed as NULL values. To pass NULL values, those types can have an alternative mapping. You enable this mapping by explicitly denoting it in the method reference.
+
+```sql
+=> CREATE FUNCTION trueIfEvenOrNull(integer)
+     RETURNS bool
+     AS 'foo.fee.Fum.trueIfEvenOrNull(java.lang.Integer)'
+   LANGUAGE java;
+```
+
+The Java code would be similar to this:
+
+```java
+package foo.fee;
+public class Fum
+{
+  static boolean trueIfEvenOrNull(Integer value)
+  {
+    return (value == null)
+      ? true
+      : (value.intValue() % 2) == 0;
+  }
+}
+```
+
+The following two statements both yield true:
+
+```sql
+=> SELECT trueIfEvenOrNull(NULL);
+=> SELECT trueIfEvenOrNull(4);
+```
+
+In order to return NULL values from a Java method, you use the object type that corresponds to the primitive (for example, you return `java.lang.Integer` instead of `int`). The PL/Java resolve mechanism finds the method regardless. Since Java cannot have different return types for methods with the same name, this does not introduce any ambiguity.
+
+### <a id="complextypes"></a>Complex Types 
+
+A complex type will always be passed as a read-only `java.sql.ResultSet` with exactly one row. The `ResultSet` is positioned on its row so a call to `next()` should not be made. The values of the complex type are retrieved using the standard getter methods of the `ResultSet`.
+
+Example:
+
+```sql
+=> CREATE TYPE complexTest
+     AS(base integer, incbase integer, ctime timestamptz);
+=> CREATE FUNCTION useComplexTest(complexTest)
+     RETURNS VARCHAR
+     AS 'foo.fee.Fum.useComplexTest'
+   IMMUTABLE LANGUAGE java;
+```
+
+In the Java class `Fum`, we add the following static method:
+
+```java
+public static String useComplexTest(ResultSet complexTest)
+throws SQLException
+{
+  int base = complexTest.getInt(1);
+  int incbase = complexTest.getInt(2);
+  Timestamp ctime = complexTest.getTimestamp(3);
+  return "Base = \"" + base +
+    "\", incbase = \"" + incbase +
+    "\", ctime = \"" + ctime + "\"";
+}
+```
+
+### <a id="returningcomplextypes"></a>Returning Complex Types 
+
+Java does not stipulate any way to create a `ResultSet`. Hence, returning a ResultSet is not an option. The SQL-2003 draft suggests that a complex return value should be handled as an IN/OUT parameter. PL/Java implements a `ResultSet` that way. If you declare a function that returns a complex type, you will need to use a Java method with boolean return type with a last parameter of type `java.sql.ResultSet`. The parameter will be initialized to an empty updateable ResultSet that contains exactly one row.
+
+Assume that the `complexTest` type from the previous section has been created.
+
+```sql
+=> CREATE FUNCTION createComplexTest(int, int)
+     RETURNS complexTest
+     AS 'foo.fee.Fum.createComplexTest'
+   IMMUTABLE LANGUAGE java;
+```
+
+PL/Java method resolution will now find the following method in the `Fum` class:
+
+```java
+public static boolean createComplexTest(int base, int increment,
+  ResultSet receiver)
+throws SQLException
+{
+  receiver.updateInt(1, base);
+  receiver.updateInt(2, base + increment);
+  receiver.updateTimestamp(3, new 
+    Timestamp(System.currentTimeMillis()));
+  return true;
+}
+```
+
+The return value denotes if the receiver should be considered as a valid tuple (true) or NULL (false).
+
+### <a id="functionreturnsets"></a>Functions that Return Sets 
+
+When returning a result set, you should not build the entire result set in memory before returning it, because building a large result set consumes a large amount of resources. It is better to produce one row at a time; incidentally, that is what the HAWQ backend expects a function with a SETOF return type to do. You can return a SETOF of a scalar type such as int, float, or varchar, or a SETOF of a complex type.
+
+### <a id="returnsetofscalar"></a>Returning a SETOF \<scalar type\> 
+
+To return a set of a scalar type, you need to create a Java method that returns something that implements the `java.util.Iterator` interface. Here is an example of a function that returns a SETOF varchar:
+
+```sql
+=> CREATE FUNCTION javatest.getSystemProperties()
+     RETURNS SETOF varchar
+     AS 'foo.fee.Bar.getNames'
+   IMMUTABLE LANGUAGE java;
+```
+
+This simple Java method returns an iterator:
+
+```java
+package foo.fee;
+import java.util.ArrayList;
+import java.util.Iterator;
+
+public class Bar
+{
+    public static Iterator getNames()
+    {
+        ArrayList names = new ArrayList();
+        names.add("Lisa");
+        names.add("Bob");
+        names.add("Bill");
+        names.add("Sally");
+        return names.iterator();
+    }
+}
+```
+
+### <a id="returnsetofcomplex"></a>Returning a SETOF \<complex type\> 
+
+A method returning a SETOF \<complex type\> must use either the interface `org.postgresql.pljava.ResultSetProvider` or `org.postgresql.pljava.ResultSetHandle`. There are two interfaces because they cater to two distinct use cases. The former is for cases where you want to dynamically create each row that is returned from the SETOF function. The latter is for cases where you want to return the result of an executed query.
+
+#### Using the ResultSetProvider Interface
+
+This interface has two methods: `boolean assignRowValues(java.sql.ResultSet tupleBuilder, int rowNumber)` and `void close()`. The HAWQ query evaluator calls `assignRowValues` repeatedly until it returns false or until the evaluator decides that it does not need any more rows. Then it calls `close()`.
+
+You can use this interface the following way:
+
+```sql
+=> CREATE FUNCTION javatest.listComplexTests(int, int)
+     RETURNS SETOF complexTest
+     AS 'foo.fee.Fum.listComplexTests'
+   IMMUTABLE LANGUAGE java;
+```
+
+The function maps to a static java method that returns an instance that implements the `ResultSetProvider` interface.
+
+```java
+public class Fum implements ResultSetProvider
+{
+  private final int m_base;
+  private final int m_increment;
+  public Fum(int base, int increment)
+  {
+    m_base = base;
+    m_increment = increment;
+  }
+  public boolean assignRowValues(ResultSet receiver, int currentRow)
+  throws SQLException
+  {
+    // Stop when we reach 12 rows.
+    //
+    if(currentRow >= 12)
+      return false;
+    receiver.updateInt(1, m_base);
+    receiver.updateInt(2, m_base + m_increment * currentRow);
+    receiver.updateTimestamp(3, new Timestamp(System.currentTimeMillis()));
+    return true;
+  }
+  public void close()
+  {
+   // Nothing needed in this example
+  }
+  public static ResultSetProvider listComplexTests(int base, int increment)
+  throws SQLException
+  {
+    return new Fum(base, increment);
+  }
+}
+```
+
+The `listComplexTests` method is called once. It may return NULL if no results are available, or an instance of `ResultSetProvider`. Here the Java class `Fum` implements this interface, so it returns an instance of itself. The method `assignRowValues` will then be called repeatedly until it returns false. At that time, `close()` will be called.
+
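+Because `listComplexTests` returns a SETOF of a complex type, you can query it like a table. The following usage sketch selects individual columns of the returned rows (the `ctime` values in the output will vary):
+
+```sql
+=> SELECT base, incbase, ctime
+     FROM javatest.listComplexTests(1, 10);
+```
+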
+#### Using the ResultSetHandle Interface
+
+This interface is similar to the `ResultSetProvider` interface in that it has a `close()` method that will be called at the end. But instead of having the evaluator call a method that builds one row at a time, this interface has a method that returns a `ResultSet`. The query evaluator will iterate over this set and deliver the `ResultSet` contents, one tuple at a time, to the caller until a call to `next()` returns false or the evaluator decides that no more rows are needed.
+
+Here is an example that executes a query using a statement that it obtained using the default connection. The SQL suitable for the deployment descriptor looks like this:
+
+```sql
+=> CREATE FUNCTION javatest.listSupers()
+     RETURNS SETOF pg_user
+     AS 'org.postgresql.pljava.example.Users.listSupers'
+   LANGUAGE java;
+=> CREATE FUNCTION javatest.listNonSupers()
+     RETURNS SETOF pg_user
+     AS 'org.postgresql.pljava.example.Users.listNonSupers'
+   LANGUAGE java;
+```
+
+And in the Java package `org.postgresql.pljava.example` a class `Users` is added:
+
+```java
+public class Users implements ResultSetHandle
+{
+  private final String m_filter;
+  private Statement m_statement;
+  public Users(String filter)
+  {
+    m_filter = filter;
+  }
+  public ResultSet getResultSet()
+  throws SQLException
+  {
+    m_statement =
+      DriverManager.getConnection("jdbc:default:connection").createStatement();
+    return m_statement.executeQuery("SELECT * FROM pg_user WHERE " + m_filter);
+  }
+
+  public void close()
+  throws SQLException
+  {
+    m_statement.close();
+  }
+
+  public static ResultSetHandle listSupers()
+  {
+    return new Users("usesuper = true");
+  }
+
+  public static ResultSetHandle listNonSupers()
+  {
+    return new Users("usesuper = false");
+  }
+}
+```
+## <a id="usingjdbc"></a>Using JDBC 
+
+PL/Java contains a JDBC driver that maps to the PostgreSQL SPI functions. A connection that maps to the current transaction can be obtained using the following statement:
+
+```java
+Connection conn = 
+  DriverManager.getConnection("jdbc:default:connection"); 
+```
+
+After obtaining a connection, you can prepare and execute statements as with other JDBC connections. The PL/Java JDBC driver has the following limitations:
+
+- The transaction cannot be managed in any way. Thus, you cannot use methods on the connection such as:
+   - `commit()`
+   - `rollback()`
+   - `setAutoCommit()`
+   - `setTransactionIsolation()`
+- Savepoints are available with some restrictions. A savepoint cannot outlive the function in which it was set and it must be rolled back or released by that same function.
+- A `ResultSet` returned from `executeQuery()` is always `FETCH_FORWARD` and `CONCUR_READ_ONLY`.
+- Meta-data is only available in PL/Java 1.1 or higher.
+- `CallableStatement` (for stored procedures) is not implemented.
+- The types `Clob` and `Blob` are not completely implemented; they need more work. The types `byte[]` and `String` can be used for `bytea` and `text`, respectively.
+
+## <a id="exceptionhandling"></a>Exception Handling 
+
+You can catch and handle an exception in the HAWQ backend just like any other exception. The backend `ErrorData` structure is exposed as a property in a class called `org.postgresql.pljava.ServerException` (derived from `java.sql.SQLException`) and the Java try/catch mechanism is synchronized with the backend mechanism.
+
+**Important:** When the backend has generated an exception, you cannot continue executing backend functions until your function has returned and the error has been propagated, unless you have used a savepoint. When a savepoint is rolled back, the exceptional condition is reset and you can continue your execution.
+
+## <a id="savepoints"></a>Savepoints 
+
+HAWQ savepoints are exposed using the `java.sql.Connection` interface. Two restrictions apply.
+
+- A savepoint must be rolled back or released in the function where it was set.
+- A savepoint must not outlive the function where it was set.
+
+## <a id="logging"></a>Logging 
+
+PL/Java uses the standard Java Logger. Hence, you can write things like:
+
+```java
+Logger.getAnonymousLogger().info("Time is " + new Date(System.currentTimeMillis()));
+```
+
+At present, the logger uses a handler that maps the current state of the HAWQ configuration setting `log_min_messages` to a valid Logger level and that outputs all messages using the HAWQ backend function `elog()`.
+
+**Note:** The `log_min_messages` setting is read from the database the first time a PL/Java function in a session is executed. On the Java side, the setting does not change after the first PL/Java function execution in a specific session until the HAWQ session that is working with PL/Java is restarted.
+
+The following mappings apply between the Logger levels and the HAWQ backend levels.
+
+***Table 2: PL/Java Logging Levels Mappings***
+
+| java.util.logging.Level | HAWQ Level |
+|-------------------------|------------|
+| SEVERE | ERROR |
+| WARNING |	WARNING |
+| CONFIG |	LOG |
+| INFO | INFO |
+| FINE | DEBUG1 |
+| FINER | DEBUG2 |
+| FINEST | DEBUG3 |
+
+## <a id="security"></a>Security 
+
+This section describes security aspects of using PL/Java.
+
+### <a id="installation"></a>Installation 
+
+Only a database superuser can install PL/Java. The PL/Java utility functions are installed using SECURITY DEFINER so that they execute with the access permissions that were granted to the creator of the functions.
+
+### <a id="trustedlang"></a>Trusted Language 
+
+PL/Java is a trusted language. The trusted PL/Java language has no access to the file system, as stipulated by the PostgreSQL definition of a trusted language. Any database user can create and access functions in a trusted language.
+
+PL/Java also installs a language handler for the language `javau`. This version is not trusted and only a superuser can create new functions that use it. Any user can call the functions.
+
+
+## <a id="pljavaexample"></a>Example 
+
+The following simple Java example creates a JAR file that contains a single method and runs the method.
+
+<p class="note"><b>Note:</b> The example requires Java SDK to compile the Java file.</p>
+
+The following method returns a substring.
+
+```java
+public class Example
+{
+  public static String substring(String text, int beginIndex,
+    int endIndex)
+  {
+    return text.substring(beginIndex, endIndex);
+  }
+}
+```
+
+Enter the Java code in a text file named `Example.java`.
+
+Contents of the file `manifest.txt`:
+
+```plaintext
+Manifest-Version: 1.0
+Main-Class: Example
+Specification-Title: "Example"
+Specification-Version: "1.0"
+Created-By: 1.6.0_35-b10-428-11M3811
+Build-Date: 01../2013 10:09 AM
+```
+
+Compile the Java code:
+
+```shell
+$ javac *.java
+```
+
+Create a JAR archive named `analytics.jar` that contains the class file and the manifest file in the JAR:
+
+```shell
+$ jar cfm analytics.jar manifest.txt *.class
+```
+
+Upload the JAR file to the HAWQ master host.
+
+Run the `hawq scp` utility to copy the jar file to the HAWQ Java directory. Use the `-f` option to specify the file that contains a list of the master and segment hosts:
+
+```shell
+$ hawq scp -f hawq_hosts analytics.jar =:/usr/local/hawq/lib/postgresql/java/
+```
+
+Add the `analytics.jar` JAR file to the `pljava_classpath` configuration parameter. Refer to [Setting PL/Java Configuration Parameters](#setting_serverconfigparams) for the specific procedure.
+
+From the `psql` subsystem, run the following command to show the installed JAR files:
+
+``` sql
+=> SHOW pljava_classpath;
+```
+
+The following SQL commands create a table and define a Java function to test the method in the JAR file:
+
+```sql
+=> CREATE TABLE temp (a varchar) DISTRIBUTED randomly; 
+=> INSERT INTO temp values ('my string'); 
+--Example function 
+=> CREATE OR REPLACE FUNCTION java_substring(varchar, int, int) 
+     RETURNS varchar AS 'Example.substring' 
+   LANGUAGE java; 
+--Example execution 
+=> SELECT java_substring(a, 1, 5) FROM temp;
+```
+
+If you add these SQL commands to a file named `mysample.sql`, you can run the commands from the `psql` subsystem using the `\i` meta-command:
+
+``` sql
+=> \i mysample.sql 
+```
+
+The output is similar to this:
+
+```shell
+java_substring
+----------------
+ y st
+(1 row)
+```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/plext/using_plperl.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/plext/using_plperl.html.md.erb b/markdown/plext/using_plperl.html.md.erb
new file mode 100644
index 0000000..d6ffa04
--- /dev/null
+++ b/markdown/plext/using_plperl.html.md.erb
@@ -0,0 +1,27 @@
+---
+title: Using PL/Perl
+---
+
+This section contains an overview of the HAWQ PL/Perl language extension.
+
+## <a id="enableplperl"></a>Enabling PL/Perl
+
+If PL/Perl was enabled at HAWQ build time, HAWQ installs the PL/Perl language extension automatically. To use PL/Perl, you must register it on each database where you want to use it.
+
+On every database where you want to enable PL/Perl, connect to the database using the psql client.
+
+``` shell
+$ psql -d <dbname>
+```
+
+Replace \<dbname\> with the name of the target database.
+
+Then, run the following SQL command:
+
+``` sql
+psql# CREATE LANGUAGE plperl;
+```
+
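+After you register the language, you can create and run PL/Perl functions in that database. The following minimal sketch (the function name and logic are illustrative only) concatenates two text values:
+
+``` sql
+CREATE OR REPLACE FUNCTION perl_concat(text, text) RETURNS text AS $$
+    my ($x, $y) = @_;
+    return $x . $y;
+$$ LANGUAGE plperl;
+
+SELECT perl_concat('Hello, ', 'HAWQ');
+```
+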
+## <a id="references"></a>References 
+
+For more information on using PL/Perl, see the PostgreSQL PL/Perl documentation at [https://www.postgresql.org/docs/8.2/static/plperl.html](https://www.postgresql.org/docs/8.2/static/plperl.html).
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/plext/using_plpgsql.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/plext/using_plpgsql.html.md.erb b/markdown/plext/using_plpgsql.html.md.erb
new file mode 100644
index 0000000..3661e9b
--- /dev/null
+++ b/markdown/plext/using_plpgsql.html.md.erb
@@ -0,0 +1,142 @@
+---
+title: Using PL/pgSQL in HAWQ
+---
+
+SQL is the language that HAWQ and most other relational databases use as a query language. It is portable and easy to learn, but every SQL statement must be executed individually by the database server. 
+
+PL/pgSQL is a loadable procedural language. PL/pgSQL can do the following:
+
+-   create functions
+-   add control structures to the SQL language
+-   perform complex computations
+-   inherit all user-defined types, functions, and operators
+-   be trusted by the server
+
+Functions created with PL/pgSQL can be used anywhere that built-in functions can be used. For example, you can create complex conditional computation functions and later use them to define operators or use them in index expressions.
+
+Every SQL statement must be executed individually by the database server. Your client application must send each query to the database server, wait for it to be processed, receive and process the results, do some computation, then send further queries to the server. This requires interprocess communication and incurs network overhead if your client is on a different machine than the database server.
+
+With PL/pgSQL, you can group a block of computation and a series of queries inside the database server, thus having the power of a procedural language and the ease of use of SQL, but with considerable savings of client/server communication overhead.
+
+-   Extra round trips between client and server are eliminated
+-   Intermediate results that the client does not need do not have to be marshaled or transferred between server and client
+-   Multiple rounds of query parsing can be avoided
+
+This can result in a considerable performance increase as compared to an application that does not use stored functions.
+
+PL/pgSQL supports all the data types, operators, and functions of SQL.
+
+**Note:**  PL/pgSQL is automatically installed and registered in all HAWQ databases.
+
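+As a minimal sketch of the language (the function name and logic are illustrative only), the following PL/pgSQL function adds a percentage bonus to a salary:
+
+```sql
+CREATE OR REPLACE FUNCTION add_bonus(salary numeric, pct numeric)
+  RETURNS numeric AS $$
+DECLARE
+    bonus numeric;
+BEGIN
+    bonus := salary * pct / 100;
+    RETURN salary + bonus;
+END;
+$$ LANGUAGE plpgsql;
+
+SELECT add_bonus(50000, 10);   -- returns 55000
+```
+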
+## <a id="supportedargumentandresultdatatypes"></a>Supported Data Types for Arguments and Results 
+
+Functions written in PL/pgSQL accept as arguments any scalar or array data type supported by the server, and they can return a result of any of these types. They can also accept or return any composite type (row type) specified by name. It is also possible to declare a PL/pgSQL function as returning `record`, which means that the result is a row type whose columns are determined by specification in the calling query. See <a href="#tablefunctions" class="xref">Table Functions</a>.
+
+PL/pgSQL functions can be declared to accept a variable number of arguments by using the VARIADIC marker. This works exactly the same way as for SQL functions. See <a href="#sqlfunctionswithvariablenumbersofarguments" class="xref">SQL Functions with Variable Numbers of Arguments</a>.
+
+PL/pgSQL functions can also be declared to accept and return the polymorphic types `anyelement`, `anyarray`, `anynonarray`, and `anyenum`. The actual data types handled by a polymorphic function can vary from call to call, as discussed in <a href="http://www.postgresql.org/docs/8.4/static/extend-type-system.html#EXTEND-TYPES-POLYMORPHIC" class="xref">Section 34.2.5</a>. An example is shown in <a href="http://www.postgresql.org/docs/8.4/static/plpgsql-declarations.html#PLPGSQL-DECLARATION-ALIASES" class="xref">Section 38.3.1</a>.
+
+PL/pgSQL functions can also be declared to return a "set" (or table) of any data type that can be returned as a single instance. Such a function generates its output by executing RETURN NEXT for each desired element of the result set, or by using RETURN QUERY to output the result of evaluating a query.
+
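+For example, the following sketch (the function name is illustrative only) builds a set of integers one row at a time with RETURN NEXT, then appends the output of a query with RETURN QUERY:
+
+```sql
+CREATE OR REPLACE FUNCTION first_primes() RETURNS SETOF integer AS $$
+BEGIN
+    RETURN NEXT 2;
+    RETURN NEXT 3;
+    RETURN NEXT 5;
+    -- append the output of a query to the result set
+    RETURN QUERY SELECT i FROM generate_series(7, 11, 4) AS g(i);
+    RETURN;
+END;
+$$ LANGUAGE plpgsql;
+
+SELECT * FROM first_primes();
+```
+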
+Finally, a PL/pgSQL function can be declared to return void if it has no useful return value.
+
+PL/pgSQL functions can also be declared with output parameters in place of an explicit specification of the return type. This does not add any fundamental capability to the language, but it is often convenient, especially for returning multiple values. The RETURNS TABLE notation can also be used in place of RETURNS SETOF.
+
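+For example, this sketch (the names are illustrative only) uses two OUT parameters instead of an explicit RETURNS clause and effectively returns two values:
+
+```sql
+CREATE FUNCTION sum_n_product(x int, y int, OUT sum int, OUT product int) AS $$
+BEGIN
+    sum := x + y;
+    product := x * y;
+END;
+$$ LANGUAGE plpgsql;
+
+SELECT * FROM sum_n_product(3, 7);   -- returns (10, 21)
+```
+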
+This topic describes the following PL/pgSQL concepts:
+
+-   [Table Functions](#tablefunctions)
+-   [SQL Functions with Variable Numbers of Arguments](#sqlfunctionswithvariablenumbersofarguments)
+-   [Polymorphic Types](#polymorphictypes)
+
+
+## <a id="tablefunctions"></a>Table Functions 
+
+
+Table functions are functions that produce a set of rows, made up of either base data types (scalar types) or composite data types (table rows). They are used like a table, view, or subquery in the FROM clause of a query. Columns returned by table functions can be included in SELECT, JOIN, or WHERE clauses in the same manner as a table, view, or subquery column.
+
+If a table function returns a base data type, the single result column name matches the function name. If the function returns a composite type, the result columns get the same names as the individual attributes of the type.
+
+A table function can be aliased in the FROM clause, but it also can be left unaliased. If a function is used in the FROM clause with no alias, the function name is used as the resulting table name.
+
+Some examples:
+
+```sql
+CREATE TABLE foo (fooid int, foosubid int, fooname text);
+
+CREATE FUNCTION getfoo(int) RETURNS SETOF foo AS $$
+    SELECT * FROM foo WHERE fooid = $1;
+$$ LANGUAGE SQL;
+
+SELECT * FROM getfoo(1) AS t1;
+
+SELECT * FROM foo
+    WHERE foosubid IN (
+                        SELECT foosubid
+                        FROM getfoo(foo.fooid) z
+                        WHERE z.fooid = foo.fooid
+                      );
+
+CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);
+
+SELECT * FROM vw_getfoo;
+```
+
+In some cases, it is useful to define table functions that can return different column sets depending on how they are invoked. To support this, the table function can be declared as returning the pseudotype record. When such a function is used in a query, the expected row structure must be specified in the query itself, so that the system can know how to parse and plan the query. Consider this example:
+
+```sql
+SELECT *
+    FROM dblink('dbname=mydb', 'SELECT proname, prosrc FROM pg_proc')
+      AS t1(proname name, prosrc text)
+    WHERE proname LIKE 'bytea%';
+```
+
+The `dblink` function executes a remote query (see `contrib/dblink`). It is declared to return `record` since it might be used for any kind of query. The actual column set must be specified in the calling query so that the parser knows, for example, what `*` should expand to.
+
+
+## <a id="sqlfunctionswithvariablenumbersofarguments"></a>SQL Functions with Variable Numbers of Arguments 
+
+SQL functions can be declared to accept variable numbers of arguments, so long as all the "optional" arguments are of the same data type. The optional arguments will be passed to the function as an array. The function is declared by marking the last parameter as VARIADIC; this parameter must be declared as being of an array type. For example:
+
+```sql
+CREATE FUNCTION mleast(VARIADIC numeric[]) RETURNS numeric AS $$
+    SELECT min($1[i]) FROM generate_subscripts($1, 1) g(i);
+$$ LANGUAGE SQL;
+
+SELECT mleast(10, -1, 5, 4.4);
+ mleast 
+--------
+     -1
+(1 row)
+```
+
+Effectively, all the actual arguments at or beyond the VARIADIC position are gathered up into a one-dimensional array, as if you had written
+
+```sql
+SELECT mleast(ARRAY[10, -1, 5, 4.4]);    -- doesn't work
+```
+
+You can't actually write that, though; or at least, it will not match this function definition. A parameter marked VARIADIC matches one or more occurrences of its element type, not of its own type.
+
+Sometimes it is useful to be able to pass an already-constructed array to a variadic function; this is particularly handy when one variadic function wants to pass on its array parameter to another one. You can do that by specifying VARIADIC in the call:
+
+```sql
+SELECT mleast(VARIADIC ARRAY[10, -1, 5, 4.4]);
+```
+
+This prevents expansion of the function's variadic parameter into its element type, thereby allowing the array argument value to match normally. VARIADIC can only be attached to the last actual argument of a function call.
+
+
+
+## <a id="polymorphictypes"></a>Polymorphic Types 
+
+Four pseudo-types of special interest are `anyelement`, `anyarray`, `anynonarray`, and `anyenum`, which are collectively called *polymorphic types*. Any function declared using these types is said to be a *polymorphic function*. A polymorphic function can operate on many different data types, with the specific data type(s) being determined by the data types actually passed to it in a particular call.
+
+Polymorphic arguments and results are tied to each other and are resolved to a specific data type when a query calling a polymorphic function is parsed. Each position (either argument or return value) declared as `anyelement` is allowed to have any specific actual data type, but in any given call they must all be the same actual type. Each position declared as `anyarray` can have any array data type, but similarly they must all be the same type. If there are positions declared `anyarray` and others declared `anyelement`, the actual array type in the `anyarray` positions must be an array whose elements are the same type appearing in the `anyelement` positions. `anynonarray` is treated exactly the same as `anyelement`, but adds the additional constraint that the actual type must not be an array type. `anyenum` is treated exactly the same as `anyelement`, but adds the additional constraint that the actual type must be an enum type.
+
+Thus, when more than one argument position is declared with a polymorphic type, the net effect is that only certain combinations of actual argument types are allowed. For example, a function declared as `equal(anyelement, anyelement)` will take any two input values, so long as they are of the same data type.
+
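+The following sketch illustrates this with a simple SQL-language implementation of such an `equal` function; the same definition works for integers, text, or any other type that has an equality operator:
+
+```sql
+CREATE FUNCTION equal(anyelement, anyelement) RETURNS boolean AS $$
+    SELECT $1 = $2;
+$$ LANGUAGE SQL;
+
+SELECT equal(1, 1);                      -- integer arguments, returns true
+SELECT equal('abc'::text, 'abd'::text);  -- text arguments, returns false
+```
+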
+When the return value of a function is declared as a polymorphic type, there must be at least one argument position that is also polymorphic, and the actual data type supplied as the argument determines the actual result type for that call. For example, if there were not already an array subscripting mechanism, one could define a function that implements subscripting as `subscript(anyarray, integer) returns anyelement`. This declaration constrains the actual first argument to be an array type, and allows the parser to infer the correct result type from the actual first argument's type. Another example is that a function declared as `f(anyarray) returns anyenum` will only accept arrays of enum types.
+
+Note that `anynonarray` and `anyenum` do not represent separate type variables; they are the same type as `anyelement`, just with an additional constraint. For example, declaring a function as `f(anyelement, anyenum)` is equivalent to declaring it as `f(anyenum, anyenum)`; both actual arguments have to be the same enum type.
+
+Variadic functions described in <a href="#sqlfunctionswithvariablenumbersofarguments" class="xref">SQL Functions with Variable Numbers of Arguments</a> can be polymorphic: this is accomplished by declaring its last parameter as `VARIADIC anyarray`. For purposes of argument matching and determining the actual result type, such a function behaves the same as if you had written the appropriate number of `anynonarray` parameters.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/plext/using_plpython.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/plext/using_plpython.html.md.erb b/markdown/plext/using_plpython.html.md.erb
new file mode 100644
index 0000000..063509a
--- /dev/null
+++ b/markdown/plext/using_plpython.html.md.erb
@@ -0,0 +1,789 @@
+---
+title: Using PL/Python in HAWQ
+---
+
+This section provides an overview of the HAWQ PL/Python procedural language extension.
+
+## <a id="abouthawqplpython"></a>About HAWQ PL/Python 
+
+PL/Python is embedded in your HAWQ product distribution or within your HAWQ build if you chose to enable it as a build option. 
+
+With the HAWQ PL/Python extension, you can write user-defined functions in Python that take advantage of Python features and modules, enabling you to quickly build robust HAWQ database applications.
+
+HAWQ uses the system Python installation.
+
+### <a id="hawqlimitations"></a>HAWQ PL/Python Limitations 
+
+- HAWQ does not support PL/Python trigger functions.
+- PL/Python is available only as a HAWQ untrusted language.
+ 
+## <a id="enableplpython"></a>Enabling and Removing PL/Python Support 
+
+To use PL/Python in HAWQ, you must either install a binary version of HAWQ that includes PL/Python or specify PL/Python as a build option when you compile HAWQ from source.
+
+You must register the PL/Python language with a database before you can create and execute a PL/Python UDF on that database. You must be a database superuser to register and remove new languages in HAWQ databases.
+
+On every database to which you want to install and enable PL/Python:
+
+1. Connect to the database using the `psql` client:
+
+    ``` shell
+    gpadmin@hawq-node$ psql -d <dbname>
+    ```
+
+    Replace \<dbname\> with the name of the target database.
+
+2. Run the following SQL command to register the PL/Python procedural language:
+
+    ``` sql
+    dbname=# CREATE LANGUAGE plpythonu;
+    ```
+
+    **Note**: `plpythonu` is installed as an *untrusted* language; it offers no way of restricting what you can program in UDFs created with the language. Creating and executing PL/Python UDFs is permitted only by database superusers and other database users explicitly `GRANT`ed the permissions.
+
+To remove support for `plpythonu` from a database, run the following SQL command; you must be a database superuser to remove a registered procedural language:
+
+``` sql
+dbname=# DROP LANGUAGE plpythonu;
+```
+
+## <a id="developfunctions"></a>Developing Functions with PL/Python 
+
+PL/Python functions are defined using the standard SQL [CREATE FUNCTION](../reference/sql/CREATE-FUNCTION.html) syntax.
+
+The body of a PL/Python user-defined function is a Python script. When the function is called, its arguments are passed as elements of the array `args[]`. You can also pass named arguments as ordinary variables to the Python script. 
+
+PL/Python function results are returned with a `return` statement, or with a `yield` statement in the case of a set-returning function.
+
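+For example, the following sketch (the function name is illustrative only) uses `yield` to return a set of integer squares:
+
+``` sql
+=# CREATE OR REPLACE FUNCTION mypy_squares(n integer)
+     RETURNS SETOF integer
+   AS $$
+     for i in range(n):
+         yield i * i
+   $$ LANGUAGE plpythonu;
+
+=# SELECT * FROM mypy_squares(4);
+```
+
+Running the `SELECT` produces the rows 0, 1, 4, and 9.
+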
+The following PL/Python function computes and returns the maximum of two integers:
+
+``` sql
+=# CREATE FUNCTION mypymax (a integer, b integer)
+     RETURNS integer
+   AS $$
+     if (a is None) or (b is None):
+       return None
+     if a > b:
+       return a
+     return b
+   $$ LANGUAGE plpythonu;
+```
+
+To execute the `mypymax` function:
+
+``` sql
+=# SELECT mypymax(5, 7);
+ mypymax 
+---------
+       7
+(1 row)
+```
+
+Adding the `STRICT` keyword to the `CREATE FUNCTION` statement instructs HAWQ to return null immediately when any of the input arguments are null; the function body is not executed. When a function is created as `STRICT`, the function itself need not perform null checks.
+
+The following example uses an unnamed argument, the built-in Python `max()` function, and the `STRICT` keyword to create a UDF named `mypymax2`:
+
+``` sql
+=# CREATE FUNCTION mypymax2 (a integer, integer)
+     RETURNS integer AS $$ 
+   return max(a, args[0]) 
+   $$ LANGUAGE plpythonu STRICT;
+=# SELECT mypymax2(5, 3);
+ mypymax2
+----------
+        5
+(1 row)
+=# SELECT mypymax2(5, null);
+ mypymax2
+----------
+       
+(1 row)
+```
+
+## <a id="example_createtbl"></a>Creating the Sample Data
+
+Perform the following steps to create, and insert data into, a simple table. This table will be used in later exercises.
+
+1. Create a database named `testdb`:
+
+    ``` shell
+    gpadmin@hawq-node$ createdb testdb
+    ```
+
+1. Create a table named `sales`:
+
+    ``` shell
+    gpadmin@hawq-node$ psql -d testdb
+    ```
+    ``` sql
+    testdb=> CREATE TABLE sales (id int, year int, qtr int, day int, region text)
+               DISTRIBUTED BY (id);
+    ```
+
+2. Insert data into the table:
+
+    ``` sql
+    testdb=> INSERT INTO sales VALUES
+     (1, 2014, 1,1, 'usa'),
+     (2, 2002, 2,2, 'europe'),
+     (3, 2014, 3,3, 'asia'),
+     (4, 2014, 4,4, 'usa'),
+     (5, 2014, 1,5, 'europe'),
+     (6, 2014, 2,6, 'asia'),
+     (7, 2002, 3,7, 'usa') ;
+    ```
+
+## <a id="pymod_intro"></a>Python Modules 
+A Python module is a text file containing Python statements and definitions. Python modules are named, with the file name for a module following the `<python-module-name>.py` naming convention.
+
+Should you need to build a Python module, ensure that the appropriate software is installed on the build system. Also be sure that you are building for the correct deployment architecture, i.e. 64-bit.
+
+### <a id="pymod_intro_hawq"></a>HAWQ Considerations 
+
+When installing a Python module in HAWQ, you must add the module to all segment nodes in the cluster. You must also add all Python modules to any new segment hosts when you expand your HAWQ cluster.
+
+PL/Python supports the built-in HAWQ Python module named `plpy`.  You can also install 3rd party Python modules.
+
+
+## <a id="modules_plpy"></a>plpy Module 
+
+The HAWQ PL/Python procedural language extension automatically imports the Python module `plpy`. `plpy` implements functions to execute SQL queries and prepare execution plans for queries.  The `plpy` module also includes functions to manage errors and messages.
+   
+### <a id="executepreparesql"></a>Executing and Preparing SQL Queries 
+
+Use the PL/Python `plpy` module `plpy.execute()` function to execute a SQL query. Use the `plpy.prepare()` function to prepare an execution plan for a query. Preparing the execution plan for a query is useful if you want to run the query from multiple Python functions.
+
+#### <a id="plpyexecute"></a>plpy.execute() 
+
+Invoking `plpy.execute()` with a query string and an optional limit argument runs the query, returning the result in a Python result object. This result object:
+
+- emulates a list or dictionary object
+- returns rows that can be accessed by row number and column name; row numbering starts with 0 (zero)
+- can be modified
+- includes an `nrows()` method that returns the number of rows returned by the query
+- includes a `status()` method that returns the `SPI_execute()` return value
+
+For example, the following Python statement when present in a PL/Python user-defined function will execute a `SELECT * FROM mytable` query:
+
+``` python
+rv = plpy.execute("SELECT * FROM my_table", 3)
+```
+
+As instructed by the limit argument `3`, the `plpy.execute` function will return up to 3 rows from `my_table`. The result set is stored in the `rv` object.
+
+Access specific columns in the table by name. For example, if `my_table` has a column named `my_column`:
+
+``` python
+my_col_data = rv[i]["my_column"]
+```
+
+You specified that the function return a maximum of 3 rows in the `plpy.execute()` command above. As such, the index `i` used to access the result value `rv` must specify an integer between 0 and 2, inclusive.
+
+##### <a id="plpyexecute_example"></a>Example: plpy.execute()
+
+Example: Use `plpy.execute()` to run a similar query on the `sales` table you created in an earlier section:
+
+1. Define a PL/Python UDF that executes a query to return at most 5 rows from the `sales` table:
+
+    ``` sql
+    =# CREATE OR REPLACE FUNCTION mypytest(a integer) 
+         RETURNS text 
+       AS $$ 
+         rv = plpy.execute("SELECT * FROM sales ORDER BY id", 5)
+         region = rv[a-1]["region"]
+         return region
+       $$ LANGUAGE plpythonu;
+    ```
+
+    When executed, this UDF returns the `region` value from the `id` identified by the input value `a`. Since row numbering of the result set starts at 0, you must access the result set with index `a - 1`. 
+    
+    Specifying the `ORDER BY id` clause in the `SELECT` statement ensures that subsequent invocations of `mypytest` with the same input argument will return identical result sets.
+
+3. Run `mypytest` with an argument identifying `id` `3`:
+
+    ```sql
+    =# SELECT mypytest(3);
+     mypytest 
+    ----------
+     asia
+    (1 row)
+    ```
+    
+    Recall that row numbering starts at 0 in a Python result set, and the function accesses the result set with index `a - 1`. The valid input argument for the `mypytest` function is therefore an integer between 1 and 5, inclusive.
+
+    The query returns the `region` from the row with `id = 3`, `asia`.
+    
+Note: This example demonstrates some of the concepts discussed previously. It may not be the ideal way to return a specific column value.
+
+#### <a id="plpyprepare"></a>plpy.prepare() 
+
+The function `plpy.prepare()` prepares the execution plan for a query. Preparing the execution plan for a query is useful if you plan to run the query from multiple Python functions.
+
+You invoke `plpy.prepare()` with a query string. Also include a list of parameter types if you are using parameter references in the query. For example, the following statement in a PL/Python user-defined function returns the execution plan for a query:
+
+``` python
+plan = plpy.prepare("SELECT * FROM sales WHERE region = $1 ORDER BY id", [ "text" ])
+```
+
+The string `text` identifies the data type of the variable `$1`. 
+
+After preparing an execution plan, you use the function `plpy.execute()` to run it.  For example:
+
+``` python
+rv = plpy.execute(plan, [ "usa" ])
+```
+
+When executed, `rv` will include all rows in the `sales` table where `region = 'usa'`.
+
+Read on for a description of how one passes data between PL/Python function calls.
+
+##### <a id="plpyprepare_dictionaries"></a>Saving Execution Plans
+
+When you prepare an execution plan using the PL/Python module, the plan is automatically saved. See the [Postgres Server Programming Interface (SPI)](http://www.postgresql.org/docs/8.2/static/spi.html) documentation for information about execution plans.
+
+To make effective use of saved plans across function calls, you use one of the Python persistent storage dictionaries, SD or GD.
+
+The global dictionary SD is available to store data between function calls. This variable is private static data. The global dictionary GD is public data, and is available to all Python functions within a session. *Use GD with care*.
+
+Each function gets its own execution environment in the Python interpreter, so that global data and function arguments from `myfunc1` are not available to `myfunc2`. The exception is the data in the GD dictionary, as mentioned previously.
+
+This example saves an execution plan to the SD dictionary and then executes the plan:
+
+```sql
+=# CREATE FUNCTION usesavedplan() RETURNS text AS $$
+     select1plan = plpy.prepare("SELECT region FROM sales WHERE id=1")
+     SD["s1plan"] = select1plan
+     # other function processing
+     # execute the saved plan
+     rv = plpy.execute(SD["s1plan"])
+     return rv[0]["region"]
+   $$ LANGUAGE plpythonu;
+=# SELECT usesavedplan();
+```
+
+##### <a id="plpyprepare_example"></a>Example: plpy.prepare()
+
+Example: Use `plpy.prepare()` and `plpy.execute()` to prepare and run an execution plan using the GD dictionary:
+
+1. Define a PL/Python UDF to prepare and save an execution plan to the GD. Also  return the name of the plan:
+
+    ``` sql
+    =# CREATE OR REPLACE FUNCTION mypy_prepplan() 
+         RETURNS text 
+       AS $$ 
+         plan = plpy.prepare("SELECT * FROM sales WHERE region = $1 ORDER BY id", [ "text" ])
+         GD["getregionplan"] = plan
+         return "getregionplan"
+       $$ LANGUAGE plpythonu;
+    ```
+
+    This UDF, when run, will return the name (key) of the execution plan generated from the `plpy.prepare()` call.
+
+1. Define a PL/Python UDF to run the execution plan; this function will take the plan name and `region` name as an input:
+
+    ``` sql
+    =# CREATE OR REPLACE FUNCTION mypy_execplan(planname text, regionname text)
+         RETURNS integer 
+       AS $$ 
+         rv = plpy.execute(GD[planname], [ regionname ], 5)
+         year = rv[0]["year"]
+         return year
+       $$ LANGUAGE plpythonu STRICT;
+    ```
+
+    This UDF executes the `planname` plan that was previously saved to the GD. You will call `mypy_execplan()` with the `planname` returned from the `plpy.prepare()` call.
+
+3. Execute the `mypy_prepplan()` and `mypy_execplan()` UDFs, passing `region` `usa`:
+
+    ``` sql
+    =# SELECT mypy_execplan( mypy_prepplan(), 'usa' );
+     mypy_execplan
+    ---------------
+         2014
+    (1 row)
+    ```
+
+### <a id="pythonerrors"></a>Handling Python Errors and Messages 
+
+The `plpy` module implements the following message- and error-related functions, each of which takes a message string as an argument:
+
+- `plpy.debug(msg)`
+- `plpy.log(msg)`
+- `plpy.info(msg)`
+- `plpy.notice(msg)`
+- `plpy.warning(msg)`
+- `plpy.error(msg)`
+- `plpy.fatal(msg)`
+
+`plpy.error()` and `plpy.fatal()` raise a Python exception which, if uncaught, propagates out to the calling query, possibly aborting the current transaction or subtransaction. `raise plpy.ERROR(msg)` and `raise plpy.FATAL(msg)` are equivalent to calling `plpy.error()` and `plpy.fatal()`, respectively. Use the other message functions to generate messages of different priority levels.
+
+Messages may be reported to the client and/or written to the HAWQ server log file.  The HAWQ server configuration parameters [`log_min_messages`](../reference/guc/parameter_definitions.html#log_min_messages) and [`client_min_messages`](../reference/guc/parameter_definitions.html#client_min_messages) control where messages are reported.
+
+#### <a id="plpymessages_example"></a>Example: Generating Messages
+
+In this example, you will create a PL/Python UDF that includes some debug log messages. You will also configure your `psql` session to enable debug-level client logging.
+
+1. Define a PL/Python UDF that executes a query that will return at most 5 rows from the `sales` table. Invoke the `plpy.debug()` method to display some additional information:
+
+    ``` sql
+    =# CREATE OR REPLACE FUNCTION mypytest_debug(a integer) 
+         RETURNS text 
+       AS $$ 
+         plpy.debug('mypytest_debug executing query:  SELECT * FROM sales ORDER BY id')
+         rv = plpy.execute("SELECT * FROM sales ORDER BY id", 5)
+         plpy.debug('mypytest_debug: query returned ' + str(rv.nrows()) + ' rows')
+         region = rv[a]["region"]
+         return region
+       $$ LANGUAGE plpythonu;
+    ```
+
+2. Execute the `mypytest_debug()` UDF, passing the integer `2` as an argument:
+
+    ```sql
+    =# SELECT mypytest_debug(2);
+     mypytest_debug 
+    ----------------
+     asia
+    (1 row)
+    ```
+
+3. Enable `DEBUG2` level client logging:
+
+    ``` sql
+    =# SET client_min_messages=DEBUG2;
+    ```
+    
+2. Execute the `mypytest_debug()` UDF again:
+
+    ```sql
+    =# SELECT mypytest_debug(2);
+    ...
+    DEBUG2:  mypytest_debug executing query:  SELECT * FROM sales ORDER BY id
+    ...
+    DEBUG2:  mypytest_debug: query returned 5 rows
+    ...
+    ```
+
+    Debug output is very verbose. You will need to scan a lot of output to find the `mypytest_debug` messages. *Hint*: look both near the start and the end of the output.
+    
+6. Turn off client-level debug logging:
+
+    ```sql
+    =# SET client_min_messages=NOTICE;
+    ```
+
+## <a id="pythonmodules-3rdparty"></a>3rd-Party Python Modules 
+
+PL/Python supports installation and use of 3rd-party Python Modules. This section includes examples for installing the `setuptools` and NumPy Python modules.
+
+**Note**: You must have superuser privileges to install Python modules to the system Python directories.
+
+### <a id="simpleinstall"></a>Example: Installing setuptools 
+
+In this example, you will manually install the Python `setuptools` module from the Python Package Index repository. `setuptools` enables you to easily download, build, install, upgrade, and uninstall Python packages.
+
+You will first build the module from the downloaded package, installing it on a single host. You will then build and install the module on all segment nodes in your HAWQ cluster.
+
+1. Download the `setuptools` module package from the Python Package Index site. For example, run this `wget` command on a HAWQ node as the `gpadmin` user:
+
+    ``` shell
+    $ ssh gpadmin@<hawq-node>
+    gpadmin@hawq-node$ . /usr/local/hawq/greenplum_path.sh
+    gpadmin@hawq-node$ mkdir plpython_pkgs
+    gpadmin@hawq-node$ cd plpython_pkgs
+    gpadmin@hawq-node$ export PLPYPKGDIR=`pwd`
+    gpadmin@hawq-node$ wget --no-check-certificate https://pypi.python.org/packages/source/s/setuptools/setuptools-18.4.tar.gz
+    ```
+
+2. Extract the files from the `tar.gz` package:
+
+    ``` shell
+    gpadmin@hawq-node$ tar -xzvf setuptools-18.4.tar.gz
+    ```
+
+3. Run the Python scripts to build and install the Python package; you must have superuser privileges to install Python modules to the system Python installation:
+
+    ``` shell
+    gpadmin@hawq-node$ cd setuptools-18.4
+    gpadmin@hawq-node$ python setup.py build 
+    gpadmin@hawq-node$ sudo python setup.py install
+    ```
+
+4. Run the following command to verify the module is available to Python:
+
+    ``` shell
+    gpadmin@hawq-node$ python -c "import setuptools"
+    ```
+    
+    If no error is returned, the `setuptools` module was successfully imported.
+
+5. The `setuptools` package installs the `easy_install` utility. This utility enables you to install Python packages from the Python Package Index repository. For example, this command installs the Python `pip` utility from the Python Package Index site:
+
+    ``` shell
+    gpadmin@hawq-node$ sudo easy_install pip
+    ```
+
+5. Copy the `setuptools` package to all HAWQ nodes in your cluster. For example, this command copies the `tar.gz` file from the current host to the host systems listed in the file `hawq-hosts`:
+
+    ``` shell
+    gpadmin@hawq-node$ cd $PLPYPKGDIR
+    gpadmin@hawq-node$ hawq scp -f hawq-hosts setuptools-18.4.tar.gz =:/home/gpadmin
+    ```
+
+6. Run the commands to build, install, and test the `setuptools` package you just copied to all hosts in your HAWQ cluster. For example:
+
+    ``` shell
+    gpadmin@hawq-node$ hawq ssh -f hawq-hosts
+    >>> mkdir plpython_pkgs
+    >>> cd plpython_pkgs
+    >>> tar -xzvf ../setuptools-18.4.tar.gz
+    >>> cd setuptools-18.4
+    >>> python setup.py build 
+    >>> sudo python setup.py install
+    >>> python -c "import setuptools"
+    >>> exit
+    ```
+
+### <a id="complexinstall"></a>Example: Installing NumPy 
+
+In this example, you will build and install the Python module NumPy. NumPy is a module for scientific computing with Python. For additional information about NumPy, refer to [http://www.numpy.org/](http://www.numpy.org/).
+
+This example assumes `yum` is installed on all HAWQ segment nodes and that the `gpadmin` user is a member of `sudoers` with `root` privileges on the nodes.
+
+#### <a id="complexinstall_prereq"></a>Prerequisites
+Building the NumPy package requires the following software:
+
+- OpenBLAS libraries - an open source implementation of BLAS (Basic Linear Algebra Subprograms)
+- Python development packages - python-devel
+- gcc compilers - gcc, gcc-gfortran, and gcc-c++
+
+Perform the following steps to set up the OpenBLAS compilation environment on each HAWQ node:
+
+1. Use `yum` to install gcc compilers from system repositories. The compilers are required on all hosts where you compile OpenBLAS.  For example:
+
+	``` shell
+	root@hawq-node$ yum -y install gcc gcc-gfortran gcc-c++ python-devel
+	```
+
+2. (Optionally required) If you cannot install the correct compiler versions with `yum`, you have the option to download the gcc compilers, including `gfortran`, from source and build and install them manually. Refer to [Building gfortran from Source](https://gcc.gnu.org/wiki/GFortranBinaries#FromSource) for `gfortran` build and install information.
+
+2. Create a symbolic link to `g++`, naming it `gxx`:
+
+	``` bash
+	root@hawq-node$ ln -s /usr/bin/g++ /usr/bin/gxx
+	```
+
+3. You may also need to create symbolic links to any libraries that have different versions available; for example, linking `libppl_c.so.4` to `libppl_c.so.2`.
+
+4. You can use the `hawq scp` utility to copy files to HAWQ hosts and the `hawq ssh` utility to run commands on those hosts.
+
+
+#### <a id="complexinstall_downdist"></a>Obtaining Packages
+
+Perform the following steps to download and distribute the OpenBLAS and NumPy source packages:
+
+1. Download the OpenBLAS and NumPy source files. For example, these `wget` commands download `tar.gz` files into a `packages` directory in the current working directory:
+
+    ``` shell
+    $ ssh gpadmin@<hawq-node>
+    gpadmin@hawq-node$ wget --directory-prefix=packages http://github.com/xianyi/OpenBLAS/tarball/v0.2.8
+    gpadmin@hawq-node$ wget --directory-prefix=packages http://sourceforge.net/projects/numpy/files/NumPy/1.8.0/numpy-1.8.0.tar.gz/download
+    ```
+
+2. Distribute the software to all nodes in your HAWQ cluster. For example, if you downloaded the software to `/home/gpadmin/packages`, these commands create the `packages` directory on all nodes and copies the software to the nodes listed in the `hawq-hosts` file:
+
+    ``` shell
+    gpadmin@hawq-node$ hawq ssh -f hawq-hosts mkdir packages 
+    gpadmin@hawq-node$ hawq scp -f hawq-hosts packages/* =:/home/gpadmin/packages
+    ```
+
+#### <a id="buildopenblas"></a>Build and Install OpenBLAS Libraries 
+
+Before building and installing the NumPy module, you must first build and install the OpenBLAS libraries. This section describes how to build and install the libraries on a single HAWQ node.
+
+1. Extract the OpenBLAS files from the file:
+
+	``` shell
+	$ ssh gpadmin@<hawq-node>
+	gpadmin@hawq-node$ cd packages
+	gpadmin@hawq-node$ tar xzf v0.2.8 -C /home/gpadmin/packages
+	gpadmin@hawq-node$ mv /home/gpadmin/packages/xianyi-OpenBLAS-9c51cdf /home/gpadmin/packages/OpenBLAS
+	```
+	
+	These commands extract the OpenBLAS tar file and simplify the unpacked directory name.
+
+2. Compile OpenBLAS. You must set the `LIBRARY_PATH` environment variable to the current `$LD_LIBRARY_PATH`. For example:
+
+	``` shell
+	gpadmin@hawq-node$ cd OpenBLAS
+	gpadmin@hawq-node$ export LIBRARY_PATH=$LD_LIBRARY_PATH
+	gpadmin@hawq-node$ make FC=gfortran USE_THREAD=0 TARGET=SANDYBRIDGE
+	```
+	
+	Replace the `TARGET` argument with the target appropriate for your hardware. The `TargetList.txt` file identifies the list of supported OpenBLAS targets.
+	
+	Compiling OpenBLAS may take some time.
+
+3. Install the OpenBLAS libraries in `/usr/local` and then change the owner of the files to `gpadmin`. You must have `root` privileges. For example:
+
+	``` shell
+	gpadmin@hawq-node$ sudo make PREFIX=/usr/local install
+	gpadmin@hawq-node$ sudo ldconfig
+	gpadmin@hawq-node$ sudo chown -R gpadmin /usr/local/lib
+	```
+
+	The following libraries are installed to `/usr/local/lib`, along with symbolic links:
+
+	``` shell
+	gpadmin@hawq-node$ ls -l /usr/local/lib
+	    ...
+	    libopenblas.a -> libopenblas_sandybridge-r0.2.8.a
+	    libopenblas_sandybridge-r0.2.8.a
+	    libopenblas_sandybridge-r0.2.8.so
+	    libopenblas.so -> libopenblas_sandybridge-r0.2.8.so
+	    libopenblas.so.0 -> libopenblas_sandybridge-r0.2.8.so
+	    ...
+	```
+
+4. Install the OpenBLAS libraries on all nodes in your HAWQ cluster. You can use the `hawq ssh` utility to similarly build and install the OpenBLAS libraries on each of the nodes. 
+
+    Or, you may choose to copy the OpenBLAS libraries you just built to all of the HAWQ cluster nodes. For example, these `hawq ssh` and `hawq scp` commands install prerequisite packages, and copy and install the OpenBLAS libraries on the hosts listed in the `hawq-hosts` file.
+
+    ``` shell
+    $ hawq ssh -f hawq-hosts -e 'sudo yum -y install gcc gcc-gfortran gcc-c++ python-devel'
+    $ hawq ssh -f hawq-hosts -e 'sudo ln -s /usr/bin/g++ /usr/bin/gxx'
+    $ hawq ssh -f hawq-hosts -e 'sudo chown gpadmin /usr/local/lib'
+    $ hawq scp -f hawq-hosts /usr/local/lib/libopen*sandy* =:/usr/local/lib
+    ```
+    ``` shell
+    $ hawq ssh -f hawq-hosts
+    >>> cd /usr/local/lib
+    >>> ln -s libopenblas_sandybridge-r0.2.8.a libopenblas.a
+    >>> ln -s libopenblas_sandybridge-r0.2.8.so libopenblas.so
+    >>> ln -s libopenblas_sandybridge-r0.2.8.so libopenblas.so.0
+    >>> sudo ldconfig
+    ```
+
+#### <a id="buildinstallnumpy"></a>Build and Install NumPy
+
+After you have installed the OpenBLAS libraries, you can build and install the NumPy module. These steps install the NumPy module on a single host. You can use the `hawq ssh` utility to build and install the NumPy module on multiple hosts.
+
+1. Extract the NumPy module source files:
+
+	``` shell
+	gpadmin@hawq-node$ cd /home/gpadmin/packages
+	gpadmin@hawq-node$ tar xzf numpy-1.8.0.tar.gz
+	```
+	
+	Unpacking the `numpy-1.8.0.tar.gz` file creates a directory named `numpy-1.8.0` in the current directory.
+
+2. Set up the environment for building and installing NumPy:
+
+	``` shell
+	gpadmin@hawq-node$ export BLAS=/usr/local/lib/libopenblas.a
+	gpadmin@hawq-node$ export LAPACK=/usr/local/lib/libopenblas.a
+	gpadmin@hawq-node$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/
+	gpadmin@hawq-node$ export LIBRARY_PATH=$LD_LIBRARY_PATH
+	```
+
+3. Build and install NumPy. (Building the NumPy package might take some time.)
+
+	``` shell
+	gpadmin@hawq-node$ cd numpy-1.8.0
+	gpadmin@hawq-node$ python setup.py build
+	gpadmin@hawq-node$ sudo python setup.py install
+	```
+
+	**Note:** If the NumPy module did not successfully build, the NumPy build process might need a `site.cfg` file that specifies the location of the OpenBLAS libraries. Create the `site.cfg` file in the NumPy package directory:
+
+	``` shell
+	gpadmin@hawq-node$ touch site.cfg
+	```
+
+	Add the following to the `site.cfg` file and run the NumPy build command again:
+
+	``` pre
+	[default]
+	library_dirs = /usr/local/lib
+
+	[atlas]
+	atlas_libs = openblas
+	library_dirs = /usr/local/lib
+
+	[lapack]
+	lapack_libs = openblas
+	library_dirs = /usr/local/lib
+
+	# added for scikit-learn 
+	[openblas]
+	libraries = openblas
+	library_dirs = /usr/local/lib
+	include_dirs = /usr/local/include
+	```
+
+4. Verify that the NumPy module is available for import by Python:
+
+	``` shell
+	gpadmin@hawq-node$ cd $HOME
+	gpadmin@hawq-node$ python -c "import numpy"
+	```
+	
+	If no error is returned, the NumPy module was successfully imported.
+
+5. As performed in the `setuptools` Python module installation, use the `hawq ssh` utility to build, install, and test the NumPy module on all HAWQ nodes.
+
+5. The environment variables that were required to build the NumPy module are also required in the `gpadmin` runtime environment to run Python NumPy functions. You can use the `echo` command to add the environment variables to `gpadmin`'s `.bashrc` file. For example, the following `echo` commands add the environment variables to the `.bashrc` file in `gpadmin`'s home directory:
+
+	``` shell
+	$ echo -e '\n#Needed for NumPy' >> ~/.bashrc
+	$ echo -e 'export BLAS=/usr/local/lib/libopenblas.a' >> ~/.bashrc
+	$ echo -e 'export LAPACK=/usr/local/lib/libopenblas.a' >> ~/.bashrc
+	$ echo -e 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib' >> ~/.bashrc
+	$ echo -e 'export LIBRARY_PATH=$LD_LIBRARY_PATH' >> ~/.bashrc
+	```
+
+    You can use the `hawq ssh` utility with these `echo` commands to add the environment variables to the `.bashrc` file on all nodes in your HAWQ cluster.
+
+### <a id="testingpythonmodules"></a>Testing Installed Python Modules 
+
+You can create a simple PL/Python user-defined function (UDF) to validate that a Python module is available in HAWQ. This example tests the NumPy module.
+
+1. Create a PL/Python UDF that imports the NumPy module:
+
+    ``` shell
+    gpadmin@hawq_node$ psql -d testdb
+    ```
+    ``` sql
+    =# CREATE OR REPLACE FUNCTION test_importnumpy(x int)
+       RETURNS text
+       AS $$
+         try:
+             from numpy import *
+             return 'SUCCESS'
+         except ImportError, e:
+             return 'FAILURE'
+       $$ LANGUAGE plpythonu;
+    ```
+
+    The function returns SUCCESS if the module is imported, and FAILURE if an import error occurs.
+
+2. Create a table that loads data on each HAWQ segment instance:
+
+    ``` sql
+    => CREATE TABLE disttbl AS (SELECT x FROM generate_series(1,50) x ) DISTRIBUTED BY (x);
+    ```
+    
+    Depending upon the size of your HAWQ installation, you may need to generate a larger series to ensure data is distributed to all segment instances.
+
+3. Run the UDF on the segment nodes where data is stored in the primary segment instances.
+
+    ``` sql
+    =# SELECT gp_segment_id, test_importnumpy(1) AS status
+         FROM disttbl
+         GROUP BY gp_segment_id, status
+         ORDER BY gp_segment_id, status;
+    ```
+
+    The `SELECT` command returns SUCCESS if the UDF imported the Python module on the HAWQ segment instance. FAILURE is returned if the Python module could not be imported.
+   
+
+#### <a id="testingpythonmodules"></a>Troubleshooting Python Module Import Failures
+
+Possible causes of a Python module import failure include:
+
+- A problem accessing required libraries. For the NumPy example, HAWQ might have a problem accessing the OpenBLAS libraries or the Python libraries on a segment host.
+
+	*Try*: Test importing the module on the segment host. This `hawq ssh` command tests importing the NumPy module on the segment host named mdw1.
+
+	``` shell
+	gpadmin@hawq-node$ hawq ssh -h mdw1 python -c "import numpy"
+	```
+
+- Environment variables may not be configured in the HAWQ environment. The Python import command may not return an error in this case.
+
+	*Try*: Ensure that the environment variables are properly set. For the NumPy example, ensure that the environment variables listed at the end of the section [Build and Install NumPy](#buildinstallnumpy) are defined in the `.bashrc` file for the `gpadmin` user on the master and all segment nodes.
+	
+	**Note:** The `.bashrc` file for the `gpadmin` user on the HAWQ master and all segment nodes must source the `greenplum_path.sh` file.
+
+	
+- HAWQ might not have been restarted after adding environment variable settings to the `.bashrc` file. Again, the Python import command may not return an error in this case.
+
+	*Try*: Ensure that you have restarted HAWQ.
+	
+	``` shell
+	gpadmin@master$ hawq restart cluster
+	```
+
+## <a id="dictionarygd"></a>Using the GD Dictionary to Improve PL/Python Performance 
+
+Importing a Python module is an expensive operation that can adversely affect performance. If you are importing the same module frequently, you can use Python global variables to import the module on the first invocation and forego loading the module on subsequent imports. 
+
+The following PL/Python function uses the GD persistent storage dictionary to avoid importing the module NumPy if it has already been imported in the GD. The UDF includes a call to `plpy.notice()` to display a message when importing the module.
+
+``` sql
+=# CREATE FUNCTION mypy_import2gd() RETURNS text AS $$ 
+     if 'numpy' not in GD:
+       plpy.notice('mypy_import2gd: importing module numpy')
+       import numpy
+       GD['numpy'] = numpy
+     return 'numpy'
+   $$ LANGUAGE plpythonu;
+```
+``` sql
+=# SELECT mypy_import2gd();
+NOTICE:  mypy_import2gd: importing module numpy
+CONTEXT:  PL/Python function "mypy_import2gd"
+ mypy_import2gd 
+----------------
+ numpy
+(1 row)
+```
+``` sql
+=# SELECT mypy_import2gd();
+ mypy_import2gd 
+----------------
+ numpy
+(1 row)
+```
+
+The second `SELECT` call does not include the `NOTICE` message, indicating that the module was obtained from the GD.
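+
+Other PL/Python functions in the same session can then reuse the module cached in `GD`. The following sketch is not part of the original example; the function name and the cast to `float` are illustrative only, and the function imports NumPy itself if the cache is empty:
+
+``` sql
+=# CREATE OR REPLACE FUNCTION mypy_numpy_sqrt(x float8)
+   RETURNS float8
+   AS $$
+     # Reuse the NumPy module cached in GD; import and cache it on first use.
+     if 'numpy' not in GD:
+         import numpy
+         GD['numpy'] = numpy
+     return float(GD['numpy'].sqrt(x))
+   $$ LANGUAGE plpythonu;
+```
+``` sql
+=# SELECT mypy_numpy_sqrt(2.0);
+```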
+
+## <a id="references"></a>References 
+
+This section lists references for using PL/Python.
+
+### <a id="technicalreferences"></a>Technical References 
+
+For information about PL/Python in HAWQ, see the [PL/Python - Python Procedural Language](http://www.postgresql.org/docs/8.2/static/plpython.html) PostgreSQL documentation.
+
+For information about Python Package Index (PyPI), refer to [PyPI - the Python Package Index](https://pypi.python.org/pypi).
+
+The following Python modules may be of interest:
+
+- The [SciPy library](http://www.scipy.org/scipylib/index.html) provides user-friendly and efficient numerical routines, including routines for numerical integration and optimization. To download the SciPy package tar file:
+
+    ``` shell
+    hawq-node$ wget http://sourceforge.net/projects/scipy/files/scipy/0.10.1/scipy-0.10.1.tar.gz
+    ```
+
+- [Natural Language Toolkit](http://www.nltk.org/) (`nltk`) is a platform for building Python programs to work with human language data. 
+
+    The Python [`distribute`](https://pypi.python.org/pypi/distribute/0.6.21) package is required for `nltk`. The `distribute` package must be installed before you install `nltk`. To download the `distribute` package tar file:
+
+    ``` shell
+    hawq-node$ wget http://pypi.python.org/packages/source/d/distribute/distribute-0.6.21.tar.gz
+    ```
+
+    To download the `nltk` package tar file:
+
+    ``` shell
+    hawq-node$ wget http://pypi.python.org/packages/source/n/nltk/nltk-2.0.2.tar.gz#md5=6e714ff74c3398e88be084748df4e657
+    ```
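+
+    A typical source install of both packages follows. This is a sketch only: it assumes the tar files have already been downloaded to the host, that `python` resolves to the interpreter used by PL/Python and its `site-packages` directory is writable, and that you repeat the steps (or use `hawq scp`/`hawq ssh`) on the master and every segment host:
+
+    ``` shell
+    hawq-node$ tar -xzvf distribute-0.6.21.tar.gz
+    hawq-node$ cd distribute-0.6.21
+    hawq-node$ python setup.py install
+    hawq-node$ cd ..
+    hawq-node$ tar -xzvf nltk-2.0.2.tar.gz
+    hawq-node$ cd nltk-2.0.2
+    hawq-node$ python setup.py install
+    ```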
+
+### <a id="usefulreading"></a>Useful Reading 
+
+For information about the Python language, see [http://www.python.org/](http://www.python.org/).
+
+A set of slides from a talk about how the Pivotal Data Science team uses the PyData stack in the Pivotal MPP databases and on Pivotal Cloud Foundry is available at [http://www.slideshare.net/SrivatsanRamanujam/all-thingspythonpivotal](http://www.slideshare.net/SrivatsanRamanujam/all-thingspythonpivotal).
+



[48/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/bestpractices/querying_data_bestpractices.html.md.erb
----------------------------------------------------------------------
diff --git a/bestpractices/querying_data_bestpractices.html.md.erb b/bestpractices/querying_data_bestpractices.html.md.erb
deleted file mode 100644
index 3efe569..0000000
--- a/bestpractices/querying_data_bestpractices.html.md.erb
+++ /dev/null
@@ -1,43 +0,0 @@
----
-title: Best Practices for Querying Data
----
-
-To obtain the best results when querying data in HAWQ, review the best practices described in this topic.
-
-## <a id="virtual_seg_performance"></a>Factors Impacting Query Performance
-
-The number of virtual segments used for a query directly impacts the query's performance. The following factors can impact the degree of parallelism of a query:
-
--   **Cost of the query**. Small queries use fewer segments and larger queries use more segments. Some techniques used in defining resource queues can influence the number of both virtual segments and general resources allocated to queries. For more information, see [Best Practices for Using Resource Queues](managing_resources_bestpractices.html#topic_hvd_pls_wv).
--   **Available resources at query time**. If more resources are available in the resource queue, those resources will be used.
--   **Hash table and bucket number**. If the query involves only hash-distributed tables, the query's parallelism is fixed (equal to the hash table bucket number) under the following conditions: 
- 
-  	- The bucket number (bucketnum) configured for all the hash tables is the same for all tables 
-   - The table size for random tables is no more than 1.5 times the size allotted for the hash tables. 
-
-  Otherwise, the number of virtual segments depends on the query's cost: hash-distributed table queries behave like queries on randomly-distributed tables.
-  
--   **Query Type**: It can be difficult to calculate resource costs for queries with some user-defined functions or for queries to external tables. With these queries, the number of virtual segments is controlled by the `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit` parameters, as well as by the `ON` clause and the location list of external tables. If the query has a hash result table (e.g. `INSERT into hash_table`), the number of virtual segments must be equal to the bucket number of the resulting hash table. If the query is performed in utility mode, such as for `COPY` and `ANALYZE` operations, the virtual segment number is calculated by different policies.
-
-  ***Note:*** PXF external tables use the `default_hash_table_bucket_number` parameter, not the `hawq_rm_nvseg_perquery_perseg_limit` parameter, to control the number of virtual segments.
-
-See [Query Performance](../query/query-performance.html#topic38) for more details.
-
-## <a id="id_xtk_jmq_1v"></a>Examining Query Plans to Solve Problems
-
-If a query performs poorly, examine its query plan and ask the following questions:
-
--   **Do operations in the plan take an exceptionally long time?** Look for an operation that consumes the majority of query processing time. For example, if a scan on a hash table takes longer than expected, the data locality may be low; reloading the data can increase the data locality and speed up the query. Or, adjust `enable_<operator>` parameters to see if you can force the legacy query optimizer (planner) to choose a different plan by disabling a particular query plan operator for that query.
--   **Are the optimizer's estimates close to reality?** Run `EXPLAIN ANALYZE` and see if the number of rows the optimizer estimates is close to the number of rows the query operation actually returns. If there is a large discrepancy, collect more statistics on the relevant columns.
--   **Are selective predicates applied early in the plan?** Apply the most selective filters early in the plan so fewer rows move up the plan tree. If the query plan does not correctly estimate query predicate selectivity, collect more statistics on the relevant columns. You can also try reordering the `WHERE` clause of your SQL statement.
--   **Does the optimizer choose the best join order?** When you have a query that joins multiple tables, make sure that the optimizer chooses the most selective join order. Joins that eliminate the largest number of rows should be done earlier in the plan so fewer rows move up the plan tree.
-
-    If the plan is not choosing the optimal join order, set `join_collapse_limit=1` and use explicit `JOIN` syntax in your SQL statement to force the legacy query optimizer (planner) to the specified join order. You can also collect more statistics on the relevant join columns.
-
--   **Does the optimizer selectively scan partitioned tables?** If you use table partitioning, is the optimizer selectively scanning only the child tables required to satisfy the query predicates? Scans of the parent tables should return 0 rows since the parent tables do not contain any data. See [Verifying Your Partition Strategy](../ddl/ddl-partition.html#topic74) for an example of a query plan that shows a selective partition scan.
--   **Does the optimizer choose hash aggregate and hash join operations where applicable?** Hash operations are typically much faster than other types of joins or aggregations. Row comparison and sorting is done in memory rather than reading/writing from disk. To enable the query optimizer to choose hash operations, there must be sufficient memory available to hold the estimated number of rows. Run an `EXPLAIN ANALYZE` for the query to show which plan operations spilled to disk, how much work memory they used, and how much memory was required to avoid spilling to disk. For example:
-
-    `Work_mem used: 23430K bytes avg, 23430K bytes max (seg0). Work_mem wanted: 33649K bytes avg, 33649K bytes max (seg0) to lessen workfile I/O affecting 2 workers.`
-
-  **Note:** The "bytes wanted" (*work\_mem* property) is based on the amount of data written to work files and is not exact. This property is not configurable. Use resource queues to manage memory use. For more information on resource queues, see [Configuring Resource Management](../resourcemgmt/ConfigureResourceManagement.html) and [Working with Hierarchical Resource Queues](../resourcemgmt/ResourceQueues.html).
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/bestpractices/secure_bestpractices.html.md.erb
----------------------------------------------------------------------
diff --git a/bestpractices/secure_bestpractices.html.md.erb b/bestpractices/secure_bestpractices.html.md.erb
deleted file mode 100644
index 04c5343..0000000
--- a/bestpractices/secure_bestpractices.html.md.erb
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Best Practices for Securing HAWQ
----
-
-To secure your HAWQ deployment, review the recommendations listed in this topic.
-
--   Set up SSL to encrypt your client server communication channel. See [Encrypting Client/Server Connections](../clientaccess/client_auth.html#topic5).
--   Configure `pg_hba.conf` only on HAWQ master. Do not configure it on segments.
-    **Note:** For a more secure system, consider removing all connections that use trust authentication from your master `pg_hba.conf`. Trust authentication means the role is granted access without any authentication, therefore bypassing all security. Replace trust entries with ident authentication if your system has an ident service available.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/book/Gemfile
----------------------------------------------------------------------
diff --git a/book/Gemfile b/book/Gemfile
new file mode 100644
index 0000000..f66d333
--- /dev/null
+++ b/book/Gemfile
@@ -0,0 +1,5 @@
+source "https://rubygems.org"
+
+gem 'bookbindery'
+
+gem 'libv8', '3.16.14.7'

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/book/Gemfile.lock
----------------------------------------------------------------------
diff --git a/book/Gemfile.lock b/book/Gemfile.lock
new file mode 100644
index 0000000..3c483c0
--- /dev/null
+++ b/book/Gemfile.lock
@@ -0,0 +1,203 @@
+GEM
+  remote: https://rubygems.org/
+  specs:
+    activesupport (4.2.7.1)
+      i18n (~> 0.7)
+      json (~> 1.7, >= 1.7.7)
+      minitest (~> 5.1)
+      thread_safe (~> 0.3, >= 0.3.4)
+      tzinfo (~> 1.1)
+    addressable (2.4.0)
+    ansi (1.5.0)
+    bookbindery (9.12.0)
+      ansi (~> 1.4)
+      css_parser
+      elasticsearch
+      fog-aws (~> 0.7.1)
+      font-awesome-sass
+      git (~> 1.2.8)
+      middleman (~> 3.4.0)
+      middleman-livereload (~> 3.4.3)
+      middleman-syntax (~> 2.0)
+      nokogiri (= 1.6.7.2)
+      puma
+      rack-rewrite
+      redcarpet (~> 3.2.3)
+      rouge (!= 1.9.1)
+      therubyracer
+      thor
+    builder (3.2.2)
+    capybara (2.4.4)
+      mime-types (>= 1.16)
+      nokogiri (>= 1.3.3)
+      rack (>= 1.0.0)
+      rack-test (>= 0.5.4)
+      xpath (~> 2.0)
+    chunky_png (1.3.6)
+    coffee-script (2.4.1)
+      coffee-script-source
+      execjs
+    coffee-script-source (1.10.0)
+    compass (1.0.3)
+      chunky_png (~> 1.2)
+      compass-core (~> 1.0.2)
+      compass-import-once (~> 1.0.5)
+      rb-fsevent (>= 0.9.3)
+      rb-inotify (>= 0.9)
+      sass (>= 3.3.13, < 3.5)
+    compass-core (1.0.3)
+      multi_json (~> 1.0)
+      sass (>= 3.3.0, < 3.5)
+    compass-import-once (1.0.5)
+      sass (>= 3.2, < 3.5)
+    css_parser (1.4.5)
+      addressable
+    elasticsearch (2.0.0)
+      elasticsearch-api (= 2.0.0)
+      elasticsearch-transport (= 2.0.0)
+    elasticsearch-api (2.0.0)
+      multi_json
+    elasticsearch-transport (2.0.0)
+      faraday
+      multi_json
+    em-websocket (0.5.1)
+      eventmachine (>= 0.12.9)
+      http_parser.rb (~> 0.6.0)
+    erubis (2.7.0)
+    eventmachine (1.2.0.1)
+    excon (0.51.0)
+    execjs (2.7.0)
+    faraday (0.9.2)
+      multipart-post (>= 1.2, < 3)
+    ffi (1.9.14)
+    fog-aws (0.7.6)
+      fog-core (~> 1.27)
+      fog-json (~> 1.0)
+      fog-xml (~> 0.1)
+      ipaddress (~> 0.8)
+    fog-core (1.42.0)
+      builder
+      excon (~> 0.49)
+      formatador (~> 0.2)
+    fog-json (1.0.2)
+      fog-core (~> 1.0)
+      multi_json (~> 1.10)
+    fog-xml (0.1.2)
+      fog-core
+      nokogiri (~> 1.5, >= 1.5.11)
+    font-awesome-sass (4.6.2)
+      sass (>= 3.2)
+    formatador (0.2.5)
+    git (1.2.9.1)
+    haml (4.0.7)
+      tilt
+    hike (1.2.3)
+    hooks (0.4.1)
+      uber (~> 0.0.14)
+    http_parser.rb (0.6.0)
+    i18n (0.7.0)
+    ipaddress (0.8.3)
+    json (1.8.3)
+    kramdown (1.12.0)
+    libv8 (3.16.14.7)
+    listen (3.0.8)
+      rb-fsevent (~> 0.9, >= 0.9.4)
+      rb-inotify (~> 0.9, >= 0.9.7)
+    middleman (3.4.1)
+      coffee-script (~> 2.2)
+      compass (>= 1.0.0, < 2.0.0)
+      compass-import-once (= 1.0.5)
+      execjs (~> 2.0)
+      haml (>= 4.0.5)
+      kramdown (~> 1.2)
+      middleman-core (= 3.4.1)
+      middleman-sprockets (>= 3.1.2)
+      sass (>= 3.4.0, < 4.0)
+      uglifier (~> 2.5)
+    middleman-core (3.4.1)
+      activesupport (~> 4.1)
+      bundler (~> 1.1)
+      capybara (~> 2.4.4)
+      erubis
+      hooks (~> 0.3)
+      i18n (~> 0.7.0)
+      listen (~> 3.0.3)
+      padrino-helpers (~> 0.12.3)
+      rack (>= 1.4.5, < 2.0)
+      thor (>= 0.15.2, < 2.0)
+      tilt (~> 1.4.1, < 2.0)
+    middleman-livereload (3.4.6)
+      em-websocket (~> 0.5.1)
+      middleman-core (>= 3.3)
+      rack-livereload (~> 0.3.15)
+    middleman-sprockets (3.4.2)
+      middleman-core (>= 3.3)
+      sprockets (~> 2.12.1)
+      sprockets-helpers (~> 1.1.0)
+      sprockets-sass (~> 1.3.0)
+    middleman-syntax (2.1.0)
+      middleman-core (>= 3.2)
+      rouge (~> 1.0)
+    mime-types (3.1)
+      mime-types-data (~> 3.2015)
+    mime-types-data (3.2016.0521)
+    mini_portile2 (2.0.0)
+    minitest (5.9.0)
+    multi_json (1.12.1)
+    multipart-post (2.0.0)
+    nokogiri (1.6.7.2)
+      mini_portile2 (~> 2.0.0.rc2)
+    padrino-helpers (0.12.8)
+      i18n (~> 0.6, >= 0.6.7)
+      padrino-support (= 0.12.8)
+      tilt (~> 1.4.1)
+    padrino-support (0.12.8)
+      activesupport (>= 3.1)
+    puma (3.6.0)
+    rack (1.6.4)
+    rack-livereload (0.3.16)
+      rack
+    rack-rewrite (1.5.1)
+    rack-test (0.6.3)
+      rack (>= 1.0)
+    rb-fsevent (0.9.7)
+    rb-inotify (0.9.7)
+      ffi (>= 0.5.0)
+    redcarpet (3.2.3)
+    ref (2.0.0)
+    rouge (1.11.1)
+    sass (3.4.22)
+    sprockets (2.12.4)
+      hike (~> 1.2)
+      multi_json (~> 1.0)
+      rack (~> 1.0)
+      tilt (~> 1.1, != 1.3.0)
+    sprockets-helpers (1.1.0)
+      sprockets (~> 2.0)
+    sprockets-sass (1.3.1)
+      sprockets (~> 2.0)
+      tilt (~> 1.1)
+    therubyracer (0.12.2)
+      libv8 (~> 3.16.14.0)
+      ref
+    thor (0.19.1)
+    thread_safe (0.3.5)
+    tilt (1.4.1)
+    tzinfo (1.2.2)
+      thread_safe (~> 0.1)
+    uber (0.0.15)
+    uglifier (2.7.2)
+      execjs (>= 0.3.0)
+      json (>= 1.8.0)
+    xpath (2.0.0)
+      nokogiri (~> 1.3)
+
+PLATFORMS
+  ruby
+
+DEPENDENCIES
+  bookbindery
+  libv8 (= 3.16.14.7)
+
+BUNDLED WITH
+   1.11.2

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/book/config.yml
----------------------------------------------------------------------
diff --git a/book/config.yml b/book/config.yml
new file mode 100644
index 0000000..22d2799
--- /dev/null
+++ b/book/config.yml
@@ -0,0 +1,21 @@
+book_repo: incubator-hawq-docs/book
+
+public_host: http://localhost:9292/
+
+sections:
+ - repository:
+     name: incubator-hawq-docs/markdown
+   directory: docs/userguide/2.1.0.0-incubating
+   subnav_template: apache-hawq-nav
+
+template_variables:
+  use_global_header: true
+  global_header_product_href: https://github.com/apache/incubator-hawq
+  global_header_product_link_text: Downloads
+  support_url: http://mail-archives.apache.org/mod_mbox/incubator-hawq-dev/
+  product_url: http://hawq.incubator.apache.org/
+  book_title: Apache HAWQ (incubating) Documentation
+  support_link: <a href="https://issues.apache.org/jira/browse/HAWQ" target="_blank">Issues</a>
+  support_call_to_action: <a href="https://issues.apache.org/jira/browse/HAWQ" target="_blank">Need Help?</a>
+  product_link: <div class="header-item"><a href="http://hawq.incubator.apache.org/">Back to Apache HAWQ Page</a></div>
+  book_title_short: Apache HAWQ (Incubating) Docs

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/book/master_middleman/source/images/favicon.ico
----------------------------------------------------------------------
diff --git a/book/master_middleman/source/images/favicon.ico b/book/master_middleman/source/images/favicon.ico
new file mode 100644
index 0000000..b2c3a0c
Binary files /dev/null and b/book/master_middleman/source/images/favicon.ico differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/book/master_middleman/source/javascripts/book.js
----------------------------------------------------------------------
diff --git a/book/master_middleman/source/javascripts/book.js b/book/master_middleman/source/javascripts/book.js
new file mode 100644
index 0000000..90879c4
--- /dev/null
+++ b/book/master_middleman/source/javascripts/book.js
@@ -0,0 +1,16 @@
+// Declare your book-specific javascript overrides in this file.
+//= require 'waypoints/waypoint'
+//= require 'waypoints/context'
+//= require 'waypoints/group'
+//= require 'waypoints/noframeworkAdapter'
+//= require 'waypoints/sticky'
+
+window.onload = function() {
+  Bookbinder.boot();
+  var sticky = new Waypoint.Sticky({
+    element: document.querySelector('#js-to-top'),
+    wrapper: '<div class="sticky-wrapper" />',
+    stuckClass: 'sticky',
+    offset: 100
+  });
+}

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/book/master_middleman/source/javascripts/waypoints/context.js
----------------------------------------------------------------------
diff --git a/book/master_middleman/source/javascripts/waypoints/context.js b/book/master_middleman/source/javascripts/waypoints/context.js
new file mode 100644
index 0000000..5e3551b
--- /dev/null
+++ b/book/master_middleman/source/javascripts/waypoints/context.js
@@ -0,0 +1,300 @@
+(function() {
+  'use strict'
+
+  function requestAnimationFrameShim(callback) {
+    window.setTimeout(callback, 1000 / 60)
+  }
+
+  var keyCounter = 0
+  var contexts = {}
+  var Waypoint = window.Waypoint
+  var oldWindowLoad = window.onload
+
+  /* http://imakewebthings.com/waypoints/api/context */
+  function Context(element) {
+    this.element = element
+    this.Adapter = Waypoint.Adapter
+    this.adapter = new this.Adapter(element)
+    this.key = 'waypoint-context-' + keyCounter
+    this.didScroll = false
+    this.didResize = false
+    this.oldScroll = {
+      x: this.adapter.scrollLeft(),
+      y: this.adapter.scrollTop()
+    }
+    this.waypoints = {
+      vertical: {},
+      horizontal: {}
+    }
+
+    element.waypointContextKey = this.key
+    contexts[element.waypointContextKey] = this
+    keyCounter += 1
+
+    this.createThrottledScrollHandler()
+    this.createThrottledResizeHandler()
+  }
+
+  /* Private */
+  Context.prototype.add = function(waypoint) {
+    var axis = waypoint.options.horizontal ? 'horizontal' : 'vertical'
+    this.waypoints[axis][waypoint.key] = waypoint
+    this.refresh()
+  }
+
+  /* Private */
+  Context.prototype.checkEmpty = function() {
+    var horizontalEmpty = this.Adapter.isEmptyObject(this.waypoints.horizontal)
+    var verticalEmpty = this.Adapter.isEmptyObject(this.waypoints.vertical)
+    if (horizontalEmpty && verticalEmpty) {
+      this.adapter.off('.waypoints')
+      delete contexts[this.key]
+    }
+  }
+
+  /* Private */
+  Context.prototype.createThrottledResizeHandler = function() {
+    var self = this
+
+    function resizeHandler() {
+      self.handleResize()
+      self.didResize = false
+    }
+
+    this.adapter.on('resize.waypoints', function() {
+      if (!self.didResize) {
+        self.didResize = true
+        Waypoint.requestAnimationFrame(resizeHandler)
+      }
+    })
+  }
+
+  /* Private */
+  Context.prototype.createThrottledScrollHandler = function() {
+    var self = this
+    function scrollHandler() {
+      self.handleScroll()
+      self.didScroll = false
+    }
+
+    this.adapter.on('scroll.waypoints', function() {
+      if (!self.didScroll || Waypoint.isTouch) {
+        self.didScroll = true
+        Waypoint.requestAnimationFrame(scrollHandler)
+      }
+    })
+  }
+
+  /* Private */
+  Context.prototype.handleResize = function() {
+    Waypoint.Context.refreshAll()
+  }
+
+  /* Private */
+  Context.prototype.handleScroll = function() {
+    var triggeredGroups = {}
+    var axes = {
+      horizontal: {
+        newScroll: this.adapter.scrollLeft(),
+        oldScroll: this.oldScroll.x,
+        forward: 'right',
+        backward: 'left'
+      },
+      vertical: {
+        newScroll: this.adapter.scrollTop(),
+        oldScroll: this.oldScroll.y,
+        forward: 'down',
+        backward: 'up'
+      }
+    }
+
+    for (var axisKey in axes) {
+      var axis = axes[axisKey]
+      var isForward = axis.newScroll > axis.oldScroll
+      var direction = isForward ? axis.forward : axis.backward
+
+      for (var waypointKey in this.waypoints[axisKey]) {
+        var waypoint = this.waypoints[axisKey][waypointKey]
+        var wasBeforeTriggerPoint = axis.oldScroll < waypoint.triggerPoint
+        var nowAfterTriggerPoint = axis.newScroll >= waypoint.triggerPoint
+        var crossedForward = wasBeforeTriggerPoint && nowAfterTriggerPoint
+        var crossedBackward = !wasBeforeTriggerPoint && !nowAfterTriggerPoint
+        if (crossedForward || crossedBackward) {
+          waypoint.queueTrigger(direction)
+          triggeredGroups[waypoint.group.id] = waypoint.group
+        }
+      }
+    }
+
+    for (var groupKey in triggeredGroups) {
+      triggeredGroups[groupKey].flushTriggers()
+    }
+
+    this.oldScroll = {
+      x: axes.horizontal.newScroll,
+      y: axes.vertical.newScroll
+    }
+  }
+
+  /* Private */
+  Context.prototype.innerHeight = function() {
+    /*eslint-disable eqeqeq */
+    if (this.element == this.element.window) {
+      return Waypoint.viewportHeight()
+    }
+    /*eslint-enable eqeqeq */
+    return this.adapter.innerHeight()
+  }
+
+  /* Private */
+  Context.prototype.remove = function(waypoint) {
+    delete this.waypoints[waypoint.axis][waypoint.key]
+    this.checkEmpty()
+  }
+
+  /* Private */
+  Context.prototype.innerWidth = function() {
+    /*eslint-disable eqeqeq */
+    if (this.element == this.element.window) {
+      return Waypoint.viewportWidth()
+    }
+    /*eslint-enable eqeqeq */
+    return this.adapter.innerWidth()
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/context-destroy */
+  Context.prototype.destroy = function() {
+    var allWaypoints = []
+    for (var axis in this.waypoints) {
+      for (var waypointKey in this.waypoints[axis]) {
+        allWaypoints.push(this.waypoints[axis][waypointKey])
+      }
+    }
+    for (var i = 0, end = allWaypoints.length; i < end; i++) {
+      allWaypoints[i].destroy()
+    }
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/context-refresh */
+  Context.prototype.refresh = function() {
+    /*eslint-disable eqeqeq */
+    var isWindow = this.element == this.element.window
+    /*eslint-enable eqeqeq */
+    var contextOffset = isWindow ? undefined : this.adapter.offset()
+    var triggeredGroups = {}
+    var axes
+
+    this.handleScroll()
+    axes = {
+      horizontal: {
+        contextOffset: isWindow ? 0 : contextOffset.left,
+        contextScroll: isWindow ? 0 : this.oldScroll.x,
+        contextDimension: this.innerWidth(),
+        oldScroll: this.oldScroll.x,
+        forward: 'right',
+        backward: 'left',
+        offsetProp: 'left'
+      },
+      vertical: {
+        contextOffset: isWindow ? 0 : contextOffset.top,
+        contextScroll: isWindow ? 0 : this.oldScroll.y,
+        contextDimension: this.innerHeight(),
+        oldScroll: this.oldScroll.y,
+        forward: 'down',
+        backward: 'up',
+        offsetProp: 'top'
+      }
+    }
+
+    for (var axisKey in axes) {
+      var axis = axes[axisKey]
+      for (var waypointKey in this.waypoints[axisKey]) {
+        var waypoint = this.waypoints[axisKey][waypointKey]
+        var adjustment = waypoint.options.offset
+        var oldTriggerPoint = waypoint.triggerPoint
+        var elementOffset = 0
+        var freshWaypoint = oldTriggerPoint == null
+        var contextModifier, wasBeforeScroll, nowAfterScroll
+        var triggeredBackward, triggeredForward
+
+        if (waypoint.element !== waypoint.element.window) {
+          elementOffset = waypoint.adapter.offset()[axis.offsetProp]
+        }
+
+        if (typeof adjustment === 'function') {
+          adjustment = adjustment.apply(waypoint)
+        }
+        else if (typeof adjustment === 'string') {
+          adjustment = parseFloat(adjustment)
+          if (waypoint.options.offset.indexOf('%') > - 1) {
+            adjustment = Math.ceil(axis.contextDimension * adjustment / 100)
+          }
+        }
+
+        contextModifier = axis.contextScroll - axis.contextOffset
+        waypoint.triggerPoint = elementOffset + contextModifier - adjustment
+        wasBeforeScroll = oldTriggerPoint < axis.oldScroll
+        nowAfterScroll = waypoint.triggerPoint >= axis.oldScroll
+        triggeredBackward = wasBeforeScroll && nowAfterScroll
+        triggeredForward = !wasBeforeScroll && !nowAfterScroll
+
+        if (!freshWaypoint && triggeredBackward) {
+          waypoint.queueTrigger(axis.backward)
+          triggeredGroups[waypoint.group.id] = waypoint.group
+        }
+        else if (!freshWaypoint && triggeredForward) {
+          waypoint.queueTrigger(axis.forward)
+          triggeredGroups[waypoint.group.id] = waypoint.group
+        }
+        else if (freshWaypoint && axis.oldScroll >= waypoint.triggerPoint) {
+          waypoint.queueTrigger(axis.forward)
+          triggeredGroups[waypoint.group.id] = waypoint.group
+        }
+      }
+    }
+
+    Waypoint.requestAnimationFrame(function() {
+      for (var groupKey in triggeredGroups) {
+        triggeredGroups[groupKey].flushTriggers()
+      }
+    })
+
+    return this
+  }
+
+  /* Private */
+  Context.findOrCreateByElement = function(element) {
+    return Context.findByElement(element) || new Context(element)
+  }
+
+  /* Private */
+  Context.refreshAll = function() {
+    for (var contextId in contexts) {
+      contexts[contextId].refresh()
+    }
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/context-find-by-element */
+  Context.findByElement = function(element) {
+    return contexts[element.waypointContextKey]
+  }
+
+  window.onload = function() {
+    if (oldWindowLoad) {
+      oldWindowLoad()
+    }
+    Context.refreshAll()
+  }
+
+  Waypoint.requestAnimationFrame = function(callback) {
+    var requestFn = window.requestAnimationFrame ||
+      window.mozRequestAnimationFrame ||
+      window.webkitRequestAnimationFrame ||
+      requestAnimationFrameShim
+    requestFn.call(window, callback)
+  }
+  Waypoint.Context = Context
+}())

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/book/master_middleman/source/javascripts/waypoints/group.js
----------------------------------------------------------------------
diff --git a/book/master_middleman/source/javascripts/waypoints/group.js b/book/master_middleman/source/javascripts/waypoints/group.js
new file mode 100644
index 0000000..57c3038
--- /dev/null
+++ b/book/master_middleman/source/javascripts/waypoints/group.js
@@ -0,0 +1,105 @@
+(function() {
+  'use strict'
+
+  function byTriggerPoint(a, b) {
+    return a.triggerPoint - b.triggerPoint
+  }
+
+  function byReverseTriggerPoint(a, b) {
+    return b.triggerPoint - a.triggerPoint
+  }
+
+  var groups = {
+    vertical: {},
+    horizontal: {}
+  }
+  var Waypoint = window.Waypoint
+
+  /* http://imakewebthings.com/waypoints/api/group */
+  function Group(options) {
+    this.name = options.name
+    this.axis = options.axis
+    this.id = this.name + '-' + this.axis
+    this.waypoints = []
+    this.clearTriggerQueues()
+    groups[this.axis][this.name] = this
+  }
+
+  /* Private */
+  Group.prototype.add = function(waypoint) {
+    this.waypoints.push(waypoint)
+  }
+
+  /* Private */
+  Group.prototype.clearTriggerQueues = function() {
+    this.triggerQueues = {
+      up: [],
+      down: [],
+      left: [],
+      right: []
+    }
+  }
+
+  /* Private */
+  Group.prototype.flushTriggers = function() {
+    for (var direction in this.triggerQueues) {
+      var waypoints = this.triggerQueues[direction]
+      var reverse = direction === 'up' || direction === 'left'
+      waypoints.sort(reverse ? byReverseTriggerPoint : byTriggerPoint)
+      for (var i = 0, end = waypoints.length; i < end; i += 1) {
+        var waypoint = waypoints[i]
+        if (waypoint.options.continuous || i === waypoints.length - 1) {
+          waypoint.trigger([direction])
+        }
+      }
+    }
+    this.clearTriggerQueues()
+  }
+
+  /* Private */
+  Group.prototype.next = function(waypoint) {
+    this.waypoints.sort(byTriggerPoint)
+    var index = Waypoint.Adapter.inArray(waypoint, this.waypoints)
+    var isLast = index === this.waypoints.length - 1
+    return isLast ? null : this.waypoints[index + 1]
+  }
+
+  /* Private */
+  Group.prototype.previous = function(waypoint) {
+    this.waypoints.sort(byTriggerPoint)
+    var index = Waypoint.Adapter.inArray(waypoint, this.waypoints)
+    return index ? this.waypoints[index - 1] : null
+  }
+
+  /* Private */
+  Group.prototype.queueTrigger = function(waypoint, direction) {
+    this.triggerQueues[direction].push(waypoint)
+  }
+
+  /* Private */
+  Group.prototype.remove = function(waypoint) {
+    var index = Waypoint.Adapter.inArray(waypoint, this.waypoints)
+    if (index > -1) {
+      this.waypoints.splice(index, 1)
+    }
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/first */
+  Group.prototype.first = function() {
+    return this.waypoints[0]
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/last */
+  Group.prototype.last = function() {
+    return this.waypoints[this.waypoints.length - 1]
+  }
+
+  /* Private */
+  Group.findOrCreate = function(options) {
+    return groups[options.axis][options.name] || new Group(options)
+  }
+
+  Waypoint.Group = Group
+}())

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/book/master_middleman/source/javascripts/waypoints/noframeworkAdapter.js
----------------------------------------------------------------------
diff --git a/book/master_middleman/source/javascripts/waypoints/noframeworkAdapter.js b/book/master_middleman/source/javascripts/waypoints/noframeworkAdapter.js
new file mode 100644
index 0000000..99abcb5
--- /dev/null
+++ b/book/master_middleman/source/javascripts/waypoints/noframeworkAdapter.js
@@ -0,0 +1,213 @@
+(function() {
+  'use strict'
+
+  var Waypoint = window.Waypoint
+
+  function isWindow(element) {
+    return element === element.window
+  }
+
+  function getWindow(element) {
+    if (isWindow(element)) {
+      return element
+    }
+    return element.defaultView
+  }
+
+  function classNameRegExp(className) {
+    return new RegExp("\\b" + className + "\\b");
+  }
+
+  function NoFrameworkAdapter(element) {
+    this.element = element
+    this.handlers = {}
+  }
+
+  NoFrameworkAdapter.prototype.innerHeight = function() {
+    var isWin = isWindow(this.element)
+    return isWin ? this.element.innerHeight : this.element.clientHeight
+  }
+
+  NoFrameworkAdapter.prototype.innerWidth = function() {
+    var isWin = isWindow(this.element)
+    return isWin ? this.element.innerWidth : this.element.clientWidth
+  }
+
+  NoFrameworkAdapter.prototype.off = function(event, handler) {
+    function removeListeners(element, listeners, handler) {
+      for (var i = 0, end = listeners.length - 1; i < end; i++) {
+        var listener = listeners[i]
+        if (!handler || handler === listener) {
+          element.removeEventListener(listener)
+        }
+      }
+    }
+
+    var eventParts = event.split('.')
+    var eventType = eventParts[0]
+    var namespace = eventParts[1]
+    var element = this.element
+
+    if (namespace && this.handlers[namespace] && eventType) {
+      removeListeners(element, this.handlers[namespace][eventType], handler)
+      this.handlers[namespace][eventType] = []
+    }
+    else if (eventType) {
+      for (var ns in this.handlers) {
+        removeListeners(element, this.handlers[ns][eventType] || [], handler)
+        this.handlers[ns][eventType] = []
+      }
+    }
+    else if (namespace && this.handlers[namespace]) {
+      for (var type in this.handlers[namespace]) {
+        removeListeners(element, this.handlers[namespace][type], handler)
+      }
+      this.handlers[namespace] = {}
+    }
+  }
+
+  /* Adapted from jQuery 1.x offset() */
+  NoFrameworkAdapter.prototype.offset = function() {
+    if (!this.element.ownerDocument) {
+      return null
+    }
+
+    var documentElement = this.element.ownerDocument.documentElement
+    var win = getWindow(this.element.ownerDocument)
+    var rect = {
+      top: 0,
+      left: 0
+    }
+
+    if (this.element.getBoundingClientRect) {
+      rect = this.element.getBoundingClientRect()
+    }
+
+    return {
+      top: rect.top + win.pageYOffset - documentElement.clientTop,
+      left: rect.left + win.pageXOffset - documentElement.clientLeft
+    }
+  }
+
+  NoFrameworkAdapter.prototype.on = function(event, handler) {
+    var eventParts = event.split('.')
+    var eventType = eventParts[0]
+    var namespace = eventParts[1] || '__default'
+    var nsHandlers = this.handlers[namespace] = this.handlers[namespace] || {}
+    var nsTypeList = nsHandlers[eventType] = nsHandlers[eventType] || []
+
+    nsTypeList.push(handler)
+    this.element.addEventListener(eventType, handler)
+  }
+
+  NoFrameworkAdapter.prototype.outerHeight = function(includeMargin) {
+    var height = this.innerHeight()
+    var computedStyle
+
+    if (includeMargin && !isWindow(this.element)) {
+      computedStyle = window.getComputedStyle(this.element)
+      height += parseInt(computedStyle.marginTop, 10)
+      height += parseInt(computedStyle.marginBottom, 10)
+    }
+
+    return height
+  }
+
+  NoFrameworkAdapter.prototype.outerWidth = function(includeMargin) {
+    var width = this.innerWidth()
+    var computedStyle
+
+    if (includeMargin && !isWindow(this.element)) {
+      computedStyle = window.getComputedStyle(this.element)
+      width += parseInt(computedStyle.marginLeft, 10)
+      width += parseInt(computedStyle.marginRight, 10)
+    }
+
+    return width
+  }
+
+  NoFrameworkAdapter.prototype.scrollLeft = function() {
+    var win = getWindow(this.element)
+    return win ? win.pageXOffset : this.element.scrollLeft
+  }
+
+  NoFrameworkAdapter.prototype.scrollTop = function() {
+    var win = getWindow(this.element)
+    return win ? win.pageYOffset : this.element.scrollTop
+  }
+
+  NoFrameworkAdapter.prototype.height = function(newHeight) {
+    this.element.style.height = newHeight;
+  }
+
+  NoFrameworkAdapter.prototype.removeClass = function(className) {
+    this.element.className = this.element.className.replace(classNameRegExp(className), '');
+  }
+
+  NoFrameworkAdapter.prototype.toggleClass = function(className, addClass) {
+    var check = classNameRegExp(className);
+    if (check.test(this.element.className)) {
+      if (!addClass) {
+        this.removeClass(className);
+      }
+    } else {
+      this.element.className += ' ' + className;
+    }
+  }
+
+  NoFrameworkAdapter.prototype.parent = function() {
+    return new NoFrameworkAdapter(this.element.parentNode);
+  }
+
+  NoFrameworkAdapter.prototype.wrap = function(wrapper) {
+    this.element.insertAdjacentHTML('beforebegin', wrapper)
+    var wrapperNode = this.element.previousSibling
+    this.element.parentNode.removeChild(this.element)
+    wrapperNode.appendChild(this.element)
+  }
+
+  NoFrameworkAdapter.extend = function() {
+    var args = Array.prototype.slice.call(arguments)
+
+    function merge(target, obj) {
+      if (typeof target === 'object' && typeof obj === 'object') {
+        for (var key in obj) {
+          if (obj.hasOwnProperty(key)) {
+            target[key] = obj[key]
+          }
+        }
+      }
+
+      return target
+    }
+
+    for (var i = 1, end = args.length; i < end; i++) {
+      merge(args[0], args[i])
+    }
+    return args[0]
+  }
+
+  NoFrameworkAdapter.inArray = function(element, array, i) {
+    return array == null ? -1 : array.indexOf(element, i)
+  }
+
+  NoFrameworkAdapter.isEmptyObject = function(obj) {
+    /* eslint no-unused-vars: 0 */
+    for (var name in obj) {
+      return false
+    }
+    return true
+  }
+
+  NoFrameworkAdapter.proxy = function(func, obj) {
+    return function() {
+      return func.apply(obj, arguments);
+    }
+  }
+
+  Waypoint.adapters.push({
+    name: 'noframework',
+    Adapter: NoFrameworkAdapter
+  })
+  Waypoint.Adapter = NoFrameworkAdapter
+}())

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/book/master_middleman/source/javascripts/waypoints/sticky.js
----------------------------------------------------------------------
diff --git a/book/master_middleman/source/javascripts/waypoints/sticky.js b/book/master_middleman/source/javascripts/waypoints/sticky.js
new file mode 100644
index 0000000..569fcdb
--- /dev/null
+++ b/book/master_middleman/source/javascripts/waypoints/sticky.js
@@ -0,0 +1,63 @@
+(function() {
+  'use strict'
+
+  var Waypoint = window.Waypoint;
+  var adapter = Waypoint.Adapter;
+
+  /* http://imakewebthings.com/waypoints/shortcuts/sticky-elements */
+  function Sticky(options) {
+    this.options = adapter.extend({}, Waypoint.defaults, Sticky.defaults, options)
+    this.element = this.options.element
+    this.$element = new adapter(this.element)
+    this.createWrapper()
+    this.createWaypoint()
+  }
+
+  /* Private */
+  Sticky.prototype.createWaypoint = function() {
+    var originalHandler = this.options.handler
+
+    this.waypoint = new Waypoint(adapter.extend({}, this.options, {
+      element: this.wrapper,
+      handler: adapter.proxy(function(direction) {
+        var shouldBeStuck = this.options.direction.indexOf(direction) > -1
+        var wrapperHeight = shouldBeStuck ? this.$element.outerHeight(true) : ''
+
+        this.$wrapper.height(wrapperHeight)
+        this.$element.toggleClass(this.options.stuckClass, shouldBeStuck)
+
+        if (originalHandler) {
+          originalHandler.call(this, direction)
+        }
+      }, this)
+    }))
+  }
+
+  /* Private */
+  Sticky.prototype.createWrapper = function() {
+    if (this.options.wrapper) {
+      this.$element.wrap(this.options.wrapper)
+    }
+    this.$wrapper = this.$element.parent()
+    this.wrapper = this.$wrapper.element
+  }
+
+  /* Public */
+  Sticky.prototype.destroy = function() {
+    if (this.$element.parent().element === this.wrapper) {
+      this.waypoint.destroy()
+      this.$element.removeClass(this.options.stuckClass)
+      if (this.options.wrapper) {
+        this.$element.unwrap()
+      }
+    }
+  }
+
+  Sticky.defaults = {
+    wrapper: '<div class="sticky-wrapper" />',
+    stuckClass: 'stuck',
+    direction: 'down right'
+  }
+
+  Waypoint.Sticky = Sticky
+}())

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/book/master_middleman/source/javascripts/waypoints/waypoint.js
----------------------------------------------------------------------
diff --git a/book/master_middleman/source/javascripts/waypoints/waypoint.js b/book/master_middleman/source/javascripts/waypoints/waypoint.js
new file mode 100644
index 0000000..7f76f1d
--- /dev/null
+++ b/book/master_middleman/source/javascripts/waypoints/waypoint.js
@@ -0,0 +1,160 @@
+(function() {
+  'use strict'
+
+  var keyCounter = 0
+  var allWaypoints = {}
+
+  /* http://imakewebthings.com/waypoints/api/waypoint */
+  function Waypoint(options) {
+    if (!options) {
+      throw new Error('No options passed to Waypoint constructor')
+    }
+    if (!options.element) {
+      throw new Error('No element option passed to Waypoint constructor')
+    }
+    if (!options.handler) {
+      throw new Error('No handler option passed to Waypoint constructor')
+    }
+
+    this.key = 'waypoint-' + keyCounter
+    this.options = Waypoint.Adapter.extend({}, Waypoint.defaults, options)
+    this.element = this.options.element
+    this.adapter = new Waypoint.Adapter(this.element)
+    this.callback = options.handler
+    this.axis = this.options.horizontal ? 'horizontal' : 'vertical'
+    this.enabled = this.options.enabled
+    this.triggerPoint = null
+    this.group = Waypoint.Group.findOrCreate({
+      name: this.options.group,
+      axis: this.axis
+    })
+    this.context = Waypoint.Context.findOrCreateByElement(this.options.context)
+
+    if (Waypoint.offsetAliases[this.options.offset]) {
+      this.options.offset = Waypoint.offsetAliases[this.options.offset]
+    }
+    this.group.add(this)
+    this.context.add(this)
+    allWaypoints[this.key] = this
+    keyCounter += 1
+  }
+
+  /* Private */
+  Waypoint.prototype.queueTrigger = function(direction) {
+    this.group.queueTrigger(this, direction)
+  }
+
+  /* Private */
+  Waypoint.prototype.trigger = function(args) {
+    if (!this.enabled) {
+      return
+    }
+    if (this.callback) {
+      this.callback.apply(this, args)
+    }
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/destroy */
+  Waypoint.prototype.destroy = function() {
+    this.context.remove(this)
+    this.group.remove(this)
+    delete allWaypoints[this.key]
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/disable */
+  Waypoint.prototype.disable = function() {
+    this.enabled = false
+    return this
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/enable */
+  Waypoint.prototype.enable = function() {
+    this.context.refresh()
+    this.enabled = true
+    return this
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/next */
+  Waypoint.prototype.next = function() {
+    return this.group.next(this)
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/previous */
+  Waypoint.prototype.previous = function() {
+    return this.group.previous(this)
+  }
+
+  /* Private */
+  Waypoint.invokeAll = function(method) {
+    var allWaypointsArray = []
+    for (var waypointKey in allWaypoints) {
+      allWaypointsArray.push(allWaypoints[waypointKey])
+    }
+    for (var i = 0, end = allWaypointsArray.length; i < end; i++) {
+      allWaypointsArray[i][method]()
+    }
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/destroy-all */
+  Waypoint.destroyAll = function() {
+    Waypoint.invokeAll('destroy')
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/disable-all */
+  Waypoint.disableAll = function() {
+    Waypoint.invokeAll('disable')
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/enable-all */
+  Waypoint.enableAll = function() {
+    Waypoint.invokeAll('enable')
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/refresh-all */
+  Waypoint.refreshAll = function() {
+    Waypoint.Context.refreshAll()
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/viewport-height */
+  Waypoint.viewportHeight = function() {
+    return window.innerHeight || document.documentElement.clientHeight
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/viewport-width */
+  Waypoint.viewportWidth = function() {
+    return document.documentElement.clientWidth
+  }
+
+  Waypoint.adapters = []
+
+  Waypoint.defaults = {
+    context: window,
+    continuous: true,
+    enabled: true,
+    group: 'default',
+    horizontal: false,
+    offset: 0
+  }
+
+  Waypoint.offsetAliases = {
+    'bottom-in-view': function() {
+      return this.context.innerHeight() - this.adapter.outerHeight()
+    },
+    'right-in-view': function() {
+      return this.context.innerWidth() - this.adapter.outerWidth()
+    }
+  }
+
+  window.Waypoint = Waypoint
+}())

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/book/master_middleman/source/layouts/_title.erb
----------------------------------------------------------------------
diff --git a/book/master_middleman/source/layouts/_title.erb b/book/master_middleman/source/layouts/_title.erb
new file mode 100644
index 0000000..ea744d9
--- /dev/null
+++ b/book/master_middleman/source/layouts/_title.erb
@@ -0,0 +1,6 @@
+<% if current_page.data.title %>
+  <h1 class="title-container" <%= current_page.data.dita ? 'style="display: none;"' : '' %>>
+    <%= current_page.data.title %>
+  </h1>
+<% end %>
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/book/master_middleman/source/patch/dynamic_variable_interpretation.py
----------------------------------------------------------------------
diff --git a/book/master_middleman/source/patch/dynamic_variable_interpretation.py b/book/master_middleman/source/patch/dynamic_variable_interpretation.py
new file mode 100644
index 0000000..66df9ff
--- /dev/null
+++ b/book/master_middleman/source/patch/dynamic_variable_interpretation.py
@@ -0,0 +1,192 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+__all__ = ["copy_tarballs_to_hdfs", ]
+import os
+import glob
+import re
+import tempfile
+from resource_management.libraries.functions.default import default
+from resource_management.libraries.functions.format import format
+from resource_management.libraries.resources.copy_from_local import CopyFromLocal
+from resource_management.libraries.resources.execute_hadoop import ExecuteHadoop
+from resource_management.core.resources.system import Execute
+from resource_management.core.exceptions import Fail
+from resource_management.core.logger import Logger
+from resource_management.core import shell
+
+"""
+This file provides helper methods needed for the versioning of RPMs. Specifically, it does dynamic variable
+interpretation to replace strings like {{ hdp_stack_version }}  where the value of the
+variables cannot be determined ahead of time, but rather, depends on what files are found.
+
+It assumes that {{ hdp_stack_version }} is constructed as ${major.minor.patch.rev}-${build_number}
+E.g., 998.2.2.1.0-998
+Please note that "-${build_number}" is optional.
+"""
+
+# These values must be the suffix of the properties in cluster-env.xml
+TAR_SOURCE_SUFFIX = "_tar_source"
+TAR_DESTINATION_FOLDER_SUFFIX = "_tar_destination_folder"
+
+
+def _get_tar_source_and_dest_folder(tarball_prefix):
+  """
+  :param tarball_prefix: Prefix of the tarball must be one of tez, hive, mr, pig
+  :return: Returns a tuple of (x, y) after verifying the properties
+  """
+  component_tar_source_file = default("/configurations/cluster-env/%s%s" % (tarball_prefix.lower(), TAR_SOURCE_SUFFIX), None)
+  # E.g., /usr/hdp/current/hadoop-client/tez-{{ hdp_stack_version }}.tar.gz
+
+  component_tar_destination_folder = default("/configurations/cluster-env/%s%s" % (tarball_prefix.lower(), TAR_DESTINATION_FOLDER_SUFFIX), None)
+  # E.g., hdfs:///hdp/apps/{{ hdp_stack_version }}/mapreduce/
+
+  if not component_tar_source_file or not component_tar_destination_folder:
+    Logger.warning("Did not find %s tar source file and destination folder properties in cluster-env.xml" %
+                   tarball_prefix)
+    return None, None
+
+  if component_tar_source_file.find("/") == -1:
+    Logger.warning("The tar file path %s is not valid" % str(component_tar_source_file))
+    return None, None
+
+  if not component_tar_destination_folder.endswith("/"):
+    component_tar_destination_folder = component_tar_destination_folder + "/"
+
+  if not component_tar_destination_folder.startswith("hdfs://"):
+    return None, None
+
+  return component_tar_source_file, component_tar_destination_folder
+
+
+def _copy_files(source_and_dest_pairs, file_owner, group_owner, kinit_if_needed):
+  """
+  :param source_and_dest_pairs: List of tuples (x, y), where x is the source file in the local file system,
+  and y is the destination file path in HDFS
+  :param file_owner: Owner to set for the file copied to HDFS (typically hdfs account)
+  :param group_owner: Owning group to set for the file copied to HDFS (typically hadoop group)
+  :param kinit_if_needed: kinit command if it is needed, otherwise an empty string
+  :return: Returns 0 if at least one file was copied and no exceptions occurred, and 1 otherwise.
+
+  Must kinit before calling this function.
+  """
+  import params
+
+  return_value = 1
+  if source_and_dest_pairs and len(source_and_dest_pairs) > 0:
+    return_value = 0
+    for (source, destination) in source_and_dest_pairs:
+      try:
+        destination_dir = os.path.dirname(destination)
+
+        params.HdfsDirectory(destination_dir,
+                             action="create",
+                             owner=file_owner,
+                             mode=0555
+        )
+
+        CopyFromLocal(source,
+                      mode=0444,
+                      owner=file_owner,
+                      group=group_owner,
+                      dest_dir=destination_dir,
+                      kinnit_if_needed=kinit_if_needed,
+                      hdfs_user=params.hdfs_user,
+                      hadoop_bin_dir=params.hadoop_bin_dir,
+                      hadoop_conf_dir=params.hadoop_conf_dir
+        )
+      except:
+        return_value = 1
+  return return_value
+
+
+def copy_tarballs_to_hdfs(tarball_prefix, component_user, file_owner, group_owner):
+  """
+  :param tarball_prefix: Prefix of the tarball must be one of tez, hive, mr, pig
+  :param component_user: User that will execute the Hadoop commands
+  :param file_owner: Owner of the files copied to HDFS (typically hdfs account)
+  :param group_owner: Group owner of the files copied to HDFS (typically hadoop group)
+  :return: Returns 0 on success, 1 if no files were copied, and in some cases may raise an exception.
+
+  In order to call this function, params.py must have all of the following,
+  hdp_stack_version, kinit_path_local, security_enabled, hdfs_user, hdfs_principal_name, hdfs_user_keytab,
+  hadoop_bin_dir, hadoop_conf_dir, and HdfsDirectory as a partial function.
+  """
+  import params
+
+  if not hasattr(params, "hdp_stack_version") or params.hdp_stack_version is None:
+    Logger.warning("Could not find hdp_stack_version")
+    return 1
+
+  component_tar_source_file, component_tar_destination_folder = _get_tar_source_and_dest_folder(tarball_prefix)
+  if not component_tar_source_file or not component_tar_destination_folder:
+    Logger.warning("Could not retrieve properties for tarball with prefix: %s" % str(tarball_prefix))
+    return 1
+
+  if not os.path.exists(component_tar_source_file):
+    Logger.warning("Could not find file: %s" % str(component_tar_source_file))
+    return 1
+
+  # Ubuntu returns: "stdin: is not a tty", as subprocess output.
+  tmpfile = tempfile.NamedTemporaryFile()
+  with open(tmpfile.name, 'r+') as file:
+    get_hdp_version_cmd = '/usr/bin/hdp-select versions > %s' % tmpfile.name
+    code, stdoutdata = shell.call(get_hdp_version_cmd)
+    out = file.read()
+  pass
+  if code != 0 or out is None:
+    Logger.warning("Could not verify HDP version by calling '%s'. Return Code: %s, Output: %s." %
+                   (get_hdp_version_cmd, str(code), str(out)))
+    return 1
+
+  hdp_version = out.strip() # this should include the build number
+
+  file_name = os.path.basename(component_tar_source_file)
+  destination_file = os.path.join(component_tar_destination_folder, file_name)
+  destination_file = destination_file.replace("{{ hdp_stack_version }}", hdp_version)
+
+  does_hdfs_file_exist_cmd = "fs -ls %s" % destination_file
+
+  kinit_if_needed = ""
+  if params.security_enabled:
+    kinit_if_needed = format("{kinit_path_local} -kt {hdfs_user_keytab} {hdfs_principal_name};")
+
+  if kinit_if_needed:
+    Execute(kinit_if_needed,
+            user=component_user,
+            path='/bin'
+    )
+
+  does_hdfs_file_exist = False
+  try:
+    ExecuteHadoop(does_hdfs_file_exist_cmd,
+                  user=component_user,
+                  logoutput=True,
+                  conf_dir=params.hadoop_conf_dir,
+                  bin_dir=params.hadoop_bin_dir
+    )
+    does_hdfs_file_exist = True
+  except Fail:
+    pass
+
+  if not does_hdfs_file_exist:
+    source_and_dest_pairs = [(component_tar_source_file, destination_file), ]
+    return _copy_files(source_and_dest_pairs, file_owner, group_owner, kinit_if_needed)
+  return 1

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/book/master_middleman/source/stylesheets/book-styles.css.scss
----------------------------------------------------------------------
diff --git a/book/master_middleman/source/stylesheets/book-styles.css.scss b/book/master_middleman/source/stylesheets/book-styles.css.scss
new file mode 100644
index 0000000..1236d8e
--- /dev/null
+++ b/book/master_middleman/source/stylesheets/book-styles.css.scss
@@ -0,0 +1,3 @@
+* {
+  box-sizing: border-box;
+}

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/book/master_middleman/source/stylesheets/partials/_book-base-values.scss
----------------------------------------------------------------------
diff --git a/book/master_middleman/source/stylesheets/partials/_book-base-values.scss b/book/master_middleman/source/stylesheets/partials/_book-base-values.scss
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/book/master_middleman/source/stylesheets/partials/_book-vars.scss
----------------------------------------------------------------------
diff --git a/book/master_middleman/source/stylesheets/partials/_book-vars.scss b/book/master_middleman/source/stylesheets/partials/_book-vars.scss
new file mode 100644
index 0000000..4245d57
--- /dev/null
+++ b/book/master_middleman/source/stylesheets/partials/_book-vars.scss
@@ -0,0 +1,19 @@
+$navy: #243640;
+$blue1: #2185c5;
+$blue2: #a7cae1;
+$bluegray1: #4b6475;
+$teal1: #03786D;
+$teal2: #00a79d;
+
+$color-accent: $teal1;
+$color-accent-bright: $teal2;
+
+// link colors
+$color-link: $blue1;
+$color-link-border: $blue2;
+
+$color-border-tip: $blue2;
+
+$color-bg-header: $navy;
+$color-bg-dark: $bluegray1;
+


[41/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/admin/ambari-admin.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/admin/ambari-admin.html.md.erb b/markdown/admin/ambari-admin.html.md.erb
new file mode 100644
index 0000000..a5b2169
--- /dev/null
+++ b/markdown/admin/ambari-admin.html.md.erb
@@ -0,0 +1,439 @@
+---
+title: Managing HAWQ Using Ambari
+---
+
+Ambari provides an easy interface for performing some of the most common HAWQ and PXF administration tasks.
+
+## <a id="amb-yarn"></a>Integrating YARN for Resource Management
+
+HAWQ supports integration with YARN for global resource management. In a YARN-managed environment, HAWQ can request resources (containers) dynamically from YARN and return them when HAWQ's workload is not heavy.
+
+See also [Integrating YARN with HAWQ](../resourcemgmt/YARNIntegration.html) for command-line instructions and additional details about using HAWQ with YARN.
+
+### When to Perform
+
+Follow this procedure if you have already installed YARN and HAWQ, but you are currently using the HAWQ Standalone mode (not YARN) for resource management. This procedure helps you configure YARN and HAWQ so that HAWQ uses YARN for resource management. This procedure assumes that you will use the default YARN queue for managing HAWQ.
+
+### Procedure
+
+1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
+2.  Select **HAWQ** from the list of installed services.
+3.  Select the **Configs** tab, then the **Settings** tab.
+4.  Use the **Resource Manager** menu to select the **YARN** option.
+5.  Click **Save**.<br/><br/>HAWQ will use the default YARN queue, and Ambari automatically configures settings for `hawq_rm_yarn_address`, `hawq_rm_yarn_app_name`, and `hawq_rm_yarn_scheduler_address` in the `hawq-site.xml` file.<br/><br/>If YARN HA was enabled, Ambari also automatically configures the `yarn.resourcemanager.ha` and `yarn.resourcemanager.scheduler.ha` properties in `yarn-site.xml`.
+6.  If you are using HDP 2.3, follow these additional instructions:
+    1. Select **YARN** from the list of installed services.
+    2. Select the **Configs** tab, then the **Advanced** tab.
+    3. Expand the **Advanced yarn-site** section.
+    4. Locate the `yarn.resourcemanager.system-metrics-publisher.enabled` property and change its value to `false`.
+    5. Click **Save**.
+6.  (Optional.)  When HAWQ is integrated with YARN and has no workload, HAWQ does not acquire any resources right away. HAWQ's resource manager requests resources from YARN only when HAWQ receives its first query request. In order to guarantee optimal resource allocation for subsequent queries and to avoid frequent YARN resource negotiation, you can adjust `hawq_rm_min_resource_perseg` so HAWQ receives at least some number of YARN containers per segment regardless of the size of the initial query. The default value is 2, which means HAWQ's resource manager acquires at least 2 YARN containers for each segment even if the first query's resource request is small.<br/><br/>This configuration property cannot exceed the capacity of HAWQ's YARN queue. For example, if HAWQ's queue capacity in YARN is no more than 50% of the whole cluster, and each YARN node has a maximum of 64GB memory and 16 vcores, then `hawq_rm_min_resource_perseg` cannot be set to more than 8, because HAWQ's resource manager acquires YARN containers by vcore. In the case above, each YARN container acquired by the HAWQ resource manager provides a quota of 4GB memory and 1 vcore.<br/><br/>To change this parameter, expand **Custom hawq-site** and click **Add Property ...**, then specify `hawq_rm_min_resource_perseg` as the key and enter the desired value. Click **Add** to add the property definition.
+7.  (Optional.)  If HAWQ's workload decreases, HAWQ's resource manager may hold some idle YARN resources. You can adjust `hawq_rm_resource_idle_timeout` to let the HAWQ resource manager return idle resources more quickly or more slowly.<br/><br/>For example, when HAWQ's resource manager has to reacquire resources, it can cause latency for query resource requests. To let the HAWQ resource manager retain resources longer in anticipation of an upcoming workload, increase the value of `hawq_rm_resource_idle_timeout`. The default value of `hawq_rm_resource_idle_timeout` is 300 seconds.<br/><br/>To change this parameter, expand **Custom hawq-site** and click **Add Property ...**, then specify `hawq_rm_resource_idle_timeout` as the key and enter the desired value. Click **Add** to add the property definition. (A command-line sketch for confirming these values follows this procedure.)
+8.  Click **Save** to save your configuration changes.
+
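+If you want to confirm the values currently in effect from the command line, the `hawq config` utility can report them from the HAWQ master; a minimal sketch, assuming a `gpadmin` login and the default `/usr/local/hawq` installation path:
+
+```shell
+# Run as gpadmin on the HAWQ master after sourcing the HAWQ environment.
+source /usr/local/hawq/greenplum_path.sh
+hawq config -s hawq_rm_min_resource_perseg
+hawq config -s hawq_rm_resource_idle_timeout
+```
+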
+## <a id="move_yarn_rm"></a>Moving a YARN Resource Manager
+
+If you are using YARN to manage HAWQ resources and need to move a YARN resource manager, then you must update your HAWQ configuration.
+
+### When to Perform
+
+Use one of the following procedures to move the YARN resource manager component from one node to another when HAWQ is configured to use YARN as the global resource manager (`hawq_global_rm_type` is `yarn`). The exact procedure you should use depends on whether you have enabled high availability in YARN.
+
+**Note:** In a Kerberos-secured environment, you must update the <code>hadoop.proxyuser.yarn.hosts</code> property in the HDFS <code>core-site.xml</code> file before running a service check. Set its value to the current YARN Resource Manager hosts.
+
+### Procedure (Single YARN Resource Manager)
+
+1. Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
+1. Click **YARN** in the list of installed services.
+1. Select **Move ResourceManager**, and complete the steps in the Ambari wizard to move the Resource Manager to a new host.
+1. After moving the Resource Manager successfully in YARN, click **HAWQ** in the list of installed services.
+1. On the HAWQ **Configs** page, select the **Advanced** tab.
+1. Under the **Advanced hawq-site** section, update the following HAWQ properties:
+   - `hawq_rm_yarn_address`. Enter the same value defined in the `yarn.resourcemanager.address` property of `yarn-site.xml`.
+   - `hawq_rm_yarn_scheduler_address`. Enter the same value defined in the `yarn.resourcemanager.scheduler.address` property of `yarn-site.xml`. (A sketch for reading these values from `yarn-site.xml` follows this procedure.)
+1. Restart all HAWQ components so that the configurations get updated on all HAWQ hosts.
+1. Run HAWQ Service Check, as described in [Performing a HAWQ Service Check](#amb-service-check), to ensure that HAWQ is operating properly.
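+
+To find the values to copy into these properties, you can read them directly from `yarn-site.xml` on a cluster node; a minimal sketch, assuming the standard HDP client configuration directory `/etc/hadoop/conf`:
+
+```shell
+# Print the ResourceManager address properties (each followed by its <value> line)
+# from the active YARN client configuration.
+grep -A 1 -e 'yarn.resourcemanager.address' -e 'yarn.resourcemanager.scheduler.address' /etc/hadoop/conf/yarn-site.xml
+```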
+
+### Procedure (Highly Available YARN Resource Managers)
+
+1. Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
+1. Click **YARN** in the list of installed services.
+1. Select **Move ResourceManager**, and complete the steps in the Ambari wizard to move the Resource Manager to a new host.
+1. After moving the Resource Manager successfully in YARN, click **HAWQ** in the list of installed services.
+1. On the HAWQ **Configs** page, select the **Advanced** tab.
+1. Under the **Custom yarn-client** section, update the HAWQ properties `yarn.resourcemanager.ha` and `yarn.resourcemanager.scheduler.ha`. Update these parameter values to match the corresponding parameters for the YARN service. Check the values under **ResourceManager hosts** in the **Resource Manager** section of the **Advanced** configurations for the YARN service.
+1. Restart all HAWQ components so that the configuration change is updated on all HAWQ hosts. You can ignore the warning about the values of `hawq_rm_yarn_address` and `hawq_rm_yarn_scheduler_address` in `hawq-site.xml` not matching the values in `yarn-site.xml`, and click **Proceed Anyway**.
+1. Run HAWQ Service Check, as described in [Performing a HAWQ Service Check](#amb-service-check), to ensure that HAWQ is operating properly.
+
+
+## <a id="amb-service-check"></a>Performing a HAWQ Service Check
+
+A HAWQ service check uses the `hawq state` command to display the configuration and status of segment hosts in a HAWQ cluster. It also performs tests to ensure that HAWQ can write to and read from tables, and that it can write to and read from HDFS external tables using PXF.
+
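+For reference, you can run the underlying `hawq state` check manually from the HAWQ master; a minimal sketch, assuming a `gpadmin` login with the HAWQ environment sourced:
+
+```shell
+# Display a brief status of the HAWQ master and segments.
+hawq state -b
+```
+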
+### When to Perform
+* Execute this procedure immediately after any common maintenance operations, such as adding, activating, or removing the HAWQ Standby Master.
+* Execute this procedure as a first step in troubleshooting problems in accessing HDFS data.
+
+### Procedure
+1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
+2.  Click **HAWQ** in the list of installed services.
+4. Select **Service Actions > Run Service Check**, then click **OK** to perform the service check.
+
+    Ambari displays the **HAWQ Service Check** task in the list of background operations. If any test fails, then Ambari displays a red error icon next to the task.  
+5. Click the **HAWQ Service Check** task to view the actual log messages that are generated while performing the task. The log messages display the basic configuration and status of HAWQ segments, as well as the results of the HAWQ and PXF tests (if PXF is installed).
+
+6. Click **OK** to dismiss the log messages or list of background tasks.
+
+## <a id="amb-config-check"></a>Performing a Configuration Check
+
+A configuration check determines whether operating system parameters on the HAWQ host machines match their recommended settings. You can also perform this check from the command line using the `hawq check` command, which runs against all HAWQ hosts.
+
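+A minimal command-line sketch, assuming a host file named `hawq_hosts` that lists every HAWQ host and an HDP Hadoop client installed under `/usr/hdp/current/hadoop-client` (adjust both for your environment):
+
+```shell
+# Verify operating system parameters on every host listed in hawq_hosts against
+# the settings defined in /usr/local/hawq/etc/hawq_check.cnf.
+hawq check -f hawq_hosts --hadoop /usr/hdp/current/hadoop-client
+```
+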
+### Procedure
+1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
+2.  Click **HAWQ** in the list of installed services.
+3. (Optional) Perform this step if you want to view or modify the host configuration parameters that are evaluated during the HAWQ config check:
+   1. Select the **Configs** tab, then select the **Advanced** tab in the settings.
+   1. Expand **Advanced Hawq Check** to view or change the list of parameters that are checked with a `hawq check` command or with the Ambari HAWQ Config check.
+
+         **Note:** All parameter entries are stored in the `/usr/local/hawq/etc/hawq_check.cnf` file. Click the **Set Recommended** button if you want to restore the file to its original contents.
+4. Select **Service Actions > Run HAWQ Config Check**, then click **OK** to perform the configuration check.
+
+    Ambari displays the **Run HAWQ Config Check** task in the list of background operations. If any parameter does not meet the specification defined in `/usr/local/hawq/etc/hawq_check.cnf`, then Ambari displays a red error icon next to the task.  
+5. Click the **Run HAWQ Config Check** task to view the actual log messages that are generated while performing the task. Address any configuration errors on the indicated host machines.
+
+6. Click **OK** to dismiss the log messages or list of background tasks.
+
+## <a id="amb-restart"></a>Performing a Rolling Restart
+Ambari provides the ability to restart a HAWQ cluster by restarting one or more segments at a time until all segments (or all segments with stale configurations) restart. You can specify a delay between restarting segments, and Ambari can stop the process if a specified number of segments fail to restart. Performing a rolling restart in this manner can help ensure that some HAWQ segments are available to service client requests.
+
+**Note:** If you do not need to preserve client connections, you can instead perform a full restart of the entire HAWQ cluster using **Service Actions > Restart All**.
+
+### Procedure
+1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
+2.  Click **HAWQ** in the list of installed services.
+3.  Select **Service Actions > Restart HAWQ Segments**.
+4. In the Restart HAWQ Segments page:
+   * Specify the number of segments that you want Ambari to restart at a time.
+   * Specify the number of seconds Ambari should wait before restarting the next batch of HAWQ segments.
+   * Specify the number of restart failures that may occur before Ambari stops the rolling restart process.
+   * Select **Only restart HAWQ Segments with stale configs** if you want to limit the restart process to those hosts.
+   * Select **Turn On Maintenance Mode for HAWQ** to enable maintenance mode before starting the rolling restart process. This suppresses alerts that are normally generated when a segment goes offline.
+5. Click **Trigger Rolling Restart** to begin the restart process.
+
+   Ambari displays the **Rolling Restart of HAWQ segments** task in the list of background operations, and indicates the current batch of segments that it is restarting. Click the name of the task to view the log messages generated during the restart. If any segment fails to restart, Ambari displays a red warning icon next to the task.
+
+## <a id="bulk-lifecycle"></a>Performing Host-Level Actions on HAWQ Segment and PXF Hosts
+
+Ambari host-level actions enable you to perform actions on one or more hosts in the cluster at once. With HAWQ clusters, you can apply the **Start**, **Stop**, or **Restart** actions to one or more HAWQ segment hosts or PXF hosts. Using the host-level actions saves you the trouble of accessing individual hosts in Ambari and applying service actions one-by-one.
+
+### When to Perform
+*  Use the Ambari host-level actions when you have a large number of hosts in your cluster and you want to start, stop, or restart all HAWQ segment hosts or all PXF hosts as part of regularly-scheduled maintenance.
+
+### Procedure
+1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
+2.  Select the **Hosts** tab at the top of the screen to display a list of all hosts in the cluster.
+3.  To apply a host-level action to all HAWQ segment hosts or PXF hosts, select an action using the applicable menu:
+    *  **Actions > Filtered Hosts > HAWQ Segments >** [ **Start** | **Stop** |  **Restart** ]
+    *  **Actions > Filtered Hosts > PXF Hosts >** [ **Start** | **Stop** |  **Restart** ]
+4.  To apply a host level action to a subset of HAWQ segments or PXF hosts:
+    1.  Filter the list of available hosts using one of the filter options:
+        *  **Filter > HAWQ Segments**
+        *  **Filter > PXF Hosts**
+    2.  Use the check boxes to select the hosts to which you want to apply the action.
+    3.  Select **Actions > Selected Hosts >** [ **Start** | **Stop** |  **Restart** ] to apply the action to your selected hosts.
+
+
+## <a id="amb-expand"></a>Expanding the HAWQ Cluster
+
+Apache HAWQ supports dynamic node expansion. You can add segment nodes while HAWQ is running without having to suspend or terminate cluster operations.
+
+### Guidelines for Cluster Expansion
+
+This topic provides guidelines for expanding your HAWQ cluster.
+
+Keep the following recommendations in mind when modifying the size of your running HAWQ cluster:
+
+-  When you add a new node, install both a DataNode and a HAWQ segment on the new node.  If you are using YARN to manage HAWQ resources, you must also configure a YARN NodeManager on the new node.
+-  After adding a new node, you should always rebalance HDFS data to maintain cluster performance.
+-  Adding or removing a node also necessitates an update to the HDFS metadata cache. This update will happen eventually, but can take some time. To speed the update of the metadata cache, select the **Service Actions > Clear HAWQ's HDFS Metadata Cache** option in Ambari.
+-  Note that for hash-distributed tables, expanding the cluster will not immediately improve performance, because hash-distributed tables use a fixed number of virtual segments. To obtain better performance with hash-distributed tables, you must redistribute the table data across the updated cluster using either the [ALTER TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE AS](../reference/sql/CREATE-TABLE-AS.html) command (see the sketch after this list).
+-  If you are using hash tables, consider updating the `default_hash_table_bucket_number` server configuration parameter to a larger value after expanding the cluster but before redistributing the hash tables.
+
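+One hedged illustration of the redistribution step, using `CREATE TABLE AS` with purely illustrative database (`mydb`), table (`sales`), and distribution column (`id`) names:
+
+```shell
+# Illustrative only: rebuild a hash-distributed table after expansion so that it is
+# created with the updated default_hash_table_bucket_number.
+psql -d mydb -c "CREATE TABLE sales_redistributed AS SELECT * FROM sales DISTRIBUTED BY (id);"
+# After validating the new table, drop the original and rename the rebuilt table in its place.
+```
+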
+### Procedure
+First, ensure that the new node(s) have been configured per the instructions in [Apache HAWQ System Requirements](../requirements/system-requirements.html) and [Select HAWQ Host Machines](../install/select-hosts.html).
+
+1.  If you have any user-defined function (UDF) libraries installed in your existing HAWQ cluster, install them on the new node(s) that you want to add to the HAWQ cluster.
+2.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
+3.  Click **HAWQ** in the list of installed services.
+4.  Select the **Configs** tab, then select the **Advanced** tab in the settings.
+5.  Expand the **General** section, and ensure that the **Exchange SSH Keys** property (`hawq_ssh_keys`) is set to `true`.  Change this property to `true` if needed, and click **Save** to continue. Ambari must be able to exchange SSH keys with any hosts that you add to the cluster in the following steps.
+6.  Select the **Hosts** tab at the top of the screen to display the Hosts summary.
+7.  If the host(s) that you want to add are not currently listed in the Hosts summary page, follow these steps:
+    1. Select **Actions > Add New Hosts** to start the Add Host Wizard.
+    2. Follow the initial steps of the Add Host Wizard to identify the new host, specify SSH keys or manually register the host, and confirm the new host(s) to add.
+
+         See [Set Up Password-less SSH](http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.1/bk_Installing_HDP_AMB/content/_set_up_password-less_ssh.html) in the HDP documentation if you need more information about performing these tasks.
+    3. When you reach the Assign Slaves and Clients page, ensure that the **DataNode**, **HAWQ Segment**, and **PXF** (if the PXF service is installed) components are selected. Select additional components as necessary for your cluster.
+    4. Complete the wizard to add the new host and install the selected components.
+8. If the host(s) that you want to add already appear in the Hosts summary, follow these steps:
+   1. Click the hostname that you want to add to the HAWQ cluster from the list of hosts.
+   2. In the Components summary, ensure that the host already runs the DataNode component. If it does not, select **Add > DataNode** and then click **Confirm Add**.  Click **OK** when the task completes.
+   3. In the Components summary, select **Add > HAWQ Segment**.
+   4. Click **Confirm Add** to acknowledge the component to add. Click **OK** when the task completes.
+   5. In the Components summary, select **Add > PXF**.
+   6. Click **Confirm Add** to acknowledge the component to add. Click **OK** when the task completes.
+17. (Optional) If you are using hash tables, adjust the **Default buckets for Hash Distributed tables** setting (`default_hash_table_bucket_number`) on the HAWQ service's **Configs > Settings** tab. Update this property's value by multiplying the new number of nodes in the cluster by the appropriate number indicated below.
+
+    |Number of Nodes After Expansion|Suggested default\_hash\_table\_bucket\_number value|
+    |---------------|------------------------------------------|
+    |<= 85|6 \* \#nodes|
+    |\> 85 and <= 102|5 \* \#nodes|
+    |\> 102 and <= 128|4 \* \#nodes|
+    |\> 128 and <= 170|3 \* \#nodes|
+    |\> 170 and <= 256|2 \* \#nodes|
+    |\> 256 and <= 512|1 \* \#nodes|
+    |\> 512|512|
+
+    For example, a cluster expanded to 90 nodes falls into the \> 85 and <= 102 row, so the suggested setting is 5 \* 90 = 450.
+18.  Ambari requires the HAWQ service to be restarted in order to apply the configuration changes. If you need to apply the configuration *without* restarting HAWQ (for dynamic cluster expansion), then you can use the HAWQ CLI commands described in [Manually Updating the HAWQ Configuration](#manual-config-steps) *instead* of following this step.
+    <br/><br/>Stop and then start the HAWQ service to apply your configuration changes via Ambari. Select **Service Actions > Stop**, followed by **Service Actions > Start** to ensure that the HAWQ Master starts before the newly-added segment. During the HAWQ startup, Ambari exchanges ssh keys for the `gpadmin` user, and applies the new configuration.
+    >**Note:** Do not use the **Restart All** service action to complete this step.
+19.  Consider the impact of rebalancing HDFS on other components, such as HBase, before you complete this step.
+    <br/><br/>Rebalance your HDFS data by selecting the **HDFS** service and then choosing **Service Actions > Rebalance HDFS**. Follow the Ambari instructions to complete the rebalance action.
+20.  Speed up the clearing of the metadata cache by first selecting the **HAWQ** service and then selecting **Service Actions > Clear HAWQ's HDFS Metadata Cache**.
+21.  If you are using hash distributed tables and wish to take advantage of the performance benefits of using a larger cluster, redistribute the data in all hash-distributed tables by using either the [ALTER TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE AS](../reference/sql/CREATE-TABLE-AS.html) command. You should redistribute the table data if you modified the `default_hash_table_bucket_number` configuration parameter.
+
+    **Note:** The redistribution of table data can take a significant amount of time.
+22.  (Optional.) If you changed the **Exchange SSH Keys** property value before adding the host(s), change the value back to `false` after Ambari exchanges keys with the new hosts. This prevents Ambari from exchanging keys with all hosts every time the HAWQ master is started or restarted.
+
+23.  (Optional.) If you enabled temporary password-based authentication while preparing/configuring your HAWQ host systems, turn off password-based authentication as described in [Apache HAWQ System Requirements](../requirements/system-requirements.html#topic_pwdlessssh).
+
+#### <a id="manual-config-steps"></a>Manually Updating the HAWQ Configuration
+If you need to expand your HAWQ cluster without restarting the HAWQ service, follow these steps to manually apply the new HAWQ configuration (use these steps *instead* of the restart step in the procedure above):
+
+1.  Update your configuration to use the new `default_hash_table_bucket_number` value that you calculated:
+   1. SSH into the HAWQ master host as the `gpadmin` user:
+    ```shell
+    $ ssh gpadmin@<HAWQ_MASTER_HOST>
+    ```
+   2. Source the `greenplum_path.sh` file to update the shell environment:
+    ```shell
+    $ source /usr/local/hawq/greenplum_path.sh
+    ```
+   3. Verify the current value of `default_hash_table_bucket_number`:
+    ```shell
+    $ hawq config -s default_hash_table_bucket_number
+    ```
+   4. Update `default_hash_table_bucket_number` to the new value that you calculated:
+    ```shell
+    $ hawq config -c default_hash_table_bucket_number -v <new_value>
+    ```
+   5. Reload the configuration without restarting the cluster:
+    ```shell
+    $ hawq stop cluster -u
+    ```
+   6. Verify that the `default_hash_table_bucket_number` value was updated:
+    ```shell
+    $ hawq config -s default_hash_table_bucket_number
+    ```
+2.  Edit the `/usr/local/hawq/etc/slaves` file and add the new HAWQ hostname(s) to the end of the file. Separate multiple hosts with new lines. For example, after adding host4 and host5 to a cluster that already contains hosts 1-3, the updated file contents would be:
+
+     ```
+     host1
+     host2
+     host3
+     host4
+     host5
+     ```
+3.  Continue with the steps that follow the restart step in the previous procedure, [Expanding the HAWQ Cluster](#amb-expand). When the HAWQ service is eventually restarted via Ambari, Ambari will refresh the new configurations.
+
+## <a id="amb-activate-standby"></a>Activating the HAWQ Standby Master
+Activating the HAWQ Standby Master promotes the standby host to become the new HAWQ Master host. The previous HAWQ Master configuration is automatically removed from the cluster.
+
+### When to Perform
+* Execute this procedure immediately if the HAWQ Master fails or becomes unreachable.
+* If you want to take the current HAWQ Master host offline for maintenance, execute this procedure during a scheduled maintenance period. This procedure requires a restart of the HAWQ service.
+
+### Procedure
+1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
+2.  Click **HAWQ** in the list of installed services.
+3.  Select **Service Actions > Activate HAWQ Standby Master** to start the Activate HAWQ Standby Master Wizard.
+4.  Read the description of the Wizard and click **Next** to review the tasks that will be performed.
+5.  Ambari displays the host name of the current HAWQ Master that will be removed from the cluster, as well as the HAWQ Standby Master host that will be activated. The information is provided only for review and cannot be edited on this page. Click **Next** to confirm the operation.
+6. Click **OK** to confirm that you want to perform the procedure, as it is not possible to roll back the operation using Ambari.
+
+   Ambari displays a list of tasks that are performed to activate the standby server and remove the previous HAWQ Master host. Click on any of the tasks to view progress or to view the actual log messages that are generated while performing the task.
+7. Click **Complete** after the Wizard finishes all tasks.
+
+   **Important:** After the Wizard completes, your HAWQ cluster no longer includes a HAWQ Standby Master host. As a best practice, follow the instructions in [Adding a HAWQ Standby Master](#amb-add-standby) to configure a new one.
+
+## <a id="amb-add-standby"></a>Adding a HAWQ Standby Master
+
+The HAWQ Standby Master serves as a backup of the HAWQ Master host, and is an important part of providing high availability for the HAWQ cluster. When your cluster uses a standby master, you can activate the standby if the active HAWQ Master host fails or becomes unreachable.
+
+### When to Perform
+* Execute this procedure during a scheduled maintenance period, because it requires a restart of the HAWQ service.
+* Adding a HAWQ standby master is recommended as a best practice for all new clusters to provide high availability.
+* Add a new standby master soon after you activate an existing standby master to ensure that the cluster has a backup master service.
+
+### Procedure
+
+1.  Select an existing host in the cluster to run the HAWQ standby master. You cannot run the standby master on the same host that runs the HAWQ master. Also, do not run a standby master on the node where you deployed the Ambari server; if the Ambari PostgreSQL instance is running on the same port as the HAWQ master PostgreSQL instance, initialization fails and leaves the cluster in an inconsistent state.
+1. Log in to the HAWQ host that you chose to run the standby master and determine whether an existing HAWQ master directory (for example, `/data/hawq/master`) is present on the machine. If the directory exists, rename it. For example:
+
+    ```shell
+    $ mv /data/hawq/master /data/hawq/master-old
+    ```
+
+   **Note:**  If a HAWQ master directory exists on the host when you configure the HAWQ standby master, then the standby master may be initialized with stale data. Rename any existing master directory before you proceed.
+   
+1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
+2.  Click **HAWQ** in the list of installed services.
+3.  Select **Service Actions > Add HAWQ Standby Master** to start the Add HAWQ Standby Master Wizard.
+4.  Read the Get Started page for information about the HAWQ standby master and to acknowledge that the procedure requires a service restart. Click **Next** to display the Select Host page.
+5.  Use the dropdown menu to select a host to use for the HAWQ Standby Master. Click **Next** to display the Review page.
+
+    **Note:**
+    * The Current HAWQ Master host is shown only for reference. You cannot change the HAWQ Master host when you configure a standby master.
+    * You cannot place the standby master on the same host as the HAWQ master.
+6. Review the information to verify the host on which the HAWQ Standby Master will be installed. Click **Back** to change your selection or **Next** to continue.
+7. Confirm that you have renamed any existing HAWQ master data directory on the selected host machine, as described earlier in this procedure. If an existing master data directory exists, the new HAWQ Standby Master may be initialized with stale data and can place the cluster in an inconsistent state. Click **Confirm** to continue.
+
+     Ambari displays a list of tasks that are performed to install the standby master server and reconfigure the cluster. Click on any of the tasks to view progress or to view the actual log messages that are generated while performing the task.
+7. Click **Complete** after the Wizard finishes all tasks.
+
+## <a id="amb-remove-standby"></a>Removing the HAWQ Standby Master
+
+This service action enables you to remove the HAWQ Standby Master component in situations where you may need to reinstall the component.
+
+### When to Perform
+* Execute this procedure if you need to decommission or replace the HAWQ Standby Master host.
+* Execute this procedure and then add the HAWQ Standby Master once again, if the HAWQ Standby Master is unable to synchronize with the HAWQ Master and you need to reinitialize the service.
+* Execute this procedure during a scheduled maintenance period, because it requires a restart of the HAWQ service.
+
+### Procedure
+1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
+2.  Click **HAWQ** in the list of installed services.
+3.  Select **Service Actions > Remove HAWQ Standby Master** to start the Remove HAWQ Standby Master Wizard.
+4.  Read the Get Started page for information about the procedure and to acknowledge that the procedure requires a service restart. Click **Next** to display the Review page.
+5.  Ambari displays the HAWQ Standby Master host that will be removed from the cluster configuration. Click **Next** to continue, then click **OK** to confirm.
+
+     Ambari displays a list of tasks that are performed to remove the standby master from the cluster. Click on any of the tasks to view progress or to view the actual log messages that are generated while performing the task.
+
+7. Click **Complete** after the Wizard finishes all tasks.
+
+      **Important:** After the Wizard completes, your HAWQ cluster no longer includes a HAWQ Standby Master host. As a best practice, follow the instructions in [Adding a HAWQ Standby Master](#amb-add-standby) to configure a new one.
+
+## <a id="hdp-upgrade"></a>Upgrading the HDP Stack
+
+If you install HAWQ using Ambari 2.2.2 with the HDP 2.3 stack, before you attempt to upgrade to HDP 2.4 you must use Ambari to change the `dfs.allow.truncate` property to `false`. Ambari will display a configuration warning with this setting, but it is required in order to complete the upgrade; choose **Proceed Anyway** when Ambari warns you about the configured value of `dfs.allow.truncate`.
+
+After you complete the upgrade to HDP 2.4, change the value of `dfs.allow.truncate` back to `true` to ensure that HAWQ can operate as intended.
+
+## <a id="gpadmin-password-change"></a>Changing the HAWQ gpadmin Password
+The gpadmin password configured in the Ambari web console is used by the `hawq ssh-exkeys` utility, which runs during the start phase of the HAWQ Master.
+Ambari stores and uses its own copy of the gpadmin password, independently of the host system. Passwords on the master and slave nodes are not automatically updated and synchronized with Ambari. If you do not update the Ambari system user password, Ambari behaves as if the gpadmin password was never changed \(it keeps using the old password\).
+
+If passwordless ssh has not been set up, `hawq ssh-exkeys` attempts to exchange the key by using the password provided by the Ambari web console. If the password on the host machine differs from the HAWQ System User password recognized on Ambari, exchanging the key with the HAWQ Master fails. Components without passwordless ssh might not be registered with the HAWQ cluster.
+
+### When to Perform
+You should change the gpadmin password when:
+
+* The gpadmin password on the host machines has expired.
+* You want to change passwords as part of normal system security procedures.
+
+When you update the gpadmin password, you must keep the password stored in Ambari in sync with the gpadmin user's password on the HAWQ hosts. This requires manually changing the password on the master and slave hosts, and then updating the Ambari password.
+
+### Procedure
+All of the listed steps are mandatory; this ensures that the HAWQ service remains fully functional.
+
+1.  Use a script to manually change the password for the gpadmin user on all HAWQ hosts \(all Master and Slave component hosts\). To manually update the password, you must have ssh access to all host machines as the gpadmin user. Generate a hosts file to use with the `hawq ssh` command to reset the password on all hosts. Use a text editor to create a file that lists the hostname of the master node, the standby master node, and each segment node used in the cluster. Specify one hostname per line, for example:
+
+    ```
+    mdw
+    smdw
+    sdw1
+    sdw2
+    sdw3
+    ```
+
+    You can then use a command similar to the following to change the password on all hosts that are listed in the file:
+
+    ```shell
+    $ hawq ssh -f hawq_hosts 'echo "gpadmin:newpassword" | /usr/sbin/chpasswd'
+    ```    
+
+    **Note:** Be sure to make appropriate user and password system administrative changes in order to prevent operational disruption. For example, you may need to disable the password expiration policy for the `gpadmin` account.
+2.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\) Then perform the following steps:
+    1. Click **HAWQ** in the list of installed services.
+    2. On the HAWQ Server Configs page, go to the **Advanced** tab and update the **HAWQ System User Password** to the new password specified in the script.
+    3. Click **Save** to save the updated configuration.
+    4. Restart the HAWQ service to propagate the configuration change to all Ambari agents.
+
+    This will synchronize the password on the host machines with the password that you specified in Ambari.
+
+## <a id="gpadmin-setup-alert"></a>Setting Up Alerts
+ 
+Alerts advise you when a HAWQ process is down or not responding, or when certain conditions requiring attention occur.
+Alerts can be created for the Master, Standby Master, Segments, and PXF components. You can also set up custom alert groups to monitor these conditions and send email notifications when they occur.
+
+### When to Perform
+Alerts are enabled by default. You might want to disable alert functions when performing system operations in maintenance mode and then re-enable them after returning to normal operation.
+
+You can configure alerts to display messages for all system status changes or only for conditions of interest, such as warnings or critical conditions. Alerts can advise you if there are communication issues between the HAWQ Master and HAWQ segments, or if the HAWQ Master, Standby Master, a segment, or the PXF service is down or not responding. 
+
+You can configure how often Ambari checks for alert conditions, which service or host it checks, and the level of criticality that triggers an alert (OK, WARNING, or CRITICAL).
+
+### Procedure
+Ambari displays alerts and also lets you configure certain alert conditions. 
+
+#### Viewing Alerts
+To view the current alert information for HAWQ, click the **Alert** button at the top of the Ambari console, then click the **Groups** button at the top left of the Alerts page and select **HAWQ Default** in the drop-down menu. Ambari displays a list of all available alert functions and their current status. 
+
+To check PXF alerts, click the **Groups** dropdown button at the top left of the Alerts page. Select **PXF Default** in the dropdown menu. Alerts are displayed on the PXF Status page.
+
+To view the current Alert settings, click on the name of the alert.
+
+The Alerts you can view are as follows:
+
+* HAWQ Master Process:
+This alert is triggered when the HAWQ Master process is down or not responding. 
+
+* HAWQ Segment Process:
+This alert is triggered when a HAWQ Segment on a node is down or not responding.  
+
+* HAWQ Standby Master Process:
+This alert is triggered when the HAWQ Standby Master process is down or not responding. If no standby is present, the Alert shows as **NONE**. 
+
+* HAWQ Standby Master Sync Status:
+This alert is triggered when the HAWQ Standby Master is not synchronized with the HAWQ Master. Using this Alert eliminates the need to check the gp\_master\_mirroring catalog table to determine if the Standby Master is fully synchronized. 
+If no standby Master is present, the status will show as **UNKNOWN**.
+   If this Alert is triggered, go to the HAWQ **Services** tab and click on the **Service Actions** button to re-sync the HAWQ Standby Master with the HAWQ Master.
+   
+* HAWQ Segment Registration Status:
+This alert is triggered when any of the HAWQ segments fail to register with the HAWQ Master. This indicates that the HAWQ segments with an up status in the gp\_segment\_configuration table do not match the HAWQ segments listed in the `/usr/local/hawq/etc/slaves` file on the HAWQ Master.
+
+* Percent HAWQ Segment Status Available:
+This Alert monitors the percentage of HAWQ segments available versus total segments. 
+   Alerts for **WARN** and **CRITICAL** are displayed when the number of unresponsive HAWQ segments in the cluster is greater than the specified threshold. Otherwise, the status will show as **OK**.
+
+* PXF Process Alerts:
+PXF Process alerts are triggered when a PXF process on a node is down or not responding on the network. If PXF Alerts are enabled, the Alert status is shown on the PXF Status page.
+
+#### Setting the Monitoring Interval
+You can customize how often the system checks for certain conditions. The default interval for checking the HAWQ system is 1 minute. 
+
+To customize the interval, perform the following steps:
+
+1.  Click on the name of the Alert you want to edit. 
+2.  When the Configuration screen appears, click **Edit**. 
+3.  Enter a number for how often to check status for the selected Alert, then click **Save**. The interval must be specified in whole minutes.
+
+
+#### Setting the Available HAWQ Segment Threshold
+HAWQ monitors the percentage of available HAWQ segments and can send an alert when a specified percent of unresponsive segments is reached. 
+
+To set the threshold for the unresponsive segments that will trigger an alert:
+
+   1.  Click on **Percent HAWQ Segments Available**. 
+   2.  Click **Edit**. Enter the percentage of unresponsive segments that triggers a **Warning** alert (the default is 10 percent of total segments) or a **Critical** alert (the default is 25 percent of total segments).
+   3.  Click **Save** when done.
+   Alerts for **WARN** and **CRITICAL** will be displayed when the number of unresponsive HAWQ segments in the cluster is greater than the specified percentage. 
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/admin/ambari-rest-api.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/admin/ambari-rest-api.html.md.erb b/markdown/admin/ambari-rest-api.html.md.erb
new file mode 100644
index 0000000..2cc79e4
--- /dev/null
+++ b/markdown/admin/ambari-rest-api.html.md.erb
@@ -0,0 +1,163 @@
+---
+title: Using the Ambari REST API
+---
+
+You can monitor and manage the resources in your HAWQ cluster using the Ambari REST API.  In addition to providing access to the metrics information in your cluster, the API supports viewing, creating, deleting, and updating cluster resources.
+
+This section provides an introduction to using the Ambari REST API for HAWQ-related cluster management activities.
+
+Refer to [Ambari API Reference v1](https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md) for the official Ambari API documentation, including full REST resource definitions and response semantics. *Note*: These APIs may change in new versions of Ambari.
+
+
+## <a id="ambari-rest-resources"></a>Manageable HAWQ Resources
+
+HAWQ provides several REST resources to support starting and stopping services, executing service checks, and viewing configuration information among other activities. HAWQ resources you can manage using the Ambari REST API include:
+
+| Ambari Resource      | Description     |
+|----------------------|------------------------|
+| cluster | The HAWQ cluster. |
+| service | The HAWQ and PXF service. You can manage other Hadoop services as well. |
+| component | A specific HAWQ/PXF service component, for example the HAWQ Master or PXF. |
+| configuration | A specific HAWQ/PXF configuration entity, for example the hawq-site or pxf-profiles configuration files, or a specific single HAWQ or PXF configuration property. |
+| request | A group of tasks. |
+
+## <a id="ambari-rest-uri"></a>URI Structure
+
+The Ambari REST API provides access to HAWQ cluster resources via URI (uniform resource identifier) paths. To use the Ambari REST API, you will send HTTP requests and parse JSON-formatted HTTP responses.
+
+The Ambari REST API supports standard HTTP request methods including:
+
+- `GET` - read resource properties, metrics
+- `POST` - create new resource
+- `PUT` - update resource
+- `DELETE` - delete resource
+
+URIs for Ambari REST API resources have the following structure:
+
+``` shell
+http://<ambari-server-host>:<port>/api/v1/<resource-path>
+```
+
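+For example, with an Ambari server reachable at `ambari.example.com` on the default port and a cluster named `TestCluster` (both names are illustrative), the HAWQ service resource is addressed as:
+
+``` shell
+http://ambari.example.com:8080/api/v1/clusters/TestCluster/services/HAWQ
+```
+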
+The Ambari REST API supports the following HAWQ-related \<resource-paths\>:
+
+| REST Resource Path              | Description     |
+|----------------------|------------------------|
+| clusters/\<cluster\-name\> | The HAWQ cluster name. |
+| clusters/\<cluster\-name\>/services/PXF | The PXF service. |
+| clusters/\<cluster\-name\>/services/HAWQ | The HAWQ service. |
+| clusters/\<cluster\-name\>/services/HAWQ/components | All HAWQ service components. |
+| clusters/\<cluster\-name\>/services/HAWQ/components/\<name\> | A specific HAWQ service component, for example HAWQMASTER. |
+| clusters/\<cluster\-name\>/configurations | Cluster configurations. |
+| clusters/\<cluster\-name\>/requests | Group of tasks that run a command. |
+
+## <a id="ambari-rest-curl"></a>Submitting Requests with cURL
+
+Your HTTP request to the Ambari REST API should include the following information:
+
+- User name and password for basic authentication.
+- An HTTP request header.
+- The HTTP request method.
+- JSON-formatted request data, if required.
+- The URI identifying the Ambari REST resource.
+
+You can use the `curl` command to transfer HTTP request data to, and receive data from, the Ambari server using the HTTP protocol.
+
+Use the following syntax to issue a `curl` command for Ambari HAWQ/PXF management operations:
+
+``` shell
+$ curl -u <user>:<passwd> -H <header> -X GET|POST|PUT|DELETE -d <data> <URI>
+```
+
+`curl` options relevant to Ambari REST API communication include:
+
+| Option              | Description     |
+|----------------------|------------------------|
+| -u \<user\>:\<passwd\> | Identify the username and password for basic authentication to the HTTP server. |
+| -H \<header\>   | Identify an extra header to include in the HTTP request. \<header\> must specify `'X-Requested-By:ambari'`.   |
+| -X \<command\>   | Identify the request method. \<command\> may specify `GET` (the default), `POST`, `PUT`, or `DELETE`. |
+| -d \<data\>     | Send the specified \<data\> to the HTTP server along with the request. The \<command\> and \<URI\> determine if \<data\> is required, and if so, its content.  |
+| \<URI\>    | Path to the Ambari REST resource.  |
+
+
+## <a id="ambari-rest-api-auth"></a>Authenticating with the Ambari REST API
+
+The first step in using the Ambari REST API is to authenticate with the Ambari server. The Ambari REST API supports HTTP basic authentication. With this authentication method, you provide a username and password that is internally encoded and sent in the HTTP header.
+
+Example: Testing Authentication
+
+1. Set up some environment variables; replace the values with those appropriate for your operating environment.  For example:
+
+    ``` shell
+    $ export AMBUSER=admin
+    $ export AMBPASSWD=admin
+    $ export AMBHOST=<ambari-server>
+    $ export AMBPORT=8080
+    ```
+
+2. Submit a `curl` request to the Ambari server:
+
+    ``` shell
+    $ curl -u $AMBUSER:$AMBPASSWD http://$AMBHOST:$AMBPORT
+    ```
+    
+    If authentication succeeds, Apache license information is displayed.
+
+
+## <a id="ambari-rest-using"></a>Using the Ambari REST API for HAWQ Management
+
+
+### <a id="ambari-rest-ex-clustname"></a>Example: Retrieving the HAWQ Cluster Name
+
+1. Set up additional environment variables:
+
+    ``` shell
+    $ export AMBCREDS="$AMBUSER:$AMBPASSWD"
+    $ export AMBURLBASE="http://${AMBHOST}:${AMBPORT}/api/v1/clusters"
+    ```
+    
+    You will use these variables in upcoming examples to simplify `curl` calls.
+    
+2. Use the Ambari REST API to determine the name of your HAWQ cluster; also set `$AMBURLBASE` to include the cluster name:
+
+    ``` shell
+    $ export CLUSTER_NAME="$(curl -u ${AMBCREDS} -i -H 'X-Requested-By:ambari' $AMBURLBASE | sed -n 's/.*"cluster_name" : "\([^\"]*\)".*/\1/p')"
+    $ echo $CLUSTER_NAME
+    TestCluster
+    $ export AMBURLBASE=$AMBURLBASE/$CLUSTER_NAME
+    ```
+
+### <a id="ambari-rest-ex-mgmt"></a>Examples: Managing the HAWQ and PXF Services
+
+The following subsections provide `curl` commands for common HAWQ cluster management activities.
+
+Refer to [API usage scenarios, troubleshooting, and other FAQs](https://cwiki.apache.org/confluence/display/AMBARI/API+usage+scenarios%2C+troubleshooting%2C+and+other+FAQs) for additional Ambari REST API usage examples.
+
+
+#### <a id="ambari-rest-ex-get"></a>Viewing HAWQ Cluster Service and Configuration Information
+
+| Task              |Command           |
+|----------------------|------------------------|
+| View HAWQ service information. | `curl -u $AMBCREDS -X GET -H 'X-Requested-By:ambari' $AMBURLBASE/services/HAWQ` |
+| List all HAWQ components. | `curl -u $AMBCREDS -X GET -H 'X-Requested-By:ambari' $AMBURLBASE/services/HAWQ/components` |
+| View information about the HAWQ master. | `curl -u $AMBCREDS -X GET -H 'X-Requested-By:ambari' $AMBURLBASE/services/HAWQ/components/HAWQMASTER` |
+| View the `hawq-site` configuration settings. | `curl -u $AMBCREDS -X GET -H 'X-Requested-By:ambari' "$AMBURLBASE/configurations?type=hawq-site&tag=TOPOLOGY_RESOLVED"` |
+| View the initial `core-site` configuration settings. | `curl -u $AMBCREDS -X GET -H 'X-Requested-By:ambari' "$AMBURLBASE/configurations?type=core-site&tag=INITIAL"` |
+| View the `pxf-profiles` configuration file. | `curl -u $AMBCREDS -X GET -H 'X-Requested-By:ambari' "$AMBURLBASE/configurations?type=pxf-profiles&tag=INITIAL"` |
+| View all components on a node. | `curl -u $AMBCREDS -i -X GET -H 'X-Requested-By:ambari' $AMBURLBASE/hosts/<hawq-node>` |
+
+
+#### <a id="ambari-rest-ex-put"></a>Starting/Stopping HAWQ and PXF Services
+
+| Task              |Command           |
+|----------------------|------------------------|
+| Start the HAWQ service. | `curl -u $AMBCREDS -X PUT -H 'X-Requested-By:ambari' -d '{"RequestInfo": {"context" :"Start HAWQ via REST"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' $AMBURLBASE/services/HAWQ` |
+| Stop the HAWQ service. | `curl -u $AMBCREDS -X PUT -H 'X-Requested-By:ambari' -d '{"RequestInfo": {"context" :"Stop HAWQ via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' $AMBURLBASE/services/HAWQ` |
+| Start the PXF service. | `curl -u $AMBCREDS -X PUT -H 'X-Requested-By:ambari' -d '{"RequestInfo": {"context" :"Start PXF via REST"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' $AMBURLBASE/services/PXF` |
+| Stop the PXF service. | `curl -u $AMBCREDS -X PUT -H 'X-Requested-By:ambari' -d '{"RequestInfo": {"context" :"Stop PXF via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' $AMBURLBASE/services/PXF` |
+
+#### <a id="ambari-rest-ex-post"></a>Invoking HAWQ and PXF Service Actions
+
+| Task              |Command           |
+|----------------------|------------------------|
+| Run a HAWQ service check. | `curl -u $AMBCREDS -X POST -H 'X-Requested-By:ambari' -d '{"RequestInfo":{"context":"HAWQ Service Check","command":"HAWQ_SERVICE_CHECK"}, "Requests/resource_filters":[{ "service_name":"HAWQ"}]}'  $AMBURLBASE/requests` |
+| Run a PXF service check. | `curl -u $AMBCREDS -X POST -H 'X-Requested-By:ambari' -d '{"RequestInfo":{"context":"PXF Service Check","command":"PXF_SERVICE_CHECK"}, "Requests/resource_filters":[{ "service_name":"PXF"}]}'  $AMBURLBASE/requests` |
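+
+The POST calls above return a `request` resource in their JSON response. As a hedged follow-up, you can poll that resource to track the progress of a submitted command (the request id `42` is illustrative; substitute the id returned by your own call):
+
+``` shell
+# Check the status of a previously submitted request.
+curl -u $AMBCREDS -X GET -H 'X-Requested-By:ambari' $AMBURLBASE/requests/42
+```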

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/admin/maintain.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/admin/maintain.html.md.erb b/markdown/admin/maintain.html.md.erb
new file mode 100644
index 0000000..f4b1491
--- /dev/null
+++ b/markdown/admin/maintain.html.md.erb
@@ -0,0 +1,31 @@
+---
+title: Routine System Maintenance Tasks
+---
+
+## <a id="overview-topic"></a>Overview
+
+To keep a HAWQ system running efficiently, the database must be regularly cleared of expired data and the table statistics must be updated so that the query optimizer has accurate information.
+
+HAWQ requires that certain tasks be performed regularly to achieve optimal performance. The tasks discussed here are required, but database administrators can automate them using standard UNIX tools such as `cron` scripts. An administrator sets up the appropriate scripts and checks that they execute successfully. See [Recommended Monitoring and Maintenance Tasks](RecommendedMonitoringTasks.html) for additional suggested maintenance activities you can implement to keep your HAWQ system running optimally.
+
+## <a id="topic10"></a>Database Server Log Files 
+
+HAWQ log output tends to be voluminous, especially at higher debug levels, and you do not need to save it indefinitely. Administrators rotate the log files periodically so new log files are started and old ones are removed.
+
+HAWQ has log file rotation enabled on the master and all segment instances. Daily log files are created in the `pg_log` subdirectory of the master and each segment data directory using the following naming convention: <code>hawq-<i>YYYY-MM-DD\_hhmmss</i>.csv</code>. Although log files are rolled over daily, they are not automatically truncated or deleted. Administrators need to implement scripts or programs to periodically clean up old log files in the `pg_log` directory of the master and of every segment instance.
+
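+A cleanup script can be as simple as a scheduled `find` command; a minimal sketch, assuming the master data directory is `/data/hawq/master` and a 30-day retention policy (both are site-specific assumptions):
+
+```shell
+# Remove rotated server log files older than 30 days; run a similar command
+# against the pg_log directory of each segment data directory.
+find /data/hawq/master/pg_log -name 'hawq-*.csv' -mtime +30 -delete
+```
+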
+For information about viewing the database server log files, see [Viewing the Database Server Log Files](monitor.html).
+
+## <a id="topic11"></a>Management Utility Log Files 
+
+Log files for the HAWQ management utilities are written to `~/hawqAdminLogs` by default. The naming convention for management log files is:
+
+<pre><code><i>script_name_date</i>.log
+</code></pre>
+
+The log entry format is:
+
+<pre><code><i>timestamp:utility:host:user</i>:[INFO|WARN|FATAL]:<i>message</i>
+</code></pre>
+
+Each time a utility is run, its log output is appended to the daily log file for that utility.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/admin/monitor.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/admin/monitor.html.md.erb b/markdown/admin/monitor.html.md.erb
new file mode 100644
index 0000000..418c8c3
--- /dev/null
+++ b/markdown/admin/monitor.html.md.erb
@@ -0,0 +1,444 @@
+---
+title: Monitoring a HAWQ System
+---
+
+You can monitor a HAWQ system using a variety of tools included with the system or available as add-ons.
+
+Observing the HAWQ system's day-to-day performance helps administrators understand system behavior, plan workflows, and troubleshoot problems. This chapter discusses tools for monitoring database performance and activity.
+
+Also, be sure to review [Recommended Monitoring and Maintenance Tasks](RecommendedMonitoringTasks.html) for monitoring activities you can script to quickly detect problems in the system.
+
+
+## <a id="topic31"></a>Using hawq\_toolkit 
+
+Use HAWQ's administrative schema [*hawq\_toolkit*](../reference/toolkit/hawq_toolkit.html) to query the system catalogs, log files, and operating environment for system status information. The *hawq\_toolkit* schema contains several views you can access using SQL commands. The *hawq\_toolkit* schema is accessible to all database users. Some objects require superuser permissions. Use a command similar to the following to add the *hawq\_toolkit* schema to your schema search path:
+
+```sql
+=> SET ROLE 'gpadmin' ;
+=# SET search_path TO myschema, hawq_toolkit ;
+```
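+
+To see which *hawq\_toolkit* views are available in your installation, you can list them with a `psql` meta-command; a minimal sketch (the database name `mydb` is illustrative):
+
+```shell
+# List the views defined in the hawq_toolkit schema.
+psql -d mydb -c '\dv hawq_toolkit.*'
+```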
+
+## <a id="topic3"></a>Monitoring System State 
+
+As a HAWQ administrator, you must monitor the system for problem events such as a segment going down or running out of disk space on a segment host. The following topics describe how to monitor the health of a HAWQ system and examine certain state information for a HAWQ system.
+
+-   [Checking System State](#topic12)
+-   [Checking Disk Space Usage](#topic15)
+-   [Viewing Metadata Information about Database Objects](#topic24)
+-   [Viewing Query Workfile Usage Information](#topic27)
+
+### <a id="topic12"></a>Checking System State 
+
+A HAWQ system comprises multiple PostgreSQL instances \(the master and segments\) spanning multiple machines. To monitor a HAWQ system, you need to know information about the system as a whole, as well as status information for the individual instances. The `hawq state` utility provides status information about a HAWQ system.
+
+#### <a id="topic13"></a>Viewing Master and Segment Status and Configuration 
+
+The default `hawq state` action is to check segment instances and show a brief status of the valid and failed segments. For example, to see a quick status of your HAWQ system:
+
+```shell
+$ hawq state -b
+```
+
+You can also display information about the HAWQ master data directory by invoking `hawq state` with the `-d` option:
+
+```shell
+$ hawq state -d <master_data_dir>
+```
+
+
+### <a id="topic15"></a>Checking Disk Space Usage 
+
+#### <a id="topic16"></a>Checking Sizing of Distributed Databases and Tables 
+
+The *hawq\_toolkit* administrative schema contains several views that you can use to determine the disk space usage for a distributed HAWQ database, schema, table, or index.
+
+##### <a id="topic17"></a>Viewing Disk Space Usage for a Database 
+
+To see the total size of a database \(in bytes\), use the *hawq\_size\_of\_database* view in the *hawq\_toolkit* administrative schema. For example:
+
+```sql
+=> SELECT * FROM hawq_toolkit.hawq_size_of_database
+     ORDER BY sodddatname;
+```
+
+##### <a id="topic18"></a>Viewing Disk Space Usage for a Table 
+
+The *hawq\_toolkit* administrative schema contains several views for checking the size of a table. The table sizing views list the table by object ID \(not by name\). To check the size of a table by name, you must look up the relation name \(`relname`\) in the *pg\_class* table. For example:
+
+```sql
+=> SELECT relname AS name, sotdsize AS size, sotdtoastsize
+     AS toast, sotdadditionalsize AS other
+     FROM hawq_toolkit.hawq_size_of_table_disk AS sotd, pg_class
+   WHERE sotd.sotdoid=pg_class.oid ORDER BY relname;
+```
+
+##### <a id="topic19"></a>Viewing Disk Space Usage for Indexes 
+
+The *hawq\_toolkit* administrative schema contains a number of views for checking index sizes. To see the total size of all index\(es\) on a table, use the *hawq\_size\_of\_all\_table\_indexes* view. To see the size of a particular index, use the *hawq\_size\_of\_index* view. The index sizing views list tables and indexes by object ID \(not by name\). To check the size of an index by name, you must look up the relation name \(`relname`\) in the *pg\_class* table. For example:
+
+```sql
+=> SELECT soisize, relname AS indexname
+     FROM pg_class, hawq_size_of_index
+   WHERE pg_class.oid=hawq_size_of_index.soioid
+     AND pg_class.relkind='i';
+```
+
+### <a id="topic24"></a>Viewing Metadata Information about Database Objects 
+
+HAWQ uses its system catalogs to track various metadata information about the objects stored in a database (tables, views, indexes and so on), as well as global objects including roles and tablespaces.
+
+#### <a id="topic25"></a>Viewing the Last Operation Performed 
+
+You can use the system views *pg\_stat\_operations* and *pg\_stat\_partition\_operations* to look up actions performed on a database object. For example, to view when the `cust` table was created and when it was last analyzed:
+
+```sql
+=> SELECT schemaname AS schema, objname AS table,
+     usename AS role, actionname AS action,
+     subtype AS type, statime AS time
+   FROM pg_stat_operations
+   WHERE objname='cust';
+```
+
+```
+ schema | table | role | action  | type  | time
+--------+-------+------+---------+-------+--------------------------
+  sales | cust  | main | CREATE  | TABLE | 2010-02-09 18:10:07.867977-08
+  sales | cust  | main | VACUUM  |       | 2010-02-10 13:32:39.068219-08
+  sales | cust  | main | ANALYZE |       | 2010-02-25 16:07:01.157168-08
+(3 rows)
+
+```
+
+#### <a id="topic26"></a>Viewing the Definition of an Object 
+
+You can use the `psql` `\d` meta-command to display the definition of an object, such as a table or view. For example, to see the definition of a table named `sales`:
+
+``` sql
+=> \d sales
+```
+
+```
+Append-Only Table "public.sales"
+ Column |  Type   | Modifiers 
+--------+---------+-----------
+ id     | integer | 
+ year   | integer | 
+ qtr    | integer | 
+ day    | integer | 
+ region | text    | 
+Compression Type: None
+Compression Level: 0
+Block Size: 32768
+Checksum: f
+Distributed by: (id)
+```
+
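+To simply list objects rather than display their full definitions, you can use the pattern form of the `psql` meta-commands. For example, to list all tables in the `public` schema:
+
+```sql
+=> \dt public.*
+```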
+
+### <a id="topic27"></a>Viewing Query Workfile Usage Information 
+
+The HAWQ administrative schema *hawq\_toolkit* contains views that display information about HAWQ workfiles. HAWQ creates workfiles on disk if it does not have sufficient memory to execute the query in memory. This information can be used for troubleshooting and tuning queries. The information in the views can also be used to specify the values for the HAWQ configuration parameters `hawq_workfile_limit_per_query` and `hawq_workfile_limit_per_segment`.
+
+Views in the *hawq\_toolkit* schema include:
+
+-   *hawq\_workfile\_entries* - one row for each operator currently using disk space for workfiles on a segment
+-   *hawq\_workfile\_usage\_per\_query* - one row for each running query currently using disk space for workfiles on a segment
+-   *hawq\_workfile\_usage\_per\_segment* - one row for each segment where each row displays the total amount of disk space currently in use for workfiles on the segment
+
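+For example, to see the workfile disk space currently in use on each segment, you might query the per-segment view directly \(a minimal sketch that selects all columns rather than assuming specific column names\):
+
+```sql
+=> SELECT * FROM hawq_toolkit.hawq_workfile_usage_per_segment;
+```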
+
+## <a id="topic28"></a>Viewing the Database Server Log Files 
+
+Every database instance in HAWQ \(master and segments\) runs a PostgreSQL database server with its own server log file. Daily log files are created in the `pg_log` subdirectory of the master data directory and of each segment data directory.
+
+### <a id="topic29"></a>Log File Format 
+
+The server log files are written in comma-separated values \(CSV\) format. Log entries may not include values for all log fields. For example, only log entries associated with a query worker process will have the `slice_id` populated. You can identify related log entries of a particular query by the query's session identifier \(`gp_session_id`\) and command identifier \(`gp_command_count`\).
+
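+For example, if a problem query was logged under the hypothetical session identifier `con123`, you could gather all of its related entries from the master log files with an ordinary text search:
+
+``` shell
+$ grep con123 <master_data_dir>/pg_log/hawq*.csv
+```
+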
+Log entries may include the following fields:
+
+<table>
+  <tr><th>#</th><th>Field Name</th><th>Data Type</th><th>Description</th></tr>
+  <tr><td>1</td><td>event_time</td><td>timestamp with time zone</td><td>Time that the log entry was written to the log</td></tr>
+  <tr><td>2</td><td>user_name</td><td>varchar(100)</td><td>The database user name</td></tr>
+  <tr><td>3</td><td>database_name</td><td>varchar(100)</td><td>The database name</td></tr>
+  <tr><td>4</td><td>process_id</td><td>varchar(10)</td><td>The system process ID (prefixed with "p")</td></tr>
+  <tr><td>5</td><td>thread_id</td><td>varchar(50)</td><td>The thread count (prefixed with "th")</td></tr>
+  <tr><td>6</td><td>remote_host</td><td>varchar(100)</td><td>On the master, the hostname/address of the client machine. On the segment, the hostname/address of the master.</td></tr>
+  <tr><td>7</td><td>remote_port</td><td>varchar(10)</td><td>The segment or master port number</td></tr>
+  <tr><td>8</td><td>session_start_time</td><td>timestamp with time zone</td><td>Time session connection was opened</td></tr>
+  <tr><td>9</td><td>transaction_id</td><td>int</td><td>Top-level transaction ID on the master. This ID is the parent of any subtransactions.</td></tr>
+  <tr><td>10</td><td>gp_session_id</td><td>text</td><td>Session identifier number (prefixed with "con")</td></tr>
+  <tr><td>11</td><td>gp_command_count</td><td>text</td><td>The command number within a session (prefixed with "cmd")</td></tr>
+  <tr><td>12</td><td>gp_segment</td><td>text</td><td>The segment content identifier. The master always has a content ID of -1.</td></tr>
+  <tr><td>13</td><td>slice_id</td><td>text</td><td>The slice ID (portion of the query plan being executed)</td></tr>
+  <tr><td>14</td><td>distr_tranx_id</td><td>text</td><td>Distributed transaction ID</td></tr>
+  <tr><td>15</td><td>local_tranx_id</td><td>text</td><td>Local transaction ID</td></tr>
+  <tr><td>16</td><td>sub_tranx_id</td><td>text</td><td>Subtransaction ID</td></tr>
+  <tr><td>17</td><td>event_severity</td><td>varchar(10)</td><td>Values include: LOG, ERROR, FATAL, PANIC, DEBUG1, DEBUG2</td></tr>
+  <tr><td>18</td><td>sql_state_code</td><td>varchar(10)</td><td>SQL state code associated with the log message</td></tr>
+  <tr><td>19</td><td>event_message</td><td>text</td><td>Log or error message text</td></tr>
+  <tr><td>20</td><td>event_detail</td><td>text</td><td>Detail message text associated with an error or warning message</td></tr>
+  <tr><td>21</td><td>event_hint</td><td>text</td><td>Hint message text associated with an error or warning message</td></tr>
+  <tr><td>22</td><td>internal_query</td><td>text</td><td>The internally-generated query text</td></tr>
+  <tr><td>23</td><td>internal_query_pos</td><td>int</td><td>The cursor index into the internally-generated query text</td></tr>
+  <tr><td>24</td><td>event_context</td><td>text</td><td>The context in which this message gets generated</td></tr>
+  <tr><td>25</td><td>debug_query_string</td><td>text</td><td>User-supplied query string with full detail for debugging. This string can be modified for internal use.</td></tr>
+  <tr><td>26</td><td>error_cursor_pos</td><td>int</td><td>The cursor index into the query string</td></tr>
+  <tr><td>27</td><td>func_name</td><td>text</td><td>The function in which this message is generated</td></tr>
+  <tr><td>28</td><td>file_name</td><td>text</td><td>The internal code file where the message originated</td></tr>
+  <tr><td>29</td><td>file_line</td><td>int</td><td>The line of the code file where the message originated</td></tr>
+  <tr><td>30</td><td>stack_trace</td><td>text</td><td>Stack trace text associated with this message</td></tr>
+</table>
+
+### <a id="topic30"></a>Searching the HAWQ Server Log Files 
+
+You can use the `gplogfilter` HAWQ utility to search through a HAWQ log file for entries matching specific criteria. By default, this utility searches the HAWQ master log file in the default logging location. For example, to display entries written to the master log file after 2:00 PM on January 18, 2016:
+
+``` shell
+$ gplogfilter -b '2016-01-18 14:00'
+```
+
+To search through all segment log files simultaneously, run `gplogfilter` through the `hawq ssh` utility. For example, specify a \<seg\_hosts\> file that lists the segment hosts of interest, then invoke `gplogfilter` to display the last three lines of each segment log file on each of those hosts. (Note: enter the commands at the `=>` prompt; do not type the `=>` itself.):
+
+``` shell
+$ hawq ssh -f <seg_hosts>
+=> source /usr/local/hawq/greenplum_path.sh
+=> gplogfilter -n 3 /data/hawq/segment/pg_log/hawq*.csv
+```
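+
+`gplogfilter` also supports match-based filtering of log entries. For instance, assuming the utility's `--trouble` option behaves as it does in its Greenplum Database counterpart, the following would display only the ERROR, FATAL, and PANIC entries in the master log file:
+
+``` shell
+$ gplogfilter --trouble
+```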
+
+## <a id="topic_jx2_rqg_kp"></a>HAWQ Error Codes 
+
+The following section describes SQL error codes for certain database events.
+
+### <a id="topic_pyh_sqg_kp"></a>SQL Standard Error Codes 
+
+The following table lists all the defined error codes. Some are not used, but are defined by the SQL standard. The error classes are also shown. For each error class there is a standard error code having the last three characters 000. This code is used only for error conditions that fall within the class but do not have any more-specific code assigned.
+
+The PL/pgSQL condition name for each error code is the same as the phrase shown in the table, with underscores substituted for spaces. For example, code 22012, DIVISION BY ZERO, has condition name DIVISION\_BY\_ZERO. Condition names can be written in either upper or lower case.
+
+**Note:** PL/pgSQL recognizes error condition names but not warning condition names; warnings are the conditions in classes 00, 01, and 02.
+
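+As a brief illustration of using a condition name, the following PL/pgSQL sketch \(a hypothetical `safe_divide` function\) traps the `division_by_zero` condition \(SQLSTATE 22012 in the table below\) and returns NULL instead of raising an error:
+
+```sql
+CREATE OR REPLACE FUNCTION safe_divide(numerator numeric, denominator numeric)
+RETURNS numeric AS $$
+BEGIN
+    RETURN numerator / denominator;
+EXCEPTION
+    -- division_by_zero is the condition name for SQLSTATE 22012
+    WHEN division_by_zero THEN
+        RETURN NULL;
+END;
+$$ LANGUAGE plpgsql;
+```
+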
+|Error Code|Meaning|Constant|
+|----------|-------|--------|
+|**Class 00** — Successful Completion|
+|00000|SUCCESSFUL COMPLETION|successful\_completion|
+|**Class 01** — Warning|
+|01000|WARNING|warning|
+|0100C|DYNAMIC RESULT SETS RETURNED|dynamic\_result\_sets\_returned|
+|01008|IMPLICIT ZERO BIT PADDING|implicit\_zero\_bit\_padding|
+|01003|NULL VALUE ELIMINATED IN SET FUNCTION|null\_value\_eliminated\_in\_set\_function|
+|01007|PRIVILEGE NOT GRANTED|privilege\_not\_granted|
+|01006|PRIVILEGE NOT REVOKED|privilege\_not\_revoked|
+|01004|STRING DATA RIGHT TRUNCATION|string\_data\_right\_truncation|
+|01P01|DEPRECATED FEATURE|deprecated\_feature|
+|**Class 02** — No Data \(this is also a warning class per the SQL standard\)|
+|02000|NO DATA|no\_data|
+|02001|NO ADDITIONAL DYNAMIC RESULT SETS RETURNED|no\_additional\_dynamic\_result\_sets\_returned|
+|**Class 03** — SQL Statement Not Yet Complete|
+|03000|SQL STATEMENT NOT YET COMPLETE|sql\_statement\_not\_yet\_complete|
+|**Class 08** — Connection Exception|
+|08000|CONNECTION EXCEPTION|connection\_exception|
+|08003|CONNECTION DOES NOT EXIST|connection\_does\_not\_exist|
+|08006|CONNECTION FAILURE|connection\_failure|
+|08001|SQLCLIENT UNABLE TO ESTABLISH SQLCONNECTION|sqlclient\_unable\_to\_establish\_sqlconnection|
+|08004|SQLSERVER REJECTED ESTABLISHMENT OF SQLCONNECTION|sqlserver\_rejected\_establishment\_of\_sqlconnection|
+|08007|TRANSACTION RESOLUTION UNKNOWN|transaction\_resolution\_unknown|
+|08P01|PROTOCOL VIOLATION|protocol\_violation|
+|**Class 09** — Triggered Action Exception|
+|09000|TRIGGERED ACTION EXCEPTION|triggered\_action\_exception|
+|**Class 0A** — Feature Not Supported|
+|0A000|FEATURE NOT SUPPORTED|feature\_not\_supported|
+|**Class 0B** — Invalid Transaction Initiation|
+|0B000|INVALID TRANSACTION INITIATION|invalid\_transaction\_initiation|
+|**Class 0F** — Locator Exception|
+|0F000|LOCATOR EXCEPTION|locator\_exception|
+|0F001|INVALID LOCATOR SPECIFICATION|invalid\_locator\_specification|
+|**Class 0L** — Invalid Grantor|
+|0L000|INVALID GRANTOR|invalid\_grantor|
+|0LP01|INVALID GRANT OPERATION|invalid\_grant\_operation|
+|**Class 0P** — Invalid Role Specification|
+|0P000|INVALID ROLE SPECIFICATION|invalid\_role\_specification|
+|**Class 21** — Cardinality Violation|
+|21000|CARDINALITY VIOLATION|cardinality\_violation|
+|**Class 22** — Data Exception|
+|22000|DATA EXCEPTION|data\_exception|
+|2202E|ARRAY SUBSCRIPT ERROR|array\_subscript\_error|
+|22021|CHARACTER NOT IN REPERTOIRE|character\_not\_in\_repertoire|
+|22008|DATETIME FIELD OVERFLOW|datetime\_field\_overflow|
+|22012|DIVISION BY ZERO|division\_by\_zero|
+|22005|ERROR IN ASSIGNMENT|error\_in\_assignment|
+|2200B|ESCAPE CHARACTER CONFLICT|escape\_character\_conflict|
+|22022|INDICATOR OVERFLOW|indicator\_overflow|
+|22015|INTERVAL FIELD OVERFLOW|interval\_field\_overflow|
+|2201E|INVALID ARGUMENT FOR LOGARITHM|invalid\_argument\_for\_logarithm|
+|2201F|INVALID ARGUMENT FOR POWER FUNCTION|invalid\_argument\_for\_power\_function|
+|2201G|INVALID ARGUMENT FOR WIDTH BUCKET FUNCTION|invalid\_argument\_for\_width\_bucket\_function|
+|22018|INVALID CHARACTER VALUE FOR CAST|invalid\_character\_value\_for\_cast|
+|22007|INVALID DATETIME FORMAT|invalid\_datetime\_format|
+|22019|INVALID ESCAPE CHARACTER|invalid\_escape\_character|
+|2200D|INVALID ESCAPE OCTET|invalid\_escape\_octet|
+|22025|INVALID ESCAPE SEQUENCE|invalid\_escape\_sequence|
+|22P06|NONSTANDARD USE OF ESCAPE CHARACTER|nonstandard\_use\_of\_escape\_character|
+|22010|INVALID INDICATOR PARAMETER VALUE|invalid\_indicator\_parameter\_value|
+|22020|INVALID LIMIT VALUE|invalid\_limit\_value|
+|22023|INVALID PARAMETER VALUE|invalid\_parameter\_value|
+|2201B|INVALID REGULAR EXPRESSION|invalid\_regular\_expression|
+|22009|INVALID TIME ZONE DISPLACEMENT VALUE|invalid\_time\_zone\_displacement\_value|
+|2200C|INVALID USE OF ESCAPE CHARACTER|invalid\_use\_of\_escape\_character|
+|2200G|MOST SPECIFIC TYPE MISMATCH|most\_specific\_type\_mismatch|
+|22004|NULL VALUE NOT ALLOWED|null\_value\_not\_allowed|
+|22002|NULL VALUE NO INDICATOR PARAMETER|null\_value\_no\_indicator\_parameter|
+|22003|NUMERIC VALUE OUT OF RANGE|numeric\_value\_out\_of\_range|
+|22026|STRING DATA LENGTH MISMATCH|string\_data\_length\_mismatch|
+|22001|STRING DATA RIGHT TRUNCATION|string\_data\_right\_truncation|
+|22011|SUBSTRING ERROR|substring\_error|
+|22027|TRIM ERROR|trim\_error|
+|22024|UNTERMINATED C STRING|unterminated\_c\_string|
+|2200F|ZERO LENGTH CHARACTER STRING|zero\_length\_character\_string|
+|22P01|FLOATING POINT EXCEPTION|floating\_point\_exception|
+|22P02|INVALID TEXT REPRESENTATION|invalid\_text\_representation|
+|22P03|INVALID BINARY REPRESENTATION|invalid\_binary\_representation|
+|22P04|BAD COPY FILE FORMAT|bad\_copy\_file\_format|
+|22P05|UNTRANSLATABLE CHARACTER|untranslatable\_character|
+|**Class 23** — Integrity Constraint Violation|
+|23000|INTEGRITY CONSTRAINT VIOLATION|integrity\_constraint\_violation|
+|23001|RESTRICT VIOLATION|restrict\_violation|
+|23502|NOT NULL VIOLATION|not\_null\_violation|
+|23503|FOREIGN KEY VIOLATION|foreign\_key\_violation|
+|23505|UNIQUE VIOLATION|unique\_violation|
+|23514|CHECK VIOLATION|check\_violation|
+|**Class 24** — Invalid Cursor State|
+|24000|INVALID CURSOR STATE|invalid\_cursor\_state|
+|**Class 25** — Invalid Transaction State|
+|25000|INVALID TRANSACTION STATE|invalid\_transaction\_state|
+|25001|ACTIVE SQL TRANSACTION|active\_sql\_transaction|
+|25002|BRANCH TRANSACTION ALREADY ACTIVE|branch\_transaction\_already\_active|
+|25008|HELD CURSOR REQUIRES SAME ISOLATION LEVEL|held\_cursor\_requires\_same\_isolation\_level|
+|25003|INAPPROPRIATE ACCESS MODE FOR BRANCH TRANSACTION|inappropriate\_access\_mode\_for\_branch\_transaction|
+|25004|INAPPROPRIATE ISOLATION LEVEL FOR BRANCH TRANSACTION|inappropriate\_isolation\_level\_for\_branch\_transaction|
+|25005|NO ACTIVE SQL TRANSACTION FOR BRANCH TRANSACTION|no\_active\_sql\_transaction\_for\_branch\_transaction|
+|25006|READ ONLY SQL TRANSACTION|read\_only\_sql\_transaction|
+|25007|SCHEMA AND DATA STATEMENT MIXING NOT SUPPORTED|schema\_and\_data\_statement\_mixing\_not\_supported|
+|25P01|NO ACTIVE SQL TRANSACTION|no\_active\_sql\_transaction|
+|25P02|IN FAILED SQL TRANSACTION|in\_failed\_sql\_transaction|
+|**Class 26** — Invalid SQL Statement Name|
+|26000|INVALID SQL STATEMENT NAME|invalid\_sql\_statement\_name|
+|**Class 27** — Triggered Data Change Violation|
+|27000|TRIGGERED DATA CHANGE VIOLATION|triggered\_data\_change\_violation|
+|**Class 28** — Invalid Authorization Specification|
+|28000|INVALID AUTHORIZATION SPECIFICATION|invalid\_authorization\_specification|
+|**Class 2B** — Dependent Privilege Descriptors Still Exist|
+|2B000|DEPENDENT PRIVILEGE DESCRIPTORS STILL EXIST|dependent\_privilege\_descriptors\_still\_exist|
+|2BP01|DEPENDENT OBJECTS STILL EXIST|dependent\_objects\_still\_exist|
+|**Class 2D** — Invalid Transaction Termination|
+|2D000|INVALID TRANSACTION TERMINATION|invalid\_transaction\_termination|
+|**Class 2F** — SQL Routine Exception|
+|2F000|SQL ROUTINE EXCEPTION|sql\_routine\_exception|
+|2F005|FUNCTION EXECUTED NO RETURN STATEMENT|function\_executed\_no\_return\_statement|
+|2F002|MODIFYING SQL DATA NOT PERMITTED|modifying\_sql\_data\_not\_permitted|
+|2F003|PROHIBITED SQL STATEMENT ATTEMPTED|prohibited\_sql\_statement\_attempted|
+|2F004|READING SQL DATA NOT PERMITTED|reading\_sql\_data\_not\_permitted|
+|**Class 34** — Invalid Cursor Name|
+|34000|INVALID CURSOR NAME|invalid\_cursor\_name|
+|**Class 38** — External Routine Exception|
+|38000|EXTERNAL ROUTINE EXCEPTION|external\_routine\_exception|
+|38001|CONTAINING SQL NOT PERMITTED|containing\_sql\_not\_permitted|
+|38002|MODIFYING SQL DATA NOT PERMITTED|modifying\_sql\_data\_not\_permitted|
+|38003|PROHIBITED SQL STATEMENT ATTEMPTED|prohibited\_sql\_statement\_attempted|
+|38004|READING SQL DATA NOT PERMITTED|reading\_sql\_data\_not\_permitted|
+|**Class 39** — External Routine Invocation Exception|
+|39000|EXTERNAL ROUTINE INVOCATION EXCEPTION|external\_routine\_invocation\_exception|
+|39001|INVALID SQLSTATE RETURNED|invalid\_sqlstate\_returned|
+|39004|NULL VALUE NOT ALLOWED|null\_value\_not\_allowed|
+|39P01|TRIGGER PROTOCOL VIOLATED|trigger\_protocol\_violated|
+|39P02|SRF PROTOCOL VIOLATED|srf\_protocol\_violated|
+|**Class 3B** — Savepoint Exception|
+|3B000|SAVEPOINT EXCEPTION|savepoint\_exception|
+|3B001|INVALID SAVEPOINT SPECIFICATION|invalid\_savepoint\_specification|
+|**Class 3D** — Invalid Catalog Name|
+|3D000|INVALID CATALOG NAME|invalid\_catalog\_name|
+|**Class 3F** — Invalid Schema Name|
+|3F000|INVALID SCHEMA NAME|invalid\_schema\_name|
+|**Class 40** — Transaction Rollback|
+|40000|TRANSACTION ROLLBACK|transaction\_rollback|
+|40002|TRANSACTION INTEGRITY CONSTRAINT VIOLATION|transaction\_integrity\_constraint\_violation|
+|40001|SERIALIZATION FAILURE|serialization\_failure|
+|40003|STATEMENT COMPLETION UNKNOWN|statement\_completion\_unknown|
+|40P01|DEADLOCK DETECTED|deadlock\_detected|
+|**Class 42** — Syntax Error or Access Rule Violation|
+|42000|SYNTAX ERROR OR ACCESS RULE VIOLATION|syntax\_error\_or\_access\_rule\_violation|
+|42601|SYNTAX ERROR|syntax\_error|
+|42501|INSUFFICIENT PRIVILEGE|insufficient\_privilege|
+|42846|CANNOT COERCE|cannot\_coerce|
+|42803|GROUPING ERROR|grouping\_error|
+|42830|INVALID FOREIGN KEY|invalid\_foreign\_key|
+|42602|INVALID NAME|invalid\_name|
+|42622|NAME TOO LONG|name\_too\_long|
+|42939|RESERVED NAME|reserved\_name|
+|42804|DATATYPE MISMATCH|datatype\_mismatch|
+|42P18|INDETERMINATE DATATYPE|indeterminate\_datatype|
+|42809|WRONG OBJECT TYPE|wrong\_object\_type|
+|42703|UNDEFINED COLUMN|undefined\_column|
+|42883|UNDEFINED FUNCTION|undefined\_function|
+|42P01|UNDEFINED TABLE|undefined\_table|
+|42P02|UNDEFINED PARAMETER|undefined\_parameter|
+|42704|UNDEFINED OBJECT|undefined\_object|
+|42701|DUPLICATE COLUMN|duplicate\_column|
+|42P03|DUPLICATE CURSOR|duplicate\_cursor|
+|42P04|DUPLICATE DATABASE|duplicate\_database|
+|42723|DUPLICATE FUNCTION|duplicate\_function|
+|42P05|DUPLICATE PREPARED STATEMENT|duplicate\_prepared\_statement|
+|42P06|DUPLICATE SCHEMA|duplicate\_schema|
+|42P07|DUPLICATE TABLE|duplicate\_table|
+|42712|DUPLICATE ALIAS|duplicate\_alias|
+|42710|DUPLICATE OBJECT|duplicate\_object|
+|42702|AMBIGUOUS COLUMN|ambiguous\_column|
+|42725|AMBIGUOUS FUNCTION|ambiguous\_function|
+|42P08|AMBIGUOUS PARAMETER|ambiguous\_parameter|
+|42P09|AMBIGUOUS ALIAS|ambiguous\_alias|
+|42P10|INVALID COLUMN REFERENCE|invalid\_column\_reference|
+|42611|INVALID COLUMN DEFINITION|invalid\_column\_definition|
+|42P11|INVALID CURSOR DEFINITION|invalid\_cursor\_definition|
+|42P12|INVALID DATABASE DEFINITION|invalid\_database\_definition|
+|42P13|INVALID FUNCTION DEFINITION|invalid\_function\_definition|
+|42P14|INVALID PREPARED STATEMENT DEFINITION|invalid\_prepared\_statement\_definition|
+|42P15|INVALID SCHEMA DEFINITION|invalid\_schema\_definition|
+|42P16|INVALID TABLE DEFINITION|invalid\_table\_definition|
+|42P17|INVALID OBJECT DEFINITION|invalid\_object\_definition|
+|**Class 44** — WITH CHECK OPTION Violation|
+|44000|WITH CHECK OPTION VIOLATION|with\_check\_option\_violation|
+|**Class 53** — Insufficient Resources|
+|53000|INSUFFICIENT RESOURCES|insufficient\_resources|
+|53100|DISK FULL|disk\_full|
+|53200|OUT OF MEMORY|out\_of\_memory|
+|53300|TOO MANY CONNECTIONS|too\_many\_connections|
+|**Class 54** — Program Limit Exceeded|
+|54000|PROGRAM LIMIT EXCEEDED|program\_limit\_exceeded|
+|54001|STATEMENT TOO COMPLEX|statement\_too\_complex|
+|54011|TOO MANY COLUMNS|too\_many\_columns|
+|54023|TOO MANY ARGUMENTS|too\_many\_arguments|
+|**Class 55** — Object Not In Prerequisite State|
+|55000|OBJECT NOT IN PREREQUISITE STATE|object\_not\_in\_prerequisite\_state|
+|55006|OBJECT IN USE|object\_in\_use|
+|55P02|CANT CHANGE RUNTIME PARAM|cant\_change\_runtime\_param|
+|55P03|LOCK NOT AVAILABLE|lock\_not\_available|
+|**Class 57** — Operator Intervention|
+|57000|OPERATOR INTERVENTION|operator\_intervention|
+|57014|QUERY CANCELED|query\_canceled|
+|57P01|ADMIN SHUTDOWN|admin\_shutdown|
+|57P02|CRASH SHUTDOWN|crash\_shutdown|
+|57P03|CANNOT CONNECT NOW|cannot\_connect\_now|
+|**Class 58** — System Error \(errors external to HAWQ\)|
+|58030|IO ERROR|io\_error|
+|58P01|UNDEFINED FILE|undefined\_file|
+|58P02|DUPLICATE FILE|duplicate\_file|
+|**Class F0** — Configuration File Error|
+|F0000|CONFIG FILE ERROR|config\_file\_error|
+|F0001|LOCK FILE EXISTS|lock\_file\_exists|
+|**Class P0** — PL/pgSQL Error|
+|P0000|PLPGSQL ERROR|plpgsql\_error|
+|P0001|RAISE EXCEPTION|raise\_exception|
+|P0002|NO DATA FOUND|no\_data\_found|
+|P0003|TOO MANY ROWS|too\_many\_rows|
+|**Class XX** — Internal Error|
+|XX000|INTERNAL ERROR|internal\_error|
+|XX001|DATA CORRUPTED|data\_corrupted|
+|XX002|INDEX CORRUPTED|index\_corrupted|

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/admin/setuphawqopenv.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/admin/setuphawqopenv.html.md.erb b/markdown/admin/setuphawqopenv.html.md.erb
new file mode 100644
index 0000000..9d9b731
--- /dev/null
+++ b/markdown/admin/setuphawqopenv.html.md.erb
@@ -0,0 +1,81 @@
+---
+title: Introducing the HAWQ Operating Environment
+---
+
+Before invoking operations on a HAWQ cluster, you must set up your HAWQ environment. This setup is required for both administrative and non-administrative HAWQ users.
+
+## <a id="hawq_setupenv"></a>Procedure: Setting Up Your HAWQ Operating Environment
+
+HAWQ installs a script that you can use to set up your HAWQ cluster environment. The `greenplum_path.sh` script, located in your HAWQ root install directory, sets `$PATH` and other environment variables so that HAWQ files can be located.  Most importantly, `greenplum_path.sh` sets the `$GPHOME` environment variable to point to the root directory of the HAWQ installation.  If you installed HAWQ from a product distribution, the HAWQ root is typically `/usr/local/hawq`. If you built HAWQ from source or downloaded the tarball, you selected the installation root directory yourself.
+
+Perform the following steps to set up your HAWQ operating environment:
+
+1. Log in to the HAWQ node as the desired user.  For example:
+
+    ``` shell
+    $ ssh gpadmin@<master>
+    gpadmin@master$ 
+    ```
+
+    Or, if you are already logged in to the HAWQ node as a different user, switch to the desired user. For example:
+    
+    ``` shell
+    gpadmin@master$ su - <hawq-user>
+    Password:
+    hawq-user@master$ 
+    ```
+
+2. Set up your HAWQ operating environment by sourcing the `greenplum_path.sh` file:
+
+    ``` shell
+    hawq-node$ source /usr/local/hawq/greenplum_path.sh
+    ```
+
+    If you built HAWQ from source or downloaded the tarball, substitute the path to the installed or extracted `greenplum_path.sh` file \(for example `/opt/hawq-2.1.0.0/greenplum_path.sh`\).
+
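+    Optionally, verify that the environment has been set by confirming that `$GPHOME` is defined and that the HAWQ utilities are now on your `PATH` \(the output below assumes the default install root\):
+
+    ``` shell
+    hawq-node$ echo $GPHOME
+    /usr/local/hawq
+    hawq-node$ which hawq
+    /usr/local/hawq/bin/hawq
+    ```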
+
+3. Edit your `.bash_profile` or other shell initialization file to source `greenplum_path.sh` on login.  For example, add:
+
+    ``` shell
+    source /usr/local/hawq/greenplum_path.sh
+    ```
+    
+4. Set HAWQ-specific environment variables relevant to your deployment in your shell initialization file. These include `PGAPPNAME`, `PGDATABASE`, `PGHOST`, `PGPORT`, and `PGUSER`. For example:
+
+    1.  If you use a custom HAWQ master port number, make this port number the default by setting the `PGPORT` environment variable in your shell initialization file; add:
+
+        ``` shell
+        export PGPORT=10432
+        ```
+    
+        Setting `PGPORT` simplifies `psql` invocation by providing a default for the `-p` (port) option.
+
+    1.  If you will routinely operate on a specific database, make this database the default by setting the `PGDATABASE` environment variable in your shell initialization file:
+
+        ``` shell
+        export PGDATABASE=<database-name>
+        ```
+    
+        Setting `PGDATABASE` simplifies `psql` invocation by providing a default for the `-d` (database) option.
+
+    You may choose to set additional HAWQ deployment-specific environment variables. See [Environment Variables](../reference/HAWQEnvironmentVariables.html#optionalenvironmentvariables).
+
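+Taken together, the HAWQ-related lines in your shell initialization file might look similar to the following \(the bracketed values are placeholders for your deployment\):
+
+``` shell
+source /usr/local/hawq/greenplum_path.sh
+export PGHOST=<master>
+export PGPORT=10432
+export PGDATABASE=<database-name>
+export PGUSER=<hawq-user>
+```
+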
+## <a id="hawq_env_files_and_dirs"></a>HAWQ Files and Directories
+
+The following table identifies some files and directories of interest in a default HAWQ installation.  Unless otherwise specified, the table entries are relative to `$GPHOME`.
+
+|File/Directory                   | Contents           |
+|---------------------------------|---------------------|
+| $HOME/hawqAdminLogs/            | Default HAWQ management utility log file directory |
+| greenplum_path.sh      | HAWQ environment set-up script |
+| bin/      | HAWQ admin, client, database, and administration utilities |
+| etc/              | HAWQ configuration files, including `hawq-site.xml` |
+| include/          | HDFS, PostgreSQL, `libpq` header files  |
+| lib/              | HAWQ libraries |
+| lib/postgresql/   | PostgreSQL shared libraries and JAR files |
+| share/postgresql/ | PostgreSQL and procedural languages samples and scripts    |
+| /data/hawq/[master&#124;segment]/ | Default location of HAWQ master and segment data directories |
+| /data/hawq/[master&#124;segment]/pg_log/ | Default location of HAWQ master and segment log file directories |
+| /etc/pxf/conf/               | PXF service and configuration files |
+| /usr/lib/pxf/                | PXF service and plug-in shared libraries  |
+| /usr/hdp/current/            | HDP runtime and configuration files |


[47/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/book/master_middleman/source/subnavs/apache-hawq-nav.erb
----------------------------------------------------------------------
diff --git a/book/master_middleman/source/subnavs/apache-hawq-nav.erb b/book/master_middleman/source/subnavs/apache-hawq-nav.erb
new file mode 100644
index 0000000..50ee5bc
--- /dev/null
+++ b/book/master_middleman/source/subnavs/apache-hawq-nav.erb
@@ -0,0 +1,894 @@
+<div id="sub-nav" class="js-sidenav nav-container" role="navigation">
+  <a class="sidenav-title" data-behavior="SubMenuMobile">  Doc Index</a>
+  <div class="nav-content">
+    <ul>
+      <li>
+        Apache HAWQ (incubating)
+      </li>
+      <li><a href="/docs/userguide/2.1.0.0-incubating/requirements/system-requirements.html">System Requirements</a>
+      </li>
+<!--      <li class="has_submenu">
+        <span>
+          Installing Apache HAWQ
+        </span>
+        <ul>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/install/select-hosts.html">Selecting HAWQ Host Machines</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/install/install-ambari.html">Installing HAWQ Using Ambari</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/install/install-cli.html">Installing HAWQ from the Command Line (Optional)</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/install/aws-config.html">Amazon EC2 Configuration</a>
+          </li>
+          <li>
+            <a href="/hdb/install/install_package_extensions.html">Installing Procedural Languages and Package Extensions for HAWQ</a>
+          </li>
+          <li>
+            <a href="/hdb/install/install_pgcrypto.html">Installing Cryptographic Functions for PostgreSQL (pgcrypto)</a></li>
+          <li>
+            <a href="/hdb/install/install_pljava.html">Installing PL/Java</a>
+          </li>
+          <li>
+            <a href="/hdb/install/install_plr.html">Installing PL/R</a>
+          </li> 
+        </ul>
+      </li>-->
+      <li class="has_submenu">
+        <span>
+          HAWQ System Overview
+        </span>
+        <ul>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/overview/HAWQOverview.html">What is HAWQ?</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/overview/HAWQArchitecture.html">HAWQ Architecture</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/overview/TableDistributionStorage.html">Table Distribution and Storage</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/overview/ElasticSegments.html">Elastic Query Execution Runtime</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/overview/ResourceManagement.html">Resource Management</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/overview/HDFSCatalogCache.html">HDFS Catalog Cache</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/overview/ManagementTools.html">Management Tools</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/overview/RedundancyFailover.html">High Availability, Redundancy and Fault Tolerance</a>
+          </li>
+        </ul>
+      </li>
+      <li class="has_submenu">
+        <span>
+          Running a HAWQ Cluster
+        </span>
+        <ul>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/RunningHAWQ.html">Overview</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/setuphawqopenv.html">Introducing the HAWQ Operating Environment</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/ambari-admin.html">Managing HAWQ Using Ambari</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/ambari-rest-api.html">Using the Ambari REST API</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/startstop.html">Starting and Stopping HAWQ</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/ClusterExpansion.html">Expanding a Cluster</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/ClusterShrink.html">Removing a Node</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/BackingUpandRestoringHAWQDatabases.html">Backing Up and Restoring HAWQ</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/HighAvailability.html">High Availability in HAWQ</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/MasterMirroring.html">Master Mirroring</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html">HAWQ Filespaces and High Availability Enabled HDFS</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/FaultTolerance.html">Understanding the Fault Tolerance Service</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/RecommendedMonitoringTasks.html">Recommended Monitoring and Maintenance Tasks</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/maintain.html">Routine System Maintenance Tasks</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/monitor.html">Monitoring a HAWQ System</a>
+          </li>
+        </ul>
+      </li>
+      <li class="has_submenu">
+        <span>
+          Managing Resources
+        </span>
+        <ul>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/resourcemgmt/HAWQResourceManagement.html">How HAWQ Manages Resources</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/resourcemgmt/best-practices.html">Best Practices for Configuring Resource Management</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/resourcemgmt/ConfigureResourceManagement.html">Configuring Resource Management</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/resourcemgmt/YARNIntegration.html">Integrating YARN with HAWQ</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/resourcemgmt/ResourceQueues.html">Working with Hierarchical Resource Queues</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/resourcemgmt/ResourceManagerStatus.html">Analyzing Resource Manager Status</a>
+          </li>
+        </ul>
+      </li>
+      <li class="has_submenu">
+        <span>
+          Managing Client Access
+        </span>
+        <ul>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/clientaccess/client_auth.html">Configuring Client Authentication</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/clientaccess/ldap.html">Using LDAP Authentication with TLS/SSL</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/clientaccess/kerberos.html">Using Kerberos Authentication</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/clientaccess/disable-kerberos.html">Disabling Kerberos Security</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/clientaccess/roles_privs.html">Managing Roles and Privileges</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/clientaccess/g-establishing-a-database-session.html">Establishing a Database Session</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/clientaccess/g-supported-client-applications.html">Supported Client Applications</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/clientaccess/g-hawq-database-client-applications.html">HAWQ Client Applications</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/clientaccess/g-connecting-with-psql.html">Connecting with psql</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/clientaccess/g-database-application-interfaces.html">HAWQ Database Drivers and APIs</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/clientaccess/g-troubleshooting-connection-problems.html">Troubleshooting Connection Problems</a>
+          </li>
+        </ul>
+      </li>
+      <li class="has_submenu">
+        <span>
+          Defining Database Objects
+        </span>
+        <ul>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/ddl/ddl.html">Overview</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/ddl/ddl-database.html">Creating and Managing Databases</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/ddl/ddl-tablespace.html">Creating and Managing Tablespaces</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/ddl/ddl-schema.html">Creating and Managing Schemas</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/ddl/ddl-table.html">Creating and Managing Tables</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/ddl/ddl-storage.html">Choosing the Table Storage Model</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/ddl/ddl-partition.html">Partitioning Large Tables</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/ddl/ddl-view.html">Creating and Managing Views</a>
+          </li>
+        </ul>
+      </li>
+      <li class="has_submenu">
+        <span>
+          Using Procedural Languages
+        </span>
+        <ul>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/plext/UsingProceduralLanguages.html">Using Languages in HAWQ</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/plext/builtin_langs.html">Using HAWQ Built-In Languages</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/plext/using_pljava.html">Using PL/Java</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/plext/using_plpgsql.html">Using PL/pgSQL</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/plext/using_plpython.html">Using PL/Python</a>
+          </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/plext/using_plr.html">Using PL/R</a>
+          </li>
+        </ul>
+      </li>
+      <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/dml.html">Managing Data with HAWQ</a>
+        <ul>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/BasicDataOperations.html">Basic Data Operations</a></li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/about_statistics.html">About Database Statistics</a></li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/ConcurrencyControl.html">Concurrency Control</a></li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/Transactions.html">Working with Transactions</a></li>
+          <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-loading-and-unloading-data.html">Loading and Unloading Data</a>
+            <ul>
+              <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-working-with-file-based-ext-tables.html">Working with File-Based External Tables</a>
+                <ul>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-external-tables.html">Accessing File-Based External Tables</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-gpfdist-protocol.html">gpfdist Protocol</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-gpfdists-protocol.html">gpfdists Protocol</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-handling-errors-ext-table-data.html">Handling Errors in External Table Data</a></li>
+                </ul>
+              </li>
+              <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-using-the-hawq-file-server--gpfdist-.html">Using the HAWQ File Server (gpfdist)</a>
+                <ul>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-about-gpfdist-setup-and-performance.html">About gpfdist Setup and Performance</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-controlling-segment-parallelism.html">Controlling Segment Parallelism</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-installing-gpfdist.html">Installing gpfdist</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-starting-and-stopping-gpfdist.html">Starting and Stopping gpfdist</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-troubleshooting-gpfdist.html">Troubleshooting gpfdist</a></li>
+                </ul>
+              </li>
+              <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-creating-and-using-web-external-tables.html">Creating and Using Web External Tables</a>
+                <ul>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-command-based-web-external-tables.html">Command-based Web External Tables</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-url-based-web-external-tables.html">URL-based Web External Tables</a></li>
+                </ul>
+              </li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-loading-data-using-an-external-table.html">Loading Data Using an External Table</a></li>
+              <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-loading-and-writing-non-hdfs-custom-data.html">Loading and Writing Non-HDFS Custom Data</a>
+                <ul>
+                  <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-using-a-custom-format.html">Using a Custom Format</a>
+                    <ul>
+                      <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-importing-and-exporting-fixed-width-data.html">Importing and Exporting Fixed Width Data</a></li>
+                      <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-examples-read-fixed-width-data.html">Examples - Read Fixed-Width Data</a></li>
+                    </ul>
+                  </li>
+                </ul>
+              </li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/creating-external-tables-examples.html">Creating External Tables - Examples</a>
+              </li>
+              <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-handling-load-errors.html">Handling Load Errors</a>
+                <ul>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-define-an-external-table-with-single-row-error-isolation.html">Define an External Table with Single Row Error Isolation</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-create-an-error-table-and-declare-a-reject-limit.html">Capture Row Formatting Errors and Declare a Reject Limit </a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-identifying-invalid-csv-files-in-error-table-data.html">Identifying Invalid CSV Files in Error Table Data</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-moving-data-between-tables.html">Moving Data between Tables</a></li>
+                </ul>
+              </li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-register_files.html">Registering Files into HAWQ Internal Tables</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-loading-data-with-hawqload.html">Loading Data with hawq load</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-loading-data-with-copy.html">Loading Data with COPY</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-running-copy-in-single-row-error-isolation-mode.html">Running COPY in Single Row Error Isolation Mode</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-optimizing-data-load-and-query-performance.html">Optimizing Data Load and Query Performance</a></li>
+              <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-unloading-data-from-hawq-database.html">Unloading Data from HAWQ</a>
+                <ul>
+                  <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-defining-a-file-based-writable-external-table.html">Defining a File-Based Writable External Table</a>
+                    <ul>
+                      <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-example-hawq-file-server-gpfdist.html">Example - HAWQ file server (gpfdist)</a></li>
+                    </ul>
+                  </li>
+                  <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-defining-a-command-based-writable-external-web-table.html">Defining a Command-Based Writable External Web Table</a>
+                    <ul>
+                      <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-disabling-execute-for-web-or-writable-external-tables.html">Disabling EXECUTE for Web or Writable External Tables</a></li>
+                    </ul>
+                  </li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-unloading-data-using-a-writable-external-table.html">Unloading Data Using a Writable External Table</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-unloading-data-using-copy.html">Unloading Data Using COPY</a></li>
+                </ul>
+              </li>
+              <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-transforming-xml-data.html">Transforming XML Data</a>
+                <ul>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-determine-the-transformation-schema.html">Determine the Transformation Schema</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-write-a-transform.html">Write a Transform</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-write-the-gpfdist-configuration.html">Write the gpfdist Configuration</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-load-the-data.html">Load the Data</a></li>
+                  <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-transfer-and-store-the-data.html">Transfer and Store the Data</a>
+                    <ul>
+                      <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-transforming-with-gpload.html">Transforming with GPLOAD</a></li>
+                      <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-transforming-with-insert-into-select-from.html">Transforming with INSERT INTO SELECT FROM</a></li>
+                      <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-configuration-file-format.html">Configuration File Format</a></li>
+                    </ul>
+                  </li>
+                  <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-xml-transformation-examples.html">XML Transformation Examples</a>
+                    <ul>
+                      <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-example-1-dblp-database-publications-in-demo-directory.html">Example using DBLP Database Publications (In demo Directory)</a></li>
+                      <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-example-irs-mef-xml-files-in-demo-directory.html">Example using IRS MeF XML Files (In demo Directory)</a></li>
+                      <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-example-witsml-files-in-demo-directory.html">Example using WITSML™ Files (In demo Directory)</a></li>
+                    </ul>
+                  </li>
+                </ul>
+              </li>
+              <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-formatting-data-files.html">Formatting Data Files</a>
+                <ul>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-formatting-rows.html">Formatting Rows</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-formatting-columns.html">Formatting Columns</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-representing-null-values.html">Representing NULL Values</a></li>
+                  <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-escaping.html">Escaping</a>
+                    <ul>
+                      <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-escaping-in-text-formatted-files.html">Escaping in Text Formatted Files</a></li>
+                      <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-escaping-in-csv-formatted-files.html">Escaping in CSV Formatted Files</a></li>
+                    </ul>
+                  </li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/load/g-character-encoding.html">Character Encoding</a></li>
+                </ul>
+              </li>
+            </ul>
+          </li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/datamgmt/HAWQInputFormatforMapReduce.html">HAWQ InputFormat for MapReduce</a></li>
+        </ul>
+      </li>
+      <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/pxf/HawqExtensionFrameworkPXF.html">Using PXF with Unmanaged Data</a>
+            <ul>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/pxf/InstallPXFPlugins.html">Installing PXF Plugins</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/pxf/ConfigurePXF.html">Configuring PXF</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/pxf/HDFSFileDataPXF.html">Accessing HDFS File Data</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/pxf/HivePXF.html">Accessing Hive Data</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/pxf/HBasePXF.html">Accessing HBase Data</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/pxf/JsonPXF.html">Accessing JSON Data</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/pxf/ReadWritePXF.html">Using Profiles to Read and Write Data</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/pxf/PXFExternalTableandAPIReference.html">PXF External Tables and API</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/pxf/TroubleshootingPXF.html">Troubleshooting PXF</a></li>
+            </ul>
+      </li>
+      <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/query/query.html">Querying Data</a>
+        <ul>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/query/HAWQQueryProcessing.html">About HAWQ Query Processing</a></li>
+          <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/query/gporca/query-gporca-optimizer.html">About GPORCA</a>
+            <ul>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/query/gporca/query-gporca-overview.html">Overview of GPORCA</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/query/gporca/query-gporca-features.html">GPORCA Features and Enhancements</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/query/gporca/query-gporca-enable.html">Enabling GPORCA</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/query/gporca/query-gporca-notes.html">Considerations when Using GPORCA</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/query/gporca/query-gporca-fallback.html">Determining The Query Optimizer In Use</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/query/gporca/query-gporca-changed.html">Changed Behavior with GPORCA</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/query/gporca/query-gporca-limitations.html">GPORCA Limitations</a></li>
+            </ul>
+          </li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/query/defining-queries.html">Defining Queries</a></li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/query/functions-operators.html">Using Functions and Operators</a></li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/query/query-performance.html">Query Performance</a></li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/query/query-profiling.html">Query Profiling</a></li>
+        </ul>
+      </li>
+      <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/HAWQBestPracticesOverview.html">Best Practices</a>
+        <ul>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/operating_hawq_bestpractices.html">Operating HAWQ</a></li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/secure_bestpractices.html">Securing HAWQ</a></li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/managing_resources_bestpractices.html">Managing Resources</a></li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/managing_data_bestpractices.html">Managing Data</a></li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/querying_data_bestpractices.html">Querying Data</a></li>
+        </ul>
+      </li>
+      <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/troubleshooting/Troubleshooting.html">Troubleshooting</a>
+        <ul>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/troubleshooting/Troubleshooting.html#topic_dwd_rnx_15">Query Performance Issues</a></li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/troubleshooting/Troubleshooting.html#topic_vm5_znx_15">Rejection of Query Resource Requests</a></li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/troubleshooting/Troubleshooting.html#topic_qq4_rkl_wv">Queries Cancelled Due to High VMEM Usage</a></li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/troubleshooting/Troubleshooting.html#topic_hlj_zxx_15">Segments Do Not Appear in gp_segment_configuration</a></li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/troubleshooting/Troubleshooting.html#topic_mdz_q2y_15">Handling Segment Resource Fragmentation</a></li>
+        </ul>
+      </li>
+      <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/reference/hawq-reference.html">HAWQ Reference</a>
+        <ul>
+          <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/reference/HAWQSiteConfig.html">Server Configuration Parameter Reference</a>
+            <ul>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/guc_config.html">About Server Configuration Parameters</a></li>
+              <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/guc_category-list.html">Configuration Parameter Categories</a>
+                <ul>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/guc_category-list.html#topic_hfd_1tl_zp">Append-Only Table Parameters</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/guc_category-list.html#topic39">Client Connection Default Parameters</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/guc_category-list.html#topic12">Connection and Authentication Parameters</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/guc_category-list.html#topic47">Database and Tablespace/Filespace Parameters</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/guc_category-list.html#topic29">Error Reporting and Logging Parameters</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/guc_category-list.html#topic45">External Table Parameters</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/guc_category-list.html#topic57">GPORCA Parameters</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/guc_category-list.html#topic49">HAWQ Array Configuration Parameters</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/guc_category-list.html#topic_pxfparam">HAWQ Extension Framework (PXF) Parameters</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/guc_category-list.html#topic56">HAWQ PL/Java Extension Parameters</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/guc_category-list.html#hawq_resource_management">HAWQ Resource Management Parameters</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/guc_category-list.html#topic43">Lock Management Parameters</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/guc_category-list.html#topic48">Past PostgreSQL Version Compatibility Parameters</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/guc_category-list.html#topic21">Query Tuning Parameters</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/guc_category-list.html#statistics_collection">Statistics Collection Parameters</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/guc_category-list.html#topic15">System Resource Consumption Parameters</a></li>
+                </ul>
+              </li>
+              <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html">Configuration Parameters</a>
+                <ul>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#add_missing_from">add_missing_from</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#application_name">application_name</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#array_nulls">array_nulls</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#authentication_timeout">authentication_timeout</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#backslash_quote">backslash_quote</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#block_size">block_size</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#bonjour_name">bonjour_name</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#check_function_bodies">check_function_bodies</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#client_encoding">client_encoding</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#client_min_messages">client_min_messages</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#cpu_index_tuple_cost">cpu_index_tuple_cost</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#cpu_operator_cost">cpu_operator_cost</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#cpu_tuple_cost">cpu_tuple_cost</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#cursor_tuple_fraction">cursor_tuple_fraction</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#custom_variable_classes">custom_variable_classes</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#DateStyle">DateStyle</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#db_user_namespace">db_user_namespace</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#deadlock_timeout">deadlock_timeout</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#debug_assertions">debug_assertions</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#debug_pretty_print">debug_pretty_print</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#debug_print_parse">debug_print_parse</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#debug_print_plan">debug_print_plan</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#debug_print_prelim_plan">debug_print_prelim_plan</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#debug_print_rewritten">debug_print_rewritten</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#debug_print_slice_table">debug_print_slice_table</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#topic_fqj_4fd_kv">default_hash_table_bucket_number</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#default_statistics_target">default_statistics_target</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#default_tablespace">default_tablespace</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#default_transaction_isolation">default_transaction_isolation</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#default_transaction_read_only">default_transaction_read_only</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#dfs_url">dfs_url</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#dynamic_library_path">dynamic_library_path</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#effective_cache_size">effective_cache_size</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#enable_bitmapscan">enable_bitmapscan</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#enable_groupagg">enable_groupagg</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#enable_hashagg">enable_hashagg</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#enable_hashjoin">enable_hashjoin</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#enable_indexscan">enable_indexscan</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#enable_mergejoin">enable_mergejoin</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#enable_nestloop">enable_nestloop</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#enable_seqscan">enable_seqscan</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#enable_sort">enable_sort</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#enable_tidscan">enable_tidscan</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#escape_string_warning">escape_string_warning</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#explain_pretty_print">explain_pretty_print</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#extra_float_digits">extra_float_digits</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#from_collapse_limit">from_collapse_limit</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_adjust_selectivity_for_outerjoins">gp_adjust_selectivity_for_outerjoins</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_analyze_relative_error">gp_analyze_relative_error</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_autostats_mode">gp_autostats_mode</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#topic_imj_zhf_gw">gp_autostats_on_change_threshhold</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_backup_directIO">gp_backup_directIO</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_backup_directIO_read_chunk_mb">gp_backup_directIO_read_chunk_mb</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_cached_segworkers_threshold">gp_cached_segworkers_threshold</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_command_count">gp_command_count</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_connections_per_thread">gp_connections_per_thread</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_debug_linger">gp_debug_linger</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_dynamic_partition_pruning">gp_dynamic_partition_pruning</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_enable_agg_distinct">gp_enable_agg_distinct</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_enable_agg_distinct_pruning">gp_enable_agg_distinct_pruning</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_enable_direct_dispatch">gp_enable_direct_dispatch</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_enable_fallback_plan">gp_enable_fallback_plan</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_enable_fast_sri">gp_enable_fast_sri</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_enable_groupext_distinct_gather">gp_enable_groupext_distinct_gather</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_enable_groupext_distinct_pruning">gp_enable_groupext_distinct_pruning</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_enable_multiphase_agg">gp_enable_multiphase_agg</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_enable_predicate_propagation">gp_enable_predicate_propagation</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_enable_preunique">gp_enable_preunique</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_enable_sequential_window_plans">gp_enable_sequential_window_plans</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_enable_sort_distinct">gp_enable_sort_distinct</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_enable_sort_limit">gp_enable_sort_limit</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_external_enable_exec">gp_external_enable_exec</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_external_grant_privileges">gp_external_grant_privileges</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_external_max_segs">gp_external_max_segs</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_filerep_tcp_keepalives_count">gp_filerep_tcp_keepalives_count</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_filerep_tcp_keepalives_idle">gp_filerep_tcp_keepalives_idle</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_filerep_tcp_keepalives_interval">gp_filerep_tcp_keepalives_interval</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_hashjoin_tuples_per_bucket">gp_hashjoin_tuples_per_bucket</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_idf_deduplicate">gp_idf_deduplicate</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_fc_method">gp_interconnect_fc_method</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_hash_multiplier">gp_interconnect_hash_multiplier</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_queue_depth">gp_interconnect_queue_depth</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_setup_timeout">gp_interconnect_setup_timeout</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_snd_queue_depth">gp_interconnect_snd_queue_depth</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_type">gp_interconnect_type</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_log_format">gp_log_format</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_max_csv_line_length">gp_max_csv_line_length</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_max_databases">gp_max_databases</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_max_filespaces">gp_max_filespaces</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_max_packet_size">gp_max_packet_size</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_max_plan_size">gp_max_plan_size</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_max_tablespaces">gp_max_tablespaces</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_motion_cost_per_row">gp_motion_cost_per_row</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_reject_percent_threshold">gp_reject_percent_threshold</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_reraise_signal">gp_reraise_signal</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_role">gp_role</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_safefswritesize">gp_safefswritesize</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_segment_connect_timeout">gp_segment_connect_timeout</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_segments_for_planner">gp_segments_for_planner</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_session_id">gp_session_id</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_set_proc_affinity">gp_set_proc_affinity</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_set_read_only">gp_set_read_only</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_statistics_pullup_from_child_partition">gp_statistics_pullup_from_child_partition</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_statistics_use_fkeys">gp_statistics_use_fkeys</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_vmem_idle_resource_timeout">gp_vmem_idle_resource_timeout</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_vmem_protect_segworker_cache_limit">gp_vmem_protect_segworker_cache_limit</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_workfile_checksumming">gp_workfile_checksumming</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_workfile_compress_algorithm">gp_workfile_compress_algorithm</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_workfile_limit_files_per_query">gp_workfile_limit_files_per_query</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_workfile_limit_per_query">gp_workfile_limit_per_query</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_workfile_limit_per_segment">gp_workfile_limit_per_segment</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_dfs_url">hawq_dfs_url</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_global_rm_type">hawq_global_rm_type</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_master_address_host">hawq_master_address_host</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_master_address_port">hawq_master_address_port</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_master_directory">hawq_master_directory</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_master_temp_directory">hawq_master_temp_directory</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_re_memory_overcommit_max">hawq_re_memory_overcommit_max</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_cluster_report">hawq_rm_cluster_report_period</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_force_alterqueue_cancel_queued_request">hawq_rm_force_alterqueue_cancel_queued_request</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_master_port">hawq_rm_master_port</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_memory_limit_perseg">hawq_rm_memory_limit_perseg</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_min_resource_perseg">hawq_rm_min_resource_perseg</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_nresqueue_limit">hawq_rm_nresqueue_limit</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_nslice_perseg_limit">hawq_rm_nslice_perseg_limit</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_nvcore_limit_perseg">hawq_rm_nvcore_limit_perseg</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_nvseg_perquery_limit">hawq_rm_nvseg_perquery_limit</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_nvseg_perquery_perseg_limit">hawq_rm_nvseg_perquery_perseg_limit</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_nvseg_variance_amon_seg_limit">hawq_rm_nvseg_variance_amon_seg_limit</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_rejectrequest_nseg_limit">hawq_rm_rejectrequest_nseg_limit</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_resource_idle_timeout">hawq_rm_resource_idle_timeout</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_return_percent_on_overcommit">hawq_rm_return_percent_on_overcommit</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_segment_heartbeat_interval">hawq_rm_segment_heartbeat_interval</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_segment_port">hawq_rm_segment_port</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_stmt_nvseg">hawq_rm_stmt_nvseg</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_stmt_vseg_memory">hawq_rm_stmt_vseg_memory</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_tolerate_nseg_limit">hawq_rm_tolerate_nseg_limit</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_yarn_address">hawq_rm_yarn_address</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_yarn_app_name">hawq_rm_yarn_app_name</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_yarn_queue_name">hawq_rm_yarn_queue_name</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_rm_yarn_scheduler_address">hawq_rm_yarn_scheduler_address</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_segment_address_port">hawq_segment_address_port</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_segment_directory">hawq_segment_directory</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#hawq_segment_temp_directory">hawq_segment_temp_directory</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#integer_datetimes">integer_datetimes</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#IntervalStyle">IntervalStyle</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#join_collapse_limit">join_collapse_limit</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#krb_caseins_users">krb_caseins_users</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#krb_server_keyfile">krb_server_keyfile</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#krb_srvname">krb_srvname</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#lc_collate">lc_collate</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#lc_ctype">lc_ctype</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#lc_messages">lc_messages</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#lc_monetary">lc_monetary</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#lc_numeric">lc_numeric</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#lc_time">lc_time</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#listen_addresses">listen_addresses</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#local_preload_libraries">local_preload_libraries</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_autostats">log_autostats</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_connections">log_connections</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_disconnections">log_disconnections</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_dispatch_stats">log_dispatch_stats</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_duration">log_duration</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_error_verbosity">log_error_verbosity</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_executor_stats">log_executor_stats</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_hostname">log_hostname</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_min_duration_statement">log_min_duration_statement</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_min_error_statement">log_min_error_statement</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_min_messages">log_min_messages</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_parser_stats">log_parser_stats</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_planner_stats">log_planner_stats</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_rotation_age">log_rotation_age</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_rotation_size">log_rotation_size</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_statement">log_statement</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_statement_stats">log_statement_stats</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_timezone">log_timezone</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#log_truncate_on_rotation">log_truncate_on_rotation</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#max_appendonly_tables">max_appendonly_tables</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#max_connections">max_connections</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#max_files_per_process">max_files_per_process</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#max_fsm_pages">max_fsm_pages</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#max_fsm_relations">max_fsm_relations</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#max_function_args">max_function_args</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#max_identifier_length">max_identifier_length</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#max_index_keys">max_index_keys</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#max_locks_per_transaction">max_locks_per_transaction</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#max_prepared_transactions">max_prepared_transactions</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#max_stack_depth">max_stack_depth</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#optimizer">optimizer</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#optimizer_analyze_root_partition">optimizer_analyze_root_partition</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#optimizer_minidump">optimizer_minidump</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#optimizer_parts_to_force_sort_on_insert">optimizer_parts_to_force_sort_on_insert</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#optimizer_prefer_scalar_dqa_multistage_agg">optimizer_prefer_scalar_dqa_multistage_agg</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#password_encryption">password_encryption</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#pgstat_track_activity_query_size">pgstat_track_activity_query_size</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#pljava_classpath">pljava_classpath</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#pljava_statement_cache_size">pljava_statement_cache_size</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#pljava_release_lingering_savepoints">pljava_release_lingering_savepoints</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#pljava_vmoptions">pljava_vmoptions</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#port">port</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#pxf_enable_filter_pushdown">pxf_enable_filter_pushdown</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#pxf_enable_stat_collection">pxf_enable_stat_collection</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#pxf_remote_service_login">pxf_remote_service_login</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#pxf_remote_service_secret">pxf_remote_service_secret</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#pxf_service_address">pxf_service_address</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#pxf_service_port">pxf_service_port</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#pxf_stat_max_fragments">pxf_stat_max_fragments</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#random_page_cost">random_page_cost</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#regex_flavor">regex_flavor</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#runaway_detector_activation_percent">runaway_detector_activation_percent</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#search_path">search_path</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#seg_max_connections">seg_max_connections</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#seq_page_cost">seq_page_cost</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#server_encoding">server_encoding</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#server_version">server_version</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#server_version_num">server_version_num</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#shared_buffers">shared_buffers</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#shared_preload_libraries">shared_preload_libraries</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#ssl">ssl</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#ssl_ciphers">ssl_ciphers</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#standard_conforming_strings">standard_conforming_strings</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#statement_timeout">statement_timeout</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#superuser_reserved_connections">superuser_reserved_connections</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#tcp_keepalives_count">tcp_keepalives_count</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#tcp_keepalives_idle">tcp_keepalives_idle</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#tcp_keepalives_interval">tcp_keepalives_interval</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#temp_buffers">temp_buffers</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#TimeZone">TimeZone</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#timezone_abbreviations">timezone_abbreviations</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#track_activities">track_activities</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#track_counts">track_counts</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#transaction_isolation">transaction_isolation</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#transaction_read_only">transaction_read_only</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#transform_null_equals">transform_null_equals</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#unix_socket_directory">unix_socket_directory</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#unix_socket_group">unix_socket_group</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#unix_socket_permissions">unix_socket_permissions</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#update_process_title">update_process_title</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#vacuum_cost_delay">vacuum_cost_delay</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#vacuum_cost_limit">vacuum_cost_limit</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#vacuum_cost_page_dirty">vacuum_cost_page_dirty</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#vacuum_cost_page_miss">vacuum_cost_page_miss</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#vacuum_freeze_min_age">vacuum_freeze_min_age</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#xid_stop_limit">xid_stop_limit</a></li>
+                </ul>
+              </li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/HAWQSampleSiteConfig.html">Sample hawq-site.xml Configuration File</a></li>
+            </ul>
+          </li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/reference/HDFSConfigurationParameterReference.html">HDFS Configuration Reference</a></li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/reference/HAWQEnvironmentVariables.html">Environment Variables</a></li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/reference/CharacterSetSupportReference.html">Character Set Support Reference</a></li>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/reference/HAWQDataTypes.html">Data Types</a></li>
+          <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/reference/SQLCommandReference.html">SQL Commands</a>
+            <ul>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/ABORT.html">ABORT</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/ALTER-AGGREGATE.html">ALTER AGGREGATE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/ALTER-DATABASE.html">ALTER DATABASE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/ALTER-FUNCTION.html">ALTER FUNCTION</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/ALTER-OPERATOR.html">ALTER OPERATOR</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/ALTER-OPERATOR-CLASS.html">ALTER OPERATOR CLASS</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/ALTER-RESOURCE-QUEUE.html">ALTER RESOURCE QUEUE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/ALTER-ROLE.html">ALTER ROLE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/ALTER-TABLE.html">ALTER TABLE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/ALTER-TABLESPACE.html">ALTER TABLESPACE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/ALTER-TYPE.html">ALTER TYPE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/ALTER-USER.html">ALTER USER</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/ANALYZE.html">ANALYZE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/BEGIN.html">BEGIN</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CHECKPOINT.html">CHECKPOINT</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CLOSE.html">CLOSE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/COMMIT.html">COMMIT</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/COPY.html">COPY</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CREATE-AGGREGATE.html">CREATE AGGREGATE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CREATE-DATABASE.html">CREATE DATABASE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CREATE-EXTERNAL-TABLE.html">CREATE EXTERNAL TABLE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CREATE-FUNCTION.html">CREATE FUNCTION</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CREATE-GROUP.html">CREATE GROUP</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CREATE-LANGUAGE.html">CREATE LANGUAGE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CREATE-OPERATOR.html">CREATE OPERATOR</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CREATE-OPERATOR-CLASS.html">CREATE OPERATOR CLASS</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CREATE-RESOURCE-QUEUE.html">CREATE RESOURCE QUEUE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CREATE-ROLE.html">CREATE ROLE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CREATE-SCHEMA.html">CREATE SCHEMA</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CREATE-SEQUENCE.html">CREATE SEQUENCE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CREATE-TABLE.html">CREATE TABLE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CREATE-TABLE-AS.html">CREATE TABLE AS</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CREATE-TABLESPACE.html">CREATE TABLESPACE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CREATE-TYPE.html">CREATE TYPE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CREATE-USER.html">CREATE USER</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/CREATE-VIEW.html">CREATE VIEW</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DEALLOCATE.html">DEALLOCATE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DECLARE.html">DECLARE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-AGGREGATE.html">DROP AGGREGATE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-DATABASE.html">DROP DATABASE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-EXTERNAL-TABLE.html">DROP EXTERNAL TABLE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-FILESPACE.html">DROP FILESPACE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-FUNCTION.html">DROP FUNCTION</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-GROUP.html">DROP GROUP</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-LANGUAGE.html">DROP LANGUAGE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-OPERATOR.html">DROP OPERATOR</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-OPERATOR-CLASS.html">DROP OPERATOR CLASS</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-OWNED.html">DROP OWNED</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-RESOURCE-QUEUE.html">DROP RESOURCE QUEUE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-ROLE.html">DROP ROLE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-SCHEMA.html">DROP SCHEMA</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-SEQUENCE.html">DROP SEQUENCE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-TABLE.html">DROP TABLE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-TABLESPACE.html">DROP TABLESPACE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-TYPE.html">DROP TYPE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-USER.html">DROP USER</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/DROP-VIEW.html">DROP VIEW</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/END.html">END</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/EXECUTE.html">EXECUTE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/EXPLAIN.html">EXPLAIN</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/FETCH.html">FETCH</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/GRANT.html">GRANT</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/INSERT.html">INSERT</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/PREPARE.html">PREPARE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/REASSIGN-OWNED.html">REASSIGN OWNED</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/RELEASE-SAVEPOINT.html">RELEASE SAVEPOINT</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/RESET.html">RESET</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/REVOKE.html">REVOKE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/ROLLBACK.html">ROLLBACK</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/ROLLBACK-TO-SAVEPOINT.html">ROLLBACK TO SAVEPOINT</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/SAVEPOINT.html">SAVEPOINT</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/SELECT.html">SELECT</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/SELECT-INTO.html">SELECT INTO</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/SET.html">SET</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/SET-ROLE.html">SET ROLE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/SET-SESSION-AUTHORIZATION.html">SET SESSION AUTHORIZATION</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/SHOW.html">SHOW</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/TRUNCATE.html">TRUNCATE</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/sql/VACUUM.html">VACUUM</a></li>
+            </ul>
+          </li>
+          <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/catalog_ref.html">System Catalog Reference</a>
+            <ul>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/catalog_ref-tables.html">System Tables</a></li>
+              <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/catalog_ref-views.html">System Views</a></li>
+              <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/catalog_ref-html.html">System Catalogs Definitions</a>
+                <ul>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/gp_configuration_history.html">gp_configuration_history</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/gp_distribution_policy.html">gp_distribution_policy</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/gp_global_sequence.html">gp_global_sequence</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/gp_master_mirroring.html">gp_master_mirroring</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/gp_persistent_database_node.html">gp_persistent_database_node</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/gp_persistent_filespace_node.html">gp_persistent_filespace_node</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/gp_persistent_relation_node.html">gp_persistent_relation_node</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/gp_persistent_relfile_node.html">gp_persistent_relfile_node</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/gp_persistent_tablespace_node.html">gp_persistent_tablespace_node</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/gp_relfile_node.html">gp_relfile_node</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/gp_segment_configuration.html">gp_segment_configuration</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/gp_version_at_initdb.html">gp_version_at_initdb</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_aggregate.html">pg_aggregate</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_am.html">pg_am</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_amop.html">pg_amop</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_amproc.html">pg_amproc</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_appendonly.html">pg_appendonly</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_attrdef.html">pg_attrdef</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_attribute.html">pg_attribute</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_attribute_encoding.html">pg_attribute_encoding</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_auth_members.html">pg_auth_members</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_authid.html">pg_authid</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_cast.html">pg_cast</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_class.html">pg_class</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_compression.html">pg_compression</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_constraint.html">pg_constraint</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_conversion.html">pg_conversion</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_database.html">pg_database</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_depend.html">pg_depend</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_description.html">pg_description</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_exttable.html">pg_exttable</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_filespace.html">pg_filespace</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_filespace_entry.html">pg_filespace_entry</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_index.html">pg_index</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_inherits.html">pg_inherits</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_language.html">pg_language</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_largeobject.html">pg_largeobject</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_listener.html">pg_listener</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_locks.html">pg_locks</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_namespace.html">pg_namespace</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_opclass.html">pg_opclass</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_operator.html">pg_operator</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_partition.html">pg_partition</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_partition_columns.html">pg_partition_columns</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_partition_encoding.html">pg_partition_encoding</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_partition_rule.html">pg_partition_rule</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_partition_templates.html">pg_partition_templates</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_partitions.html">pg_partitions</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_pltemplate.html">pg_pltemplate</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_proc.html">pg_proc</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_resqueue.html">pg_resqueue</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_resqueue_status.html">pg_resqueue_status</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_rewrite.html">pg_rewrite</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_roles.html">pg_roles</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_shdepend.html">pg_shdepend</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_shdescription.html">pg_shdescription</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_stat_activity.html">pg_stat_activity</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_stat_last_operation.html">pg_stat_last_operation</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/catalog/pg_stat_last_shoperation.html">pg_stat_last_shoperation</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating

<TRUNCATED>


[21/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/client_utilities/createuser.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/client_utilities/createuser.html.md.erb b/markdown/reference/cli/client_utilities/createuser.html.md.erb
new file mode 100644
index 0000000..15c189b
--- /dev/null
+++ b/markdown/reference/cli/client_utilities/createuser.html.md.erb
@@ -0,0 +1,158 @@
+---
+title: createuser
+---
+
+Creates a new database role.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+createuser [<connection_options>] [<role_attribute_options>] [-e | --echo] <role_name>
+
+createuser --help 
+
+createuser --version
+```
+where:
+
+``` pre
+<connection_options> =
+    [-h <host> | --host <host>]
+    [-p <port> | --port <port>]
+    [-U <username> | --username <username>]
+    [-W | --password]
+    
+<role_attribute_options> = 
+    [-c <number> | --connection-limit <number>]
+    [(-D | --no-createdb) | (-d | --createdb)]
+    [(-E | --encrypted) | (-N | --unencrypted)]
+    [(-i | --inherit) | (-I | --no-inherit)]
+    [(-l | --login) | (-L | --no-login)]
+    [-P | --pwprompt]
+    [(-r | --createrole) | (-R | --no-createrole)]
+    [(-s | --superuser) | (-S | --no-superuser)]
+    
+```
+
+## <a id="topic1__section3"></a>Description
+
+`createuser` creates a new HAWQ role. You must be a superuser or have the `CREATEROLE` privilege to create new roles. You must connect to the database as a superuser to create new superusers.
+
+Superusers can bypass all access permission checks within the database, so superuser privileges should not be granted lightly.
+
+`createuser` is a wrapper around the SQL command `CREATE ROLE`.
+
+## <a id="args"></a>Arguments
+
+<dt>**\<role\_name\>**</dt>
+<dd>The name of the role to be created. This name must be different from all existing roles in this HAWQ installation.</dd>
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-e, -\\\-echo  </dt>
+<dd>Echo the commands that `createuser` generates and sends to the server.</dd>
+
+**\<role\_attribute\_options\>**
+
+<dt>-c, -\\\-connection-limit \<number\>  </dt>
+<dd>Set a maximum number of connections for the new role. The default is to set no limit.</dd>
+
+
+<dt>-D, -\\\-no-createdb  </dt>
+<dd>The new role will not be allowed to create databases. This is the default.</dd>
+
+<dt>-d, -\\\-createdb  </dt>
+<dd>The new role will be allowed to create databases.</dd>
+
+
+<dt>-E, -\\\-encrypted  </dt>
+<dd>Encrypts the role's password stored in the database. If not specified, the default password behavior is used.</dd>
+
+<dt>-i, -\\\-inherit  </dt>
+<dd>The new role will automatically inherit privileges of roles it is a member of. This is the default.</dd>
+
+<dt>-I, -\\\-no-inherit  </dt>
+<dd>The new role will not automatically inherit privileges of roles it is a member of.</dd>
+
+<dt>-l, -\\\-login  </dt>
+<dd>The new role will be allowed to log in to HAWQ. This is the default.</dd>
+
+<dt>-L, -\\\-no-login  </dt>
+<dd>The new role will not be allowed to log in (a group-level role).</dd>
+
+<dt>-N, -\\\-unencrypted  </dt>
+<dd>Does not encrypt the role's password stored in the database. If not specified, the default password behavior is used.</dd>
+
+<dt>-P, -\\\-pwprompt  </dt>
+<dd>If given, `createuser` will issue a prompt for the password of the new role. This is not necessary if you do not plan on using password authentication.</dd>
+
+<dt>-r, -\\\-createrole  </dt>
+<dd>The new role will be allowed to create new roles (`CREATEROLE` privilege).</dd>
+
+<dt>-R, -\\\-no-createrole  </dt>
+<dd>The new role will not be allowed to create new roles. This is the default.</dd>
+
+<dt>-s, -\\\-superuser  </dt>
+<dd>The new role will be a superuser.</dd>
+
+<dt>-S, -\\\-no-superuser  </dt>
+<dd>The new role will not be a superuser. This is the default.</dd>
+
+**\<connection\_options\>**
+
+<dt>-h, -\\\-host \<host\> </dt>
+<dd>The host name of the machine on which the HAWQ master database server is running. If not specified, reads from the environment variable `PGHOST` or defaults to localhost.</dd>
+
+<dt>-p, -\\\-port \<port\>  </dt>
+<dd>The TCP port on which the HAWQ master database server is listening for connections. If not specified, reads from the environment variable `PGPORT` or defaults to 5432.</dd>
+
+<dt>-U, -\\\-username \<username\>  </dt>
+<dd>The database role name to connect as. If not specified, reads from the environment variable `PGUSER` or defaults to the current system role name.</dd>
+
+<dt>-w, -\\\-no-password  </dt>
+<dd>Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a `.pgpass` file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.</dd>
+
+<dt>-W, -\\\-password  </dt>
+<dd>Force a password prompt.</dd>
+
+**Other Options**
+
+<dt>-\\\-help  </dt>
+<dd>Displays the online help.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility.</dd>
+
+## <a id="topic1__section6"></a>Examples
+
+Create a role named `joe` using the default options:
+
+``` shell
+$ createuser joe
+Shall the new role be a superuser? (y/n) n
+Shall the new role be allowed to create databases? (y/n) n
+Shall the new role be allowed to create more new roles? (y/n) n
+CREATE ROLE
+```
+
+To create the same role `joe` using connection options, skipping the interactive prompts, and echoing the underlying command:
+
+``` shell
+$ createuser -h masterhost -p 54321 -S -D -R -e joe
+CREATE ROLE joe NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT 
+LOGIN;
+CREATE ROLE
+```
+
+To create the role `joe` as a superuser and assign the password `admin123` immediately:
+
+``` shell
+$ createuser -P -s -e joe
+Enter password for new role: admin123
+Enter it again: admin123
+CREATE ROLE joe PASSWORD 'admin123' SUPERUSER CREATEDB 
+CREATEROLE INHERIT LOGIN;
+CREATE ROLE
+```
+
+In the above example, the new password is not actually echoed when typed; it is shown here for clarity. Note, however, that the password does appear in the echoed command when the `-e` option is used, as illustrated above.
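+
+The connection options fall back to the `PGHOST`, `PGPORT`, and `PGUSER` environment variables described above, which is convenient in scripts. The following is a minimal sketch only (the host name, administrative role, and new role name are illustrative, not part of this reference):
+
+``` shell
+$ export PGHOST=masterhost   # illustrative master host
+$ export PGPORT=5432
+$ export PGUSER=gpadmin      # illustrative administrative role
+$ createuser -S -D -R -L reporting_group   # group-level role: no login, no special privileges, no prompts
+CREATE ROLE
+```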

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/client_utilities/dropdb.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/client_utilities/dropdb.html.md.erb b/markdown/reference/cli/client_utilities/dropdb.html.md.erb
new file mode 100644
index 0000000..13df828
--- /dev/null
+++ b/markdown/reference/cli/client_utilities/dropdb.html.md.erb
@@ -0,0 +1,86 @@
+---
+title: dropdb
+---
+
+Removes a database.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+dropdb [<connection_options>] [-e | --echo] [-i | --interactive] <dbname>
+
+dropdb --help 
+
+dropdb --version
+```
+where:
+
+``` pre
+<connection_options> =
+    [-h <host> | --host <host>]
+    [-p <port> | --port <port>]
+    [-U <username> | --username <username>]
+    [-W | --password]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`dropdb` destroys an existing database. The user who executes this command must be a superuser or the owner of the database being dropped.
+
+`dropdb` is a wrapper around the SQL command `DROP DATABASE`.
+
+## <a id="args"></a>Arguments
+
+<dt>**\<dbname\>** </dt>
+<dd>The name of the database to be removed.</dd>
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-e, -\\\-echo  </dt>
+<dd>Echo the commands that `dropdb` generates and sends to the server.</dd>
+
+<dt>-i, -\\\-interactive  </dt>
+<dd>Issues a verification prompt before doing anything destructive.</dd>
+
+**\<connection_options\>**
+
+<dt>-h, -\\\-host \<host\> </dt>
+<dd>The host name of the machine on which the HAWQ master database server is running. If not specified, reads from the environment variable `PGHOST` or defaults to localhost.</dd>
+
+<dt>-p, -\\\-port \<port\>  </dt>
+<dd>The TCP port on which the HAWQ master database server is listening for connections. If not specified, reads from the environment variable `PGPORT` or defaults to 5432.</dd>
+
+<dt>-U, -\\\-username \<username\>  </dt>
+<dd>The database role name to connect as. If not specified, reads from the environment variable `PGUSER` or defaults to the current system role name.</dd>
+
+<dt>-w, -\\\-no-password  </dt>
+<dd>Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a `.pgpass` file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.</dd>
+
+<dt>-W, -\\\-password  </dt>
+<dd>Force a password prompt.</dd>
+
+**Other Options**
+
+<dt>-\\\-help  </dt>
+<dd>Displays the online help.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility.</dd>
+
+## <a id="topic1__section6"></a>Examples
+
+To destroy the database named `demo` using default connection parameters:
+
+``` shell
+$ dropdb demo
+```
+
+To destroy the database named `demo` using connection options, with verification, and a peek at the underlying command:
+
+``` shell
+$ dropdb -p 54321 -h masterhost -i -e demo
+Database "demo" will be permanently deleted.
+Are you sure? (y/n) y
+DROP DATABASE "demo"
+DROP DATABASE
+```
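+
+As a further sketch (database name illustrative), the `-w` option documented above can be combined with a `~/.pgpass` file for unattended cleanup in batch jobs; the command fails rather than prompting if a password is still required:
+
+``` shell
+$ dropdb -w -e scratch_db
+DROP DATABASE "scratch_db"
+DROP DATABASE
+```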

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/client_utilities/dropuser.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/client_utilities/dropuser.html.md.erb b/markdown/reference/cli/client_utilities/dropuser.html.md.erb
new file mode 100644
index 0000000..9d888ae
--- /dev/null
+++ b/markdown/reference/cli/client_utilities/dropuser.html.md.erb
@@ -0,0 +1,78 @@
+---
+title: dropuser
+---
+
+Removes a database role.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+dropuser [<connection_options>] [-e | --echo] [-i | --interactive] <role_name>
+
+dropuser --help 
+
+dropuser --version
+```
+
+## <a id="topic1__section3"></a>Description
+
+`dropuser` removes an existing role from HAWQ. Only superusers and users with the `CREATEROLE` privilege can remove roles. To remove a superuser role, you must yourself be a superuser.
+
+`dropuser` is a wrapper around the SQL command `DROP ROLE`.
+
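+As a rough sketch of that equivalence (the role name is illustrative), the same effect can be achieved by issuing the SQL directly through `psql`:
+
+``` shell
+$ psql -d template1 -c 'DROP ROLE joe;'
+DROP ROLE
+```
+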
+## <a id="args"></a>Arguments
+
+<dt>**\<role\_name\>**  </dt>
+<dd>The name of the role to be removed. You will be prompted for a name if not specified on the command line.</dd>
+
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-i, -\\\-interactive  </dt>
+<dd>Prompt for confirmation before actually removing the role.</dd>
+
+<dt>-e, -\\\-echo  </dt>
+<dd>Echo the commands that `dropuser` generates and sends to the server.</dd>
+
+**\<connection_options\>**
+
+<dt>-h, -\\\-host \<host\>  </dt>
+<dd>The host name of the machine on which the HAWQ master database server is running. If not specified, reads from the environment variable `PGHOST` or defaults to localhost.</dd>
+
+<dt>-p, -\\\-port \<port\>  </dt>
+<dd>The TCP port on which the HAWQ master database server is listening for connections. If not specified, reads from the environment variable `PGPORT` or defaults to 5432.</dd>
+
+<dt>-U, -\\\-username \<username\>  </dt>
+<dd>The database role name to connect as. If not specified, reads from the environment variable `PGUSER` or defaults to the current system role name.</dd>
+
+<dt>-w, -\\\-no-password  </dt>
+<dd>Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a `.pgpass` file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.</dd>
+
+<dt>-W, -\\\-password  </dt>
+<dd>Force a password prompt.</dd>
+
+
+**Other Options**
+
+<dt>-\\\-help  </dt>
+<dd>Displays the online help.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility.</dd>
+
+## <a id="topic1__section6"></a>Examples
+
+To remove the role `joe` using default connection options:
+
+``` shell
+$ dropuser joe
+```
+
+To remove the role `joe` using connection options, with verification, and a peek at the underlying command:
+
+``` shell
+$ dropuser -p 54321 -h masterhost -i -e joe
+Role "joe" will be permanently removed.
+Are you sure? (y/n) y
+DROP ROLE "joe"
+```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/client_utilities/pg_dump.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/client_utilities/pg_dump.html.md.erb b/markdown/reference/cli/client_utilities/pg_dump.html.md.erb
new file mode 100644
index 0000000..d4a3186
--- /dev/null
+++ b/markdown/reference/cli/client_utilities/pg_dump.html.md.erb
@@ -0,0 +1,252 @@
+---
+title: pg_dump
+---
+
+Extracts a database into a single script file or other archive file.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+pg_dump [<connection_options>] [<dump_options>] <dbname>
+
+pg_dump --help
+
+pg_dump --version
+```
+where:
+
+``` pre
+<connection_options> =
+    [-h <host> | --host <host>]
+    [-p <port> | --port <port>]
+    [-U <username> | --username <username>]
+    [-W | --password]
+
+<dump_options> =
+	[-a | --data-only]
+	[-b | --blobs]
+	[-c | --clean]
+	[-C | --create]
+	[-d | --inserts]
+	[(-D | --column-inserts) ]
+	[-E <encoding> | --encoding <encoding>]
+	[-f <file> | --file <file>]
+	[-F(p|t|c) | --format (plain|custom|tar)]
+	[-i | --ignore-version]
+	[-n <schema> | --schema <schema>]
+	[-N <schema> | --exclude-schema <schema>]
+	[-o | --oids]
+	[-O | --no-owner]
+	[-s | --schema-only]
+	[-S <username> | --superuser <username>]
+	[-t <table> | --table <table>]
+	[-T <table> | --exclude-table <table>]
+	[-v | --verbose]
+	[(-x | --no-privileges) ]
+	[--disable-dollar-quoting]
+	[--disable-triggers]
+	[--use-set-session-authorization]
+	[--gp-syntax | --no-gp-syntax]
+	[-Z <0..9> | --compress <0..9>]
+```
+
+
+## <a id="topic1__section3"></a>Description
+
+`pg_dump` is a standard PostgreSQL utility for backing up a database, and is also supported in HAWQ. It creates a single (non-parallel) dump file.
+
+Use `pg_dump` if you are migrating your data to another database vendor's system, or to another HAWQ system with a different segment configuration (for example, if the system you are migrating to has more or fewer segment instances). To restore, you must use the corresponding [pg\_restore](pg_restore.html#topic1) utility (if the dump file is in archive format), or you can use a client program such as [psql](psql.html#topic1) (if the dump file is in plain text format).
+
+Since `pg_dump` is compatible with regular PostgreSQL, it can be used to migrate data into HAWQ. The `pg_dump` utility in HAWQ is very similar to the PostgreSQL `pg_dump` utility, with the following exceptions and limitations:
+
+-   If you use `pg_dump` to back up a HAWQ database, keep in mind that the dump operation can take a long time (several hours) for very large databases. Also, make sure you have sufficient disk space to create the dump file.
+-   If you are migrating data from one HAWQ system to another, use the `--gp-syntax` command-line option to include the `DISTRIBUTED BY` clause in `CREATE TABLE` statements. This ensures that HAWQ table data is distributed with the correct distribution key columns upon restore.
+
+`pg_dump` makes consistent backups even if the database is being used concurrently. `pg_dump` does not block other users accessing the database (readers or writers).
+
+When used with one of the archive file formats and combined with `pg_restore`, `pg_dump` provides a flexible archival and transfer mechanism. `pg_dump` can be used to back up an entire database, then `pg_restore` can be used to examine the archive and select which parts of the database are to be restored. The most flexible output file format is the *custom* format (`-Fc`). It allows for selection and reordering of all archived items, and is compressed by default. The tar format (`-Ft`) is not compressed and it is not possible to reorder data when loading, but it is otherwise quite flexible. It can be manipulated with standard UNIX tools such as `tar`.
+
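+As a small sketch of that last point (database and file names illustrative), a tar-format dump can be inspected with standard UNIX tools before deciding what to restore:
+
+``` shell
+$ pg_dump -Ft mydb > db.tar
+$ tar -tf db.tar     # list the archive members without restoring anything
+```
+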
+## <a id="topic1__section4"></a>Options
+
+<dt>**\<dbname\>**</dt>
+<dd>Specifies the name of the database to be dumped. If this is not specified, the environment variable `PGDATABASE` is used. If that is not set, the user name specified for the connection is used.</dd>
+
+
+**\<dump_options\>**
+
+<dt>-a, -\\\-data-only  </dt>
+<dd>Dump only the data, not the schema (data definitions). This option is only meaningful for the plain-text format. For the archive formats, you may specify the option when you call [pg\_restore](pg_restore.html#topic1).</dd>
+
+<dt>-b, -\\\-blobs  </dt>
+<dd>Include large objects in the dump. This is the default behavior except when `--schema`, `--table`, or `--schema-only` is specified, so the `-b` switch is only useful to add large objects to selective dumps.</dd>
+
+<dt>-c, -\\\-clean  </dt>
+<dd>Adds commands to the text output file to clean (DROP) database objects prior to (the commands for) creating them. Note that objects are not dropped before the dump operation begins, but `DROP` commands are added to the DDL dump output files so that when you use those files to do a restore, the `DROP` commands are run prior to the `CREATE` commands. This option is only meaningful for the plain-text format. For the archive formats, you may specify the option when you call [pg\_restore](pg_restore.html#topic1).</dd>
+
+<dt>-C, -\\\-create  </dt>
+<dd>Begin the output with a command to create the database itself and reconnect to the created database. (With a script of this form, it doesn't matter which database you connect to before running the script.) This option is only meaningful for the plain-text format. For the archive formats, you may specify the option when you call [pg\_restore](pg_restore.html#topic1).</dd>
+
+<dt>-d, -\\\-inserts  </dt>
+<dd>Dump data as `INSERT` commands (rather than `COPY`). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL-based databases. Also, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents. Note that the restore may fail altogether if you have rearranged column order. The `-D` option is safe against column order changes, though even slower.</dd>
+
+<dt>-D, -\\\-column-inserts  </dt>
+<dd>Dump data as `INSERT` commands with explicit column names `(INSERT INTO` \<table\>`(`\<column\>`, ...) VALUES ...)`. This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL-based databases. Also, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents.</dd>
+
+<dt>-E, -\\\-encoding \<encoding\>  </dt>
+<dd>Create the dump in the specified character set encoding. By default, the dump is created in the database encoding. (Another way to get the same result is to set the `PGCLIENTENCODING` environment variable to the desired dump encoding.)</dd>
+
+<dt>-f, -\\\-file \<file\> </dt>
+<dd>Send output to the specified file. If this is omitted, the standard output is used.</dd>
+
+<dt>-F(p|c|t), -\\\-format (plain|custom|tar)  </dt>
+<dd>Selects the format of the output. format can be one of the following:
+
+p, plain: Output a plain-text SQL script file (the default).
+
+c, custom: Output a custom archive suitable for input into [pg\_restore](pg_restore.html#topic1). This is the most flexible format in that it allows reordering of loading data as well as object definitions. This format is also compressed by default.
+
+t, tar: Output a tar archive suitable for input into [pg\_restore](pg_restore.html#topic1). Using this archive format allows reordering and/or exclusion of database objects at the time the database is restored. It is also possible to limit which data is reloaded at restore time.</dd>
+
+<dt>-i, -\\\-ignore-version  </dt>
+<dd>Ignore version mismatch between `pg_dump` and the database server. `pg_dump` can dump from servers running previous releases of HAWQ (or PostgreSQL). However, some older versions might not be supported. Use this option if you need to override the version check.</dd>
+
+<dt>-n, -\\\-schema \<schema\>  </dt>
+<dd>Dump only schemas matching the schema pattern; this selects both the schema itself and all its contained objects. When this option is not specified, all non-system schemas in the target database will be dumped. Multiple schemas can be selected by writing multiple `-n` switches. Also, the schema parameter is interpreted as a pattern according to the same rules used by `psql`'s `\d` commands, so multiple schemas can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent the shell from expanding the wildcards.
+
+**Note:** When `-n` is specified, `pg_dump` makes no attempt to dump any other database objects that the selected schema(s) may depend upon. Therefore, there is no guarantee that the results of a specific-schema dump can be successfully restored by themselves into a clean database.
+
+**Note:** Non-schema objects such as blobs are not dumped when `-n` is specified. You can add blobs back to the dump with the `--blobs` switch.</dd>
+
+<dt>-N, -\\\-exclude-schema \<schema\>  </dt>
+<dd>Do not dump any schemas matching the schema pattern. The pattern is interpreted according to the same rules as for `-n`. `-N` can be specified multiple times to exclude schemas that match several different patterns. When both `-n` and `-N` are specified, the behavior is to dump only schemas that match at least one `-n` switch but no `-N` switches. If `-N` appears without `-n`, then schemas matching `-N` are excluded from an otherwise normal dump.</dd>
+
+<dt>-o, -\\\-oids  </dt>
+<dd>Dump object identifiers (OIDs) as part of the data for every table. Use of this option is not recommended for files that are intended to be restored into HAWQ.</dd>
+
+<dt>-O, -\\\-no-owner  </dt>
+<dd>Do not output commands to set ownership of objects to match the original database. By default, `pg_dump` issues `ALTER OWNER` or `SET SESSION AUTHORIZATION` statements to set ownership of created database objects. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). To make a script that can be restored by any user, but will give that user ownership of all the objects, specify `-O`. This option is only meaningful for the plain-text format. For the archive formats, you may specify the option when you call [pg\_restore](pg_restore.html#topic1).</dd>
+
+<dt>-s, -\\\-schema-only  </dt>
+<dd>Dump only the object definitions (schema), not data.</dd>
+
+<dt>-S, -\\\-superuser \<username\>  </dt>
+<dd>Specify the superuser user name to use when disabling triggers. This is only relevant if `--disable-triggers` is used. It is better to leave this out, and instead start the resulting script as a superuser.
+
+**Note:** HAWQ does not support user-defined triggers.</dd>
+
+<dt>-t, -\\\-table \<table\>  </dt>
+<dd>Dump only tables (or views or sequences) matching the table pattern. Specify the table in the format `schema.table`.
+
+Multiple tables can be selected by writing multiple `-t` switches. Also, the table parameter is interpreted as a pattern according to the same rules used by `psql`'s `\d` commands, so multiple tables can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent the shell from expanding the wildcards. The `-n` and `-N` switches have no effect when `-t` is used, because tables selected by `-t` will be dumped regardless of those switches, and non-table objects will not be dumped.
+
+**Note:** When `-t` is specified, `pg_dump` makes no attempt to dump any other database objects that the selected table(s) may depend upon. Therefore, there is no guarantee that the results of a specific-table dump can be successfully restored by themselves into a clean database.
+Also, `-t` cannot be used to specify a child table partition. To dump a partitioned table, you must specify the parent table name.</dd>
+
+<dt>-T, -\\\-exclude-table \<table\>  </dt>
+<dd>Do not dump any tables matching the table pattern. The pattern is interpreted according to the same rules as for `-t`. `-T` can be given more than once to exclude tables matching any of several patterns. When both `-t` and `-T` are given, the behavior is to dump just the tables that match at least one `-t` switch but no `-T` switches. If `-T` appears without `-t`, then tables matching `-T` are excluded from what is otherwise a normal dump.</dd>
+
+<dt>-v, -\\\-verbose  </dt>
+<dd>Specifies verbose mode. This will cause `pg_dump` to output detailed object comments and start/stop times to the dump file, and progress messages to standard error.</dd>
+
+<dt>-x, -\\\-no-privileges  </dt>
+<dd>Prevent dumping of access privileges (`GRANT/REVOKE` commands).</dd>
+
+<dt>-\\\-disable-dollar-quoting  </dt>
+<dd>This option disables the use of dollar quoting for function bodies, and forces them to be quoted using SQL standard string syntax.</dd>
+
+<dt>-\\\-disable-triggers  </dt>
+<dd>This option is only relevant when creating a data-only dump. It instructs `pg_dump` to include commands to temporarily disable triggers on the target tables while the data is reloaded. Use this if you have triggers on the tables that you do not want to invoke during data reload. The commands emitted for `--disable-triggers` must be done as superuser. So, you should also specify a superuser name with `-S`, or preferably be careful to start the resulting script as a superuser. This option is only meaningful for the plain-text format. For the archive formats, you may specify the option when you call [pg\_restore](pg_restore.html#topic1).
+
+**Note:** HAWQ does not support user-defined triggers.</dd>
+
+<dt>-\\\-use-set-session-authorization  </dt>
+<dd>Output SQL-standard `SET SESSION AUTHORIZATION` commands instead of `ALTER OWNER` commands to determine object ownership. This makes the dump more standards compatible, but depending on the history of the objects in the dump, may not restore properly. A dump using `SET SESSION AUTHORIZATION` will require superuser privileges to restore correctly, whereas `ALTER OWNER` requires lesser privileges.</dd>
+
+<dt>-\\\-gp-syntax | -\\\-no-gp-syntax   </dt>
+<dd>Use `--gp-syntax` to dump HAWQ syntax in the `CREATE TABLE` statements. This allows the distribution policy (`DISTRIBUTED BY` or `DISTRIBUTED RANDOMLY` clauses) of a HAWQ table to be dumped, which is useful for restoring into other HAWQ systems. The default is to include HAWQ syntax when connected to a HAWQ system, and to exclude it when connected to a regular PostgreSQL system.</dd>
+
+<dt>-Z, -\\\-compress 0..9 </dt>
+<dd>Specify the compression level to use in archive formats that support compression. Currently only the *custom* archive format supports compression.</dd>
+
+
+**\<connection_options\>**
+
+<dt>-h, -\\\-host \<host\> </dt>
+<dd>The host name of the machine on which the HAWQ master database server is running. If not specified, reads from the environment variable `PGHOST` or defaults to localhost.</dd>
+
+<dt>-p, -\\\-port \<port\>  </dt>
+<dd>The TCP port on which the HAWQ master database server is listening for connections. If not specified, reads from the environment variable `PGPORT` or defaults to 5432.</dd>
+
+<dt>-U, -\\\-username \<username\>  </dt>
+<dd>The database role name to connect as. If not specified, reads from the environment variable `PGUSER` or defaults to the current system role name.</dd>
+
+<dt>-W, -\\\-password  </dt>
+<dd>Force a password prompt.</dd>
+
+
+**Other Options**
+
+<dt>-\\\-help  </dt>
+<dd>Displays the online help.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility.</dd>
+
+
+## <a id="topic1__section7"></a>Notes
+
+When a data-only dump is chosen and the option `--disable-triggers` is used, `pg_dump` emits commands to disable triggers on user tables before inserting the data and commands to re-enable them after the data has been inserted. If the restore is stopped in the middle, the system catalogs may be left in the wrong state.
+
+Members of `tar` archives are limited to a size less than 8 GB. (This is an inherent limitation of the `tar` file format.) Therefore this format cannot be used if the textual representation of any one table exceeds that size. The total size of a tar archive and any of the other output formats is not limited, except possibly by the operating system.
+
+The dump file produced by `pg_dump` does not contain the statistics used by the optimizer to make query planning decisions. Therefore, it is wise to run `ANALYZE` after restoring from a dump file to ensure good performance.
+
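+For example, a minimal sketch of that post-restore step (the database name is illustrative):
+
+``` shell
+$ psql -d newdb -c 'ANALYZE;'   # refresh optimizer statistics after the restore
+```
+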
+## <a id="topic1__section8"></a>Examples
+
+Dump a database called `mydb` into a SQL-script file:
+
+``` shell
+$ pg_dump mydb > db.sql
+```
+
+To reload such a script into a (freshly created) database named `newdb`:
+
+``` shell
+$ psql -d newdb -f db.sql
+```
+
+Dump a HAWQ database in tar file format and include distribution policy information:
+
+``` shell
+$ pg_dump -Ft --gp-syntax mydb > db.tar
+```
+
+To dump a database into a custom-format archive file:
+
+``` shell
+$ pg_dump -Fc mydb > db.dump
+```
+
+To reload an archive file into a (freshly created) database named `newdb`:
+
+``` shell
+$ pg_restore -d newdb db.dump
+```
+
+**Note:** A warning related to the `gp_enable_column_oriented_table` parameter may appear. If it does, disregard it.
+
+To dump a single table named `mytab`:
+
+``` shell
+$ pg_dump -t mytab mydb > db.sql
+```
+
+To specify an upper-case or mixed-case name in `-t` and related switches, you need to double-quote the name; else it will be folded to lower case. But double quotes are special to the shell, so in turn they must be quoted. Thus, to dump a single table with a mixed-case name, you need something like:
+
+``` shell
+$ pg_dump -t '"MixedCaseName"' mydb > mytab.sql
+```
+
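+As a final sketch (the schema names are purely illustrative), the pattern quoting discussed under `-n` and `-N` works the same way: dump every schema whose name starts with `east` or `west`, excluding any whose name ends in `gsm`, quoting the wildcards so the shell does not expand them:
+
+``` shell
+$ pg_dump -n 'east*' -n 'west*' -N '*gsm' mydb > db.sql
+```
+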
+## <a id="topic1__section9"></a>See Also
+
+[pg\_dumpall](pg_dumpall.html#topic1), [pg\_restore](pg_restore.html#topic1), [psql](psql.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/client_utilities/pg_dumpall.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/client_utilities/pg_dumpall.html.md.erb b/markdown/reference/cli/client_utilities/pg_dumpall.html.md.erb
new file mode 100644
index 0000000..255b459
--- /dev/null
+++ b/markdown/reference/cli/client_utilities/pg_dumpall.html.md.erb
@@ -0,0 +1,180 @@
+---
+title: pg_dumpall
+---
+
+Extracts all databases in a HAWQ system to a single script file or other archive file.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+pg_dumpall [<options>] ...
+```
+where:
+
+``` pre
+<general options> =
+    [-f | --filespaces] 
+    [-i | --ignore-version ]
+    [--help ]
+    [--version]
+<options controlling output content> =
+    [-a | --data-only]
+    [-c | --clean]
+    [-d | --inserts]
+    [-D | --column-inserts]
+    [-g | --globals-only]
+    [-o | --oids]
+    [-O | --no-owner]
+    [-r | --resource-queues]
+    [-s | --schema-only]
+    [-S <username> | --superuser=<username>]
+    [-v | --verbose]
+    [-x | --no-privileges]
+    [--disable-dollar-quoting]
+    [--disable-triggers]
+    [--use-set-session-authorization]
+    [--gp-syntax]
+    [--no-gp-syntax]
+<connection_options> =
+    [-h <host> | --host <host>]
+    [-p <port> | --port <port>]
+    [-U <username> | --username <username>] 
+    [-w | --no-password]
+    [-W | --password] 
+    
+```
+
+## <a id="topic1__section3"></a>Description
+
+`pg_dumpall` is a standard PostgreSQL utility, also supported in HAWQ, for backing up all databases in a HAWQ (or PostgreSQL) instance. It creates a single (non-parallel) dump file.
+
+`pg_dumpall` creates a single script file that contains SQL commands that can be used as input to [psql](psql.html#topic1) to restore the databases. It does this by calling [pg\_dump](pg_dump.html#topic1) for each database. `pg_dumpall` also dumps global objects that are common to all databases. (`pg_dump` does not save these objects.) This currently includes information about database users and groups, and access permissions that apply to databases as a whole.
+
+Since `pg_dumpall` reads tables from all databases, connect as a database superuser to ensure a complete dump. Superuser privileges are also needed to execute the saved script, so that it can add users and groups and create databases.
+
+The SQL script will be written to the standard output. Shell operators should be used to redirect it into a file.
+
+`pg_dumpall` needs to connect to the HAWQ master server several times (once per database). If you use password authentication, a password could be requested for each connection, so using a `~/.pgpass` file is recommended. 
+
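+A minimal sketch of that recommendation (host, role, and password are illustrative): add one entry to `~/.pgpass` for the master connection and restrict its permissions, so that none of the per-database connections prompt:
+
+``` shell
+$ echo 'masterhost:5432:*:gpadmin:changeme' >> ~/.pgpass   # hostname:port:database:username:password
+$ chmod 0600 ~/.pgpass                                     # must not be group- or world-readable, or it is ignored
+```
+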
+## <a id="topic1__section4"></a>Options
+
+**General Options**
+<dt>-f | -\\\-filespaces  </dt>
+<dd>Dump filespace definitions.</dd>
+
+<dt>-i | -\\\-ignore-version  </dt>
+<dd>Ignore version mismatch between [pg\_dump](pg_dump.html#topic1) and the database server. `pg_dump` can dump from servers running previous releases of HAWQ (or PostgreSQL), but some older versions may not be supported. Use this option if you need to override the version check.</dd>
+
+<dt>-\\\-help</dt>
+<dd>Displays this help, then exits.</dd>
+
+<dt>-\\\-version</dt>
+<dd>Displays the version of this utility, then exits.</dd>
+
+**Output Control Options**
+
+<dt>-a | -\\\-data-only  </dt>
+<dd>Dump only the data, not the schema (data definitions). This option is only meaningful for the plain-text format. For the archive formats, you can specify this option when you call [pg\_restore](pg_restore.html#topic1).</dd>
+
+<dt>-c | -\\\-clean  </dt>
+<dd>Output commands to clean (DROP) database objects prior to (the commands for) creating them. This option is only meaningful for the plain-text format. For the archive formats, you may specify the option when you call [pg\_restore](pg_restore.html#topic1).</dd>
+
+<dt>-d | -\\\-inserts  </dt>
+<dd>Dump data as `INSERT` commands (rather than `COPY`). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL-based databases. Also, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents. Note that the restore may fail altogether if you have rearranged column order. The `-D` option is safe against column order changes, though even slower.</dd>
+
+<dt>-D | -\\\-column-inserts  </dt>
+<dd>Dump data as `INSERT` commands with explicit column names `(INSERT INTO table (column, ...) VALUES ...)`. This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL-based databases. Also, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents.</dd>
+
+<dt>-g | -\\\-globals-only  </dt>
+<dd>Dump only global objects (roles and tablespaces), no databases.</dd>
+
+<dt>-o | -\\\-oids  </dt>
+<dd>Dump object identifiers (OIDs) as part of the data for every table. Use of this option is not recommended for files to be restored into HAWQ.</dd>
+
+<dt>-O | -\\\-no-owner  </dt>
+<dd>Do not output commands to set ownership of objects to match the original database. By default, [pg\_dump](pg_dump.html#topic1) issues `ALTER OWNER` or `SET SESSION AUTHORIZATION` statements to set ownership of created database objects. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). To make a script that can be restored by any user, but will give that user ownership of all the objects, specify `-O`. This option is only meaningful for the plain-text format. For the archive formats, you may specify the option when you call [pg\_restore](pg_restore.html#topic1).</dd>
+
+<dt>-r | -\\\-resource-queues  </dt>
+<dd>Dump resource queue definitions.</dd>
+
+<dt>-s | -\\\-schema-only  </dt>
+<dd>Dump only the object definitions (schema), not data.</dd>
+
+<dt>-S \<username\> | -\\\-superuser=\<username\>  </dt>
+<dd>Specify the superuser user name to use when disabling triggers. This option is only relevant if `--disable-triggers` is used. Starting the resulting script as a superuser is preferred.
+
+**Note:** HAWQ does not support user-defined triggers.</dd>
+
+<dt>-x | -\\\-no-privileges | -\\\-no-acl  </dt>
+<dd>Prevent dumping of access privileges (`GRANT/REVOKE` commands).</dd>
+
+<dt>-\\\-disable-dollar-quoting  </dt>
+<dd>This option disables the use of dollar quoting for function bodies, and forces them to be quoted using SQL standard string syntax.</dd>
+
+<dt>-\\\-disable-triggers  </dt>
+<dd>This option is only relevant when creating a data-only dump. It instructs `pg_dumpall` to include commands to temporarily disable triggers on the target tables while the data is reloaded. Use this if you do not want to invoke triggers on the tables during data reload. You need superuser permissions to perform commands issued for `--disable-triggers`. Either  specify a superuser name with the `-S` option, or start the resulting script as a superuser.
+
+**Note:** HAWQ does not support user-defined triggers.</dd>
+
+<dt>-\\\-use-set-session-authorization  </dt>
+<dd>Output SQL-standard `SET SESSION AUTHORIZATION` commands instead of `ALTER OWNER` commands to determine object ownership. This makes the dump more standards compatible, but depending on the history of the objects in the dump, may not restore properly. A dump using `SET SESSION AUTHORIZATION` will require superuser privileges to restore correctly, whereas `ALTER OWNER` requires lesser privileges.</dd>
+
+<dt>-\\\-gp-syntax  </dt>
+<dd>Output HAWQ syntax in the `CREATE TABLE` statements. This allows the distribution policy (`DISTRIBUTED BY` or `DISTRIBUTED RANDOMLY` clauses) of a HAWQ table to be dumped, which is useful for restoring into other HAWQ systems.</dd>
+
+<dt>-\\\-no-gp-syntax </dt>
+<dd>Do not use HAWQ syntax in the dump. This is the default when connected to a regular PostgreSQL system.</dd>
+
+**Connection Options**
+
+<dt>-h \<host\> | -\\\-host \<host\>  </dt>
+<dd>The host name of the machine on which the HAWQ master database server is running. If not specified, reads from the environment variable `PGHOST` or defaults to `localhost`.</dd>
+
+<dt>-l | -\\\-database \<database_name\>  </dt>
+<dd>Connect to an alternate database.</dd>
+
+<dt>-p \<port\> | -\\\-port \<port\>  </dt>
+<dd>The TCP port on which the HAWQ master database server is listening for connections. If not specified, reads from the environment variable `PGPORT` or defaults to 5432.</dd>
+
+<dt>-U \<username\> | -\\\-username \<username\>  </dt>
+<dd>The database role name to connect as. If not specified, reads from the environment variable `PGUSER` or defaults to the current system role name.</dd>
+
+<dt>-w | -\\\-no-password  </dt>
+<dd>Do not prompt for a password.</dd>
+
+<dt>-W | -\\\-password  </dt>
+<dd>Force a password prompt.</dd>
+
+## <a id="topic1__section7"></a>Notes
+
+Since `pg_dumpall` calls [pg\_dump](pg_dump.html#topic1) internally, some diagnostic messages will refer to `pg_dump`.
+
+Once restored, it is wise to run `ANALYZE` on each database so the query planner has useful statistics. You can also run `vacuumdb -a -z` to analyze all databases.
+
+All tablespace (filespace) directories used by `pg_dumpall` must exist before the restore. Otherwise, database creation will fail for databases in non-default locations.
+
+## <a id="topic1__section8"></a>Examples
+
+To dump all databases:
+
+``` shell
+$ pg_dumpall > db.out
+```
+
+To reload this file:
+
+``` shell
+$ psql template1 -f db.out
+```
+
+To dump only global objects (including filespaces and resource queues):
+
+``` shell
+$ pg_dumpall -g -f -r
+```
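+
+Because the script is written to standard output, a sketch of a complete round trip for the globals-only dump above might look like the following (the file name is illustrative):
+
+``` shell
+$ pg_dumpall -g -f -r > globals.sql
+$ psql template1 -f globals.sql
+```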
+
+## <a id="topic1__section9"></a>See Also
+
+[pg\_dump](pg_dump.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/client_utilities/pg_restore.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/client_utilities/pg_restore.html.md.erb b/markdown/reference/cli/client_utilities/pg_restore.html.md.erb
new file mode 100644
index 0000000..e612282
--- /dev/null
+++ b/markdown/reference/cli/client_utilities/pg_restore.html.md.erb
@@ -0,0 +1,256 @@
+---
+title: pg_restore
+---
+
+Restores a database from an archive file created by `pg_dump`.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+pg_restore [<general_options>] [<restore_options>] [<connection_options>] <filename>
+```
+where:
+
+``` pre
+<general_options> =
+    [-d <dbname> | --dbname=<dbname>]
+    [-f <outfilename> | --file=<outfilename>]
+    [-F (t|c) | --format=(tar|custom)]
+    [-i | --ignore-version]
+    [-l | --list]
+    [-v | --verbose]
+    [--help]
+    [--version]
+<restore_options> =
+    [-a | --data-only]
+    [-c | --clean]
+    [-C | --create]
+    [-I <index> | --index=<index>]
+    [-L <list-file> | --use-list=<list-file>]
+    [-n <schema> | --schema=<schema>]
+    [-O | --no-owner]
+    [-P '<function-name>(<argtype> [, ...])' | --function='<function-name>(<argtype> [, ...])']
+    [-s | --schema-only]
+    [-S <username> | --superuser=<username>]
+    [-t <table> | --table=<table>]
+    [-T <trigger> | --trigger=<trigger>]
+    [-x | --no-privileges | --no-acl]
+    [--disable-triggers]
+    [--use-set-session-authorization]
+    [--no-data-for-failed-tables]
+    [-1 | --single-transaction]
+<connection_options> =
+    [-h <host> | --host <host>]
+    [-p <port> | --port <port>]
+    [-U <username> | --username <username>]
+    [-W | --password]
+    [-e | --exit-on-error]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`pg_restore` is a utility for restoring a database from an archive created by [pg\_dump](pg_dump.html#topic1) in one of the non-plain-text formats. It will issue the commands necessary to reconstruct the database to the state it was in at the time it was saved. The archive files also allow `pg_restore` to be selective about what is restored, or even to reorder the items prior to being restored.
+
+`pg_restore` can operate in two modes. If a database name is specified, the archive is restored directly into the database. Otherwise, a script containing the SQL commands necessary to rebuild the database is created and written to a file or standard output. The script output is equivalent to the plain text output format of `pg_dump`. Some of the options controlling the output are therefore analogous to `pg_dump` options.
+
+`pg_restore` cannot restore information that is not present in the archive file. For instance, if the archive was made using the "dump data as `INSERT` commands" option, `pg_restore` will not be able to load the data using `COPY` statements.
+
+## <a id="topic1__section4"></a>Options
+
+<dt> *filename*   </dt>
+<dd>Specifies the location of the archive file to be restored. If not specified, the standard input is used.</dd>
+
+**General Options**
+
+<dt>-d *dbname* , -\\\-dbname=*dbname*  </dt>
+<dd>Connect to this database and restore directly into this database. The default is to use the `PGDATABASE` environment variable setting, or the same name as the current system user.</dd>
+
+<dt>-f *outfilename* , -\\\-file=*outfilename*  </dt>
+<dd>Specify output file for generated script, or for the listing when used with `-l`. Default is the standard output.</dd>
+
+<dt>-F t |c , -\\\-format=tar|custom  </dt>
+<dd>The format of the archive produced by [pg\_dump](pg_dump.html#topic1). It is not necessary to specify the format, since `pg_restore` will determine the format automatically. Format can be either `tar` or `custom`.</dd>
+
+<dt>-i , -\\\-ignore-version  </dt>
+<dd>Ignore database version checks.</dd>
+
+<dt>-l , -\\\-list  </dt>
+<dd>List the contents of the archive. The output of this operation can be used with the `-L` option to restrict and reorder the items that are restored.</dd>
+
+<dt>-v , -\\\-verbose  </dt>
+<dd>Specifies verbose mode.</dd>
+
+<dt> -\\\-help  </dt>
+<dd>Displays this help and exits.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility, then exits.</dd>
+
+**Restore Options**
+
+<dt>-a , -\\\-data-only  </dt>
+<dd>Restore only the data, not the schema (data definitions).</dd>
+
+<dt>-c , -\\\-clean  </dt>
+<dd>Clean (drop) database objects before recreating them.</dd>
+
+<dt>-C , -\\\-create  </dt>
+<dd>Create the database before restoring into it. (When this option is used, the database named with `-d` is used only to issue the initial `CREATE DATABASE` command. All data is restored into the database name that appears in the archive.)</dd>
+
+<dt>-e , -\\\-exit-on-error  </dt>
+<dd>Exit if an error is encountered while sending SQL commands to the database. The default is to continue and to display a count of errors at the end of the restoration.</dd>
+
+<dt>-I *index* , -\\\-index=*index*  </dt>
+<dd>Restore definition of named index only.</dd>
+
+<dt>-L *list-file* , -\\\-use-list=*list-file*  </dt>
+<dd>Restore elements in the *list-file* only, and in the order they appear in the file. Lines can be moved and may also be commented out by placing a `;` at the start of the line.</dd>
+
+<dt>-n *schema* , -\\\-schema=*schema*  </dt>
+<dd>Restore only objects that are in the named schema. This can be combined with the `-t` option to restore just a specific table.</dd>
+
+<dt>-O , -\\\-no-owner  </dt>
+<dd>Do not output commands to set ownership of objects to match the original database. By default, `pg_restore` issues `ALTER OWNER` or `SET SESSION AUTHORIZATION` statements to set ownership of created schema elements. These statements will fail unless the initial connection to the database is made by a superuser (or the same user that owns all of the objects in the script). With `-O`, any user name can be used for the initial connection, and this user will own all the created objects.</dd>
+
+<dt>-P '*function-name*(*argtype* \[, ...\])' , -\\\-function='*function-name*(*argtype* \[, ...\])'  </dt>
+<dd>Restore the named function only. The function name must be enclosed in quotes. Be careful to spell the function name and arguments exactly as they appear in the dump file's table of contents (as shown by the `--list` option).</dd>
+
+<dt>-s , -\\\-schema-only  </dt>
+<dd>Restore only the schema (data definitions), not the data (table contents). Sequence current values will not be restored, either. (Do not confuse this with the `--schema` option, which uses the word schema in a different meaning.)</dd>
+
+<dt>-S *username* , -\\\-superuser=*username*  </dt>
+<dd>Specify the superuser user name to use when disabling triggers. This is only relevant if `--disable-triggers` is used.
+
+**Note:** HAWQ does not support user-defined triggers.</dd>
+
+<dt>-t *table* , -\\\-table=*table*  </dt>
+<dd>Restore definition and/or data of named table only.</dd>
+
+<dt>-T *trigger* , -\\\-trigger=*trigger*  </dt>
+<dd>Restore named trigger only.
+
+**Note:** HAWQ does not support user-defined triggers.</dd>
+
+<dt>-x , -\\\-no-privileges , -\\\-no-acl  </dt>
+<dd>Prevent restoration of access privileges (`GRANT/REVOKE` commands).</dd>
+
+<dt>-\\\-disable-triggers  </dt>
+<dd>This option is only relevant when performing a data-only restore. It instructs `pg_restore` to execute commands to temporarily disable triggers on the target tables while the data is reloaded. Use this if you have triggers on the tables that you do not want to invoke during data reload. The commands emitted for `--disable-triggers` must be done as superuser. So, you should also specify a superuser name with `-S`, or preferably run `pg_restore` as a superuser.
+
+**Note:** HAWQ does not support user-defined triggers.</dd>
+
+<dt>-\\\-no-data-for-failed-tables  </dt>
+<dd>By default, table data is restored even if the creation command for the table failed (e.g., because it already exists). With this option, data for such a table is skipped. This behavior is useful when the target database may already contain the desired table contents. Specifying this option prevents duplicate or obsolete data from being loaded. This option is effective only when restoring directly into a database, not when producing SQL script output.</dd>
+
+<dt>-1 , -\\\-single-transaction  </dt>
+<dd>Execute the restore as a single transaction. This ensures that either all the commands complete successfully, or no changes are applied.</dd>
+
+**Connection Options**
+
+<dt>-h *host* , -\\\-host *host*  </dt>
+<dd>The host name of the machine on which the HAWQ master database server is running. If not specified, reads from the environment variable `PGHOST` or defaults to localhost.</dd>
+
+<dt>-p *port* , -\\\-port *port*  </dt>
+<dd>The TCP port on which the HAWQ master database server is listening for connections. If not specified, reads from the environment variable `PGPORT` or defaults to 5432.</dd>
+
+<dt>-U *username* , -\\\-username *username*  </dt>
+<dd>The database role name to connect as. If not specified, reads from the environment variable `PGUSER` or defaults to the current system role name.</dd>
+
+<dt>-W , -\\\-password  </dt>
+<dd>Force a password prompt.</dd>
+
+<dt>-e , -\\\-exit-on-error  </dt>
+<dd>Exit if an error is encountered while sending SQL commands to the database. The default is to continue and to display a count of errors at the end of the restoration.</dd>
+
+## <a id="topic1__section6"></a>Notes
+
+If your installation has any local additions to the `template1` database, be careful to load the output of `pg_restore` into a truly empty database; otherwise you are likely to get errors due to duplicate definitions of the added objects. To make an empty database without any local additions, copy from `template0` not `template1`, for example:
+
+``` sql
+CREATE DATABASE foo WITH TEMPLATE template0;
+```
+
+When restoring data to a pre-existing table and the option `--disable-triggers` is used, `pg_restore` emits commands to disable triggers on user tables before inserting the data, then emits commands to re-enable them after the data has been inserted. If the restore is stopped in the middle, the system catalogs may be left in the wrong state.
+
+`pg_restore` will not restore large objects for a single table. If an archive contains large objects, then all large objects will be restored.
+
+See also the `pg_dump` documentation for details on limitations of `pg_dump`.
+
+Once restored, it is wise to run `ANALYZE` on each restored table so the query planner has useful statistics.
+
+When running `pg_restore`, a warning related to the `gp_enable_column_oriented_table` parameter might appear. If it does, disregard it.
+
+## <a id="topic1__section7"></a>Examples
+
+Assume we have dumped a database called `mydb` into a custom-format dump file:
+
+``` shell
+$ pg_dump -Fc mydb > db.dump
+```
+
+To drop the database and recreate it from the dump:
+
+``` shell
+$ dropdb mydb
+$ pg_restore -C -d template1 db.dump
+```
+
+To reload the dump into a new database called `newdb`, omit `-C` and connect directly to the database being restored into. Also note that the new database is cloned from `template0`, not `template1`, to ensure it is initially empty:
+
+``` shell
+$ createdb -T template0 newdb
+$ pg_restore -d newdb db.dump
+```
+
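+The `-n` and `-t` options described above can narrow a restore further. A hedged sketch (schema and table names illustrative) that restores only the `species` table from the `public` schema into `newdb`:
+
+``` shell
+$ pg_restore -d newdb -n public -t species db.dump
+```
+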
+To reorder database items, it is first necessary to dump the table of contents of the archive:
+
+``` shell
+$ pg_restore -l db.dump > db.list
+```
+
+The listing file consists of a header and one line for each item, for example,
+
+``` pre
+; Archive created at Fri Jul 28 22:28:36 2006
+;     dbname: mydb
+;     TOC Entries: 74
+;     Compression: 0
+;     Dump Version: 1.4-0
+;     Format: CUSTOM
+;
+; Selected TOC Entries:
+;
+2; 145344 TABLE species postgres
+3; 145344 ACL species
+4; 145359 TABLE nt_header postgres
+5; 145359 ACL nt_header
+6; 145402 TABLE species_records postgres
+7; 145402 ACL species_records
+8; 145416 TABLE ss_old postgres
+9; 145416 ACL ss_old
+10; 145433 TABLE map_resolutions postgres
+11; 145433 ACL map_resolutions
+12; 145443 TABLE hs_old postgres
+13; 145443 ACL hs_old
+```
+
+Semicolons start a comment, and the numbers at the start of lines refer to the internal archive ID assigned to each item. Lines in the file can be commented out, deleted, and reordered. For example,
+
+``` pre
+10; 145433 TABLE map_resolutions postgres
+;2; 145344 TABLE species postgres
+;4; 145359 TABLE nt_header postgres
+6; 145402 TABLE species_records postgres
+;8; 145416 TABLE ss_old postgres
+```
+
+This edited list could then be used as input to `pg_restore`, which would restore only items 10 and 6, in that order:
+
+``` shell
+$ pg_restore -L db.list db.dump
+```
+
+## <a id="topic1__section8"></a>See Also
+
+[pg\_dump](pg_dump.html#topic1)
\ No newline at end of file


[50/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

Posted by yo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/FaultTolerance.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/FaultTolerance.html.md.erb b/admin/FaultTolerance.html.md.erb
deleted file mode 100644
index fc9de93..0000000
--- a/admin/FaultTolerance.html.md.erb
+++ /dev/null
@@ -1,52 +0,0 @@
----
-title: Understanding the Fault Tolerance Service
----
-
-The fault tolerance service (FTS) enables HAWQ to continue operating in the event that a segment node fails. The fault tolerance service runs automatically and requires no additional configuration.
-
-Each segment runs a resource manager process that periodically sends (by default, every 30 seconds) the segment's status to the master's resource manager process. This interval is controlled by the `hawq_rm_segment_heartbeat_interval` server configuration parameter.
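-
-For command-line-managed clusters, one way to inspect or adjust this interval is with the `hawq config` utility (a sketch; the 30-second value shown is only an example, and a configuration reload or cluster restart is typically needed for a change to take effect):
-
-```shell
-$ hawq config -s hawq_rm_segment_heartbeat_interval    # show the current setting
-$ hawq config -c hawq_rm_segment_heartbeat_interval -v 30    # write a new value
-```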
-
-When a segment encounters a critical error (for example, a temporary directory on the segment fails due to a hardware error), the segment reports the failure to the HAWQ master through a heartbeat report. When the master receives the report, it marks the segment as DOWN in the `gp_segment_configuration` table. All changes to a segment's status are recorded in the `gp_configuration_history` catalog table, including the reason why the segment was marked as DOWN. When a segment is set to DOWN, the master will not run query executors on it. The failed segment is fault-isolated from the rest of the cluster.
-
-Besides disk failure, there are other reasons why a segment can be marked as DOWN. For example, if HAWQ is running in YARN mode, every segment should have a NodeManager (Hadoop's YARN service) running on it so that the segment can be considered a resource to HAWQ. However, if the NodeManager on a segment is not operating properly, this segment will also be marked as DOWN in the `gp_segment_configuration` table. The corresponding reason for the failure is recorded in `gp_configuration_history`.
-
-**Note:** If a disk fails in a particular segment, the failure may cause either an HDFS error or a temporary directory error in HAWQ. HDFS errors are handled by the Hadoop HDFS service.
-
-##Viewing the Current Status of a Segment <a id="view_segment_status"></a>
-
-To view the current status of the segment, query the `gp_segment_configuration` table.
-
-If the status of a segment is DOWN, the "description" column displays the reason. The reason can be a single cause or a combination of several causes, separated by semicolons (";").
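-
-For example, a query along these lines (a sketch; a `status` value other than `'u'`, for up, indicates a problem) lists the segments that are not up together with the recorded reason:
-
-```sql
-SELECT hostname, status, description
-FROM gp_segment_configuration
-WHERE status <> 'u';
-```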
-
-**Reason: heartbeat timeout**
-
-Master has not received a heartbeat from the segment. If you see this reason, make sure that HAWQ is running on the segment.
-
-If the segment reports a heartbeat at a later time, the segment is marked as UP.
-
-**Reason: failed probing segment**
-
-Master has probed the segment to verify that it is operating normally, and the segment response is NO.
-
-While a HAWQ instance is running, the Query Dispatcher may find that some Query Executors on the segment are not working normally. The master's resource manager process then sends a message to this segment. When the segment's resource manager receives the message, it checks whether its PostgreSQL postmaster process is working normally and sends a reply to the master. If the reply indicates that the segment's postmaster process is not working normally, the master marks the segment as DOWN with the reason "failed probing segment."
-
-Check the logs of the failed segment and try to restart the HAWQ instance.
-
-**Reason: communication error**
-
-Master cannot connect to the segment.
-
-Check the network connection between the master and the segment.
-
-**Reason: resource manager process was reset**
-
-If the timestamp of the segment resource manager process doesn't match the previous timestamp, the resource manager process on the segment has been restarted. In this case, the HAWQ master returns the resources on this segment and marks the segment as DOWN. If the master later receives a new heartbeat from this segment, it marks the segment as UP again.
-
-**Reason: no global node report**
-
-HAWQ is using YARN for resource management. No cluster report has been received for this segment. 
-
-Check that NodeManager is operating normally on this segment. 
-
-If not, try to start NodeManager on the segment. 
-After NodeManager is started, run `yarn node -list` to see if the node is in the list. If it is, the segment is set to UP.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb b/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb
deleted file mode 100644
index b4284be..0000000
--- a/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb
+++ /dev/null
@@ -1,223 +0,0 @@
----
-title: HAWQ Filespaces and High Availability Enabled HDFS
----
-
-If you initialized HAWQ without the HDFS High Availability \(HA\) feature, you can enable it by using the following procedure.
-
-## <a id="enablingthehdfsnamenodehafeature"></a>Enabling the HDFS NameNode HA Feature 
-
-To enable the HDFS NameNode HA feature for use with HAWQ, you need to perform the following tasks:
-
-1. Enable high availability in your HDFS cluster.
-1. Collect information about the target filespace.
-1. Stop the HAWQ cluster and back up the catalog (**Note:** Ambari users must perform this manual step.)
-1. Move the filespace location using the command line tool (**Note:** Ambari users must perform this manual step.)
-1. Reconfigure `${GPHOME}/etc/hdfs-client.xml` and `${GPHOME}/etc/hawq-site.xml` files. Then, synchronize updated configuration files to all HAWQ nodes.
-1. Start the HAWQ cluster and resynchronize the standby master after moving the filespace.
-
-
-### <a id="enablehahdfs"></a>Step 1: Enable High Availability in Your HDFS Cluster 
-
-Enable high availability for NameNodes in your HDFS cluster. See the documentation for your Hadoop distribution for instructions on how to do this. 
-
-**Note:** If you're using Ambari to manage your HDFS cluster, you can use the Enable NameNode HA Wizard. For example, [this Hortonworks HDP procedure](https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-user-guide/content/how_to_configure_namenode_high_availability.html) outlines how to do this in Ambari for HDP.
-
-### <a id="collectinginformationaboutthetargetfilespace"></a>Step 2: Collect Information about the Target Filespace 
-
-A default filespace named dfs\_system exists in the pg\_filespace catalog, and the pg\_filespace\_entry catalog table contains detailed information for each filespace.
-
-To move the filespace location to a HA-enabled HDFS location, you must move the data to a new path on your HA-enabled HDFS cluster.
-
-1.  Use the following SQL query to gather information about the filespace located on HDFS:
-
-    ```sql
-    SELECT
-        fsname, fsedbid, fselocation
-    FROM
-        pg_filespace AS sp, pg_filespace_entry AS entry, pg_filesystem AS fs
-    WHERE
-        sp.fsfsys = fs.oid AND fs.fsysname = 'hdfs' AND sp.oid = entry.fsefsoid
-    ORDER BY
-        entry.fsedbid;
-    ```
-
-    The sample output is as follows:
-
-    ```
-		  fsname | fsedbid | fselocation
-	--------------+---------+-------------------------------------------------
-	cdbfast_fs_c | 0       | hdfs://hdfs-cluster/hawq//cdbfast_fs_c
-	cdbfast_fs_b | 0       | hdfs://hdfs-cluster/hawq//cdbfast_fs_b
-	cdbfast_fs_a | 0       | hdfs://hdfs-cluster/hawq//cdbfast_fs_a
-	dfs_system   | 0       | hdfs://test5:9000/hawq/hawq-1459499690
-	(4 rows)
-    ```
-
-    The output contains the following:
-    - HDFS paths that share the same prefix
-    - Current filespace location
-
-    **Note:** If you see `{replica=3}` in the filespace location, ignore this part of the prefix. This is a known issue.
-
-2.  To enable HA HDFS, you need the filespace name and the common prefix of your HDFS paths. The filespace location is formatted like a URL.
-
-	If the previous filespace location is 'hdfs://test5:9000/hawq/hawq-1459499690' and the HA HDFS common prefix is 'hdfs://hdfs-cluster', then the new filespace location should be 'hdfs://hdfs-cluster/hawq/hawq-1459499690'.
-
-    ```
-    Filespace Name: dfs_system
-    Old location: hdfs://test5:9000/hawq/hawq-1459499690
-    New location: hdfs://hdfs-cluster/hawq/hawq-1459499690
-    ```
-
-### <a id="stoppinghawqclusterandbackupcatalog"></a>Step 3: Stop the HAWQ Cluster and Back Up the Catalog 
-
-**Note:** Ambari users must perform this manual step.
-
-When you enable HA HDFS, you are changing the HAWQ catalog and persistent tables. You cannot perform transactions while persistent tables are being updated. Therefore, before you move the filespace location, back up the catalog. This ensures that you do not lose data due to a hardware failure or during an operation \(such as killing the HAWQ process\).
-
-
-1. If you defined a custom port for HAWQ master, export the `PGPORT` environment variable. For example:
-
-	```shell
-	export PGPORT=9000
-	```
-
-1. Save the HAWQ master data directory, found in the `hawq_master_directory` property value from `hawq-site.xml` to an environment variable.
- 
-	```bash
-	export MDATA_DIR=/path/to/hawq_master_directory
-	```
-
-1.  Disconnect all workload connections. Check for active connections with:
-
-    ```shell
-    $ psql -p ${PGPORT} -c "SELECT * FROM pg_catalog.pg_stat_activity" -d template1
-    ```
-    where `${PGPORT}` corresponds to the port number you optionally customized for HAWQ master. 
-    
-
-2.  Issue a checkpoint:
-
-    ```shell
-    $ psql -p ${PGPORT} -c "CHECKPOINT" -d template1
-    ```
-
-3.  Shut down the HAWQ cluster:�
-
-    ```shell
-    $ hawq stop cluster -a -M fast
-    ```
-
-4.  Copy the master data directory to a backup location:
-
-    ```shell
-    $ cp -r ${MDATA_DIR} /catalog/backup/location
-    ```
-	The master data directory contains the catalog. Fatal errors can occur due to hardware failure or if you fail to kill a HAWQ process before attempting a filespace location change. Make sure you back this directory up.
-
-### <a id="movingthefilespacelocation"></a>Step 4: Move the Filespace Location 
-
-**Note:** Ambari users must perform this manual step.
-
-HAWQ provides the command line tool, `hawq filespace`, to move the location of the filespace.
-
-1. If you defined a custom port for HAWQ master, export the `PGPORT` environment variable. For example:
-
-	```shell
-	export PGPORT=9000
-	```
-1. Run the following command to move a filespace location:
-
-	```shell
-	$ hawq filespace --movefilespace default --location=hdfs://hdfs-cluster/hawq_new_filespace
-	```
-	Specify `default` as the value of the `--movefilespace` option. Replace `hdfs://hdfs-cluster/hawq_new_filespace` with the new filespace location.
-
-#### **Important:** Potential Errors During Filespace Move
-
-Non-fatal errors can occur if you provide invalid input or if you have not stopped HAWQ before attempting a filespace location change. Check that you have followed the instructions from the beginning, or correct the input error, before you re-run `hawq filespace`.
-
-Fatal errors can occur due to hardware failure or if you fail to kill a HAWQ process before attempting a filespace location change. When a fatal error occurs, you will see the message, "PLEASE RESTORE MASTER DATA DIRECTORY" in the output. If this occurs, shut down the database and restore the `${MDATA_DIR}` that you backed up in Step 3.
-
-### <a id="configuregphomeetchdfsclientxml"></a>Step 5: Update HAWQ to Use NameNode HA by Reconfiguring hdfs-client.xml and hawq-site.xml 
-
-If you install and manage your cluster using command-line utilities, follow these steps to modify your HAWQ configuration to use the NameNode HA service.
-
-**Note:** These steps are not required if you use Ambari to manage HDFS and HAWQ, because Ambari makes these changes automatically after you enable NameNode HA.
-
-For command-line administrators:
-
-1. Edit the `${GPHOME}/etc/hdfs-client.xml` file on each segment and add the following NameNode properties:
-
-    ```xml
-    <property>
-     <name>dfs.ha.namenodes.hdpcluster</name>
-     <value>nn1,nn2</value>
-    </property>
-
-    <property>
-     <name>dfs.namenode.http-address.hdpcluster.nn1</name>
-     <value>ip-address-1.mycompany.com:50070</value>
-    </property>
-
-    <property>
-     <name>dfs.namenode.http-address.hdpcluster.nn2</name>
-     <value>ip-address-2.mycompany.com:50070</value>
-    </property>
-
-    <property>
-     <name>dfs.namenode.rpc-address.hdpcluster.nn1</name>
-     <value>ip-address-1.mycompany.com:8020</value>
-    </property>
-
-    <property>
-     <name>dfs.namenode.rpc-address.hdpcluster.nn2</name>
-     <value>ip-address-2.mycompany.com:8020</value>
-    </property>
-
-    <property>
-     <name>dfs.nameservices</name>
-     <value>hdpcluster</value>
-    </property>
-     ```
-
-    In the listing above:
-    * Replace `hdpcluster` with the actual service ID that is configured in HDFS.
-    * Replace `ip-address-2.mycompany.com:50070` with the actual NameNode HTTP host and port number that is configured in HDFS.
-    * Replace `ip-address-1.mycompany.com:8020` with the actual NameNode RPC host and port number that is configured in HDFS.
-    * The order of the NameNodes listed in `dfs.ha.namenodes.hdpcluster` is important for performance, especially when running secure HDFS. The first entry (`nn1` in the example above) should correspond to the active NameNode.
-
-2.  Change the following parameter in the `$GPHOME/etc/hawq-site.xml` file:
-
-    ```xml
-    <property>
-        <name>hawq_dfs_url</name>
-        <value>hdpcluster/hawq_default</value>
-        <description>URL for accessing HDFS.</description>
-    </property>
-    ```
-
-    In the listing above:
-    * Replace `hdpcluster` with the actual service ID that is configured in HDFS.
-    * Replace `/hawq_default` with the directory you want to use for storing data on HDFS. Make sure this directory exists and is writable.
-
-3. Copy the updated configuration files to all nodes in the cluster (as listed in `hawq_hosts`).
-
-	```shell
-	$ hawq scp -f hawq_hosts hdfs-client.xml hawq-site.xml =:$GPHOME/etc/
-	```
-
-### <a id="reinitializethestandbymaster"></a>Step 6: Restart the HAWQ Cluster and Resynchronize the Standby Master 
-
-1. Restart the HAWQ cluster:
-
-	```shell
-	$ hawq start cluster -a
-	```
-
-1. Moving the filespace to a new location renders the standby master catalog invalid. To update the standby, run the following command on the active master to resynchronize the standby master's catalog with the active master:
-
-	```shell
-	$ hawq init standby -n -M fast
-
-	```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/HighAvailability.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/HighAvailability.html.md.erb b/admin/HighAvailability.html.md.erb
deleted file mode 100644
index 0c2e32b..0000000
--- a/admin/HighAvailability.html.md.erb
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: High Availability in HAWQ
----
-
-A HAWQ cluster can be made highly available by providing fault-tolerant hardware, by enabling HAWQ or HDFS high-availability features, and by performing regular monitoring and maintenance procedures to ensure the health of all system components.
-
-Hardware components eventually fail, either due to normal wear or to unexpected circumstances. Loss of power can lead to temporarily unavailable components. You can make a system highly available by providing redundant standbys for components that can fail, so that services continue uninterrupted when a failure does occur. In some cases, the cost of redundancy is higher than a user's tolerance for interruption in service. When this is the case, the goal is to ensure that full service can be restored within an expected timeframe.
-
-With HAWQ, fault tolerance and data availability are achieved with:
-
-* [Hardware Level Redundancy (RAID and JBOD)](#ha_raid)
-* [Master Mirroring](#ha_master_mirroring)
-* [Dual Clusters](#ha_dual_clusters)
-
-## <a id="ha_raid"></a>Hardware Level Redundancy (RAID and JBOD) 
-
-As a best practice, HAWQ deployments should use RAID for master nodes and JBOD for segment nodes. These hardware-level systems provide high-performance redundancy for single-disk failures without requiring database-level fault tolerance. RAID and JBOD provide a lower level of redundancy at the disk level.
-
-## <a id="ha_master_mirroring"></a>Master Mirroring 
-
-There are two masters in a highly available cluster, a primary and a standby. As with segments, the master and standby should be deployed on different hosts so that the cluster can tolerate a single host failure. Clients connect to the primary master, and queries can be executed only on the primary master. The standby master is kept up to date by replicating the write-ahead log (WAL) from the primary to the standby.
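-
-One quick way to confirm that this replication is healthy is to query the `gp_master_mirroring` system view (a minimal sketch; see [Using Master Mirroring](MasterMirroring.html) for details on interpreting the output):
-
-```sql
-SELECT summary_state, detail_state, log_time, error_message
-FROM gp_master_mirroring;
-```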
-
-## <a id="ha_dual_clusters"></a>Dual Clusters 
-
-You can add another level of redundancy to your deployment by maintaining two HAWQ clusters, both storing the same data.
-
-The two main methods for keeping data synchronized on dual clusters are "dual ETL" and "backup/restore."
-
-Dual ETL provides a complete standby cluster with the same data as the primary cluster. ETL (extract, transform, and load) refers to the process of cleansing, transforming, validating, and loading incoming data into a data warehouse. With dual ETL, this process is executed twice in parallel, once on each cluster, and is validated each time. It also allows data to be queried on both clusters, doubling the query throughput.
-
-Applications can take advantage of both clusters and also ensure that the ETL is successful and validated on both clusters.
-
-To maintain a dual cluster with the backup/restore method, create backups of the primary cluster and restore them on the secondary cluster. This method takes longer to synchronize data on the secondary cluster than the dual ETL strategy, but requires less application logic to be developed. Populating a second cluster with backups is ideal in use cases where data modifications and ETL are performed daily or less frequently.
-
-See [Backing Up and Restoring HAWQ](BackingUpandRestoringHAWQDatabases.html) for instructions on how to backup and restore HAWQ.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/MasterMirroring.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/MasterMirroring.html.md.erb b/admin/MasterMirroring.html.md.erb
deleted file mode 100644
index b9352f0..0000000
--- a/admin/MasterMirroring.html.md.erb
+++ /dev/null
@@ -1,144 +0,0 @@
----
-title: Using Master Mirroring
----
-
-There are two masters in a HAWQ cluster: a primary master and a standby master. Clients connect to the primary master, and queries can be executed only on the primary master.
-
-You deploy a backup or mirror of the master instance on a separate host machine from the primary master so that the cluster can tolerate a single host failure. A backup master or standby master serves as a warm standby if the primary master becomes non-operational. You create a standby master from the primary master while the primary is online.
-
-The primary master continues to provide services to users while HAWQ takes a transactional snapshot of the primary master instance. In addition to taking a transactional snapshot and deploying it to the standby master, HAWQ also records changes to the primary master. After HAWQ deploys the snapshot to the standby master, HAWQ deploys the updates to synchronize the standby master with the primary master.
-
-After the primary master and standby master are synchronized, HAWQ keeps the standby master up to date using walsender and walreceiver, write-ahead log (WAL)-based replication processes. The walreceiver is a standby master process. The walsender process is a primary master process. The two processes use WAL-based streaming replication to keep the primary and standby masters synchronized.
-
-Since the master does not house user data, only system catalog tables are synchronized between the primary and standby masters. When these tables are updated, changes are automatically copied to the standby master to keep it current with the primary.
-
-*Figure 1: Master Mirroring in HAWQ*
-
-![](../mdimages/standby_master.jpg)
-
-
-If the primary master fails, the replication process stops, and an administrator can activate the standby master. Upon activation of the standby master, the replicated logs reconstruct the state of the primary master at the time of the last successfully committed transaction. The activated standby then functions as the HAWQ master, accepting connections on the port specified when the standby master was initialized.
-
-If the master fails, the administrator uses command line tools or Ambari to instruct the standby master to take over as the new primary master. 
-
-**Tip:** You can configure a virtual IP address for the master and standby so that client programs do not have to switch to a different network address when the 'active' master changes. If the master host fails, the virtual IP address can be swapped to the actual acting master.
-
-##Configuring Master Mirroring <a id="standby_master_configure"></a>
-
-You can configure a new HAWQ system with a standby master during HAWQ's installation process, or you can add a standby master later. This topic assumes you are adding a standby master to an existing node in your HAWQ cluster.
-
-###Add a standby master to an existing system
-
-1. Ensure the host machine for the standby master has been installed with HAWQ and configured accordingly:
-    * The gpadmin system user has been created.
-    * HAWQ binaries are installed.
-    * HAWQ environment variables are set.
-    * SSH keys have been exchanged.
-    * The HAWQ master data directory has been created.
-
-2. Initialize the HAWQ master standby:
-
-    a. If you use Ambari to manage your cluster, follow the instructions in [Adding a HAWQ Standby Master](ambari-admin.html#amb-add-standby).
-
-    b. If you do not use Ambari, log in to the HAWQ master and re-initialize the HAWQ master standby node:
- 
-    ``` shell
-    $ ssh gpadmin@<hawq_master>
-    hawq_master$ . /usr/local/hawq/greenplum_path.sh
-    hawq_master$ hawq init standby -s <new_standby_master>
-    ```
-
-    where \<new\_standby\_master\> identifies the hostname of the standby master.
-
-3. Check the status of master mirroring by querying the `gp_master_mirroring` system view. See [Checking on the State of Master Mirroring](#standby_check) for instructions.
-
-4. To activate or failover to the standby master, see [Failing Over to a Standby Master](#standby_failover).
-
-##Failing Over to a Standby Master<a id="standby_failover"></a>
-
-If the primary master fails, log replication stops. You must explicitly activate the standby master in this circumstance.
-
-Upon activation of the standby master, HAWQ reconstructs the state of the master at the time of the last successfully committed transaction.
-
-###To activate the standby master
-
-1. Ensure that a standby master host has been configured for the system.
-
-2. Activate the standby master:
-
-    a. If you use Ambari to manage your cluster, follow the instructions in [Activating the HAWQ Standby Master](ambari-admin.html#amb-activate-standby).
-
-    b. If you do not use Ambari, log in to the HAWQ master and activate the HAWQ master standby node:
-
-	``` shell
-	hawq_master$ hawq activate standby
- 	```
-   After you activate the standby master, it becomes the active or primary master for the HAWQ cluster.
-
-3. (Optional, but recommended.) Configure a new standby master. See [Add a standby master to an existing system](#standby_master_configure) for instructions.
-	
-4. Check the status of the HAWQ cluster by executing the following command on the master:
-
-	```shell
-	hawq_master$ hawq state
-	```
-	
-	The newly-activated master's status should be **Active**. If you configured a new standby master, its status is **Passive**. When a standby master is not configured, the command displays `-No entries found`, the message indicating that no standby master instance is configured.
-
-5. Query the `gp_segment_configuration` table to verify that segments have registered themselves to the new master:
-
-    ``` shell
-    hawq_master$ psql dbname -c 'SELECT * FROM gp_segment_configuration;'
-    ```
-	
-6. Finally, check the status of master mirroring by querying the `gp_master_mirroring` system view. See [Checking on the State of Master Mirroring](#standby_check) for instructions.
-
-
-##Checking on the State of Master Mirroring <a id="standby_check"></a>
-
-To check on the status of master mirroring, query the `gp_master_mirroring` system view. This view provides information about the walsender process used for HAWQ master mirroring. 
-
-```shell
-hawq_master$ psql dbname -c 'SELECT * FROM gp_master_mirroring;'
-```
-
-If a standby master has not been set up for the cluster, you will see the following output:
-
-```
- summary_state  | detail_state | log_time | error_message
-----------------+--------------+----------+---------------
- Not Configured |              |          | 
-(1 row)
-```
-
-If the standby is configured and in sync with the master, you will see output similar to the following:
-
-```
- summary_state | detail_state | log_time               | error_message
----------------+--------------+------------------------+---------------
- Synchronized  |              | 2016-01-22 21:53:47+00 |
-(1 row)
-```
-
-##Resynchronizing Standby with the Master <a id="resync_master"></a>
-
-The standby can become out-of-date if the log synchronization process between the master and standby has stopped or has fallen behind. If this occurs, you will observe output similar to the following after querying the `gp_master_mirroring` view:
-
-```
-   summary_state  | detail_state | log_time               | error_message
-------------------+--------------+------------------------+---------------
- Not Synchronized |              |                        |
-(1 row)
-```
-
-To resynchronize the standby with the master:
-
-1. If you use Ambari to manage your cluster, follow the instructions in [Removing the HAWQ Standby Master](ambari-admin.html#amb-remove-standby).
-
-2. If you do not use Ambari, execute the following command on the HAWQ master:
-
-    ```shell
-    hawq_master$ hawq init standby -n
-    ```
-
-    This command stops and restarts the master and then synchronizes the standby.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/RecommendedMonitoringTasks.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/RecommendedMonitoringTasks.html.md.erb b/admin/RecommendedMonitoringTasks.html.md.erb
deleted file mode 100644
index 5083b44..0000000
--- a/admin/RecommendedMonitoringTasks.html.md.erb
+++ /dev/null
@@ -1,259 +0,0 @@
----
-title: Recommended Monitoring and Maintenance Tasks
----
-
-This section lists monitoring and maintenance activities recommended to ensure high availability and consistent performance of your HAWQ cluster.
-
-The tables in the following sections suggest activities that a HAWQ System Administrator can perform periodically to ensure that all components of the system are operating optimally. Monitoring activities help you to detect and diagnose problems early. Maintenance activities help you to keep the system up-to-date and avoid deteriorating performance, for example, from bloated system tables or diminishing free disk space.
-
-It is not necessary to implement all of these suggestions in every cluster; use the frequency and severity recommendations as a guide to implement measures according to your service requirements.
-
-## <a id="drr_5bg_rp"></a>Database State Monitoring Activities 
-
-<table>
-  <tr>
-    <th>Activity</th>
-    <th>Procedure</th>
-    <th>Corrective Actions</th>
-  </tr>
-  <tr>
-    <td><p>List segments that are currently down. If any rows are returned, this should generate a warning or alert.</p>
-    <p>Recommended frequency: run every 5 to 10 minutes</p><p>Severity: IMPORTANT</p></td>
-    <td>Run the following query in the `postgres` database:
-    <pre><code>SELECT * FROM gp_segment_configuration
-WHERE status <> 'u';
-</code></pre>
-  </td>
-  <td>If the query returns any rows, follow these steps to correct the problem:
-  <ol>
-    <li>Verify that the hosts with down segments are responsive.</li>
-    <li>If hosts are OK, check the pg_log files for the down segments to discover the root cause of the segments going down.</li>
-    </ol>
-    </td>
-    </tr>
-  <tr>
-    <td>
-      <p>Run a distributed query to test that it runs on all segments. One row should be returned for each segment.</p>
-      <p>Recommended frequency: run every 5 to 10 minutes</p>
-      <p>Severity: CRITICAL</p>
-    </td>
-    <td>
-      <p>Execute the following query in the `postgres` database:</p>
-      <pre><code>SELECT gp_segment_id, count(&#42;)
-FROM gp_dist_random('pg_class')
-GROUP BY 1;
-</code></pre>
-  </td>
-  <td>If this query fails, there is an issue dispatching to some segments in the cluster. This is a rare event. Check the hosts that are not able to be dispatched to ensure there is no hardware or networking issue.</td>
-  </tr>
-  <tr>
-    <td>
-      <p>Perform a basic check to see if the master is up and functioning.</p>
-      <p>Recommended frequency: run every 5 to 10 minutes</p>
-      <p>Severity: CRITICAL</p>
-    </td>
-    <td>
-      <p>Run the following query in the `postgres` database:</p>
-      <pre><code>SELECT count(&#42;) FROM gp_segment_configuration;</code></pre>
-    </td>
-    <td>
-      <p>If this query fails, the active master may be down. Try again several times and then inspect the active master manually. If the active master is down, reboot or power cycle the active master to ensure no processes remain on it, and then trigger the activation of the standby master.</p>
-    </td>
-  </tr>
-</table>
-
-## <a id="topic_y4c_4gg_rp"></a>Hardware and Operating System Monitoring 
-
-<table>
-  <tr>
-    <th>Activity</th>
-    <th>Procedure</th>
-    <th>Corrective Actions</th>
-  </tr>
-  <tr>
-    <td>
-      <p>Check the underlying hardware platform for required maintenance or for systems that are down.</p>
-      <p>Recommended frequency: real-time, if possible, or every 15 minutes</p>
-      <p>Severity: CRITICAL</p>
-    </td>
-    <td>
-      <p>Set up system check for hardware and OS errors.</p>
-    </td>
-    <td>
-      <p>If required, remove a machine from the HAWQ cluster to resolve hardware and OS issues, then add it back to the cluster.</p>
-    </td>
-  </tr>
-  <tr>
-    <td>
-      <p>Check disk space usage on volumes used for HAWQ data storage and the OS. Recommended frequency: every 5 to 30 minutes</p>
-      <p>Severity: CRITICAL</p>
-    </td>
-    <td>
-      <p>Set up a disk space check.</p>
-      <ul>
-        <li>Set a threshold to raise an alert when a disk reaches a percentage of capacity. The recommended threshold is 75% full.</li>
-        <li>It is not recommended to run the system with capacities approaching 100%.</li>
-      </ul>
-    </td>
-    <td>
-      <p>Free space on the system by removing some data or files.</p>
-    </td>
-  </tr>
-  <tr>
-    <td>
-      <p>Check for errors or dropped packets on the network interfaces.</p>
-      <p>Recommended frequency: hourly</p>
-      <p>Severity: IMPORTANT</p>
-    </td>
-    <td>
-      <p>Set up network interface checks.</p>
-    </td>
-    <td>
-      <p>Work with network and OS teams to resolve errors.</p>
-    </td>
-  </tr>
-  <tr>
-    <td>
-      <p>Check for RAID errors or degraded RAID performance.</p>
-      <p>Recommended frequency: every 5 minutes</p>
-      <p>Severity: CRITICAL</p>
-    </td>
-    <td>
-      <p>Set up a RAID check.</p>
-    </td>
-    <td>
-      <ul>
-        <li>Replace failed disks as soon as possible.</li>
-        <li>Work with system administration team to resolve other RAID or controller errors as soon as possible.</li>
-      </ul>
-    </td>
-  </tr>
-  <tr>
-    <td>
-      <p>Check for adequate I/O bandwidth and I/O skew.</p>
-      <p>Recommended frequency: when creating a cluster or when hardware issues are suspected.</p>
-    </td>
-    <td>
-      <p>Run the `hawq checkperf` utility.</p>
-    </td>
-    <td>
-      <p>The cluster may be under-specified if data transfer rates are not similar to the following:</p>
-      <ul>
-        <li>2 GB per second disk read</li>
-        <li>1 GB per second disk write</li>
-        <li>10 Gigabit per second network read and write</li>
-      </ul>
-      <p>If transfer rates are lower than expected, consult with your data architect regarding performance expectations.</p>
-      <p>If the machines on the cluster display an uneven performance profile, work with the system administration team to fix faulty machines.</p>
-    </td>
-  </tr>
-</table>
-
-## <a id="maintentenance_check_scripts"></a>Data Maintenance 
-
-<table>
-  <tr>
-    <th>Activity</th>
-    <th>Procedure</th>
-    <th>Corrective Actions</th>
-  </tr>
-  <tr>
-    <td>Check for missing statistics on tables.</td>
-    <td>Check the `hawq_stats_missing` view in each database:
-    <pre><code>SELECT * FROM hawq_toolkit.hawq_stats_missing;</code></pre>
-    </td>
-    <td>Run <code>ANALYZE</code> on tables that are missing statistics.</td>
-  </tr>
-</table>
-
-## <a id="topic_dld_23h_rp"></a>Database Maintenance 
-
-<table>
-  <tr>
-    <th>Activity</th>
-    <th>Procedure</th>
-    <th>Corrective Actions</th>
-  </tr>
-  <tr>
-    <td>
-      <p>Mark deleted rows in HAWQ system catalogs (tables in the `pg_catalog` schema) so that the space they occupy can be reused.</p>
-      <p>Recommended frequency: daily</p>
-      <p>Severity: CRITICAL</p>
-    </td>
-    <td>
-      <p>Vacuum each system catalog:</p>
-      <pre><code>VACUUM &lt;<i>table</i>&gt;;</code></pre>
-    </td>
-    <td>Vacuum system catalogs regularly to prevent bloating.</td>
-  </tr>
-  <tr>
-    <td>
-    <p>Vacuum all system catalogs (tables in the <code>pg_catalog</code> schema) that are approaching <a href="../reference/guc/parameter_definitions.html">vacuum_freeze_min_age</a>.</p>
-    <p>Recommended frequency: daily</p>
-    <p>Severity: CRITICAL</p>
-    </td>
-    <td>
-      <p>Vacuum an individual system catalog table:</p>
-      <pre><code>VACUUM &lt;<i>table</i>&gt;;</code></pre>
-    </td>
-    <td>After the <a href="../reference/guc/parameter_definitions.html">vacuum_freeze_min_age</a> value is reached, VACUUM will no longer replace transaction IDs with <code>FrozenXID</code> while scanning a table. Perform vacuum on these tables before the limit is reached.</td>
-  </tr>
-  <tr>
-    <td>
-      <p>Update table statistics.</p>
-      <p>Recommended frequency: after loading data and before executing queries</p>
-      <p>Severity: CRITICAL</p>
-    </td>
-    <td>
-      <p>Analyze user tables:</p>
-      <pre><code>ANALYZEDB -d &lt;<i>database</i>&gt; -a</code></pre>
-    </td>
-    <td>Analyze updated tables regularly so that the optimizer can produce efficient query execution plans.</td>
-  </tr>
-  <tr>
-    <td>
-      <p>Backup the database data.</p>
-      <p>Recommended frequency: daily, or as required by your backup plan</p>
-      <p>Severity: CRITICAL</p>
-    </td>
-    <td>See <a href="BackingUpandRestoringHAWQDatabases.html">Backing Up and Restoring HAWQ</a> for a discussion of backup procedures.</td>
-    <td>Best practice is to have a current backup ready in case the database must be restored.</td>
-  </tr>
-  <tr>
-    <td>
-      <p>Vacuum system catalogs (tables in the <code>pg_catalog</code> schema) to maintain an efficient catalog.</p>
-      <p>Recommended frequency: weekly, or more often if database objects are created and dropped frequently</p>
-    </td>
-    <td>
-      <p><code>VACUUM</code> the system tables in each database.</p>
-    </td>
-    <td>The optimizer retrieves information from the system tables to create query plans. If system tables and indexes are allowed to become bloated over time, scanning the system tables increases query execution time.</td>
-  </tr>
-</table>
-
-## <a id="topic_idx_smh_rp"></a>Patching and Upgrading 
-
-<table>
-  <tr>
-    <th>Activity</th>
-    <th>Procedure</th>
-    <th>Corrective Actions</th>
-  </tr>
-  <tr>
-    <td>
-      <p>Ensure any bug fixes or enhancements are applied to the kernel.</p>
-      <p>Recommended frequency: at least every 6 months</p>
-      <p>Severity: IMPORTANT</p>
-    </td>
-    <td>Follow the vendor's instructions to update the Linux kernel.</td>
-    <td>Keep the kernel current to include bug fixes and security fixes, and to avoid difficult future upgrades.</td>
-  </tr>
-  <tr>
-    <td>
-      <p>Install HAWQ minor releases.</p>
-      <p>Recommended frequency: quarterly</p>
-      <p>Severity: IMPORTANT</p>
-    </td>
-    <td>Always upgrade to the latest in the series.</td>
-    <td>Keep the HAWQ software current to incorporate bug fixes, performance enhancements, and feature enhancements into your HAWQ cluster.</td>
-  </tr>
-</table>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/RunningHAWQ.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/RunningHAWQ.html.md.erb b/admin/RunningHAWQ.html.md.erb
deleted file mode 100644
index c7de1d5..0000000
--- a/admin/RunningHAWQ.html.md.erb
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Running a HAWQ Cluster
----
-
-This section provides information for system administrators responsible for administering a HAWQ deployment.
-
-You should have some knowledge of Linux/UNIX system administration, database management systems, database administration, and structured query language \(SQL\) to administer a HAWQ cluster. Because HAWQ is based on PostgreSQL, you should also have some familiarity with PostgreSQL. The HAWQ documentation calls out similarities between HAWQ and PostgreSQL features throughout.
-
-## <a id="hawq_users"></a>HAWQ Users
-
-HAWQ supports users with both administrative and operating privileges. The HAWQ administrator may choose to manage the HAWQ cluster using either Ambari or the command line. [Managing HAWQ Using Ambari](../admin/ambari-admin.html) provides Ambari-specific HAWQ cluster administration procedures. [Starting and Stopping HAWQ](startstop.html), [Expanding a Cluster](ClusterExpansion.html), and [Removing a Node](ClusterShrink.html) describe specific command-line-managed HAWQ cluster administration procedures. Other topics in this guide are applicable to both Ambari- and command-line-managed HAWQ clusters.
-
-The default HAWQ administrator user is named `gpadmin`. The HAWQ admin may choose to assign administrative and/or operating HAWQ privileges to additional users.  Refer to [Configuring Client Authentication](../clientaccess/client_auth.html) and [Managing Roles and Privileges](../clientaccess/roles_privs.html) for additional information about HAWQ user configuration.
-
-## <a id="hawq_systems"></a>HAWQ Deployment Systems
-
-A typical HAWQ deployment includes single HDFS and HAWQ master and standby nodes and multiple HAWQ segment and HDFS data nodes. The HAWQ cluster may also include systems running the HAWQ Extension Framework (PXF) and other Hadoop services. Refer to [HAWQ Architecture](../overview/HAWQArchitecture.html) and [Select HAWQ Host Machines](../install/select-hosts.html) for information about the different systems in a HAWQ deployment and how they are configured.
-
-
-## <a id="hawq_env_databases"></a>HAWQ Databases
-
-[Creating and Managing Databases](../ddl/ddl-database.html) and [Creating and Managing Tables](../ddl/ddl-table.html) describe HAWQ database and table creation commands.
-
-You manage HAWQ databases at the command line using the [psql](../reference/cli/client_utilities/psql.html) utility, an interactive front-end to the HAWQ database. Configuring client access to HAWQ databases and tables may require information related to [Establishing a Database Session](../clientaccess/g-establishing-a-database-session.html).
-
-[HAWQ Database Drivers and APIs](../clientaccess/g-database-application-interfaces.html) identifies supported HAWQ database drivers and APIs for additional client access methods.
-
-## <a id="hawq_env_data"></a>HAWQ Data
-
-HAWQ internal data resides in HDFS. You may require access to data in different formats and locations in your data lake. You can use HAWQ and the HAWQ Extension Framework (PXF) to access and manage both internal and this external data:
-
-- [Managing Data with HAWQ](../datamgmt/dml.html) discusses the basic data operations and details regarding the loading and unloading semantics for HAWQ internal tables.
-- [Using PXF with Unmanaged Data](../pxf/HawqExtensionFrameworkPXF.html) describes PXF, an extensible framework you may use to query data external to HAWQ.
-
-## <a id="hawq_env_setup"></a>HAWQ Operating Environment
-
-Refer to [Introducing the HAWQ Operating Environment](setuphawqopenv.html) for a discussion of the HAWQ operating environment, including a procedure to set up the HAWQ environment. This section also provides an introduction to the important files and directories in a HAWQ installation.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/ambari-admin.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/ambari-admin.html.md.erb b/admin/ambari-admin.html.md.erb
deleted file mode 100644
index a5b2169..0000000
--- a/admin/ambari-admin.html.md.erb
+++ /dev/null
@@ -1,439 +0,0 @@
----
-title: Managing HAWQ Using Ambari
----
-
-Ambari provides an easy interface for performing some of the most common HAWQ and PXF administration tasks.
-
-## <a id="amb-yarn"></a>Integrating YARN for Resource Management
-
-HAWQ supports integration with YARN for global resource management. In a YARN managed environment, HAWQ can request resources (containers) dynamically from YARN, and return resources when HAWQ's workload is not heavy.
-
-See also [Integrating YARN with HAWQ](../resourcemgmt/YARNIntegration.html) for command-line instructions and additional details about using HAWQ with YARN.
-
-### When to Perform
-
-Follow this procedure if you have already installed YARN and HAWQ, but you are currently using the HAWQ Standalone mode (not YARN) for resource management. This procedure helps you configure YARN and HAWQ so that HAWQ uses YARN for resource management. This procedure assumes that you will use the default YARN queue for managing HAWQ.
-
-### Procedure
-
-1.  Access the Ambari web console at http://ambari.server.hostname:8080, and login as the "admin" user. \(The default password is also "admin".\)
-2.  Select **HAWQ** from the list of installed services.
-3.  Select the **Configs** tab, then the **Settings** tab.
-4.  Use the **Resource Manager** menu to select the **YARN** option.
-5.  Click **Save**.<br/><br/>HAWQ will use the default YARN queue, and Ambari automatically configures settings for `hawq_rm_yarn_address`, `hawq_rm_yarn_app_name`, and `hawq_rm_yarn_scheduler_address` in the `hawq-site.xml` file.<br/><br/>If YARN HA was enabled, Ambari also automatically configures the `yarn.resourcemanager.ha` and `yarn.resourcemanager.scheduler.ha` properties in `yarn-site.xml`.
-6.  If you are using HDP 2.3, follow these additional instructions:
-    1. Select **YARN** from the list of installed services.
-    2. Select the **Configs** tab, then the **Advanced** tab.
-    3. Expand the **Advanced yarn-site** section.
-    4. Locate the `yarn.resourcemanager.system-metrics-publisher.enabled` property and change its value to `false`.
-    5. Click **Save**.
-7.  (Optional.)  When HAWQ is integrated with YARN and has no workload, HAWQ does not acquire any resources right away. HAWQ's resource manager only requests resources from YARN when HAWQ receives its first query request. In order to guarantee optimal resource allocation for subsequent queries and to avoid frequent YARN resource negotiation, you can adjust `hawq_rm_min_resource_perseg` so HAWQ receives at least some number of YARN containers per segment regardless of the size of the initial query. The default value is 2, which means HAWQ's resource manager acquires at least 2 YARN containers for each segment even if the first query's resource request is small.<br/><br/>This configuration property cannot exceed the capacity of HAWQ's YARN queue. For example, if HAWQ's queue capacity in YARN is no more than 50% of the whole cluster, and each YARN node has a maximum of 64GB memory and 16 vcores, then `hawq_rm_min_resource_perseg` in HAWQ cannot be set to more than 8 since HAWQ's resource manager acquires YARN containers by vcore. In the case above, the HAWQ resource manager acquires a YARN container quota of 4GB memory and 1 vcore.<br/><br/>To change this parameter, expand **Custom hawq-site** and click **Add Property ...** Then specify `hawq_rm_min_resource_perseg` as the key and enter the desired value. Click **Add** to add the property definition.
-8.  (Optional.)  If the level of HAWQ's workload is lowered, HAWQ's resource manager may hold some idle YARN resources. You can adjust `hawq_rm_resource_idle_timeout` to let the HAWQ resource manager return idle resources more quickly or more slowly.<br/><br/>For example, when HAWQ's resource manager has to reacquire resources, it can cause latency for query resource requests. To let the HAWQ resource manager retain resources longer in anticipation of an upcoming workload, increase the value of `hawq_rm_resource_idle_timeout`. The default value of `hawq_rm_resource_idle_timeout` is 300 seconds.<br/><br/>To change this parameter, expand **Custom hawq-site** and click **Add Property ...** Then specify `hawq_rm_resource_idle_timeout` as the key and enter the desired value. Click **Add** to add the property definition. A sketch of the resulting `hawq-site.xml` entries appears after this procedure.
-9.  Click **Save** to save your configuration changes.
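-
-For reference, the two optional parameters described in steps 7 and 8 might appear in `hawq-site.xml` as follows once Ambari applies them (a sketch; the values shown are simply the defaults discussed above):
-
-```xml
-<property>
-    <name>hawq_rm_min_resource_perseg</name>
-    <value>2</value>
-</property>
-
-<property>
-    <name>hawq_rm_resource_idle_timeout</name>
-    <value>300</value>
-</property>
-```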
-
-## <a id="move_yarn_rm"></a>Moving a YARN Resource Manager
-
-If you are using YARN to manage HAWQ resources and need to move a YARN resource manager, then you must update your HAWQ configuration.
-
-### When to Perform
-
-Use one of the following procedures to move YARN resource manager component from one node to another when HAWQ is configured to use YARN as the global resource manager (`hawq_global_rm_type` is `yarn`). The exact procedure you should use depends on whether you have enabled high availability in YARN.
-
-**Note:** In a Kerberos-secured environment, you must update the <code>hadoop.proxyuser.yarn.hosts</code> property in HDFS <code>core-site.xml</code> before running a service check. The values should be set to the current YARN Resource Managers.
-
-### Procedure (Single YARN Resource Manager)
-
-1. Access the Ambari web console at http://ambari.server.hostname:8080, and login as the "admin" user. \(The default password is also "admin".\)
-1. Click **YARN** in the list of installed services.
-1. Select **Move ResourceManager**, and complete the steps in the Ambari wizard to move the Resource Manager to a new host.
-1. After moving the Resource Manager successfully in YARN, click **HAWQ** in the list of installed services.
-1. On the HAWQ **Configs** page, select the **Advanced** tab.
-1. Under Advanced hawq-site section, update the following HAWQ properties:
-   - `hawq_rm_yarn_address`. Enter the same value defined in the `yarn.resourcemanager.address` property of `yarn-site.xml`.
-   - `hawq_rm_yarn_scheduler_address`. Enter the same value defined in the `yarn.resourcemanager.scheduler.address` property of `yarn-site.xml`. (A sketch of these entries appears after this procedure.)
-1. Restart all HAWQ components so that the configurations get updated on all HAWQ hosts.
-1. Run HAWQ Service Check, as described in [Performing a HAWQ Service Check](#amb-service-check), to ensure that HAWQ is operating properly.
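-
-For reference, the updated properties in `hawq-site.xml` might look like the following (a sketch; `rm-host.mycompany.com` and the port numbers are placeholders for the actual values from your `yarn-site.xml`):
-
-```xml
-<property>
-    <name>hawq_rm_yarn_address</name>
-    <value>rm-host.mycompany.com:8050</value>
-</property>
-
-<property>
-    <name>hawq_rm_yarn_scheduler_address</name>
-    <value>rm-host.mycompany.com:8030</value>
-</property>
-```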
-
-### Procedure (Highly Available YARN Resource Managers)
-
-1. Access the Ambari web console at http://ambari.server.hostname:8080, and login as the "admin" user. \(The default password is also "admin".\)
-1. Click **YARN** in the list of installed services.
-1. Select **Move ResourceManager**, and complete the steps in the Ambari wizard to move the Resource Manager to a new host.
-1. After moving the Resource Manager successfully in YARN, click **HAWQ** in the list of installed services.
-1. On the HAWQ **Configs** page, select the **Advanced** tab.
-1. Under `Custom yarn-client` section, update the HAWQ properties `yarn.resourcemanager.ha` and `yarn.resourcemanager.scheduler.ha`. These parameter values should be updated to match the corresponding parameters for the YARN service. Check the values under **ResourceManager hosts** in the **Resource Manager** section of the **Advanced** configurations for the YARN service.
-1. Restart all HAWQ components so that the configuration change is updated on all HAWQ hosts. You can ignore the warning about the values of `hawq_rm_yarn_address` and `hawq_rm_yarn_scheduler_address` in `hawq-site.xml` not matching the values in `yarn-site.xml`, and click **Proceed Anyway**.
-1. Run HAWQ Service Check, as described in [Performing a HAWQ Service Check](#amb-service-check), to ensure that HAWQ is operating properly.
-
-
-## <a id="amb-service-check"></a>Performing a HAWQ Service Check
-
-A HAWQ Service check uses the `hawq state` command to display the configuration and status of segment hosts in a HAWQ Cluster. It also performs tests to ensure that HAWQ can write to and read from tables, and to ensure that HAWQ can write to and read from HDFS external tables using PXF.
-
-### When to Perform
-* Execute this procedure immediately after any common maintenance operations, such as adding, activating, or removing the HAWQ Master Standby.
-* Execute this procedure as a first step in troubleshooting problems in accessing HDFS data.
-
-### Procedure
-1.  Access the Ambari web console at http://ambari.server.hostname:8080, and login as the "admin" user. \(The default password is also "admin".\)
-2.  Click **HAWQ** in the list of installed services.
-3. Select **Service Actions > Run Service Check**, then click **OK** to perform the service check.
-
-    Ambari displays the **HAWQ Service Check** task in the list of background operations. If any test fails, then Ambari displays a red error icon next to the task.  
-4. Click the **HAWQ Service Check** task to view the actual log messages that are generated while performing the task. The log messages display the basic configuration and status of HAWQ segments, as well as the results of the HAWQ and PXF tests (if PXF is installed).
-
-5. Click **OK** to dismiss the log messages or list of background tasks.
-
-## <a id="amb-config-check"></a>Performing a Configuration Check
-
-A configuration check determines if operating system parameters on the HAWQ host machines match their recommended settings. You can also perform this procedure from the command line using the `hawq check` command. The `hawq check` command is run against all HAWQ hosts.
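-
-For example, from the HAWQ master you might run the check against a host file (a sketch; `hawq_hosts` is a hypothetical file that lists every HAWQ host, one per line):
-
-```shell
-$ hawq check -f hawq_hosts
-```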
-
-### Procedure
-1.  Access the Ambari web console at http://ambari.server.hostname:8080, and login as the "admin" user. \(The default password is also "admin".\)
-2.  Click **HAWQ** in the list of installed services.
-3. (Optional) Perform this step if you want to view or modify the host configuration parameters that are evaluated during the HAWQ config check:
-   1. Select the **Configs** tab, then select the **Advanced** tab in the settings.
-   1. Expand **Advanced Hawq Check** to view or change the list of parameters that are checked with a `hawq check` command or with the Ambari HAWQ Config check.
-
-         **Note:** All parameter entries are stored in the `/usr/local/hawq/etc/hawq_check.cnf` file. Click the **Set Recommended** button if you want to restore the file to its original contents.
-4. Select **Service Actions > Run HAWQ Config Check**, then click **OK** to perform the configuration check.
-
-    Ambari displays the **Run HAWQ Config Check** task in the list of background operations. If any parameter does not meet the specification defined in `/usr/local/hawq/etc/hawq_check.cnf`, then Ambari displays a red error icon next to the task.  
-5. Click the **Run HAWQ Config Check** task to view the actual log messages that are generated while performing the task. Address any configuration errors on the indicated host machines.
-
-6. Click **OK** to dismiss the log messages or list of background tasks.
-
-## <a id="amb-restart"></a>Performing a Rolling Restart
-Ambari provides the ability to restart a HAWQ cluster by restarting one or more segments at a time until all segments (or all segments with stale configurations) restart. You can specify a delay between restarting segments, and Ambari can stop the process if a specified number of segments fail to restart. Performing a rolling restart in this manner can help ensure that some HAWQ segments are available to service client requests.
-
-**Note:** If you do not need to preserve client connections, you can instead perform a full restart of the entire HAWQ cluster using **Service Actions > Restart All**.
-
-### Procedure
-1.  Access the Ambari web console at http://ambari.server.hostname:8080, and login as the "admin" user. \(The default password is also "admin".\)
-2.  Click **HAWQ** in the list of installed services.
-3.  Select **Service Actions > Restart HAWQ Segments**.
-4. In the Restart HAWQ Segments page:
-   * Specify the number of segments that you want Ambari to restart at a time.
-   * Specify the number of seconds Ambari should wait before restarting the next batch of HAWQ segments.
-   * Specify the number of restart failures that may occur before Ambari stops the rolling restart process.
-   * Select **Only restart HAWQ Segments with stale configs** if you want to limit the restart process to those hosts.
-   * Select **Turn On Maintenance Mode for HAWQ** to enable maintenance mode before starting the rolling restart process. This suppresses alerts that are normally generated when a segment goes offline.
-5. Click **Trigger Rolling Restart** to begin the restart process.
-
-   Ambari displays the **Rolling Restart of HAWQ segments** task in the list of background operations, and indicates the current batch of segments that it is restarting. Click the name of the task to view the log messages generated during the restart. If any segment fails to restart, Ambari displays a red warning icon next to the task.
-
-## <a id="bulk-lifecycle"></a>Performing Host-Level Actions on HAWQ Segment and PXF Hosts
-
-Ambari host-level actions enable you to perform actions on one or more hosts in the cluster at once. With HAWQ clusters, you can apply the **Start**, **Stop**, or **Restart** actions to one or more HAWQ segment hosts or PXF hosts. Using the host-level actions saves you the trouble of accessing individual hosts in Ambari and applying service actions one-by-one.
-
-### When to Perform
-*  Use the Ambari host-level actions when you have a large number of hosts in your cluster and you want to start, stop, or restart all HAWQ segment hosts or all PXF hosts as part of regularly-scheduled maintenance.
-
-### Procedure
-1.  Access the Ambari web console at http://ambari.server.hostname:8080, and login as the "admin" user. \(The default password is also "admin".\)
-2.  Select the **Hosts** tab at the top of the screen to display a list of all hosts in the cluster.
-3.  To apply a host-level action to all HAWQ segment hosts or PXF hosts, select an action using the applicable menu:
-    *  **Actions > Filtered Hosts > HAWQ Segments >** [ **Start** | **Stop** |  **Restart** ]
-    *  **Actions > Filtered Hosts > PXF Hosts >** [ **Start** | **Stop** |  **Restart** ]
-4.  To apply a host-level action to a subset of HAWQ segments or PXF hosts:
-    1.  Filter the list of available hosts using one of the filter options:
-        *  **Filter > HAWQ Segments**
-        *  **Filter > PXF Hosts**
-    2.  Use the check boxes to select the hosts to which you want to apply the action.
-    3.  Select **Actions > Selected Hosts >** [ **Start** | **Stop** |  **Restart** ] to apply the action to your selected hosts.
-
-
-## <a id="amb-expand"></a>Expanding the HAWQ Cluster
-
-Apache HAWQ supports dynamic node expansion. You can add segment nodes while HAWQ is running without having to suspend or terminate cluster operations.
-
-### Guidelines for Cluster Expansion
-
-This topic provides some guidelines around expanding your HAWQ cluster.
-
-There are several recommendations to keep in mind when modifying the size of your running HAWQ cluster:
-
--  When you add a new node, install both a DataNode and a HAWQ segment on the new node.  If you are using YARN to manage HAWQ resources, you must also configure a YARN NodeManager on the new node.
--  After adding a new node, you should always rebalance HDFS data to maintain cluster performance.
--  Adding or removing a node also necessitates an update to the HDFS metadata cache. This update will happen eventually, but can take some time. To speed the update of the metadata cache, select the **Service Actions > Clear HAWQ's HDFS Metadata Cache** option in Ambari.
--  Note that for hash distributed tables, expanding the cluster does not immediately improve performance, because hash distributed tables use a fixed number of virtual segments. To obtain better performance with hash distributed tables, you must redistribute the table data across the updated cluster using either the [ALTER TABLE](../reference/sql/ALTER-TABLE.html) or the [CREATE TABLE AS](../reference/sql/CREATE-TABLE-AS.html) command (see the sketch following this list).
--  If you are using hash tables, consider updating the `default_hash_table_bucket_number` server configuration parameter to a larger value after expanding the cluster but before redistributing the hash tables.
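-
-As a sketch of the redistribution step referenced above (the database, table, and column names are hypothetical), you can lay out a hash-distributed table's rows across the expanded cluster by re-creating it with `CREATE TABLE AS` and then swapping the table names:
-
-```shell
-$ psql -d mydb <<'EOF'
--- mydb, sales, and sale_id are placeholder names used only for illustration
-CREATE TABLE sales_redist AS SELECT * FROM sales DISTRIBUTED BY (sale_id);
-ALTER TABLE sales RENAME TO sales_old;
-ALTER TABLE sales_redist RENAME TO sales;
-EOF
-```
-
-Because `CREATE TABLE AS` copies only the data, privileges and other table properties are not carried over; treat this as an outline, or use [ALTER TABLE](../reference/sql/ALTER-TABLE.html) where that better fits your tables.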
-
-### Procedure
-First, ensure that the new node(s) have been configured per the instructions found in [Apache HAWQ System Requirements](../requirements/system-requirements.html) and [Select HAWQ Host Machines](../install/select-hosts.html).
-
-1.  If you have any user-defined function (UDF) libraries installed in your existing HAWQ cluster, install them on the new node(s) that you want to add to the HAWQ cluster.
-2.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
-3.  Click **HAWQ** in the list of installed services.
-4.  Select the **Configs** tab, then select the **Advanced** tab in the settings.
-5.  Expand the **General** section, and ensure that the **Exchange SSH Keys** property (`hawq_ssh_keys`) is set to `true`.  Change this property to `true` if needed, and click **Save** to continue. Ambari must be able to exchange SSH keys with any hosts that you add to the cluster in the following steps.
-6.  Select the **Hosts** tab at the top of the screen to display the Hosts summary.
-7.  If the host(s) that you want to add are not currently listed in the Hosts summary page, follow these steps:
-    1. Select **Actions > Add New Hosts** to start the Add Host Wizard.
-    2. Follow the initial steps of the Add Host Wizard to identify the new host, specify SSH keys or manually register the host, and confirm the new host(s) to add.
-
-         See [Set Up Password-less SSH](http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.1/bk_Installing_HDP_AMB/content/_set_up_password-less_ssh.html) in the HDP documentation if you need more information about performing these tasks.
-    3. When you reach the Assign Slaves and Clients page, ensure that the **DataNode**, **HAWQ Segment**, and **PXF** (if the PXF service is installed) components are selected. Select additional components as necessary for your cluster.
-    4. Complete the wizard to add the new host and install the selected components.
-8. If the host(s) that you want to add already appear in the Hosts summary, follow these steps:
-   1. Click the hostname that you want to add to the HAWQ cluster from the list of hosts.
-   2. In the Components summary, ensure that the host already runs the DataNode component. If it does not, select **Add > DataNode** and then click **Confirm Add**.  Click **OK** when the task completes.
-   3. In the Components summary, select **Add > HAWQ Segment**.
-   4. Click **Confirm Add** to acknowledge the component to add. Click **OK** when the task completes.
-   5. In the Components summary, select **Add > PXF**.
-   6. Click **Confirm Add** to acknowledge the component to add. Click **OK** when the task completes.
-17. (Optional) If you are using hash tables, adjust the **Default buckets for Hash Distributed tables** setting (`default_hash_table_bucket_number`) on the HAWQ service's **Configs > Settings** tab. Update this property's value by multiplying the new number of nodes in the cluster by the appropriate factor indicated in the table below (a worked example follows the table).
-
-    |Number of Nodes After Expansion|Suggested default\_hash\_table\_bucket\_number value|
-    |---------------|------------------------------------------|
-    |<= 85|6 \* \#nodes|
-    |\> 85 and <= 102|5 \* \#nodes|
-    |\> 102 and <= 128|4 \* \#nodes|
-    |\> 128 and <= 170|3 \* \#nodes|
-    |\> 170 and <= 256|2 \* \#nodes|
-    |\> 256 and <= 512|1 \* \#nodes|
-    |\> 512|512|
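-
-    For example (a hypothetical cluster size), a cluster that grows to 100 nodes falls in the "\> 85 and <= 102" row, so the suggested value would be 5 x 100 = 500. A quick shell check of that arithmetic:
-
-    ```shell
-    NODES=100                # hypothetical node count after expansion
-    echo $(( NODES * 5 ))    # prints 500, the suggested default_hash_table_bucket_number value
-    ```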
-18.  Ambari requires the HAWQ service to be restarted in order to apply the configuration changes. If you need to apply the configuration *without* restarting HAWQ (for dynamic cluster expansion), then you can use the HAWQ CLI commands described in [Manually Updating the HAWQ Configuration](#manual-config-steps) *instead* of following this step.
-    <br/><br/>Use Ambari to stop and then start the HAWQ service so that your configuration changes take effect. Select **Service Actions > Stop**, followed by **Service Actions > Start**, to ensure that the HAWQ Master starts before the newly added segments. During the HAWQ startup, Ambari exchanges SSH keys for the `gpadmin` user and applies the new configuration.
-    >**Note:** Do not use the **Restart All** service action to complete this step.
-19.  Consider the impact of rebalancing HDFS to other components, such as HBase, before you complete this step.
-    <br/><br/>Rebalance your HDFS data by selecting the **HDFS** service and then choosing **Service Actions > Rebalance HDFS**. Follow the Ambari instructions to complete the rebalance action.
-20.  Speed up the clearing of the metadata cache by first selecting the **HAWQ** service and then selecting **Service Actions > Clear HAWQ's HDFS Metadata Cache**.
-21.  If you are using hash distributed tables and wish to take advantage of the performance benefits of using a larger cluster, redistribute the data in all hash-distributed tables by using either the [ALTER TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE AS](../reference/sql/CREATE-TABLE-AS.html) command. You should redistribute the table data if you modified the `default_hash_table_bucket_number` configuration parameter.
-
-    **Note:** The redistribution of table data can take a significant amount of time.
-22.  (Optional.) If you changed the **Exchange SSH Keys** property value before adding the host(s), change the value back to `false` after Ambari exchanges keys with the new hosts. This prevents Ambari from exchanging keys with all hosts every time the HAWQ master is started or restarted.
-
-23.  (Optional.) If you enabled temporary password-based authentication while preparing/configuring your HAWQ host systems, turn off password-based authentication as described in [Apache HAWQ System Requirements](../requirements/system-requirements.html#topic_pwdlessssh).
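-
-After the procedure completes, one optional way to confirm that the newly added segments are up (a sketch; run it as `gpadmin` on the HAWQ Master with the HAWQ environment sourced) is the `hawq state` utility:
-
-```shell
-$ source /usr/local/hawq/greenplum_path.sh
-$ hawq state
-```
-
-The output summarizes master, standby, and segment status; the segment counts should reflect the expanded cluster.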
-
-#### <a id="manual-config-steps"></a>Manually Updating the HAWQ Configuration
-If you need to expand your HAWQ cluster without restarting the HAWQ service, follow these steps to manually apply the new HAWQ configuration. (Use these steps *instead* of the Ambari restart step in the procedure above.)
-
-1.  Update your configuration to use the new `default_hash_table_bucket_number` value that you calculated:
-    1. SSH into the HAWQ master host as the `gpadmin` user:
-
-        ```shell
-        $ ssh gpadmin@<HAWQ_MASTER_HOST>
-        ```
-    2. Source the `greenplum_path.sh` file to update the shell environment:
-
-        ```shell
-        $ source /usr/local/hawq/greenplum_path.sh
-        ```
-    3. Verify the current value of `default_hash_table_bucket_number`:
-
-        ```shell
-        $ hawq config -s default_hash_table_bucket_number
-        ```
-    4. Update `default_hash_table_bucket_number` to the new value that you calculated:
-
-        ```shell
-        $ hawq config -c default_hash_table_bucket_number -v <new_value>
-        ```
-    5. Reload the configuration without restarting the cluster:
-
-        ```shell
-        $ hawq stop cluster -u
-        ```
-    6. Verify that the `default_hash_table_bucket_number` value was updated:
-
-        ```shell
-        $ hawq config -s default_hash_table_bucket_number
-        ```
-2.  Edit the `/usr/local/hawq/etc/slaves` file and add the new HAWQ hostname(s) to the end of the file, one hostname per line. For example, after adding host4 and host5 to a cluster that already contains hosts 1-3, the updated file contents would be:
-
-     ```
-     host1
-     host2
-     host3
-     host4
-     host5
-     ```
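-
-     As an alternative to hand-editing the file (a sketch that reuses the hypothetical hostnames above), you can append the new entries from the shell:
-
-     ```shell
-     $ printf "host4\nhost5\n" >> /usr/local/hawq/etc/slaves
-     ```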
-3.  Continue with the remaining steps of the previous procedure, [Expanding the HAWQ Cluster](#amb-expand). When the HAWQ service is later restarted via Ambari, Ambari refreshes the new configuration.
-
-## <a id="amb-activate-standby"></a>Activating the HAWQ Standby Master
-Activating the HAWQ Standby Master promotes the standby host as the new HAWQ Master host. The previous HAWQ Master configuration is automatically removed from the cluster.
-
-### When to Perform
-* Execute this procedure immediately if the HAWQ Master fails or becomes unreachable.
-* If you want to take the current HAWQ Master host offline for maintenance, execute this procedure during a scheduled maintenance period. This procedure requires a restart of the HAWQ service.
-
-### Procedure
-1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
-2.  Click **HAWQ** in the list of installed services.
-3.  Select **Service Actions > Activate HAWQ Standby Master** to start the Activate HAWQ Standby Master Wizard.
-4.  Read the description of the Wizard and click **Next** to review the tasks that will be performed.
-5.  Ambari displays the host name of the current HAWQ Master that will be removed from the cluster, as well as the HAWQ Standby Master host that will be activated. The information is provided only for review and cannot be edited on this page. Click **Next** to confirm the operation.
-6. Click **OK** to confirm that you want to perform the procedure, as it is not possible to roll back the operation using Ambari.
-
-   Ambari displays a list of tasks that are performed to activate the standby server and remove the previous HAWQ Master host. Click on any of the tasks to view progress or to view the actual log messages that are generated while performing the task.
-7. Click **Complete** after the Wizard finishes all tasks.
-
-   **Important:** After the Wizard completes, your HAWQ cluster no longer includes a HAWQ Standby Master host. As a best practice, follow the instructions in [Adding a HAWQ Standby Master](#amb-add-standby) to configure a new one.
-
-## <a id="amb-add-standby"></a>Adding a HAWQ Standby Master
-
-The HAWQ Standby Master serves as a backup of the HAWQ Master host, and is an important part of providing high availability for the HAWQ cluster. When your cluster uses a standby master, you can activate the standby if the active HAWQ Master host fails or becomes unreachable.
-
-### When to Perform
-* Execute this procedure during a scheduled maintenance period, because it requires a restart of the HAWQ service.
-* Adding a HAWQ standby master is recommended as a best practice for all new clusters to provide high availability.
-* Add a new standby master soon after you activate an existing standby master to ensure that the cluster has a backup master service.
-
-### Procedure
-
-1.  Select an existing host in the cluster to run the HAWQ standby master. You cannot run the standby master on the same host that runs the HAWQ master. Also, do not run a standby master on the node where you deployed the Ambari server; if the Ambari postgres instance is running on the same port as the HAWQ master postgres instance, initialization fails and leaves the cluster in an inconsistent state.
-1. Log in to the HAWQ host that you chose to run the standby master and determine whether there is an existing HAWQ master directory (for example, `/data/hawq/master`) on the machine. If the directory exists, rename the directory. For example:
-
-    ```shell
-    $ mv /data/hawq/master /data/hawq/master-old
-    ```
-
-   **Note:**  If a HAWQ master directory exists on the host when you configure the HAWQ standby master, then the standby master may be initialized with stale data. Rename any existing master directory before you proceed.
-   
-1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
-2.  Click **HAWQ** in the list of installed services.
-3.  Select **Service Actions > Add HAWQ Standby Master** to start the Add HAWQ Standby Master Wizard.
-4.  Read the Get Started page for information about the HAWQ standby master and to acknowledge that the procedure requires a service restart. Click **Next** to display the Select Host page.
-5.  Use the dropdown menu to select a host to use for the HAWQ Standby Master. Click **Next** to display the Review page.
-
-    **Note:**
-    * The Current HAWQ Master host is shown only for reference. You cannot change the HAWQ Master host when you configure a standby master.
-    * You cannot place the standby master on the same host as the HAWQ master.
-6. Review the information to verify the host on which the HAWQ Standby Master will be installed. Click **Back** to change your selection or **Next** to continue.
-7. Confirm that you have renamed any existing HAWQ master data directory on the selected host machine, as described earlier in this procedure. If an existing master data directory exists, the new HAWQ Standby Master may be initialized with stale data and can place the cluster in an inconsistent state. Click **Confirm** to continue.
-
-     Ambari displays a list of tasks that are performed to install the standby master server and reconfigure the cluster. Click on any of the tasks to view progress or to view the actual log messages that are generated while performing the task.
-7. Click **Complete** after the Wizard finishes all tasks.
-
-## <a id="amb-remove-standby"></a>Removing the HAWQ Standby Master
-
-This service action enables you to remove the HAWQ Standby Master component in situations where you may need to reinstall the component.
-
-### When to Perform
-* Execute this procedure if you need to decommission or replace the HAWQ Standby Master host.
-* Execute this procedure and then add the HAWQ Standby Master once again, if the HAWQ Standby Master is unable to synchronize with the HAWQ Master and you need to reinitialize the service.
-* Execute this procedure during a scheduled maintenance period, because it requires a restart of the HAWQ service.
-
-### Procedure
-1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
-2.  Click **HAWQ** in the list of installed services.
-3.  Select **Service Actions > Remove HAWQ Standby Master** to start the Remove HAWQ Standby Master Wizard.
-4.  Read the Get Started page for information about the procedure and to acknowledge that the procedure requires a service restart. Click **Next** to display the Review page.
-5.  Ambari displays the HAWQ Standby Master host that will be removed from the cluster configuration. Click **Next** to continue, then click **OK** to confirm.
-
-     Ambari displays a list of tasks that are performed to remove the standby master from the cluster. Click on any of the tasks to view progress or to view the actual log messages that are generated while performing the task.
-
-7. Click **Complete** after the Wizard finishes all tasks.
-
-      **Important:** After the Wizard completes, your HAWQ cluster no longer includes a HAWQ Standby Master host. As a best practice, follow the instructions in [Adding a HAWQ Standby Master](#amb-add-standby) to configure a new one.
-
-## <a id="hdp-upgrade"></a>Upgrading the HDP Stack
-
-If you installed HAWQ using Ambari 2.2.2 with the HDP 2.3 stack, you must use Ambari to change the `dfs.allow.truncate` property to `false` before you attempt to upgrade to HDP 2.4. Ambari displays a configuration warning for this setting, but the change is required in order to complete the upgrade; choose **Proceed Anyway** when Ambari warns you about the configured value of `dfs.allow.truncate`.
-
-After you complete the upgrade to HDP 2.4, change the value of `dfs.allow.truncate` back to `true` to ensure that HAWQ can operate as intended.
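-
-One way to confirm the value that is actually in effect, before and after the upgrade (a sketch, assuming an HDFS client is configured on the host where you run it), is to query the client configuration directly:
-
-```shell
-# Prints the effective value of dfs.allow.truncate from the local HDFS client configuration
-$ hdfs getconf -confKey dfs.allow.truncate
-```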
-
-## <a id="gpadmin-password-change"></a>Changing the HAWQ gpadmin Password
-The gpadmin password configured in the Ambari web console is used by the `hawq ssh-exkeys` utility, which runs during the start phase of the HAWQ Master.
-Ambari stores and uses its own copy of the gpadmin password, independent of the host systems. Passwords on the master and slave nodes are not automatically synchronized with Ambari. If you do not also update the Ambari system user password, Ambari behaves as if the gpadmin password was never changed and keeps using the old password.
-
-If passwordless SSH has not been set up, `hawq ssh-exkeys` attempts to exchange keys using the password provided in the Ambari web console. If the password on a host machine differs from the HAWQ System User password recorded in Ambari, the key exchange with the HAWQ Master fails, and components without passwordless SSH might not be registered with the HAWQ cluster.
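-
-If components were left unregistered because the key exchange failed, one remedy (a sketch, assuming the passwords are back in sync and that you have a hosts file such as the `hawq_hosts` file described in the procedure below) is to re-run the key exchange manually:
-
-```shell
-$ source /usr/local/hawq/greenplum_path.sh
-# hawq_hosts lists the master, standby, and segment hostnames, one per line
-$ hawq ssh-exkeys -f hawq_hosts
-```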
-
-### When to Perform
-You should change the gpadmin password when:
-
-* The gpadmin password on the host machines has expired.
-* You want to change passwords as part of normal system security procedures.
-
-The gpadmin password must be kept in sync between Ambari and the gpadmin user on the HAWQ hosts. This requires manually changing the password on the Master and slave hosts and then updating the password in Ambari.
-
-### Procedure
-All of the listed steps are mandatory; this ensures that the HAWQ service remains fully functional.
-
-1.  Change the password for the gpadmin user on all HAWQ hosts \(all Master and Slave component hosts\). To update the password, you must have SSH access to all host machines as the gpadmin user. Generate a hosts file to use with the `hawq ssh` command: use a text editor to create a file that lists the hostname of the master node, the standby master node, and each segment node in the cluster, one hostname per line. For example:
-
-    ```
-    mdw
-    smdw
-    sdw1
-    sdw2
-    sdw3
-    ```
-
-    You can then use a command similar to the following to change the password on all hosts that are listed in the file:
-
-    ```shell
-    $ hawq ssh -f hawq_hosts 'echo "gpadmin:newpassword" | /usr/sbin/chpasswd'
-    ```    
-
-    **Note:** Be sure to make appropriate user and password system administrative changes in order to prevent operational disruption. For example, you may need to disable the password expiration policy for the `gpadmin` account.
-2.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\) Then perform the following steps:
-    1. Click **HAWQ** in the list of installed services.
-    2. On the HAWQ service's **Configs** page, select the **Advanced** tab and update the **HAWQ System User Password** to the new password that you set in step 1.
-    3. Click **Save** to save the updated configuration.
-    4. Restart the HAWQ service to propagate the configuration change to all Ambari agents.
-
-    This will synchronize the password on the host machines with the password that you specified in Ambari.
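-
-As an optional spot check after the restart, confirm that passwordless SSH from the HAWQ Master to all hosts still works for the gpadmin user. This sketch reuses the `hawq_hosts` file created in step 1:
-
-```shell
-$ hawq ssh -f hawq_hosts 'echo "$(hostname): ssh ok"'
-```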
-
-## <a id="gpadmin-setup-alert"></a>Setting Up Alerts
- 
-Alerts advise you when a HAWQ process is down or not responding, or when certain conditions that require attention occur.
-Alerts can be created for the Master, Standby Master, Segments, and PXF components. You can also set up custom alert groups to monitor these conditions and send email notifications when they occur.
-
-### When to Perform
-Alerts are enabled by default. You might want to disable alert functions when performing system operations in maintenance mode and then re-enable them after returning to normal operation.
-
-You can configure alerts to display messages for all system status changes or only for conditions of interest, such as warnings or critical conditions. Alerts can advise you if there are communication issues between the HAWQ Master and HAWQ segments, or if the HAWQ Master, Standby Master, a segment, or the PXF service is down or not responding. 
-
-You can configure how often Ambari checks for alerts, which service or host it checks, and the level of criticality that triggers an alert (OK, WARNING, or CRITICAL).
-
-### Procedure
-You can use Ambari both to view alerts and to configure certain alert conditions.
-
-#### Viewing Alerts
-To view the current alert information for HAWQ, click the **Alerts** button at the top of the Ambari console, then click the **Groups** button at the top left of the Alerts page and select **HAWQ Default** in the drop-down menu. Ambari displays a list of all available alert functions and their current status.
-
-To check PXF alerts, click the **Groups** dropdown button at the top left of the Alerts page. Select **PXF Default** in the dropdown menu. Alerts are displayed on the PXF Status page.
-
-To view the current Alert settings, click on the name of the alert.
-
-The Alerts you can view are as follows:
-
-* HAWQ Master Process:
-This alert is triggered when the HAWQ Master process is down or not responding. 
-
-* HAWQ Segment Process:
-This alert is triggered when a HAWQ Segment on a node is down or not responding.  
-
-* HAWQ Standby Master Process:
-This alert is triggered when the HAWQ Standby Master process is down or not responding. If no standby is present, the Alert shows as **NONE**. 
-
-* HAWQ Standby Master Sync Status:
-This alert is triggered when the HAWQ Standby Master is not synchronized with the HAWQ Master. Using this Alert eliminates the need to check the gp\_master\_mirroring catalog table to determine if the Standby Master is fully synchronized. 
-If no standby Master is present, the status will show as **UNKNOWN**.
-   If this Alert is triggered, go to the HAWQ **Services** tab and use the **Service Actions** menu to re-sync the HAWQ Standby Master with the HAWQ Master.
-   
-* HAWQ Segment Registration Status:
-This alert is triggered when any of the HAWQ Segments fail to register with the HAWQ Master. It indicates that the HAWQ segments with an up status in the gp\_segment\_configuration table do not match the HAWQ segments listed in the `/usr/local/hawq/etc/slaves` file on the HAWQ Master. (A query sketch for inspecting these catalog tables follows this list.)
-
-* Percent HAWQ Segment Status Available:
-This Alert monitors the percentage of HAWQ segments available versus total segments. 
-   Alerts for **WARN** and **CRITICAL** are displayed when the number of unresponsive HAWQ segments in the cluster is greater than the specified threshold. Otherwise, the status shows as **OK**.
-
-* PXF Process Alerts:
-PXF Process alerts are triggered when a PXF process on a node is down or not responding on the network. If PXF Alerts are enabled, the Alert status is shown on the PXF Status page.
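-
-For the standby synchronization and segment registration alerts described above, you can also inspect the underlying catalog tables directly. This is only a sketch; run it as `gpadmin` on the HAWQ Master, connecting to any database you normally use:
-
-```shell
-# Segments registered with the HAWQ Master; compare against /usr/local/hawq/etc/slaves
-$ psql -d template1 -c "SELECT * FROM gp_segment_configuration;"
-
-# Synchronization state of the HAWQ Standby Master
-$ psql -d template1 -c "SELECT * FROM gp_master_mirroring;"
-```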
-
-#### Setting the Monitoring Interval
-You can customize how often you wish the system to check for certain conditions. The default interval for checking the HAWQ system is 1 minute. 
-
-To customize the interval, perform the following steps:
-
-1.  Click on the name of the Alert you want to edit. 
-2.  When the Configuration screen appears, click **Edit**. 
-3.  Enter a number for how often to check status for the selected Alert, then click **Save**. The interval must be specified in whole minutes.
-
-
-#### Setting the Available HAWQ Segment Threshold
-HAWQ monitors the percentage of available HAWQ segments and can send an alert when a specified percent of unresponsive segments is reached. 
-
-To set the threshold for the unresponsive segments that will trigger an alert:
-
-   1.  Click on **Percent HAWQ Segment Status Available**.
-   2.  Click **Edit**. Enter the percentage of total segments to create a **Warning** alert (default is 10 percent of the total segments) or **Critical** alert (default is 25 percent of total segments).
-   3.  Click **Save** when done.
-   Alerts for **WARN** and **CRITICAL** are displayed when the number of unresponsive HAWQ segments in the cluster is greater than the specified percentage.
-