Posted to commits@hawq.apache.org by yo...@apache.org on 2017/01/10 23:54:37 UTC

[46/57] [abbrv] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/book/redirects.rb
----------------------------------------------------------------------
diff --git a/book/redirects.rb b/book/redirects.rb
new file mode 100644
index 0000000..a09023b
--- /dev/null
+++ b/book/redirects.rb
@@ -0,0 +1,4 @@
+r301 '/', '/docs/userguide/2.1.0.0-incubating/overview/HAWQOverview.html'
+r301 '/index.html', '/docs/userguide/2.1.0.0-incubating/overview/HAWQOverview.html'
+r301 '/docs', '/docs/userguide/2.1.0.0-incubating/overview/HAWQOverview.html'
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/client_auth.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/client_auth.html.md.erb b/clientaccess/client_auth.html.md.erb
deleted file mode 100644
index a13f4e1..0000000
--- a/clientaccess/client_auth.html.md.erb
+++ /dev/null
@@ -1,193 +0,0 @@
----
-title: Configuring Client Authentication
----
-
-When a HAWQ system is first initialized, the system contains one predefined *superuser* role. This role has the same name as the operating system user who initialized the HAWQ system; customarily, it is named `gpadmin`. By default, the system is configured to only allow local connections to the database from the `gpadmin` role. To allow any other roles to connect, or to allow connections from remote hosts, you must configure HAWQ to accept those connections.
-
-## <a id="topic2"></a>Allowing Connections to HAWQ 
-
-Client access and authentication is controlled by the standard PostgreSQL host-based authentication file, `pg_hba.conf`. In HAWQ, the `pg_hba.conf` file of the master instance controls client access and authentication to your HAWQ system. HAWQ segments have `pg_hba.conf` files that are configured to allow only client connections from the master host and never accept client connections. Do not alter the `pg_hba.conf` file on your segments.
-
-See [The pg\_hba.conf File](http://www.postgresql.org/docs/9.0/interactive/auth-pg-hba-conf.html) in the PostgreSQL documentation for more information.
-
-The general format of the `pg_hba.conf` file is a set of records, one per line. HAWQ ignores blank lines and any text after the `#` comment character. A record consists of a number of fields that are separated by spaces and/or tabs. Fields can contain white space if the field value is quoted. Records cannot be continued across lines. Each remote client access record has the following format:
-
-```
-host|hostssl|hostnossl  <database>  <role>  <CIDR-address>|<IP-address>,<IP-mask>  <authentication-method>
-```
-
-Each UNIX-domain socket access record has the following format:
-
-```
-local  <database>  <role>  <authentication-method>
-```
-
-The following table describes the meaning of each field.
-
-|Field|Description|
-|-----|-----------|
-|local|Matches connection attempts using UNIX-domain sockets. Without a record of this type, UNIX-domain socket connections are disallowed.|
-|host|Matches connection attempts made using TCP/IP. Remote TCP/IP connections will not be possible unless the server is started with an appropriate value for the listen\_addresses server configuration parameter.|
-|hostssl|Matches connection attempts made using TCP/IP, but only when the connection is made with SSL encryption. SSL must be enabled at server start time by setting the `ssl` configuration parameter.|
-|hostnossl|Matches connection attempts made over TCP/IP that do not use SSL.|
-|\<database\>|Specifies which database names this record matches. The value `all` specifies that it matches all databases. Multiple database names can be supplied by separating them with commas. A separate file containing database names can be specified by preceding the file name with @.|
-|\<role\>|Specifies which database role names this record matches. The value `all` specifies that it matches all roles. If the specified role is a group and you want all members of that group to be included, precede the role name with a +. Multiple role names can be supplied by separating them with commas. A separate file containing role names can be specified by preceding the file name with @.|
-|\<CIDR-address\>|Specifies the client machine IP address range that this record matches. It contains an IP address in standard dotted decimal notation and a CIDR mask length. IP addresses can only be specified numerically, not as domain or host names. The mask length indicates the number of high-order bits of the client IP address that must match. Bits to the right of this must be zero in the given IP address. There must not be any white space between the IP address, the /, and the CIDR mask length. Typical examples of a CIDR-address are 192.0.2.2/32 for a single host, 192.0.2.0/24 for a small network, or 192.0.0.0/16 for a larger one. To specify a single host, use a CIDR mask of 32 for IPv4 or 128 for IPv6. In a network address, do not omit trailing zeroes.|
-|\<IP-address\>, \<IP-mask\>|These fields can be used as an alternative to the CIDR-address notation. Instead of specifying the mask length, the actual mask is specified in a separate column. For example, 255.255.255.255 represents a CIDR mask length of 32. These fields only apply to host, hostssl, and hostnossl records.|
-|\<authentication-method\>|Specifies the authentication method to use when connecting. HAWQ supports the [authentication methods](http://www.postgresql.org/docs/9.0/static/auth-methods.html) supported by PostgreSQL 9.0.|
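-
-For example, a record that lets a hypothetical `analyst` role connect to a `sales` database from a single client host with `md5` password authentication might look like the following (the database, role, and address are illustrative only):
-
-```
-host  sales  analyst  192.0.2.10/32  md5
-```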
-
-### <a id="topic3"></a>Editing the pg\_hba.conf File 
-
-This example shows how to edit the `pg_hba.conf` file of the master to allow remote client access to all databases from all roles using encrypted password authentication.
-
-**Note:** For a more secure system, consider removing all connections that use trust authentication from your master `pg_hba.conf`. Trust authentication means the role is granted access without any authentication, therefore bypassing all security. Replace trust entries with ident authentication if your system has an ident service available.
-
-#### <a id="ip144328"></a>Editing pg\_hba.conf 
-
-1.  Obtain the master data directory location from the `hawq_master_directory` property value in `hawq-site.xml` and use a text editor to open the `pg_hba.conf` file in this directory.
-2.  Add a line to the file for each type of connection you want to allow. Records are read sequentially, so the order of the records is significant. Typically, earlier records will have tight connection match parameters and weaker authentication methods, while later records will have looser match parameters and stronger authentication methods. For example:
-
-    ```
-    # allow the gpadmin user local access to all databases
-    # using ident authentication
-    local   all   gpadmin   ident         sameuser
-    host    all   gpadmin   127.0.0.1/32  ident
-    host    all   gpadmin   ::1/128       ident
-    # allow the 'dba' role access to any database from any
-    # host with IP address 192.168.x.x and use md5 encrypted
-    # passwords to authenticate the user
-    # Note that to use SHA-256 encryption, replace *md5* with
-    # password in the line below
-    host    all   dba   192.168.0.0/16  md5
-    # allow all roles access to any database from any
-    # host on the 192.168.x.x network and use ldap to
-    # authenticate the user. HAWQ role names must match
-    # the LDAP common name.
-    host    all   all   192.168.0.0/16  ldap ldapserver=usldap1
-    ldapport=1389 ldapprefix="cn="
-    ldapsuffix=",ou=People,dc=company,dc=com"
-    ```
-
-3.  Save and close the file.
-4.  Reload the `pg_hba.conf` configuration file for your changes to take effect. Include the `-M fast` option if you have active/open database connections:
-
-    ``` bash
-    $ hawq stop cluster -u [-M fast]
-    ```
-    
-
-
-## <a id="topic4"></a>Limiting Concurrent Connections 
-
-HAWQ allocates some resources on a per-connection basis, so setting the maximum number of connections allowed is recommended.
-
-To limit the number of active concurrent sessions to your HAWQ system, you can configure the `max_connections` server configuration parameter on master or the `seg_max_connections` server configuration parameter on segments. These parameters are *local* parameters, meaning that you must set them in the `hawq-site.xml` file of all HAWQ instances.
-
-When you set `max_connections`, you must also set the dependent parameter `max_prepared_transactions`. This value must be at least as large as the value of `max_connections`, and all HAWQ instances should be set to the same value.
-
-Example `$GPHOME/etc/hawq-site.xml` configuration:
-
-``` xml
-  <property>
-      <name>max_connections</name>
-      <value>500</value>
-  </property>
-  <property>
-      <name>max_prepared_transactions</name>
-      <value>1000</value>
-  </property>
-  <property>
-      <name>seg_max_connections</name>
-      <value>3200</value>
-  </property>
-```
-
-**Note:** Raising the values of these parameters may cause HAWQ to request more shared memory. To mitigate this effect, consider decreasing other memory-related server configuration parameters such as [gp\_cached\_segworkers\_threshold](../reference/guc/parameter_definitions.html#gp_cached_segworkers_threshold).
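-
-Before tuning these limits, you may want to see how many sessions are currently active by querying the system catalog. A quick check, assuming you can connect locally as `gpadmin`:
-
-``` bash
-$ psql -d template1 -c "SELECT count(*) FROM pg_stat_activity;"
-```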
-
-
-### <a id="ip142411"></a>Setting the number of allowed connections
-
-You will perform different procedures to set connection-related server configuration parameters for your HAWQ cluster depending upon whether you manage your cluster from the command line or use Ambari. If you use Ambari to manage your HAWQ cluster, you must ensure that you update server configuration parameters only via the Ambari Web UI. If you manage your HAWQ cluster from the command line, you will use the `hawq config` command line utility to set server configuration parameters.
-
-If you use Ambari to manage your cluster:
-
-1. Set the `max_connections`, `seg_max_connections`, and `max_prepared_transactions` configuration properties via the HAWQ service **Configs > Advanced > Custom hawq-site** drop down.
-2. Select **Service Actions > Restart All** to load the updated configuration.
-
-If you manage your cluster from the command line:
-
-1.  Log in to the HAWQ master host as a HAWQ administrator and source the file `/usr/local/hawq/greenplum_path.sh`.
-
-    ``` shell
-    $ source /usr/local/hawq/greenplum_path.sh
-    ```
-    
-2.  Use the `hawq config` utility to set the values of the `max_connections`, `seg_max_connections`, and `max_prepared_transactions` parameters to values appropriate for your deployment. For example: 
-
-    ``` bash
-    $ hawq config -c max_connections -v 100
-    $ hawq config -c seg_max_connections -v 6400
-    $ hawq config -c max_prepared_transactions -v 200
-    ```
-
-    The value of `max_prepared_transactions` must be greater than or equal to `max_connections`.
-
-3.  Load the new configuration values by restarting your HAWQ cluster:
-
-    ``` bash
-    $ hawq restart cluster
-    ```
-
-4.  Use the `-s` option to `hawq config` to display server configuration parameter values:
-
-    ``` bash
-    $ hawq config -s max_connections
-    $ hawq config -s seg_max_connections
-    ```
-
-
-## <a id="topic5"></a>Encrypting Client/Server Connections 
-
-Enable SSL for client connections to HAWQ to encrypt the data passed over the network between the client and the database.
-
-HAWQ has native support for SSL connections between the client and the master server. SSL connections prevent third parties from snooping on the packets, and also prevent man-in-the-middle attacks. SSL should be used whenever the client connection goes through an insecure link, and must be used whenever client certificate authentication is used.
-
-Enabling SSL requires that OpenSSL be installed on both the client and the master server systems. HAWQ can be started with SSL enabled by setting the server configuration parameter `ssl` to `on` in the master `hawq-site.xml`. When starting in SSL mode, the server will look for the files `server.key` \(server private key\) and `server.crt` \(server certificate\) in the master data directory. These files must be set up correctly before an SSL-enabled HAWQ system can start.
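-
-For example, the `hawq-site.xml` entry to enable SSL on the master might look like the following (a minimal sketch; adjust to your deployment):
-
-``` xml
-  <property>
-      <name>ssl</name>
-      <value>on</value>
-  </property>
-```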
-
-**Important:** Do not protect the private key with a passphrase. The server does not prompt for a passphrase for the private key, and the database startup fails with an error if one is required.
-
-A self-signed certificate can be used for testing, but a certificate signed by a certificate authority \(CA\) should be used in production, so the client can verify the identity of the server. Either a global or local CA can be used. If all the clients are local to the organization, a local CA is recommended.
-
-### <a id="topic6"></a>Creating a Self-signed Certificate without a Passphrase for Testing Only 
-
-To create a quick self-signed certificate for the server for testing, use the following OpenSSL command:
-
-```
-# openssl req -new -text -out server.req
-```
-
-Enter the information requested by the prompts. Be sure to enter the local host name as *Common Name*. The challenge password can be left blank.
-
-The program generates a key that is passphrase protected; it does not accept a passphrase of fewer than four characters.
-
-To use this certificate with HAWQ, remove the passphrase with the following commands:
-
-```
-# openssl rsa -in privkey.pem -out server.key
-# rm privkey.pem
-```
-
-Enter the old passphrase when prompted to unlock the existing key.
-
-Then, enter the following command to turn the certificate into a self-signed certificate and to copy the key and certificate to a location where the server will look for them.
-
-``` 
-# openssl req -x509 -in server.req -text -key server.key -out server.crt
-```
-
-Finally, change the permissions on the key with the following command. The server will reject the file if the permissions are less restrictive than these.
-
-```
-# chmod og-rwx server.key
-```
-
-For more details on how to create your server private key and certificate, refer to the [OpenSSL documentation](https://www.openssl.org/docs/).

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/disable-kerberos.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/disable-kerberos.html.md.erb b/clientaccess/disable-kerberos.html.md.erb
deleted file mode 100644
index 5646eec..0000000
--- a/clientaccess/disable-kerberos.html.md.erb
+++ /dev/null
@@ -1,85 +0,0 @@
----
-title: Disabling Kerberos Security
----
-
-Follow these steps to disable Kerberos security for HAWQ and PXF for manual installations.
-
-**Note:** If you install or manage your cluster using Ambari, then the HAWQ Ambari plug-in automatically disables security for HAWQ and PXF when you disable security for Hadoop. The following instructions are only necessary for manual installations, or when Hadoop security is disabled outside of Ambari.
-
-1.  Disable Kerberos on the Hadoop cluster on which you use HAWQ.
-2.  Disable security for HAWQ:
-    1.  Login to the HAWQ database master server as the `gpadmin` user:
-
-        ``` bash
-        $ ssh hawq_master_fqdn
-        ```
-
-    2.  Run the following command to set up HAWQ environment variables:
-
-        ``` bash
-        $ source /usr/local/hawq/greenplum_path.sh
-        ```
-
-    3.  Start HAWQ if necessary:
-
-        ``` bash
-        $ hawq start -a
-        ```
-
-    4.  Run the following command to disable security:
-
-        ``` bash
-        $ hawq config --masteronly -c enable_secure_filesystem -v "off"
-        ```
-
-    5.  Change the permission of the HAWQ HDFS data directory:
-
-        ``` bash
-        $ sudo -u hdfs hdfs dfs -chown -R gpadmin:gpadmin /hawq_data
-        ```
-
-    6.  On the HAWQ master node and on all segment server nodes, edit the `/usr/local/hawq/etc/hdfs-client.xml` file to disable Kerberos security. Comment or remove the following properties in each file:
-
-        ``` xml
-        <!--
-        <property>
-          <name>hadoop.security.authentication</name>
-          <value>kerberos</value>
-        </property>
-
-        <property>
-          <name>dfs.namenode.kerberos.principal</name>
-          <value>nn/_HOST@LOCAL.DOMAIN</value>
-        </property>
-        -->
-        ```
-
-    7.  Restart HAWQ:
-
-        ``` bash
-        $ hawq restart -a -M fast
-        ```
-
-3.  Disable security for PXF:
-    1.  On each PXF node, edit the `/etc/gphd/pxf/conf/pxf-site.xml` to comment or remove the properties:
-
-        ``` xml
-        <!--
-        <property>
-            <name>pxf.service.kerberos.keytab</name>
-            <value>/etc/security/phd/keytabs/pxf.service.keytab</value>
-            <description>path to keytab file owned by pxf service
-            with permissions 0400</description>
-        </property>
-
-        <property>
-            <name>pxf.service.kerberos.principal</name>
-            <value>pxf/_HOST@PHD.LOCAL</value>
-            <description>Kerberos principal pxf service should use.
-            _HOST is replaced automatically with hostnames
-            FQDN</description>
-        </property>
-        -->
-        ```
-
-    2.  Restart the PXF service.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/g-connecting-with-psql.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/g-connecting-with-psql.html.md.erb b/clientaccess/g-connecting-with-psql.html.md.erb
deleted file mode 100644
index 0fa501c..0000000
--- a/clientaccess/g-connecting-with-psql.html.md.erb
+++ /dev/null
@@ -1,35 +0,0 @@
----
-title: Connecting with psql
----
-
-Depending on the default values used or the environment variables you have set, the following examples show how to access a database via `psql`:
-
-``` bash
-$ psql -d gpdatabase -h master_host -p 5432 -U gpadmin
-```
-
-``` bash
-$ psql gpdatabase
-```
-
-``` bash
-$ psql
-```
-
-If a user-defined database has not yet been created, you can access the system by connecting to the `template1` database. For example:
-
-``` bash
-$ psql template1
-```
-
-After connecting to a database, `psql` provides a prompt with the name of the database to which `psql` is currently connected, followed by the string `=>` \(or `=#` if you are the database superuser\). For example:
-
-``` sql
-gpdatabase=>
-```
-
-At the prompt, you may type in SQL commands. A SQL command must end with a `;` \(semicolon\) in order to be sent to the server and executed. For example:
-
-``` sql
-=> SELECT * FROM mytable;
-```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/g-database-application-interfaces.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/g-database-application-interfaces.html.md.erb b/clientaccess/g-database-application-interfaces.html.md.erb
deleted file mode 100644
index 29e22c5..0000000
--- a/clientaccess/g-database-application-interfaces.html.md.erb
+++ /dev/null
@@ -1,96 +0,0 @@
----
-title: HAWQ Database Drivers and APIs
----
-
-You may want to connect your existing Business Intelligence (BI) or Analytics applications with HAWQ. The database application programming interfaces most commonly used with HAWQ are the PostgreSQL `libpq`, ODBC, and JDBC APIs.
-
-HAWQ provides the following connectivity tools for connecting to the database:
-
-  - ODBC driver
-  - JDBC driver
-  - `libpq` - PostgreSQL C API
-
-## <a id="dbdriver"></a>HAWQ Drivers
-
-ODBC and JDBC drivers for HAWQ are available as a separate download from [Pivotal Network](https://network.pivotal.io/products/pivotal-hdb).
-
-### <a id="odbc_driver"></a>ODBC Driver
-
-The ODBC API specifies a standard set of C interfaces for accessing database management systems.  For additional information on using the ODBC API, refer to the [ODBC Programmer's Reference](https://msdn.microsoft.com/en-us/library/ms714177(v=vs.85).aspx) documentation.
-
-HAWQ supports the DataDirect ODBC Driver. Installation instructions for this driver are provided on the Pivotal Network driver download page. Refer to [HAWQ ODBC Driver](http://media.datadirect.com/download/docs/odbc/allodbc/#page/odbc%2Fthe-greenplum-wire-protocol-driver.html%23) for HAWQ-specific ODBC driver information.
-
-#### <a id="odbc_driver_connurl"></a>Connection Data Source
-The information required by the HAWQ ODBC driver to connect to a database is typically stored in a named data source. Depending on your platform, you may use [GUI](http://media.datadirect.com/download/docs/odbc/allodbc/index.html#page/odbc%2FData_Source_Configuration_through_a_GUI_14.html%23) or [command line](http://media.datadirect.com/download/docs/odbc/allodbc/index.html#page/odbc%2FData_Source_Configuration_in_the_UNIX_2fLinux_odbc_13.html%23) tools to create your data source definition. On Linux, ODBC data sources are typically defined in a file named `odbc.ini`. 
-
-Commonly-specified HAWQ ODBC data source connection properties include:
-
-| Property Name                                                    | Value Description                                                                                                                                                                                         |
-|-------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Database | Name of the database to which you want to connect. |
-| Driver   | Full path to the ODBC driver library file.                                                                                           |
-| HostName              | HAWQ master host name.                                                                                     |
-| MaxLongVarcharSize      | Maximum size of columns of type long varchar.                                                                                      |
-| Password              | Password used to connect to the specified database.                                                                                       |
-| PortNumber              | HAWQ master database port number.                                                                                      |
-
-Refer to [Connection Option Descriptions](http://media.datadirect.com/download/docs/odbc/allodbc/#page/odbc%2Fgreenplum-connection-option-descriptions.html%23) for a list of ODBC connection properties supported by the HAWQ DataDirect ODBC driver.
-
-Example HAWQ DataDirect ODBC driver data source definition:
-
-``` shell
-[HAWQ-201]
-Driver=/usr/local/hawq_drivers/odbc/lib/ddgplm27.so
-Description=DataDirect 7.1 Greenplum Wire Protocol - for HAWQ
-Database=getstartdb
-HostName=hdm1
-PortNumber=5432
-Password=changeme
-MaxLongVarcharSize=8192
-```
-
-The first line, `[HAWQ-201]`, identifies the name of the data source.
-
-ODBC connection properties may also be specified in a connection string identifying either a data source name, the name of a file data source, or the name of a driver.  A HAWQ ODBC connection string has the following format:
-
-``` shell
-([DSN=<data_source_name>]|[FILEDSN=<filename.dsn>]|[DRIVER=<driver_name>])[;<attribute=<value>[;...]]
-```
-
-For additional information on specifying a HAWQ ODBC connection string, refer to [Using a Connection String](http://media.datadirect.com/download/docs/odbc/allodbc/index.html#page/odbc%2FUsing_a_Connection_String_16.html%23).
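-
-For example, a connection string referencing the `HAWQ-201` data source defined above might look like the following (the `UID` and `PWD` attributes are standard ODBC keywords; the values shown are illustrative):
-
-``` shell
-DSN=HAWQ-201;UID=gpadmin;PWD=changeme
-```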
-
-### <a id="jdbc_driver"></a>JDBC Driver
-The JDBC API specifies a standard set of Java interfaces to SQL-compliant databases. For additional information on using the JDBC API, refer to the [Java JDBC API](https://docs.oracle.com/javase/8/docs/technotes/guides/jdbc/) documentation.
-
-HAWQ supports the DataDirect JDBC Driver. Installation instructions for this driver are provided on the Pivotal Network driver download page. Refer to [HAWQ JDBC Driver](http://media.datadirect.com/download/docs/jdbc/alljdbc/help.html#page/jdbcconnect%2Fgreenplum-driver.html%23) for HAWQ-specific JDBC driver information.
-
-#### <a id="jdbc_driver_connurl"></a>Connection URL
-Connection URLs for accessing the HAWQ DataDirect JDBC driver must be in the following format:
-
-``` shell
-jdbc:pivotal:greenplum://host:port[;<property>=<value>[;...]]
-```
-
-Commonly-specified HAWQ JDBC connection properties include:
-
-| Property Name                                                    | Value Description                                                                                                                                                                                         |
-|-------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| DatabaseName | Name of the database to which you want to connect. |
-| User                         | Username used to connect to the specified database.                                                                                           |
-| Password              | Password used to connect to the specified database.                                                                                       |
-
-Refer to [Connection Properties](http://media.datadirect.com/download/docs/jdbc/alljdbc/help.html#page/jdbcconnect%2FConnection_Properties_10.html%23) for a list of JDBC connection properties supported by the HAWQ DataDirect JDBC driver.
-
-Example HAWQ JDBC connection string:
-
-``` shell
-jdbc:pivotal:greenplum://hdm1:5432;DatabaseName=getstartdb;User=hdbuser;Password=hdbpass
-```
-
-## <a id="libpq_api"></a>libpq API
-`libpq` is the C API to PostgreSQL/HAWQ. This API provides a set of library functions enabling client programs to pass queries to the PostgreSQL backend server and to receive the results of those queries.
-
-`libpq` is installed in the `lib/` directory of your HAWQ distribution. `libpq-fe.h`, the header file required for developing front-end PostgreSQL applications, can be found in the `include/` directory.
-
-For additional information on using the `libpq` API, refer to [libpq - C Library](https://www.postgresql.org/docs/8.2/static/libpq.html) in the PostgreSQL documentation.
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/g-establishing-a-database-session.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/g-establishing-a-database-session.html.md.erb b/clientaccess/g-establishing-a-database-session.html.md.erb
deleted file mode 100644
index a1c5f1c..0000000
--- a/clientaccess/g-establishing-a-database-session.html.md.erb
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Establishing a Database Session
----
-
-Users can connect to HAWQ using a PostgreSQL-compatible client program, such as `psql`. Users and administrators *always* connect to HAWQ through the *master*; the segments cannot accept client connections.
-
-In order to establish a connection to the HAWQ master, you will need to know the following connection information and configure your client program accordingly.
-
-|Connection Parameter|Description|Environment Variable|
-|--------------------|-----------|--------------------|
-|Application name|The application name that is connecting to the database. The default value, held in the `application_name` connection parameter, is *psql*.|`$PGAPPNAME`|
-|Database name|The name of the database to which you want to connect. For a newly initialized system, use the `template1` database to connect for the first time.|`$PGDATABASE`|
-|Host name|The host name of the HAWQ master. The default host is the local host.|`$PGHOST`|
-|Port|The port number that the HAWQ master instance is running on. The default is 5432.|`$PGPORT`|
-|User name|The database user \(role\) name to connect as. This is not necessarily the same as your OS user name. Check with your HAWQ administrator if you are not sure what your database user name is. Note that every HAWQ system has one superuser account that is created automatically at initialization time. This account has the same name as the OS name of the user who initialized the HAWQ system \(typically `gpadmin`\).|`$PGUSER`|
-
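-For example, to connect as the `gpadmin` role to the `template1` database on an assumed master host named `master_host`, you might export the environment variables once and then start your client with no options:
-
-``` bash
-$ export PGHOST=master_host
-$ export PGPORT=5432
-$ export PGDATABASE=template1
-$ export PGUSER=gpadmin
-$ psql
-```
-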
-[Connecting with psql](g-connecting-with-psql.html) provides example commands for connecting to HAWQ.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/g-hawq-database-client-applications.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/g-hawq-database-client-applications.html.md.erb b/clientaccess/g-hawq-database-client-applications.html.md.erb
deleted file mode 100644
index a1e8ff3..0000000
--- a/clientaccess/g-hawq-database-client-applications.html.md.erb
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: HAWQ Client Applications
----
-
-HAWQ comes installed with a number of client utility applications located in the `$GPHOME/bin` directory of your HAWQ master host installation. The following are the most commonly used client utility applications:
-
-|Name|Usage|
-|----|-----|
-|`createdb`|create a new database|
-|`createlang`|define a new procedural language|
-|`createuser`|define a new database role|
-|`dropdb`|remove a database|
-|`droplang`|remove a procedural language|
-|`dropuser`|remove a role|
-|`psql`|PostgreSQL interactive terminal|
-|`reindexdb`|reindex a database|
-|`vacuumdb`|garbage-collect and analyze a database|
-
-When using these client applications, you must connect to a database through the HAWQ master instance. You will need to know the name of your target database, the host name and port number of the master, and what database user name to connect as. This information can be provided on the command-line using the options `-d`, `-h`, `-p`, and `-U` respectively. If an argument is found that does not belong to any option, it will be interpreted as the database name first.
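-
-For example, the following `createdb` invocation supplies the master host, port, and role explicitly and passes the new database name as the final argument (the host name is illustrative):
-
-``` bash
-$ createdb -h master_host -p 5432 -U gpadmin mydatabase
-```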
-
-All of these options have default values which will be used if the option is not specified. The default host is the local host. The default port number is 5432. The default user name is your OS system user name, as is the default database name. Note that OS user names and HAWQ user names are not necessarily the same.
-
-If the default values are not correct, you can set the environment variables `PGDATABASE`, `PGHOST`, `PGPORT`, and `PGUSER` to the appropriate values, or use a `~/.pgpass` file to store frequently-used passwords.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/g-supported-client-applications.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/g-supported-client-applications.html.md.erb b/clientaccess/g-supported-client-applications.html.md.erb
deleted file mode 100644
index 202f625..0000000
--- a/clientaccess/g-supported-client-applications.html.md.erb
+++ /dev/null
@@ -1,8 +0,0 @@
----
-title: Supported Client Applications
----
-
-Users can connect to HAWQ using various client applications:
-
--   A number of [HAWQ Client Applications](g-hawq-database-client-applications.html) are provided with your HAWQ installation. The `psql` client application provides an interactive command-line interface to HAWQ.
--   Using standard database application interfaces, such as ODBC and JDBC, users can connect their client applications to HAWQ.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/g-troubleshooting-connection-problems.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/g-troubleshooting-connection-problems.html.md.erb b/clientaccess/g-troubleshooting-connection-problems.html.md.erb
deleted file mode 100644
index 0328606..0000000
--- a/clientaccess/g-troubleshooting-connection-problems.html.md.erb
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Troubleshooting Connection Problems
----
-
-A number of things can prevent a client application from successfully connecting to HAWQ. This topic explains some of the common causes of connection problems and how to correct them.
-
-|Problem|Solution|
-|-------|--------|
-|No pg\_hba.conf entry for host or user|To enable HAWQ to accept remote client connections, you must configure your HAWQ master instance so that connections are allowed from the client hosts and database users that will be connecting to HAWQ. This is done by adding the appropriate entries to the pg\_hba.conf configuration file \(located in the master instance's data directory\). For more detailed information, see [Allowing Connections to HAWQ](client_auth.html).|
-|HAWQ is not running|If the HAWQ master instance is down, users will not be able to connect. You can verify that the HAWQ system is up by running the `hawq state` utility on the HAWQ master host.|
-|Network problems<br/><br/>Interconnect timeouts|If users connect to the HAWQ master host from a remote client, network problems can prevent a connection \(for example, DNS host name resolution problems, the host system is down, and so on\). To ensure that network problems are not the cause, connect to the HAWQ master host from the remote client host. For example: `ping hostname`. <br/><br/>If the system cannot resolve the host names and IP addresses of the hosts involved in HAWQ, queries and connections will fail. For some operations, connections to the HAWQ master use `localhost` and others use the actual host name, so you must be able to resolve both. If you encounter this error, first make sure you can connect to each host in your HAWQ array from the master host over the network. In the `/etc/hosts` file of the master and all segments, make sure you have the correct host names and IP addresses for all hosts involved in the HAWQ array. The `127.0.0.1` IP must resolve to `localhost`.|
-|Too many clients already|By default, HAWQ is configured to allow a maximum of 200 concurrent user connections on the master and 1280 connections on a segment. A connection attempt that causes that limit to be exceeded will be refused. This limit is controlled by the `max_connections` parameter on the master instance and by the `seg_max_connections` parameter on segment instances. If you change this setting for the master, you must also make appropriate changes at the segments.|
-|Query failure|Reverse DNS must be configured in your HAWQ cluster network. In cases where reverse DNS has not been configured, failing queries will generate "Failed to reverse DNS lookup for ip \<ip-address\>" warning messages to the HAWQ master node log file. |
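-
-For example, two quick checks from the command line when diagnosing a failed connection (the host name is illustrative):
-
-``` bash
-$ hawq state             # verify that the HAWQ system is up (run on the master host)
-$ ping -c 3 master_host  # verify name resolution and reachability from the client
-```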

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/index.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/index.md.erb b/clientaccess/index.md.erb
deleted file mode 100644
index c88adeb..0000000
--- a/clientaccess/index.md.erb
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Managing Client Access
----
-
-This section explains how to configure client connections and authentication for HAWQ:
-
-*  <a class="subnav" href="./client_auth.html">Configuring Client Authentication</a>
-*  <a class="subnav" href="./ldap.html">Using LDAP Authentication with TLS/SSL</a>
-*  <a class="subnav" href="./kerberos.html">Using Kerberos Authentication</a>
-*  <a class="subnav" href="./disable-kerberos.html">Disabling Kerberos Security</a>
-*  <a class="subnav" href="./roles_privs.html">Managing Roles and Privileges</a>
-*  <a class="subnav" href="./g-establishing-a-database-session.html">Establishing a Database Session</a>
-*  <a class="subnav" href="./g-supported-client-applications.html">Supported Client Applications</a>
-*  <a class="subnav" href="./g-hawq-database-client-applications.html">HAWQ Client Applications</a>
-*  <a class="subnav" href="./g-connecting-with-psql.html">Connecting with psql</a>
-*  <a class="subnav" href="./g-database-application-interfaces.html">Database Application Interfaces</a>
-*  <a class="subnav" href="./g-troubleshooting-connection-problems.html">Troubleshooting Connection Problems</a>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/kerberos.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/kerberos.html.md.erb b/clientaccess/kerberos.html.md.erb
deleted file mode 100644
index 2e7cfe5..0000000
--- a/clientaccess/kerberos.html.md.erb
+++ /dev/null
@@ -1,308 +0,0 @@
----
-title: Using Kerberos Authentication
----
-
-**Note:** The following steps for enabling Kerberos *are not required* if you install HAWQ using Ambari.
-
-You can control access to HAWQ with a Kerberos authentication server.
-
-HAWQ supports the Generic Security Service Application Program Interface \(GSSAPI\) with Kerberos authentication. GSSAPI provides automatic authentication \(single sign-on\) for systems that support it. You specify the HAWQ users \(roles\) that require Kerberos authentication in the HAWQ configuration file `pg_hba.conf`. The login fails if Kerberos authentication is not available when a role attempts to log in to HAWQ.
-
-Kerberos provides a secure, encrypted authentication service. It does not encrypt data exchanged between the client and database and provides no authorization services. To encrypt data exchanged over the network, you must use an SSL connection. To manage authorization for access to HAWQ databases and objects such as schemas and tables, you use settings in the `pg_hba.conf` file and privileges given to HAWQ users and roles within the database. For information about managing authorization privileges, see [Managing Roles and Privileges](roles_privs.html).
-
-For more information about Kerberos, see [http://web.mit.edu/kerberos/](http://web.mit.edu/kerberos/).
-
-## <a id="kerberos_prereq"></a>Requirements for Using Kerberos with HAWQ 
-
-The following items are required for using Kerberos with HAWQ:
-
--   Kerberos Key Distribution Center \(KDC\) server using the `krb5-server` library
--   Kerberos version 5 `krb5-libs` and `krb5-workstation` packages installed on the HAWQ master host
--   System time on the Kerberos server and HAWQ master host must be synchronized. \(Install Linux `ntp` package on both servers.\)
--   Network connectivity between the Kerberos server and the HAWQ master
--   Java 1.7.0\_17 or later is required to use Kerberos-authenticated JDBC on Red Hat Enterprise Linux 6.x
--   Java 1.6.0\_21 or later is required to use Kerberos-authenticated JDBC on Red Hat Enterprise Linux 4.x or 5.x
-
-## <a id="nr166539"></a>Enabling Kerberos Authentication for HAWQ 
-
-Complete the following tasks to set up Kerberos authentication with HAWQ:
-
-1.  Verify your system satisfies the prerequisites for using Kerberos with HAWQ. See [Requirements for Using Kerberos with HAWQ](#kerberos_prereq).
-2.  Set up, or identify, a Kerberos Key Distribution Center \(KDC\) server to use for authentication. See [Install and Configure a Kerberos KDC Server](#task_setup_kdc).
-3.  Create and deploy principals for your HDFS cluster, and ensure that Kerberos authentication is enabled and functioning for all HDFS services. See your Hadoop documentation for additional details.
-4.  In a Kerberos database on the KDC server, set up a Kerberos realm and principals on the server. For HAWQ, a principal is a HAWQ role that uses Kerberos authentication. In the Kerberos database, a realm groups together Kerberos principals that are HAWQ roles.
-5.  Create Kerberos keytab files for HAWQ. To access HAWQ, you create a service key known only by Kerberos and HAWQ. On the Kerberos server, the service key is stored in the Kerberos database.
-
-    On the HAWQ master, the service key is stored in key tables, which are files known as keytabs. The service keys are usually stored in the keytab file `/etc/krb5.keytab`. This service key is the equivalent of the service's password, and must be kept secure. Data that is meant to be read only by the service is encrypted using this key.
-
-6.  Install the Kerberos client packages and the keytab file on HAWQ master.
-7.  Create a Kerberos ticket for `gpadmin` on the HAWQ master node using the keytab file. The ticket contains the Kerberos authentication credentials that grant access to HAWQ.
-
-With Kerberos authentication configured on HAWQ, you can use Kerberos for PSQL and JDBC.
-
-[Set up HAWQ with Kerberos for PSQL](#topic6)
-
-[Set up HAWQ with Kerberos for JDBC](#topic9)
-
-## <a id="task_setup_kdc"></a>Install and Configure a Kerberos KDC Server 
-
-Steps to set up a Kerberos Key Distribution Center \(KDC\) server on a Red Hat Enterprise Linux host for use with HAWQ.
-
-Follow these steps to install and configure a Kerberos Key Distribution Center \(KDC\) server on a Red Hat Enterprise Linux host.
-
-1.  Install the Kerberos server packages:
-
-    ```
-    sudo yum install krb5-libs krb5-server krb5-workstation
-    ```
-
-2.  Edit the `/etc/krb5.conf` configuration file. The following example shows a Kerberos server with a default `KRB.EXAMPLE.COM` realm.
-
-    ```
-    [logging]
-     default = FILE:/var/log/krb5libs.log
-     kdc = FILE:/var/log/krb5kdc.log
-     admin_server = FILE:/var/log/kadmind.log
-
-    [libdefaults]
-     default_realm = KRB.EXAMPLE.COM
-     dns_lookup_realm = false
-     dns_lookup_kdc = false
-     ticket_lifetime = 24h
-     renew_lifetime = 7d
-     forwardable = true
-     default_tgs_enctypes = aes128-cts des3-hmac-sha1 des-cbc-crc des-cbc-md5
-     default_tkt_enctypes = aes128-cts des3-hmac-sha1 des-cbc-crc des-cbc-md5
-     permitted_enctypes = aes128-cts des3-hmac-sha1 des-cbc-crc des-cbc-md5
-
-    [realms]
-     KRB.EXAMPLE.COM = {
-      kdc = kerberos-gpdb:88
-      admin_server = kerberos-gpdb:749
-      default_domain = kerberos-gpdb
-     }
-
-    [domain_realm]
-     .kerberos-gpdb = KRB.EXAMPLE.COM
-     kerberos-gpdb = KRB.EXAMPLE.COM
-
-    [appdefaults]
-     pam = {
-        debug = false
-        ticket_lifetime = 36000
-        renew_lifetime = 36000
-        forwardable = true
-        krb4_convert = false
-       }
-    ```
-
-    The `kdc` and `admin_server` keys in the `[realms]` section specify the host \(`kerberos-gpdb`\) and port where the Kerberos server is running. IP numbers can be used in place of host names.
-
-    If your Kerberos server manages authentication for other realms, you would instead add the `KRB.EXAMPLE.COM` realm in the `[realms]` and `[domain_realm]` section of the `kdc.conf` file. See the [Kerberos documentation](http://web.mit.edu/kerberos/krb5-latest/doc/) for information about the `kdc.conf` file.
-
-3.  To create a Kerberos KDC database, run the `kdb5_util` utility:
-
-    ```
-    kdb5_util create -s
-    ```
-
-    The `kdb5_util` `create` option creates the database to store keys for the Kerberos realms that are managed by this KDC server. The `-s` option creates a stash file. Without the stash file, the KDC server requests a password every time it starts.
-
-4.  Add an administrative user to the KDC database with the `kadmin.local` utility. Because it does not itself depend on Kerberos authentication, the `kadmin.local` utility allows you to add an initial administrative user to the local Kerberos server. To add the user `gpadmin` as an administrative user to the KDC database, run the following command:
-
-    ```
-    kadmin.local -q "addprinc gpadmin/admin"
-    ```
-
-    Most users do not need administrative access to the Kerberos server. They can use `kadmin` to manage their own principals \(for example, to change their own password\). For information about `kadmin`, see the [Kerberos documentation](http://web.mit.edu/kerberos/krb5-latest/doc/).
-
-5.  If needed, edit the `/var/kerberos/krb5kdc/kadm5.acl` file to grant the appropriate permissions to `gpadmin`.
-6.  Start the Kerberos daemons:
-
-    ```
-    /sbin/service krb5kdc start
-    /sbin/service kadmin start
-    ```
-
-7.  To start Kerberos automatically upon restart:
-
-    ```
-    /sbin/chkconfig krb5kdc on
-    /sbin/chkconfig kadmin on
-    ```
-
-
-## <a id="task_m43_vwl_2p"></a>Create HAWQ Roles in the KDC Database 
-
-Add principals to the Kerberos realm for HAWQ.
-
-Start `kadmin.local` in interactive mode, then add two principals to the HAWQ Realm.
-
-1.  Start `kadmin.local` in interactive mode:
-
-    ```
-    kadmin.local
-    ```
-
-2.  Add principals:
-
-    ```
-    kadmin.local: addprinc gpadmin/kerberos-gpdb@KRB.EXAMPLE.COM
-    kadmin.local: addprinc postgres/master.test.com@KRB.EXAMPLE.COM
-    ```
-
-    The `addprinc` commands prompt for passwords for each principal. The first `addprinc` creates a HAWQ user as a principal, `gpadmin/kerberos-gpdb`. The second `addprinc` command creates the `postgres` process on the HAWQ master host as a principal in the Kerberos KDC. This principal is required when using Kerberos authentication with HAWQ.
-
-3.  Create a Kerberos keytab file with `kadmin.local`. The following example creates a keytab file `gpdb-kerberos.keytab` in the current directory with authentication information for the two principals.
-
-    ```
-    kadmin.local: xst -k gpdb-kerberos.keytab
-        gpadmin/kerberos-gpdb@KRB.EXAMPLE.COM
-        postgres/master.test.com@KRB.EXAMPLE.COM
-    ```
-
-    You will copy this file to the HAWQ master host.
-
-4.  Exit `kadmin.local` interactive mode with the `quit` command:`kadmin.local: quit`
-
-## <a id="topic6"></a>Install and Configure the Kerberos Client 
-
-Steps to install the Kerberos client on the HAWQ master host.
-
-Install the Kerberos client libraries on the HAWQ master and configure the Kerberos client.
-
-1.  Install the Kerberos packages on the HAWQ master.
-
-    ```
-    sudo yum install krb5-libs krb5-workstation
-    ```
-
-2.  Ensure that the `/etc/krb5.conf` file is the same as the one that is on the Kerberos server.
-3.  Copy the `gpdb-kerberos.keytab` file that was generated on the Kerberos server to the HAWQ master host.
-4.  Remove any existing tickets with the Kerberos utility `kdestroy`. Run the utility as root.
-
-    ```
-    sudo kdestroy
-    ```
-
-5.  Use the Kerberos utility `kinit` to request a ticket using the keytab file on the HAWQ master for `gpadmin/kerberos-gpdb@KRB.EXAMPLE.COM`. The `-t` option specifies the keytab file on the HAWQ master.
-
-    ```
-    # kinit -k -t gpdb-kerberos.keytab gpadmin/kerberos-gpdb@KRB.EXAMPLE.COM
-    ```
-
-6.  Use the Kerberos utility `klist` to display the contents of the Kerberos ticket cache on the HAWQ master. The following is an example:
-
-    ```screen
-    # klist
-    Ticket cache: FILE:/tmp/krb5cc_108061
-    Default principal: gpadmin/kerberos-gpdb@KRB.EXAMPLE.COM
-    Valid starting     Expires            Service principal
-    03/28/13 14:50:26  03/29/13 14:50:26  krbtgt/KRB.EXAMPLE.COM@KRB.EXAMPLE.COM
-        renew until 03/28/13 14:50:26
-    ```
-
-
-### <a id="topic7"></a>Set up HAWQ with Kerberos for PSQL 
-
-Configure HAWQ to use Kerberos.
-
-After you have set up Kerberos on the HAWQ master, you can configure HAWQ to use Kerberos. For information on setting up the HAWQ master, see [Install and Configure the Kerberos Client](#topic6).
-
-1.  Create a HAWQ administrator role in the database `template1` for the Kerberos principal that is used as the database administrator. The following example uses `gpadmin/kerberos-gpdb`.
-
-    ``` bash
-    $ psql template1 -c 'CREATE ROLE "gpadmin/kerberos-gpdb" LOGIN SUPERUSER;'
-
-    ```
-
-    The role you create in the database `template1` will be available in any new HAWQ database that you create.
-
-2.  Modify `hawq-site.xml` to specify the location of the keytab file. For example, the following addition to `hawq-site.xml` specifies the folder `/home/gpadmin` as the location of the keytab file `gpdb-kerberos.keytab`.
-
-    ``` xml
-      <property>
-          <name>krb_server_keyfile</name>
-          <value>/home/gpadmin/gpdb-kerberos.keytab</value>
-      </property>
-    ```
-
-3.  Modify the HAWQ file `pg_hba.conf` to enable Kerberos support. Then restart HAWQ \(`hawq restart -a`\). For example, adding the following line to `pg_hba.conf` adds GSSAPI and Kerberos support. The value for `krb_realm` is the Kerberos realm that is used for authentication to HAWQ.
-
-    ```
-    host all all 0.0.0.0/0 gss include_realm=0 krb_realm=KRB.EXAMPLE.COM
-    ```
-
-    For information about the `pg_hba.conf` file, see [The pg\_hba.conf file](http://www.postgresql.org/docs/9.0/static/auth-pg-hba-conf.html) in the Postgres documentation.
-
-4.  Create a ticket using `kinit` and show the tickets in the Kerberos ticket cache with `klist`.
-5.  As a test, log in to the database as the `gpadmin` role with the Kerberos credentials `gpadmin/kerberos-gpdb`:
-
-    ``` bash
-    $ psql -U "gpadmin/kerberos-gpdb" -h master.test template1
-    ```
-
-    A username map can be defined in the `pg_ident.conf` file and specified in the `pg_hba.conf` file to simplify logging in to HAWQ. For example, this `psql` command logs in to the default HAWQ database on `mdw.proddb` as the Kerberos principal `adminuser/mdw.proddb`:
-
-    ``` bash
-    $ psql -U "adminuser/mdw.proddb" -h mdw.proddb
-    ```
-
-    If the default user is `adminuser`, the `pg_ident.conf` file and the `pg_hba.conf` file can be configured so that the `adminuser` can log in to the database as the Kerberos principal `adminuser/mdw.proddb` without specifying the `-U` option:
-
-    ``` bash
-    $ psql -h mdw.proddb
-    ```
-
-    The `pg_ident.conf` file defines the username map. This file is located in the HAWQ master data directory (identified by the `hawq_master_directory` property value in `hawq-site.xml`):
-
-    ```
-    # MAPNAME   SYSTEM-USERNAME      GP-USERNAME
-    mymap       /^(.*)mdw\.proddb$   adminuser
-    ```
-
-    The map can be specified in the `pg_hba.conf` file as part of the line that enables Kerberos support:
-
-    ```
-    host all all 0.0.0.0/0 krb5 include_realm=0 krb_realm=proddb map=mymap
-    ```
-
-    For more information about specifying username maps see [Username maps](http://www.postgresql.org/docs/9.0/static/auth-username-maps.html) in the Postgres documentation.
-
-6.  If a Kerberos principal is not a HAWQ user, a message similar to the following is displayed from the `psql` command line when the user attempts to log in to the database:
-
-    ```
-    psql: krb5_sendauth: Bad response
-    ```
-
-    The principal must be added as a HAWQ user.
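-
-    For example, you could register a hypothetical principal `analyst/kerberos-gpdb` as a HAWQ role using the same pattern shown in step 1:
-
-    ``` bash
-    $ psql template1 -c 'CREATE ROLE "analyst/kerberos-gpdb" LOGIN;'
-    ```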
-
-
-### <a id="topic9"></a>Set up HAWQ with Kerberos for JDBC 
-
-Enable Kerberos-authenticated JDBC access to HAWQ.
-
-You can configure HAWQ to use Kerberos to run user-defined Java functions.
-
-1.  Ensure that Kerberos is installed and configured on the HAWQ master. See [Install and Configure the Kerberos Client](#topic6).
-2.  Create the file `.java.login.config` in the folder `/home/gpadmin` and add the following text to the file:
-
-    ```
-    pgjdbc {
-      com.sun.security.auth.module.Krb5LoginModule required
-      doNotPrompt=true
-      useTicketCache=true
-      debug=true
-      client=true;
-    };
-    ```
-
-3.  Create a Java application that connects to HAWQ using Kerberos authentication. The following example database connection URL uses a PostgreSQL JDBC driver and specifies parameters for Kerberos authentication:
-
-    ```
-    jdbc:postgresql://mdw:5432/mytest?kerberosServerName=postgres&jaasApplicationName=pgjdbc&user=gpadmin/kerberos-gpdb
-    ```
-
-    The parameter names and values specified depend on how the Java application performs Kerberos authentication.
-
-4.  Test the Kerberos login by running a sample Java application from HAWQ.
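-
-    One common way to point the JVM at the JAAS login configuration file created in step 2 is the `java.security.auth.login.config` system property. A sketch of such a launch command; the driver jar name and application class below are hypothetical:
-
-    ``` bash
-    $ java -Djava.security.auth.login.config=/home/gpadmin/.java.login.config \
-           -cp postgresql.jar:. TestHAWQKerberos
-    ```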

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/ldap.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/ldap.html.md.erb b/clientaccess/ldap.html.md.erb
deleted file mode 100644
index 27b204f..0000000
--- a/clientaccess/ldap.html.md.erb
+++ /dev/null
@@ -1,116 +0,0 @@
----
-title: Using LDAP Authentication with TLS/SSL
----
-
-You can control access to HAWQ with an LDAP server and, optionally, secure the connection with encryption by adding parameters to pg\_hba.conf file entries.
-
-HAWQ supports LDAP authentication with the TLS/SSL protocol to encrypt communication with an LDAP server:
-
--   LDAP authentication with STARTTLS and TLS protocol – STARTTLS starts with a clear text connection \(no encryption\) and upgrades it to a secure connection \(with encryption\).
--   LDAP authentication with a secure connection and TLS/SSL \(LDAPS\) – HAWQ uses the TLS or SSL protocol based on the protocol that is used by the LDAP server.
-
-If no protocol is specified, HAWQ communicates with the LDAP server with a clear text connection.
-
-To use LDAP authentication, the HAWQ master host must be configured as an LDAP client. See your LDAP documentation for information about configuring LDAP clients.
-
-## Enabling LDAP Authentication with STARTTLS and TLS
-
-To enable STARTTLS with the TLS protocol, specify the `ldaptls` parameter with the value 1. The default port is 389. In this example, the authentication method parameters include the `ldaptls` parameter.
-
-```
-ldap ldapserver=ldap.example.com ldaptls=1 ldapprefix="uid=" ldapsuffix=",ou=People,dc=example,dc=com"
-```
-
-Specify a non-default port with the `ldapport` parameter. In this example, the authentication method includes the `ldaptls` parameter and the `ldapport` parameter to specify the port 550.
-
-```
-ldap ldapserver=ldap.example.com ldaptls=1 ldapport=550 ldapprefix="uid=" ldapsuffix=",ou=People,dc=example,dc=com"
-```
-
-## Enabling LDAP Authentication with a Secure Connection and TLS/SSL
-
-To enable a secure connection with TLS/SSL, add `ldaps://` as the prefix to the LDAP server name specified in the `ldapserver` parameter. The default port is 636.
-
-This example `ldapserver` parameter specifies a secure connection and the TLS/SSL protocol for the LDAP server `ldap.example.com`.
-
-```
-ldapserver=ldaps://ldap.example.com
-```
-
-To specify a non-default port, add a colon \(:\) and the port number after the LDAP server name. This example `ldapserver` parameter includes the `ldaps://` prefix and the non-default port 550.
-
-```
-ldapserver=ldaps://ldap.example.com:550
-```
-
-### Notes
-
-HAWQ logs an error if either of the following combinations is specified in a pg\_hba.conf file entry:
-
--   Both the `ldaps://` prefix and the `ldaptls=1` parameter
--   Both the `ldaps://` prefix and the `ldapport` parameter
-
-Enabling encrypted communication for LDAP authentication only encrypts the communication between HAWQ and the LDAP server.
-
-## Configuring Authentication with a System-wide OpenLDAP System
-
-If you have a system-wide OpenLDAP system and logins are configured to use LDAP with TLS or SSL in the pg_hba.conf file, logins may fail with the following message:
-
-```shell
-could not start LDAP TLS session: error code '-11'
-```
-
-To use an existing OpenLDAP system for authentication, HAWQ must be set up to use the LDAP server's CA certificate to validate user certificates. Follow these steps on both the master and standby hosts to configure HAWQ:
-
-1. Copy the base64-encoded root CA chain file from the Active Directory or LDAP server to
-the HAWQ master and standby master hosts. This example uses the directory `/etc/pki/tls/certs`.
-
-2. Change to the directory where you copied the CA certificate file and, as the root user, generate the hash for OpenLDAP:
-
-    ```
-    # cd /etc/pki/tls/certs
-    # openssl x509 -noout -hash -in <ca-certificate-file>
-    # ln -s <ca-certificate-file> <hash>.0
-    ```
-
-    Here, `<hash>` is the value printed by the preceding `openssl` command; OpenLDAP locates the CA certificate through this hash-named symbolic link.
-
-3. Configure an OpenLDAP configuration file for HAWQ with the CA certificate directory and certificate file specified.
-
-    As the root user, edit the OpenLDAP configuration file `/etc/openldap/ldap.conf`:
-
-    ```
-    SASL_NOCANON on
-    URI ldaps://ldapA.example.priv ldaps://ldapB.example.priv ldaps://ldapC.example.priv
-    BASE dc=example,dc=priv
-    TLS_CACERTDIR /etc/pki/tls/certs
-    TLS_CACERT /etc/pki/tls/certs/<ca-certificate-file>
-    ```
-
-    **Note**: For certificate validation to succeed, the hostname in the certificate must match a hostname in the URI property. Otherwise, you must also add `TLS_REQCERT allow` to the file.
-
-4. As the gpadmin user, edit `/usr/local/hawq/greenplum_path.sh` and add the following line.
-
-    ```bash
-    export LDAPCONF=/etc/openldap/ldap.conf
-    ```
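-
-    After updating `greenplum_path.sh`, you can optionally verify connectivity from the master host. The following is a sketch only; it assumes the `ldapsearch` client is installed and reuses the hypothetical servers and base DN from the configuration file above. A successful anonymous search over `ldaps://` indicates that the CA certificate is validating correctly:
-
-    ``` shell
-    $ source /usr/local/hawq/greenplum_path.sh
-    $ ldapsearch -x -H ldaps://ldapA.example.priv -b "dc=example,dc=priv" -s base
-    ```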
-
-## Examples
-
-These are example entries from a pg\_hba.conf file.
-
-This example specifies LDAP authentication with no encryption between HAWQ and the LDAP server.
-
-```
-host all plainuser 0.0.0.0/0 ldap ldapserver=ldap.example.com ldapprefix="uid=" ldapsuffix=",ou=People,dc=example,dc=com"
-```
-
-This example specifies LDAP authentication with the STARTTLS and TLS protocol between HAWQ and the LDAP server.
-
-```
-host all tlsuser 0.0.0.0/0 ldap ldapserver=ldap.example.com ldaptls=1 ldapprefix="uid=" ldapsuffix=",ou=People,dc=example,dc=com"
-```
-
-This example specifies LDAP authentication with a secure connection and TLS/SSL protocol between HAWQ and the LDAP server.
-
-```
-host all ldapsuser 0.0.0.0/0 ldap ldapserver=ldaps://ldap.example.com ldapprefix="uid=" ldapsuffix=",ou=People,dc=example,dc=com"
-```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/clientaccess/roles_privs.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/roles_privs.html.md.erb b/clientaccess/roles_privs.html.md.erb
deleted file mode 100644
index 4bdf3ee..0000000
--- a/clientaccess/roles_privs.html.md.erb
+++ /dev/null
@@ -1,285 +0,0 @@
----
-title: Managing Roles and Privileges
----
-
-The HAWQ authorization mechanism stores roles and permissions to access database objects in the database and is administered using SQL statements or command-line utilities.
-
-HAWQ manages database access permissions using *roles*. The concept of roles subsumes the concepts of *users* and *groups*. A role can be a database user, a group, or both. Roles can own database objects \(for example, tables\) and can assign privileges on those objects to other roles to control access to the objects. Roles can be members of other roles, thus a member role can inherit the object privileges of its parent role.
-
-Every HAWQ system contains a set of database roles \(users and groups\). Those roles are separate from the users and groups managed by the operating system on which the server runs. However, for convenience you may want to maintain a relationship between operating system user names and HAWQ role names, since many of the client applications use the current operating system user name as the default.
-
-In HAWQ, users log in and connect through the master instance, which then verifies their role and access privileges. The master then issues commands to the segment instances behind the scenes as the currently logged in role.
-
-Roles are defined at the system level, meaning they are valid for all databases in the system.
-
-In order to bootstrap the HAWQ system, a freshly initialized system always contains one predefined *superuser* role \(also referred to as the system user\). This role will have the same name as the operating system user that initialized the HAWQ system. Customarily, this role is named `gpadmin`. In order to create more roles you first have to connect as this initial role.
-
-## <a id="topic2"></a>Security Best Practices for Roles and Privileges 
-
--   **Secure the gpadmin system user.** HAWQ requires a UNIX user id to install and initialize the HAWQ system. This system user, referred to as `gpadmin` in the HAWQ documentation, is the default database superuser as well as the file system owner of the HAWQ installation and its underlying data files. This default administrator account is fundamental to the design of HAWQ: the system cannot run without it, and there is no way to limit the access of the `gpadmin` user id. Anyone who logs on to a HAWQ host as `gpadmin` can read, alter, or delete any data, including system catalog data and database access rights. Therefore, it is very important to secure the `gpadmin` user id and provide access only to essential system administrators. Use roles to manage who has access to the database for specific purposes, and log in as `gpadmin` only for system maintenance tasks such as expansion and upgrade. Database users should never log on as `gpadmin`, and ETL or production workloads should never run as `gpadmin`.
--   **Assign a distinct role to each user that logs in.** For logging and auditing purposes, each user that is allowed to log in to HAWQ should be given their own database role. For applications or web services, consider creating a distinct role for each application or service. See [Creating New Roles \(Users\)](#topic3).
--   **Use groups to manage access privileges.** See [Role Membership](#topic5).
--   **Limit users who have the SUPERUSER role attribute.** Roles that are superusers bypass all access privilege checks in HAWQ, as well as resource queuing. Only system administrators should be given superuser rights. See [Altering Role Attributes](#topic4).
-
-## <a id="topic3"></a>Creating New Roles \(Users\) 
-
-A user-level role is considered to be a database role that can log in to the database and initiate a database session. Therefore, when you create a new user-level role using the `CREATE ROLE` command, you must specify the `LOGIN` privilege. For example:
-
-``` sql
-=# CREATE ROLE jsmith WITH LOGIN;
-```
-
-A database role may have a number of attributes that define what sort of tasks that role can perform in the database. You can set these attributes when you create the role, or later using the `ALTER ROLE` command. See [Table 1](#iq139556) for a description of the role attributes you can set.
-
-### <a id="topic4"></a>Altering Role Attributes 
-
-A database role may have a number of attributes that define what sort of tasks that role can perform in the database.
-
-<a id="iq139556"></a>
-
-|Attributes|Description|
-|----------|-----------|
-|SUPERUSER &#124; NOSUPERUSER|Determines if the role is a superuser. You must yourself be a superuser to create a new superuser. NOSUPERUSER is the default.|
-|CREATEDB &#124; NOCREATEDB|Determines if the role is allowed to create databases. NOCREATEDB is the default.|
-|CREATEROLE &#124; NOCREATEROLE|Determines if the role is allowed to create and manage other roles. NOCREATEROLE is the default.|
-|INHERIT &#124; NOINHERIT|Determines whether a role inherits the privileges of roles it is a member of. A role with the INHERIT attribute can automatically use whatever database privileges have been granted to all roles it is directly or indirectly a member of. INHERIT is the default.|
-|LOGIN &#124; NOLOGIN|Determines whether a role is allowed to log in. A role having the LOGIN attribute can be thought of as a user. Roles without this attribute are useful for managing database privileges \(groups\). NOLOGIN is the default.|
-|CONNECTION LIMIT *connlimit*|If role can log in, this specifies how many concurrent connections the role can make. -1 \(the default\) means no limit.|
-|PASSWORD '*password*'|Sets the role's password. If you do not plan to use password authentication you can omit this option. If no password is specified, the password will be set to null and password authentication will always fail for that user. A null password can optionally be written explicitly as PASSWORD NULL.|
-|ENCRYPTED &#124; UNENCRYPTED|Controls whether the password is stored encrypted in the system catalogs. The default behavior is determined by the configuration parameter `password_encryption` \(currently `md5`; to use SHA-256 encryption, change this setting to `password`\). If the presented password string is already in encrypted format, then it is stored encrypted as-is, regardless of whether ENCRYPTED or UNENCRYPTED is specified \(since the system cannot decrypt the specified encrypted password string\). This allows reloading of encrypted passwords during dump/restore.|
-|VALID UNTIL '*timestamp*'|Sets a date and time after which the role's password is no longer valid. If omitted the password will be valid for all time.|
-|RESOURCE QUEUE *queue\_name*|Assigns the role to the named resource queue for workload management. Any statement that role issues is then subject to the resource queue's limits. Note that the RESOURCE QUEUE attribute is not inherited; it must be set on each user-level \(LOGIN\) role.|
-|DENY \{deny\_interval &#124; deny\_point\}|Restricts access during an interval, specified by day or day and time. For more information see [Time-based Authentication](#topic13).|
-
-You can set these attributes when you create the role, or later using the `ALTER ROLE` command. For example:
-
-``` sql
-=# ALTER ROLE jsmith WITH PASSWORD 'passwd123';
-=# ALTER ROLE admin VALID UNTIL 'infinity';
-=# ALTER ROLE jsmith LOGIN;
-=# ALTER ROLE jsmith RESOURCE QUEUE adhoc;
-=# ALTER ROLE jsmith DENY DAY 'Sunday';
-```
-
-## <a id="topic5"></a>Role Membership 
-
-It is frequently convenient to group users together to ease management of object privileges: that way, privileges can be granted to, or revoked from, a group as a whole. In HAWQ this is done by creating a role that represents the group, and then granting membership in the group role to individual user roles.
-
-Use the `CREATE ROLE` SQL command to create a new group role. For example:
-
-``` sql
-=# CREATE ROLE admin CREATEROLE CREATEDB;
-```
-
-Once the group role exists, you can add and remove members \(user roles\) using the `GRANT` and `REVOKE` commands. For example:
-
-``` sql
-=# GRANT admin TO john, sally;
-=# REVOKE admin FROM bob;
-```
-
-For managing object privileges, you would then grant the appropriate permissions to the group-level role only \(see [Table 2](#iq139925)\). The member user roles then inherit the object privileges of the group role. For example:
-
-``` sql
-=# GRANT ALL ON TABLE mytable TO admin;
-=# GRANT ALL ON SCHEMA myschema TO admin;
-=# GRANT ALL ON DATABASE mydb TO admin;
-```
-
-The role attributes `LOGIN`, `SUPERUSER`, `CREATEDB`, and `CREATEROLE` are never inherited as ordinary privileges on database objects are. User members must actually `SET ROLE` to a specific role having one of these attributes in order to make use of the attribute. In the above example, we gave `CREATEDB` and `CREATEROLE` to the `admin` role. If `sally` is a member of `admin`, she could issue the following command to assume the role attributes of the parent role:
-
-``` sql
-=> SET ROLE admin;
-```
-
-## <a id="topic6"></a>Managing Object Privileges 
-
-When an object \(table, view, sequence, database, function, language, schema, or tablespace\) is created, it is assigned an owner. The owner is normally the role that executed the creation statement. For most kinds of objects, the initial state is that only the owner \(or a superuser\) can do anything with the object. To allow other roles to use it, privileges must be granted. HAWQ supports the following privileges for each object type:
-
-<a id="iq139925"></a>
-
-|Object Type|Privileges|
-|-----------|----------|
-|Tables, Views, Sequences|SELECT <br/> INSERT <br/> RULE <br/> ALL|
-|External Tables|SELECT <br/> RULE <br/> ALL|
-|Databases|CONNECT<br/>CREATE<br/>TEMPORARY &#124; TEMP <br/> ALL|
-|Functions|EXECUTE|
-|Procedural Languages|USAGE|
-|Schemas|CREATE <br/> USAGE <br/> ALL|
-|Custom Protocol|SELECT <br/> INSERT <br/> RULE <br/> ALL|
-
-**Note:** Privileges must be granted for each object individually. For example, granting ALL on a database does not grant full access to the objects within that database. It only grants all of the database-level privileges \(CONNECT, CREATE, TEMPORARY\) to the database itself.
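-
-For example, the following sketch \(with hypothetical object names\) gives a role read access to a single table; the database-level, schema-level, and table-level privileges must each be granted separately:
-
-``` sql
-=# GRANT CONNECT ON DATABASE mydb TO jsmith;
-=# GRANT USAGE ON SCHEMA myschema TO jsmith;
-=# GRANT SELECT ON myschema.mytable TO jsmith;
-```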
-
-Use the `GRANT` SQL command to give a specified role privileges on an object. For example:
-
-``` sql
-=# GRANT INSERT ON mytable TO jsmith;
-```
-
-To revoke privileges, use the `REVOKE` command. For example:
-
-``` sql
-=# REVOKE ALL PRIVILEGES ON mytable FROM jsmith;
-```
-
-You can also use the `DROP OWNED` and `REASSIGN OWNED` commands for managing objects owned by deprecated roles \(Note: only an object's owner or a superuser can drop an object or reassign ownership\). For example:
-
-``` sql
-=# REASSIGN OWNED BY sally TO bob;
-=# DROP OWNED BY visitor;
-```
-
-### <a id="topic7"></a>Simulating Row and Column Level Access Control 
-
-Row-level or column-level access is not supported, nor is labeled security. Row-level and column-level access can be simulated using views to restrict the columns and/or rows that are selected. Row-level labels can be simulated by adding an extra column to the table to store sensitivity information, and then using views to control row-level access based on this column. Roles can then be granted access to the views rather than the base table.
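-
-The following sketch illustrates the approach with hypothetical object names. A label column on the base table drives the row filter, the view also omits a sensitive column, and access is granted on the view rather than the base table:
-
-``` sql
-=# CREATE TABLE customer_data (id int, name text, ssn text, row_owner text);
-=# CREATE VIEW customer_data_limited AS
-     SELECT id, name FROM customer_data WHERE row_owner = CURRENT_USER;
-=# GRANT SELECT ON customer_data_limited TO jsmith;
-```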
-
-## <a id="topic8"></a>Encrypting Data 
-
-PostgreSQL provides an optional package of encryption/decryption functions called `pgcrypto`, which can also be installed and used in HAWQ. The `pgcrypto` package is not installed by default with HAWQ. However, you can download a `pgcrypto` package from [Pivotal Network](https://network.pivotal.io). 
-
-If you are building HAWQ from source files, then you should enable `pgcrypto` support as an option when compiling HAWQ.
-
-The `pgcrypto` functions allow database administrators to store certain columns of data in encrypted form. This adds an extra layer of protection for sensitive data, as data stored in HAWQ in encrypted form cannot be read by users who do not have the encryption key, nor be read directly from the disks.
-
-**Note:** The `pgcrypto` functions run inside the database server, which means that all the data and passwords move between `pgcrypto` and the client application in clear-text. For optimal security, consider also using SSL connections between the client and the HAWQ master server.
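-
-A minimal sketch, assuming the `pgcrypto` package is installed and using hypothetical table, column, and key names; `pgp_sym_encrypt` stores the value as `bytea`, and only sessions that supply the key can recover the clear text:
-
-``` sql
-=# CREATE TABLE payment_info (id int, card_no bytea);
-=# INSERT INTO payment_info
-     VALUES (1, pgp_sym_encrypt('4111111111111111', 'my-secret-key'));
-=# SELECT id, pgp_sym_decrypt(card_no, 'my-secret-key') FROM payment_info;
-```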
-
-## <a id="topic9"></a>Encrypting Passwords 
-
-This section describes how to use a server parameter to implement SHA-256 encrypted password storage. Note that in order to use SHA-256 encryption for storage, the client authentication method must be set to `password` rather than the default, `md5`. \(See [Encrypting Client/Server Connections](client_auth.html) for more details.\) This means that the password is transmitted in clear text over the network; to avoid this, set up SSL to encrypt the client/server communication channel.
-
-### <a id="topic10"></a>Enabling SHA-256 Encryption 
-
-You can set your chosen encryption method system-wide or on a per-session basis. There are three encryption methods available: `SHA-256`, `SHA-256-FIPS`, and `MD5` \(for backward compatibility\). The `SHA-256-FIPS` method requires that FIPS compliant libraries are used.
-
-#### <a id="topic11"></a>System-wide 
-
-You will perform different procedures to set the encryption method (`password_hash_algorithm` server parameter) system-wide depending upon whether you manage your cluster from the command line or use Ambari. If you use Ambari to manage your HAWQ cluster, you must ensure that you update encryption method configuration parameters only via the Ambari Web UI. If you manage your HAWQ cluster from the command line, you will use the `hawq config` command line utility to set encryption method configuration parameters.
-
-If you use Ambari to manage your HAWQ cluster:
-
-1. Set the `password_hash_algorithm` configuration property via the HAWQ service **Configs > Advanced > Custom hawq-site** drop down. Valid values include `SHA-256` \(or `SHA-256-FIPS` to use the FIPS-compliant libraries for SHA-256\).
-2. Select **Service Actions > Restart All** to load the updated configuration.
-
-If you manage your HAWQ cluster from the command line:
-
-1.  Log in to the HAWQ master host as a HAWQ administrator and source the file `/usr/local/hawq/greenplum_path.sh`.
-
-    ``` shell
-    $ source /usr/local/hawq/greenplum_path.sh
-    ```
-
-1. Use the `hawq config` utility to set `password_hash_algorithm` to `SHA-256` \(or `SHA-256-FIPS` to use the FIPS-compliant libraries for SHA-256\):
-
-    ``` shell
-    $ hawq config -c password_hash_algorithm -v 'SHA-256'
-    ```
-        
-    Or:
-        
-    ``` shell
-    $ hawq config -c password_hash_algorithm -v 'SHA-256-FIPS'
-    ```
-
-2. Reload the HAWQ configuration:
-
-    ``` shell
-    $ hawq stop cluster -u
-    ```
-
-3.  Verify the setting:
-
-    ``` bash
-    $ hawq config -s password_hash_algorithm
-    ```
-
-#### <a id="topic12"></a>Individual Session 
-
-To set the `password_hash_algorithm` server parameter for an individual database session:
-
-1.  Log in to your HAWQ instance as a superuser.
-2.  Set the `password_hash_algorithm` to `SHA-256` \(or `SHA-256-FIPS` to use the FIPS-compliant libraries for SHA-256\):
-
-    ``` sql
-    =# SET password_hash_algorithm = 'SHA-256';
-    SET
-    ```
-
-    or:
-
-    ``` sql
-    =# SET password_hash_algorithm = 'SHA-256-FIPS';
-    SET
-    ```
-
-3.  Verify the setting:
-
-    ``` sql
-    =# SHOW password_hash_algorithm;
-    password_hash_algorithm
-    ```
-
-    You will see:
-
-    ```
-    SHA-256
-    ```
-
-    or:
-
-    ```
-    SHA-256-FIPS
-    ```
-
-    **Example**
-
-    Following is an example of how the new setting works:
-
-4.  Log in as a superuser and verify the password hash algorithm setting:
-
-    ``` sql
-    =# SHOW password_hash_algorithm;
-    password_hash_algorithm
-    -------------------------------
-    SHA-256-FIPS
-    ```
-
-5.  Create a new role with a password and login privileges.
-
-    ``` sql
-    =# CREATE ROLE testdb WITH PASSWORD 'testdb12345#' LOGIN;
-    ```
-
-6.  Change the client authentication method to allow for storage of SHA-256 encrypted passwords:
-
-    Open the `pg_hba.conf` file on the master and add the following line:
-
-    ```
-    host all testdb 0.0.0.0/0 password
-    ```
-
-7.  Restart the cluster.
-8.  Log in to the database as the newly created user `testdb`.
-
-    ``` bash
-    $ psql -U testdb
-    ```
-
-9.  Enter the correct password at the prompt.
-10. Verify that the password is stored as a SHA-256 hash.
-
-    Note that password hashes are stored in `pg_authid.rolpassword`.
-
-    1.  Log in as the superuser.
-    2.  Execute the following:
-
-        ``` sql
-        =# SELECT rolpassword FROM pg_authid WHERE rolname = 'testdb';
-        rolpassword
-        -----------
-        sha256<64 hexadecimal characters>
-        ```
-
-
-## <a id="topic13"></a>Time-based Authentication 
-
-HAWQ enables the administrator to restrict access to certain times by role. Use the `CREATE ROLE` or `ALTER ROLE` commands to specify time-based constraints.
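-
-For example, the following sketch restricts a hypothetical role. The `DENY BETWEEN` and `DROP DENY FOR` forms shown here are a sketch of the time-based constraint syntax; consult the `ALTER ROLE` reference for the exact grammar:
-
-``` sql
-=# ALTER ROLE jsmith DENY DAY 'Sunday';
-=# ALTER ROLE jsmith DENY BETWEEN DAY 'Friday' TIME '20:00' AND DAY 'Monday' TIME '04:00';
-=# ALTER ROLE jsmith DROP DENY FOR DAY 'Sunday';
-```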

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/BasicDataOperations.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/BasicDataOperations.html.md.erb b/datamgmt/BasicDataOperations.html.md.erb
deleted file mode 100644
index 66328c7..0000000
--- a/datamgmt/BasicDataOperations.html.md.erb
+++ /dev/null
@@ -1,64 +0,0 @@
----
-title: Basic Data Operations
----
-
-This topic describes basic data operations that you perform in HAWQ.
-
-## <a id="topic3"></a>Inserting Rows
-
-Use the `INSERT` command to create rows in a table. This command requires the table name and a value for each column in the table; you may optionally specify the column names in any order. If you do not specify column names, list the data values in the order of the columns in the table, separated by commas.
-
-For example, to specify the column names and the values to insert:
-
-``` sql
-INSERT INTO products (name, price, product_no) VALUES ('Cheese', 9.99, 1);
-```
-
-To specify only the values to insert:
-
-``` sql
-INSERT INTO products VALUES (1, 'Cheese', 9.99);
-```
-
-Usually, the data values are literals (constants), but you can also use scalar expressions. For example:
-
-``` sql
-INSERT INTO films SELECT * FROM tmp_films WHERE date_prod < '2004-05-07';
-```
-
-You can insert multiple rows in a single command. For example:
-
-``` sql
-INSERT INTO products (product_no, name, price) VALUES
-    (1, 'Cheese', 9.99),
-    (2, 'Bread', 1.99),
-    (3, 'Milk', 2.99);
-```
-
-To insert data into a partitioned table, you specify the root partitioned table, the table created with the `CREATE TABLE` command. You also can specify a leaf child table of the partitioned table in an `INSERT` command. An error is returned if the data is not valid for the specified leaf child table. Specifying a child table that is not a leaf child table in the `INSERT` command is not supported.
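-
-For example, the following sketch uses a hypothetical partitioned table `sales`. The first statement inserts through the root table; the second inserts directly into a leaf child table, whose name here follows the common `<table>_1_prt_<partition-name>` pattern but depends on how the partitions were defined:
-
-``` sql
-INSERT INTO sales VALUES (100, '2016-05-11', 29.99);
-INSERT INTO sales_1_prt_may16 VALUES (101, '2016-05-12', 14.50);
-```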
-
-To insert large amounts of data, use external tables or the `COPY` command. These load mechanisms are more efficient than `INSERT` for inserting large quantities of rows. See [Loading and Unloading Data](load/g-loading-and-unloading-data.html#topic1) for more information about bulk data loading.
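-
-For example, a minimal `COPY` sketch, assuming a hypothetical CSV file that is readable on the HAWQ master host:
-
-``` sql
-COPY products FROM '/data/products.csv' WITH CSV HEADER;
-```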
-
-## <a id="topic9"></a>Vacuuming the System Catalog Tables
-
-Only HAWQ system catalog tables use multiversion concurrency control (MVCC). Deleted or updated data rows in the catalog tables occupy physical space on disk even though new transactions cannot see them. Periodically running the `VACUUM` command removes these expired rows.
-
-The `VACUUM` command also collects table-level statistics such as the number of rows and pages.
-
-For example:
-
-``` sql
-VACUUM pg_class;
-```
-
-### <a id="topic10"></a>Configuring the Free Space Map
-
-Expired rows are held in the *free space map*. The free space map must be sized large enough to hold all expired rows in your database. If not, a regular `VACUUM` command cannot reclaim space occupied by expired rows that overflow the free space map.
-
-**Note:** `VACUUM FULL` is not recommended with HAWQ because it is not safe for large tables and may take an unacceptably long time to complete. See [VACUUM](../reference/sql/VACUUM.html#topic1).
-
-Size the free space map with the following server configuration parameters; a configuration sketch follows this list:
-
--   `max_fsm_pages`
--   `max_fsm_relations`
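-
-A hypothetical sketch, assuming these parameters are managed with the `hawq config` utility like other server configuration parameters and that the values are sized for your database; a full cluster restart is required for new free space map sizes to take effect:
-
-``` shell
-$ hawq config -c max_fsm_pages -v 400000
-$ hawq config -c max_fsm_relations -v 2000
-$ hawq restart cluster
-```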

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/datamgmt/ConcurrencyControl.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/ConcurrencyControl.html.md.erb b/datamgmt/ConcurrencyControl.html.md.erb
deleted file mode 100644
index 2ced135..0000000
--- a/datamgmt/ConcurrencyControl.html.md.erb
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: Concurrency Control
----
-
-This topic discusses the mechanisms used in HAWQ to provide concurrency control.
-
-HAWQ and PostgreSQL do not use locks for concurrency control. They maintain data consistency using a multiversion model, Multiversion Concurrency Control (MVCC). MVCC achieves transaction isolation for each database session, and each query transaction sees a snapshot of data. This ensures the transaction sees consistent data that is not affected by other concurrent transactions.
-
-Because MVCC does not use explicit locks for concurrency control, lock contention is minimized and HAWQ maintains reasonable performance in multiuser environments. Locks acquired for querying (reading) data do not conflict with locks acquired for writing data.
-
-HAWQ provides multiple lock modes to control concurrent access to data in tables. Most HAWQ SQL commands automatically acquire the appropriate locks to ensure that referenced tables are not dropped or modified in incompatible ways while a command executes. For applications that cannot adapt easily to MVCC behavior, you can use the `LOCK` command to acquire explicit locks. However, proper use of MVCC generally provides better performance.
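-
-For example, the following is a minimal sketch of taking an explicit lock on a hypothetical table. Per the table below, SHARE mode blocks concurrent `INSERT` and `COPY` until the transaction ends, so the reads inside the transaction cannot interleave with writes:
-
-``` sql
-BEGIN;
-LOCK TABLE mytable IN SHARE MODE;
-SELECT count(*) FROM mytable;
-COMMIT;
-```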
-
-<caption><span class="tablecap">Table 1. Lock Modes in HAWQ</span></caption>
-
-<a id="topic_f5l_qnh_kr__ix140861"></a>
-
-| Lock Mode              | Associated SQL Commands                                                             | Conflicts With                                                                                                          |
-|------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|
-| ACCESS SHARE           | `SELECT`                                                                            | ACCESS EXCLUSIVE                                                                                                        |
-| ROW EXCLUSIVE          | `INSERT`, `COPY`                                                                    | SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE                                                                 |
-| SHARE UPDATE EXCLUSIVE | `VACUUM` (without `FULL`), `ANALYZE`                                                | SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE                                         |
-| SHARE                  | `CREATE INDEX`                                                                      | ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE                                 |
-| SHARE ROW EXCLUSIVE    | \(none\)                                                                            | ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE                          |
-| ACCESS EXCLUSIVE       | `ALTER TABLE`, `DROP TABLE`, `TRUNCATE`, `REINDEX`, `CLUSTER`, `VACUUM FULL`        | ACCESS SHARE, ROW SHARE, ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE |