Posted to commits@hawq.apache.org by yo...@apache.org on 2017/01/06 17:33:05 UTC

[50/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/FaultTolerance.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/FaultTolerance.html.md.erb b/admin/FaultTolerance.html.md.erb
deleted file mode 100644
index fc9de93..0000000
--- a/admin/FaultTolerance.html.md.erb
+++ /dev/null
@@ -1,52 +0,0 @@
----
-title: Understanding the Fault Tolerance Service
----
-
-The fault tolerance service (FTS) enables HAWQ to continue operating in the event that a segment node fails. The fault tolerance service runs automatically and requires no additional configuration.
-
-Each segment runs a resource manager process that periodically sends (by default, every 30 seconds) the segment's status to the master's resource manager process. This interval is controlled by the `hawq_rm_segment_heartbeat_interval` server configuration parameter.
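-
-For example, you can view or change this interval with the `hawq config` utility (a sketch for command-line-managed clusters):
-
-```shell
-$ hawq config -s hawq_rm_segment_heartbeat_interval    # show the current value
-$ hawq config -c hawq_rm_segment_heartbeat_interval -v 30
-$ hawq stop cluster -u                                 # reload configuration files
-```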
-
-When a segment encounters a critical error -- for example, a temporary directory on the segment fails due to a hardware error -- the segment reports the failure to the HAWQ master through a heartbeat report. When the master receives the report, it marks the segment as DOWN in the `gp_segment_configuration` table. All changes to a segment's status are recorded in the `gp_configuration_history` catalog table, including the reason why the segment is marked as DOWN. When a segment is set to DOWN, the master will not run query executors on it. The failed segment is fault-isolated from the rest of the cluster.
-
-Besides disk failure, there are other reasons why a segment can be marked as DOWN. For example, if HAWQ is running in YARN mode, every segment should have a NodeManager (Hadoop's YARN service) running on it, so that the segment can be considered a resource to HAWQ. However, if the NodeManager on a segment is not operating properly, this segment will also be marked as DOWN in the `gp_segment_configuration` table. The corresponding reason for the failure is recorded into `gp_configuration_history`.
-
-**Note:** If a disk fails in a particular segment, the failure may cause either an HDFS error or a temporary directory error in HAWQ. HDFS errors are handled by the Hadoop HDFS service.
-
-## Viewing the Current Status of a Segment <a id="view_segment_status"></a>
-
-To view the current status of the segment, query the `gp_segment_configuration` table.
-
-If the status of a segment is DOWN, the "description" column displays the reason. The description can include one or more of the following reasons, separated by a semicolon (";").
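-
-For example, the following query lists each segment's status and the reason for any DOWN state (a sketch; it assumes access to the `postgres` database):
-
-```shell
-$ psql -d postgres -c "SELECT hostname, status, description FROM gp_segment_configuration;"
-```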
-
-**Reason: heartbeat timeout**
-
-The master has not received a heartbeat from the segment. If you see this reason, make sure that HAWQ is running on the segment.
-
-If the segment reports a heartbeat at a later time, the segment is marked as UP.
-
-**Reason: failed probing segment**
-
-The master probed the segment to verify that it is operating normally, and the segment responded that it is not.
-
-While a HAWQ instance is running, the Query Dispatcher may find that some Query Executors on a segment are not working normally. The resource manager process on the master then sends a message to that segment. When the segment's resource manager receives the message, it checks whether its PostgreSQL postmaster process is working normally and sends a reply to the master. If the reply indicates that the segment's postmaster process is not working normally, the master marks the segment as DOWN with the reason "failed probing segment."
-
-Check the logs of the failed segment and try to restart the HAWQ instance.
-
-**Reason: communication error**
-
-The master cannot connect to the segment.
-
-Check the network connection between the master and the segment.
-
-**Reason: resource manager process was reset**
-
-If the timestamp of the segment resource manager process doesn't match the previous timestamp, the resource manager process on the segment has been restarted. In this case, the HAWQ master returns the resources held on this segment and marks the segment as DOWN. If the master receives a new heartbeat from this segment, it will mark it back to UP.
-
-**Reason: no global node report**
-
-HAWQ is using YARN for resource management, but no cluster node report has been received for this segment.
-
-Check that NodeManager is operating normally on this segment. 
-
-If not, try to start NodeManager on the segment. 
-After NodeManager is started, run `yarn node -list` to see if the node is in the list. If so, this segment is set to UP.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb b/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb
deleted file mode 100644
index b4284be..0000000
--- a/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb
+++ /dev/null
@@ -1,223 +0,0 @@
----
-title: HAWQ Filespaces and High Availability Enabled HDFS
----
-
-If you initialized HAWQ without the HDFS High Availability \(HA\) feature, you can enable it by using the following procedure.
-
-## <a id="enablingthehdfsnamenodehafeature"></a>Enabling the HDFS NameNode HA Feature 
-
-To enable the HDFS NameNode HA feature for use with HAWQ, you need to perform the following tasks:
-
-1. Enable high availability in your HDFS cluster.
-1. Collect information about the target filespace.
-1. Stop the HAWQ cluster and back up the catalog (**Note:** Ambari users must perform this manual step.)
-1. Move the filespace location using the command line tool (**Note:** Ambari users must perform this manual step.)
-1. Reconfigure the `${GPHOME}/etc/hdfs-client.xml` and `${GPHOME}/etc/hawq-site.xml` files, then synchronize the updated configuration files to all HAWQ nodes.
-1. Start the HAWQ cluster and resynchronize the standby master after moving the filespace.
-
-
-### <a id="enablehahdfs"></a>Step 1: Enable High Availability in Your HDFS Cluster 
-
-Enable high availability for NameNodes in your HDFS cluster. See the documentation for your Hadoop distribution for instructions on how to do this. 
-
-**Note:** If you're using Ambari to manage your HDFS cluster, you can use the Enable NameNode HA Wizard. For example, [this Hortonworks HDP procedure](https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-user-guide/content/how_to_configure_namenode_high_availability.html) outlines how to do this in Ambari for HDP.
-
-### <a id="collectinginformationaboutthetargetfilespace"></a>Step 2: Collect Information about the Target Filespace 
-
-A default filespace named `dfs_system` exists in the `pg_filespace` catalog table, and the `pg_filespace_entry` catalog table contains detailed information for each filespace.
-
-To move the filespace location to a HA-enabled HDFS location, you must move the data to a new path on your HA-enabled HDFS cluster.
-
-1.  Use the following SQL query to gather information about the filespace located on HDFS:
-
-    ```sql
-    SELECT
-        fsname, fsedbid, fselocation
-    FROM
-        pg_filespace AS sp, pg_filespace_entry AS entry, pg_filesystem AS fs
-    WHERE
-        sp.fsfsys = fs.oid AND fs.fsysname = 'hdfs' AND sp.oid = entry.fsefsoid
-    ORDER BY
-        entry.fsedbid;
-    ```
-
-    The sample output is as follows:
-
-    ```
-     fsname       | fsedbid | fselocation
-    --------------+---------+-----------------------------------------
-     cdbfast_fs_c | 0       | hdfs://hdfs-cluster/hawq//cdbfast_fs_c
-     cdbfast_fs_b | 0       | hdfs://hdfs-cluster/hawq//cdbfast_fs_b
-     cdbfast_fs_a | 0       | hdfs://hdfs-cluster/hawq//cdbfast_fs_a
-     dfs_system   | 0       | hdfs://test5:9000/hawq/hawq-1459499690
-    (4 rows)
-    ```
-
-    The output contains the following:
-    - HDFS paths that share the same prefix
-    - Current filespace location
-
-    **Note:** If you see `{replica=3}` in the filespace location, ignore this part of the prefix. This is a known issue.
-
-2.  To enable HA HDFS, you need the filespace name and the common prefix of your HDFS paths. The filespace location is formatted like a URL.
-
-	If the previous filespace location is 'hdfs://test5:9000/hawq/hawq-1459499690' and the HA HDFS common prefix is 'hdfs://hdfs-cluster', then the new filespace location should be 'hdfs://hdfs-cluster/hawq/hawq-1459499690'.
-
-    ```
-    Filespace Name: dfs_system
-    Old location: hdfs://test5:9000/hawq/hawq-1459499690
-    New location: hdfs://hdfs-cluster/hawq/hawq-1459499690
-    ```
-
-### <a id="stoppinghawqclusterandbackupcatalog"></a>Step 3: Stop the HAWQ Cluster and Back Up the Catalog 
-
-**Note:** Ambari users must perform this manual step.
-
-When you enable HA HDFS, you are changing the HAWQ catalog and persistent tables. You cannot perform transactions while persistent tables are being updated. Therefore, before you move the filespace location, back up the catalog. This is to ensure that you do not lose data due to a hardware failure or during an operation \(such as killing the HAWQ process\).
-
-
-1. If you defined a custom port for HAWQ master, export the `PGPORT` environment variable. For example:
-
-	```shell
-	export PGPORT=9000
-	```
-
-1. Save the HAWQ master data directory, found in the `hawq_master_directory` property value in `hawq-site.xml`, to an environment variable.
- 
-	```bash
-	export MDATA_DIR=/path/to/hawq_master_directory
-	```
-
-1.  Disconnect all workload connections. Check for active connections with:
-
-    ```shell
-    $ psql -p ${PGPORT} -c "SELECT * FROM pg_catalog.pg_stat_activity" -d template1
-    ```
-    where `${PGPORT}` corresponds to the port number you optionally customized for HAWQ master. 
-    
-
-2.  Issue a checkpoint:
-
-    ```shell
-    $ psql -p ${PGPORT} -c "CHECKPOINT" -d template1
-    ```
-
-3.  Shut down the HAWQ cluster:
-
-    ```shell
-    $ hawq stop cluster -a -M fast
-    ```
-
-4.  Copy the master data directory to a backup location:
-
-    ```shell
-    $ cp -r ${MDATA_DIR} /catalog/backup/location
-    ```
-	The master data directory contains the catalog. Fatal errors can occur due to hardware failure or if you fail to kill a HAWQ process before attempting a filespace location change. Make sure you back this directory up.
-
-### <a id="movingthefilespacelocation"></a>Step 4: Move the Filespace Location 
-
-**Note:** Ambari users must perform this manual step.
-
-HAWQ provides the command line tool, `hawq filespace`, to move the location of the filespace.
-
-1. If you defined a custom port for HAWQ master, export the `PGPORT` environment variable. For example:
-
-	```shell
-	export PGPORT=9000
-	```
-1. Run the following command to move a filespace location:
-
-	```shell
-	$ hawq filespace --movefilespace default --location=hdfs://hdfs-cluster/hawq_new_filespace
-	```
-	Specify `default` as the value of the `--movefilespace` option. Replace `hdfs://hdfs-cluster/hawq_new_filespace` with the new filespace location.
-
-#### **Important:** Potential Errors During Filespace Move
-
-Non-fatal errors can occur if you provide invalid input or if you have not stopped HAWQ before attempting a filespace location change. Check that you have followed the instructions from the beginning, or correct the input error before you re-run `hawq filespace`.
-
-Fatal errors can occur due to hardware failure or if you fail to kill a HAWQ process before attempting a filespace location change. When a fatal error occurs, you will see the message, "PLEASE RESTORE MASTER DATA DIRECTORY" in the output. If this occurs, shut down the database and restore the `${MDATA_DIR}` that you backed up in Step 3.
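-
-If you must restore, a minimal sketch (assuming the backup location used in Step 3 and a stopped cluster):
-
-```shell
-$ rm -rf ${MDATA_DIR}
-$ cp -r /catalog/backup/location/$(basename ${MDATA_DIR}) ${MDATA_DIR}
-```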
-
-### <a id="configuregphomeetchdfsclientxml"></a>Step 5: Update HAWQ to Use NameNode HA by Reconfiguring hdfs-client.xml and hawq-site.xml 
-
-If you install and manage your cluster using command-line utilities, follow these steps to modify your HAWQ configuration to use the NameNode HA service.
-
-**Note:** These steps are not required if you use Ambari to manage HDFS and HAWQ, because Ambari makes these changes automatically after you enable NameNode HA.
-
-For command-line administrators:
-
-1. Edit the `${GPHOME}/etc/hdfs-client.xml` file on each segment and add the following NameNode properties:
-
-    ```xml
-    <property>
-     <name>dfs.ha.namenodes.hdpcluster</name>
-     <value>nn1,nn2</value>
-    </property>
-
-    <property>
-     <name>dfs.namenode.http-address.hdpcluster.nn1</name>
-     <value>ip-address-1.mycompany.com:50070</value>
-    </property>
-
-    <property>
-     <name>dfs.namenode.http-address.hdpcluster.nn2</name>
-     <value>ip-address-2.mycompany.com:50070</value>
-    </property>
-
-    <property>
-     <name>dfs.namenode.rpc-address.hdpcluster.nn1</name>
-     <value>ip-address-1.mycompany.com:8020</value>
-    </property>
-
-    <property>
-     <name>dfs.namenode.rpc-address.hdpcluster.nn2</name>
-     <value>ip-address-2.mycompany.com:8020</value>
-    </property>
-
-    <property>
-     <name>dfs.nameservices</name>
-     <value>hdpcluster</value>
-    </property>
-    ```
-
-    In the listing above:
-    * Replace `hdpcluster` with the actual service ID that is configured in HDFS.
-    * Replace `ip-address-1.mycompany.com:50070` and `ip-address-2.mycompany.com:50070` with the actual NameNode HTTP hosts and port numbers that are configured in HDFS.
-    * Replace `ip-address-1.mycompany.com:8020` and `ip-address-2.mycompany.com:8020` with the actual NameNode RPC hosts and port numbers that are configured in HDFS.
-    * The order of the NameNodes listed in `dfs.ha.namenodes.hdpcluster` is important for performance, especially when running secure HDFS. The first entry (`nn1` in the example above) should correspond to the active NameNode.
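-
-    To confirm which NameNode is currently active (using the example NameNode IDs `nn1` and `nn2` from the listing above), you can run:
-
-    ```shell
-    $ hdfs haadmin -getServiceState nn1
-    $ hdfs haadmin -getServiceState nn2
-    ```
-
-    One NameNode should report `active` and the other `standby`.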
-
-2.  Change the following parameter in the `$GPHOME/etc/hawq-site.xml` file:
-
-    ```xml
-    <property>
-        <name>hawq_dfs_url</name>
-        <value>hdpcluster/hawq_default</value>
-        <description>URL for accessing HDFS.</description>
-    </property>
-    ```
-
-    In the listing above:
-    * Replace `hdpcluster` with the actual service ID that is configured in HDFS.
-    * Replace `/hawq_default` with the directory you want to use for storing data on HDFS. Make sure this directory exists and is writable.
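-
-    For example, assuming the `gpadmin` user and the `/hawq_default` directory from the listing above:
-
-    ```shell
-    $ sudo -u hdfs hdfs dfs -mkdir -p /hawq_default
-    $ sudo -u hdfs hdfs dfs -chown gpadmin:gpadmin /hawq_default
-    ```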
-
-3. Copy the updated configuration files to all nodes in the cluster (as listed in `hawq_hosts`).
-
-	```shell
-	$ hawq scp -f hawq_hosts hdfs-client.xml hawq-site.xml =:$GPHOME/etc/
-	```
-
-### <a id="reinitializethestandbymaster"></a>Step 6: Restart the HAWQ Cluster and Resynchronize the Standby Master 
-
-1. Restart the HAWQ cluster:
-
-	```shell
-	$ hawq start cluster -a
-	```
-
-1. Moving the filespace to a new location renders the standby master catalog invalid. To update the standby, resynchronize the standby master's catalog with the active master by running the following command on the active master:
-
-	```shell
-	$ hawq init standby -n -M fast
-	```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/HighAvailability.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/HighAvailability.html.md.erb b/admin/HighAvailability.html.md.erb
deleted file mode 100644
index 0c2e32b..0000000
--- a/admin/HighAvailability.html.md.erb
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: High Availability in HAWQ
----
-
-A HAWQ cluster can be made highly available by providing fault-tolerant hardware, by enabling HAWQ or HDFS high-availability features, and by performing regular monitoring and maintenance procedures to ensure the health of all system components.
-
-Hardware components eventually fail either due to normal wear or to unexpected circumstances. Loss of power can lead to temporarily unavailable components. You can make a system highly available by providing redundant standbys for components that can fail so services can continue uninterrupted when a failure does occur. In some cases, the cost of redundancy is higher than a user's tolerance for interruption in service. When this is the case, the goal is to ensure that full service can be restored within an expected timeframe.
-
-With HAWQ, fault tolerance and data availability are achieved with:
-
-* [Hardware Level Redundancy (RAID and JBOD)](#ha_raid)
-* [Master Mirroring](#ha_master_mirroring)
-* [Dual Clusters](#ha_dual_clusters)
-
-## <a id="ha_raid"></a>Hardware Level Redundancy (RAID and JBOD) 
-
-As a best practice, HAWQ deployments should use RAID for master nodes and JBOD for segment nodes. Using these hardware-level systems provides high-performance redundancy for single-disk failures without invoking database-level fault tolerance. RAID and JBOD provide a lower level of redundancy at the disk level.
-
-## <a id="ha_master_mirroring"></a>Master Mirroring 
-
-There are two masters in a highly available cluster, a primary and a standby. As with segments, the master and standby should be deployed on different hosts so that the cluster can tolerate a single host failure. Clients connect to the primary master, and queries can be executed only on the primary master. The standby master is kept up to date by replicating the write-ahead log (WAL) from the primary to the standby.
-
-## <a id="ha_dual_clusters"></a>Dual Clusters 
-
-You can add another level of redundancy to your deployment by maintaining two HAWQ clusters, both storing the same data.
-
-The two main methods for keeping data synchronized on dual clusters are "dual ETL" and "backup/restore."
-
-Dual ETL provides a complete standby cluster with the same data as the primary cluster. ETL (extract, transform, and load) refers to the process of cleansing, transforming, validating, and loading incoming data into a data warehouse. With dual ETL, this process is executed twice in parallel, once on each cluster, and is validated each time. It also allows data to be queried on both clusters, doubling the query throughput.
-
-Applications can take advantage of both clusters and also ensure that the ETL is successful and validated on both clusters.
-
-To maintain a dual cluster with the backup/restore method, create backups of the primary cluster and restore them on the secondary cluster. This method takes longer to synchronize data on the secondary cluster than the dual ETL strategy, but requires less application logic to be developed. Populating a second cluster with backups is ideal in use cases where data modifications and ETL are performed daily or less frequently.
-
-See [Backing Up and Restoring HAWQ](BackingUpandRestoringHAWQDatabases.html) for instructions on how to back up and restore HAWQ.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/MasterMirroring.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/MasterMirroring.html.md.erb b/admin/MasterMirroring.html.md.erb
deleted file mode 100644
index b9352f0..0000000
--- a/admin/MasterMirroring.html.md.erb
+++ /dev/null
@@ -1,144 +0,0 @@
----
-title: Using Master Mirroring
----
-
-There are two masters in a HAWQ cluster -- a primary master and a standby master. Clients connect to the primary master, and queries can be executed only on the primary master.
-
-You deploy a backup or mirror of the master instance on a separate host machine from the primary master so that the cluster can tolerate a single host failure. A backup master or standby master serves as a warm standby if the primary master becomes non-operational. You create a standby master from the primary master while the primary is online.
-
-The primary master continues to provide services to users while HAWQ takes a transactional snapshot of the primary master instance. In addition to taking a transactional snapshot and deploying it to the standby master, HAWQ also records changes to the primary master. After HAWQ deploys the snapshot to the standby master, HAWQ deploys the updates to synchronize the standby master with the primary master.
-
-After the primary master and standby master are synchronized, HAWQ keeps the standby master up to date using walsender and walreceiver, write-ahead log (WAL)-based replication processes. The walreceiver is a standby master process. The walsender process is a primary master process. The two processes use WAL-based streaming replication to keep the primary and standby masters synchronized.
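-
-To confirm that replication is active, you can look for these processes on the master (walsender) and standby (walreceiver) hosts:
-
-```shell
-$ ps -ef | grep -E 'wal(sender|receiver)' | grep -v grep
-```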
-
-Since the master does not house user data, only system catalog tables are synchronized between the primary and standby masters. When these tables are updated, changes are automatically copied to the standby master to keep it current with the primary.
-
-*Figure 1: Master Mirroring in HAWQ*
-
-![](../mdimages/standby_master.jpg)
-
-
-If the primary master fails, the replication process stops, and an administrator can activate the standby master. Upon activation of the standby master, the replicated logs reconstruct the state of the primary master at the time of the last successfully committed transaction. The activated standby then functions as the HAWQ master, accepting connections on the port specified when the standby master was initialized.
-
-If the master fails, the administrator uses command line tools or Ambari to instruct the standby master to take over as the new primary master. 
-
-**Tip:** You can configure a virtual IP address for the master and standby so that client programs do not have to switch to a different network address when the 'active' master changes. If the master host fails, the virtual IP address can be swapped to the actual acting master.
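-
-A minimal sketch of assigning a virtual IP on the newly acting master (the address and interface are hypothetical, and dedicated failover tooling is the more common approach):
-
-```shell
-$ sudo ip addr add 192.0.2.10/24 dev eth0    # hypothetical virtual IP and interface
-$ sudo arping -c 3 -U -I eth0 192.0.2.10     # gratuitous ARP so peers update their caches
-```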
-
-## Configuring Master Mirroring <a id="standby_master_configure"></a>
-
-You can configure a new HAWQ system with a standby master during HAWQ's installation process, or you can add a standby master later. This topic assumes you are adding a standby master to an existing node in your HAWQ cluster.
-
-### Add a standby master to an existing system
-
-1. Ensure the host machine for the standby master has been installed with HAWQ and configured accordingly:
-    * The gpadmin system user has been created.
-    * HAWQ binaries are installed.
-    * HAWQ environment variables are set.
-    * SSH keys have been exchanged.
-    * The HAWQ master data directory has been created.
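-
-    A quick spot-check of these prerequisites on the standby host (assuming the default `/usr/local/hawq` install location):
-
-    ```shell
-    $ ssh gpadmin@<new_standby_master> "ls /usr/local/hawq/bin/hawq"
-    ```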
-
-2. Initialize the HAWQ master standby:
-
-    a. If you use Ambari to manage your cluster, follow the instructions in [Adding a HAWQ Standby Master](ambari-admin.html#amb-add-standby).
-
-    b. If you do not use Ambari, log in to the HAWQ master and initialize the HAWQ master standby node:
- 
-    ``` shell
-    $ ssh gpadmin@<hawq_master>
-    hawq_master$ . /usr/local/hawq/greenplum_path.sh
-    hawq_master$ hawq init standby -s <new_standby_master>
-    ```
-
-    where \<new\_standby\_master\> identifies the hostname of the standby master.
-
-3. Check the status of master mirroring by querying the `gp_master_mirroring` system view. See [Checking on the State of Master Mirroring](#standby_check) for instructions.
-
-4. To activate or failover to the standby master, see [Failing Over to a Standby Master](#standby_failover).
-
-## Failing Over to a Standby Master <a id="standby_failover"></a>
-
-If the primary master fails, log replication stops. You must explicitly activate the standby master in this circumstance.
-
-Upon activation of the standby master, HAWQ reconstructs the state of the master at the time of the last successfully committed transaction.
-
-### To activate the standby master
-
-1. Ensure that a standby master host has been configured for the system.
-
-2. Activate the standby master:
-
-    a. If you use Ambari to manage your cluster, follow the instructions in [Activating the HAWQ Standby Master](ambari-admin.html#amb-activate-standby).
-
-    b. If you do not use Ambari, log in to the HAWQ master and activate the HAWQ master standby node:
-
-	``` shell
-	hawq_master$ hawq activate standby
- 	```
-   After you activate the standby master, it becomes the active or primary master for the HAWQ cluster.
-
-3. (Optional, but recommended.) Configure a new standby master. See [Add a standby master to an existing system](#standby_master_configure) for instructions.
-	
-4. Check the status of the HAWQ cluster by executing the following command on the master:
-
-	```shell
-	hawq_master$ hawq state
-	```
-	
-	The newly activated master's status should be **Active**. If you configured a new standby master, its status is **Passive**. When a standby master is not configured, the command displays `-No entries found`, indicating that no standby master instance is configured.
-
-5. Query the `gp_segment_configuration` table to verify that segments have registered themselves to the new master:
-
-    ``` shell
-    hawq_master$ psql dbname -c 'SELECT * FROM gp_segment_configuration;'
-    ```
-	
-6. Finally, check the status of master mirroring by querying the `gp_master_mirroring` system view. See [Checking on the State of Master Mirroring](#standby_check) for instructions.
-
-
-## Checking on the State of Master Mirroring <a id="standby_check"></a>
-
-To check on the status of master mirroring, query the `gp_master_mirroring` system view. This view provides information about the walsender process used for HAWQ master mirroring. 
-
-```shell
-hawq_master$ psql dbname -c 'SELECT * FROM gp_master_mirroring;'
-```
-
-If a standby master has not been set up for the cluster, you will see the following output:
-
-```
- summary_state  | detail_state | log_time | error_message
-----------------+--------------+----------+---------------
- Not Configured |              |          | 
-(1 row)
-```
-
-If the standby is configured and in sync with the master, you will see output similar to the following:
-
-```
- summary_state | detail_state | log_time               | error_message
----------------+--------------+------------------------+---------------
- Synchronized  |              | 2016-01-22 21:53:47+00 |
-(1 row)
-```
-
-## Resynchronizing Standby with the Master <a id="resync_master"></a>
-
-The standby can become out-of-date if the log synchronization process between the master and standby has stopped or has fallen behind. If this occurs, you will observe output similar to the following after querying the `gp_master_mirroring` view:
-
-```
-   summary_state  | detail_state | log_time               | error_message
-------------------+--------------+------------------------+---------------
- Not Synchronized |              |                        |
-(1 row)
-```
-
-To resynchronize the standby with the master:
-
-1. If you use Ambari to manage your cluster, follow the instructions in [Removing the HAWQ Standby Master](ambari-admin.html#amb-remove-standby).
-
-2. If you do not use Ambari, execute the following command on the HAWQ master:
-
-    ```shell
-    hawq_master$ hawq init standby -n
-    ```
-
-    This command stops and restarts the master and then synchronizes the standby.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/RecommendedMonitoringTasks.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/RecommendedMonitoringTasks.html.md.erb b/admin/RecommendedMonitoringTasks.html.md.erb
deleted file mode 100644
index 5083b44..0000000
--- a/admin/RecommendedMonitoringTasks.html.md.erb
+++ /dev/null
@@ -1,259 +0,0 @@
----
-title: Recommended Monitoring and Maintenance Tasks
----
-
-This section lists monitoring and maintenance activities recommended to ensure high availability and consistent performance of your HAWQ cluster.
-
-The tables in the following sections suggest activities that a HAWQ System Administrator can perform periodically to ensure that all components of the system are operating optimally. Monitoring activities help you to detect and diagnose problems early. Maintenance activities help you to keep the system up-to-date and avoid deteriorating performance, for example, from bloated system tables or diminishing free disk space.
-
-It is not necessary to implement all of these suggestions in every cluster; use the frequency and severity recommendations as a guide to implement measures according to your service requirements.
-
-## <a id="drr_5bg_rp"></a>Database State Monitoring Activities 
-
-<table>
-  <tr>
-    <th>Activity</th>
-    <th>Procedure</th>
-    <th>Corrective Actions</th>
-  </tr>
-  <tr>
-    <td><p>List segments that are currently down. If any rows are returned, this should generate a warning or alert.</p>
-    <p>Recommended frequency: run every 5 to 10 minutes</p><p>Severity: IMPORTANT</p></td>
-    <td>Run the following query in the `postgres` database:
-    <pre><code>SELECT * FROM gp_segment_configuration
-WHERE status <> 'u';
-</code></pre>
-  </td>
-  <td>If the query returns any rows, follow these steps to correct the problem:
-  <ol>
-    <li>Verify that the hosts with down segments are responsive.</li>
-    <li>If hosts are OK, check the pg_log files for the down segments to discover the root cause of the segments going down.</li>
-    </ol>
-    </td>
-    </tr>
-  <tr>
-    <td>
-      <p>Run a distributed query to test that it runs on all segments. One row should be returned for each segment.</p>
-      <p>Recommended frequency: run every 5 to 10 minutes</p>
-      <p>Severity: CRITICAL</p>
-    </td>
-    <td>
-      <p>Execute the following query in the `postgres` database:</p>
-      <pre><code>SELECT gp_segment_id, count(&#42;)
-FROM gp_dist_random('pg_class')
-GROUP BY 1;
-</code></pre>
-  </td>
-  <td>If this query fails, there is an issue dispatching to some segments in the cluster. This is a rare event. Check the hosts that are not able to be dispatched to ensure there is no hardware or networking issue.</td>
-  </tr>
-  <tr>
-    <td>
-      <p>Perform a basic check to see if the master is up and functioning.</p>
-      <p>Recommended frequency: run every 5 to 10 minutes</p>
-      <p>Severity: CRITICAL</p>
-    </td>
-    <td>
-      <p>Run the following query in the `postgres` database:</p>
-      <pre><code>SELECT count(&#42;) FROM gp_segment_configuration;</code></pre>
-    </td>
-    <td>
-      <p>If this query fails, the active master may be down. Try again several times and then inspect the active master manually. If the active master is down, reboot or power cycle the active master to ensure no processes remain on the active master, and then trigger the activation of the standby master.</p>
-    </td>
-  </tr>
-</table>
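-
-One way to automate the first check above is a cron entry that alerts when any segment is down (a sketch; it assumes passwordless `psql` access for the scheduling user, a local `mail` command, and a placeholder recipient address):
-
-```shell
-*/10 * * * * psql -d postgres -At -c "SELECT hostname FROM gp_segment_configuration WHERE status <> 'u'" | grep -q . && echo "One or more HAWQ segments are down" | mail -s "HAWQ alert" admin@example.com
-```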
-
-## <a id="topic_y4c_4gg_rp"></a>Hardware and Operating System Monitoring 
-
-<table>
-  <tr>
-    <th>Activity</th>
-    <th>Procedure</th>
-    <th>Corrective Actions</th>
-  </tr>
-  <tr>
-    <td>
-      <p>Check the underlying platform for required maintenance or for hardware that is down.</p>
-      <p>Recommended frequency: real-time, if possible, or every 15 minutes</p>
-      <p>Severity: CRITICAL</p>
-    </td>
-    <td>
-      <p>Set up system check for hardware and OS errors.</p>
-    </td>
-    <td>
-      <p>If required, remove a machine from the HAWQ cluster to resolve hardware and OS issues, then add it back to the cluster.</p>
-    </td>
-  </tr>
-  <tr>
-    <td>
-      <p>Check disk space usage on volumes used for HAWQ data storage and the OS.</p>
-      <p>Recommended frequency: every 5 to 30 minutes</p>
-      <p>Severity: CRITICAL</p>
-    </td>
-    <td>
-      <p>Set up a disk space check.</p>
-      <ul>
-        <li>Set a threshold to raise an alert when a disk reaches a percentage of capacity. The recommended threshold is 75% full.</li>
-        <li>It is not recommended to run the system with capacities approaching 100%.</li>
-      </ul>
-    </td>
-    <td>
-      <p>Free space on the system by removing some data or files.</p>
-    </td>
-  </tr>
-  <tr>
-    <td>
-      <p>Check for errors or dropped packets on the network interfaces.</p>
-      <p>Recommended frequency: hourly</p>
-      <p>Severity: IMPORTANT</p>
-    </td>
-    <td>
-      <p>Set up network interface checks.</p>
-    </td>
-    <td>
-      <p>Work with network and OS teams to resolve errors.</p>
-    </td>
-  </tr>
-  <tr>
-    <td>
-      <p>Check for RAID errors or degraded RAID performance.</p>
-      <p>Recommended frequency: every 5 minutes</p>
-      <p>Severity: CRITICAL</p>
-    </td>
-    <td>
-      <p>Set up a RAID check.</p>
-    </td>
-    <td>
-      <ul>
-        <li>Replace failed disks as soon as possible.</li>
-        <li>Work with system administration team to resolve other RAID or controller errors as soon as possible.</li>
-      </ul>
-    </td>
-  </tr>
-  <tr>
-    <td>
-      <p>Check for adequate I/O bandwidth and I/O skew.</p>
-      <p>Recommended frequency: when creating a cluster or when hardware issues are suspected.</p>
-    </td>
-    <td>
-      <p>Run the `hawq checkperf` utility.</p>
-    </td>
-    <td>
-      <p>The cluster may be under-specified if data transfer rates are not similar to the following:</p>
-      <ul>
-        <li>2 GB per second disk read</li>
-        <li>1 GB per second disk write</li>
-        <li>10 gigabits per second network read and write</li>
-      </ul>
-      <p>If transfer rates are lower than expected, consult with your data architect regarding performance expectations.</p>
-      <p>If the machines on the cluster display an uneven performance profile, work with the system administration team to fix faulty machines.</p>
-    </td>
-  </tr>
-</table>
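-
-For reference, a `hawq checkperf` invocation that tests disk and stream bandwidth across the hosts in a host file might look like this (a sketch; the host file name and data directories are placeholders):
-
-```shell
-$ hawq checkperf -f hawq_hosts -r ds -D -d /data1/hawq -d /data2/hawq
-```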
-
-## <a id="maintentenance_check_scripts"></a>Data Maintenance 
-
-<table>
-  <tr>
-    <th>Activity</th>
-    <th>Procedure</th>
-    <th>Corrective Actions</th>
-  </tr>
-  <tr>
-    <td>Check for missing statistics on tables.</td>
-    <td>Check the `hawq_stats_missing` view in each database:
-    <pre><code>SELECT * FROM hawq_toolkit.hawq_stats_missing;</code></pre>
-    </td>
-    <td>Run <code>ANALYZE</code> on tables that are missing statistics.</td>
-  </tr>
-</table>
-
-## <a id="topic_dld_23h_rp"></a>Database Maintenance 
-
-<table>
-  <tr>
-    <th>Activity</th>
-    <th>Procedure</th>
-    <th>Corrective Actions</th>
-  </tr>
-  <tr>
-    <td>
-      <p>Mark deleted rows in HAWQ system catalogs (tables in the `pg_catalog` schema) so that the space they occupy can be reused.</p>
-      <p>Recommended frequency: daily</p>
-      <p>Severity: CRITICAL</p>
-    </td>
-    <td>
-      <p>Vacuum each system catalog:</p>
-      <pre><code>VACUUM &lt;<i>table</i>&gt;;</code></pre>
-    </td>
-    <td>Vacuum system catalogs regularly to prevent bloating.</td>
-  </tr>
-  <tr>
-    <td>
-    <p>Vacuum all system catalogs (tables in the <code>pg_catalog</code> schema) that are approaching <a href="../reference/guc/parameter_definitions.html">vacuum_freeze_min_age</a>.</p>
-    <p>Recommended frequency: daily</p>
-    <p>Severity: CRITICAL</p>
-    </td>
-    <td>
-      <p>Vacuum an individual system catalog table:</p>
-      <pre><code>VACUUM &lt;<i>table</i>&gt;;</code></pre>
-    </td>
-    <td>After the <a href="../reference/guc/parameter_definitions.html">vacuum_freeze_min_age</a> value is reached, VACUUM will no longer replace transaction IDs with <code>FrozenXID</code> while scanning a table. Perform vacuum on these tables before the limit is reached.</td>
-  </tr>
-  <tr>
-    <td>
-      <p>Update table statistics.</p>
-      <p>Recommended frequency: after loading data and before executing queries</p>
-      <p>Severity: CRITICAL</p>
-    </td>
-    <td>
-      <p>Analyze user tables:</p>
-      <pre><code>analyzedb -d &lt;<i>database</i>&gt; -a</code></pre>
-    </td>
-    <td>Analyze updated tables regularly so that the optimizer can produce efficient query execution plans.</td>
-  </tr>
-  <tr>
-    <td>
-      <p>Back up the database data.</p>
-      <p>Recommended frequency: daily, or as required by your backup plan</p>
-      <p>Severity: CRITICAL</p>
-    </td>
-    <td>See <a href="BackingUpandRestoringHAWQDatabases.html">Backing Up and Restoring HAWQ</a> for a discussion of backup procedures.</td>
-    <td>Best practice is to have a current backup ready in case the database must be restored.</td>
-  </tr>
-  <tr>
-    <td>
-      <p>Vacuum system catalogs (tables in the <code>pg_catalog</code> schema) to maintain an efficient catalog.</p>
-      <p>Recommended frequency: weekly, or more often if database objects are created and dropped frequently</p>
-    </td>
-    <td>
-      <p><code>VACUUM</code> the system tables in each database.</p>
-    </td>
-    <td>The optimizer retrieves information from the system tables to create query plans. If system tables and indexes are allowed to become bloated over time, scanning the system tables increases query execution time.</td>
-  </tr>
-</table>
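-
-For example, a sketch that vacuums a few frequently updated system catalogs in every connectable database (the catalog list is illustrative, not exhaustive):
-
-```shell
-for db in $(psql -d template1 -At -c "SELECT datname FROM pg_database WHERE datallowconn"); do
-    psql -d "$db" -c "VACUUM pg_catalog.pg_class; VACUUM pg_catalog.pg_attribute; VACUUM pg_catalog.pg_type;"
-done
-```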
-
-## <a id="topic_idx_smh_rp"></a>Patching and Upgrading 
-
-<table>
-  <tr>
-    <th>Activity</th>
-    <th>Procedure</th>
-    <th>Corrective Actions</th>
-  </tr>
-  <tr>
-    <td>
-      <p>Ensure any bug fixes or enhancements are applied to the kernel.</p>
-      <p>Recommended frequency: at least every 6 months</p>
-      <p>Severity: IMPORTANT</p>
-    </td>
-    <td>Follow the vendor's instructions to update the Linux kernel.</td>
-    <td>Keep the kernel current to include bug fixes and security fixes, and to avoid difficult future upgrades.</td>
-  </tr>
-  <tr>
-    <td>
-      <p>Install HAWQ minor releases.</p>
-      <p>Recommended frequency: quarterly</p>
-      <p>Severity: IMPORTANT</p>
-    </td>
-    <td>Always upgrade to the latest in the series.</td>
-    <td>Keep the HAWQ software current to incorporate bug fixes, performance enhancements, and feature enhancements into your HAWQ cluster.</td>
-  </tr>
-</table>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/RunningHAWQ.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/RunningHAWQ.html.md.erb b/admin/RunningHAWQ.html.md.erb
deleted file mode 100644
index c7de1d5..0000000
--- a/admin/RunningHAWQ.html.md.erb
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Running a HAWQ Cluster
----
-
-This section provides information for system administrators responsible for administering a HAWQ deployment.
-
-You should have some knowledge of Linux/UNIX system administration, database management systems, database administration, and structured query language \(SQL\) to administer a HAWQ cluster. Because HAWQ is based on PostgreSQL, you should also have some familiarity with PostgreSQL. The HAWQ documentation calls out similarities between HAWQ and PostgreSQL features throughout.
-
-## <a id="hawq_users"></a>HAWQ Users
-
-HAWQ supports users with both administrative and operating privileges. The HAWQ administrator may choose to manage the HAWQ cluster using either Ambari or the command line. [Managing HAWQ Using Ambari](../admin/ambari-admin.html) provides Ambari-specific HAWQ cluster administration procedures. [Starting and Stopping HAWQ](startstop.html), [Expanding a Cluster](ClusterExpansion.html), and [Removing a Node](ClusterShrink.html) describe specific command-line-managed HAWQ cluster administration procedures. Other topics in this guide are applicable to both Ambari- and command-line-managed HAWQ clusters.
-
-The default HAWQ administrator user is named `gpadmin`. The HAWQ admin may choose to assign administrative and/or operating HAWQ privileges to additional users.  Refer to [Configuring Client Authentication](../clientaccess/client_auth.html) and [Managing Roles and Privileges](../clientaccess/roles_privs.html) for additional information about HAWQ user configuration.
-
-## <a id="hawq_systems"></a>HAWQ Deployment Systems
-
-A typical HAWQ deployment includes single HDFS and HAWQ master and standby nodes and multiple HAWQ segment and HDFS data nodes. The HAWQ cluster may also include systems running the HAWQ Extension Framework (PXF) and other Hadoop services. Refer to [HAWQ Architecture](../overview/HAWQArchitecture.html) and [Select HAWQ Host Machines](../install/select-hosts.html) for information about the different systems in a HAWQ deployment and how they are configured.
-
-
-## <a id="hawq_env_databases"></a>HAWQ Databases
-
-[Creating and Managing Databases](../ddl/ddl-database.html) and [Creating and Managing Tables](../ddl/ddl-table.html) describe HAWQ database and table creation commands.
-
-You manage HAWQ databases at the command line using the [psql](../reference/cli/client_utilities/psql.html) utility, an interactive front-end to the HAWQ database. Configuring client access to HAWQ databases and tables may require information related to [Establishing a Database Session](../clientaccess/g-establishing-a-database-session.html).
-
-[HAWQ Database Drivers and APIs](../clientaccess/g-database-application-interfaces.html) identifies supported HAWQ database drivers and APIs for additional client access methods.
-
-## <a id="hawq_env_data"></a>HAWQ Data
-
-HAWQ internal data resides in HDFS. You may also require access to data in different formats and locations in your data lake. You can use HAWQ and the HAWQ Extension Framework (PXF) to access and manage both internal and external data:
-
-- [Managing Data with HAWQ](../datamgmt/dml.html) discusses the basic data operations and details regarding the loading and unloading semantics for HAWQ internal tables.
-- [Using PXF with Unmanaged Data](../pxf/HawqExtensionFrameworkPXF.html) describes PXF, an extensible framework you may use to query data external to HAWQ.
-
-## <a id="hawq_env_setup"></a>HAWQ Operating Environment
-
-Refer to [Introducing the HAWQ Operating Environment](setuphawqopenv.html) for a discussion of the HAWQ operating environment, including a procedure to set up the HAWQ environment. This section also provides an introduction to the important files and directories in a HAWQ installation.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/admin/ambari-admin.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/ambari-admin.html.md.erb b/admin/ambari-admin.html.md.erb
deleted file mode 100644
index a5b2169..0000000
--- a/admin/ambari-admin.html.md.erb
+++ /dev/null
@@ -1,439 +0,0 @@
----
-title: Managing HAWQ Using Ambari
----
-
-Ambari provides an easy interface to perform some of the most common HAWQ and PXF administration tasks.
-
-## <a id="amb-yarn"></a>Integrating YARN for Resource Management
-
-HAWQ supports integration with YARN for global resource management. In a YARN-managed environment, HAWQ can request resources (containers) dynamically from YARN, and return resources when HAWQ's workload is not heavy.
-
-See also [Integrating YARN with HAWQ](../resourcemgmt/YARNIntegration.html) for command-line instructions and additional details about using HAWQ with YARN.
-
-### When to Perform
-
-Follow this procedure if you have already installed YARN and HAWQ, but you are currently using the HAWQ Standalone mode (not YARN) for resource management. This procedure helps you configure YARN and HAWQ so that HAWQ uses YARN for resource management. This procedure assumes that you will use the default YARN queue for managing HAWQ.
-
-### Procedure
-
-1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
-2.  Select **HAWQ** from the list of installed services.
-3.  Select the **Configs** tab, then the **Settings** tab.
-4.  Use the **Resource Manager** menu to select the **YARN** option.
-5.  Click **Save**.<br/><br/>HAWQ will use the default YARN queue, and Ambari automatically configures settings for `hawq_rm_yarn_address`, `hawq_rm_yarn_app_name`, and `hawq_rm_yarn_scheduler_address` in the `hawq-site.xml` file.<br/><br/>If YARN HA was enabled, Ambari also automatically configures the `yarn.resourcemanager.ha` and `yarn.resourcemanager.scheduler.ha` properties in `yarn-site.xml`.
-6.  If you are using HDP 2.3, follow these additional instructions:
-    1. Select **YARN** from the list of installed services.
-    2. Select the **Configs** tab, then the **Advanced** tab.
-    3. Expand the **Advanced yarn-site** section.
-    4. Locate the `yarn.resourcemanager.system-metrics-publisher.enabled` property and change its value to `false`.
-    5. Click **Save**.
-7.  (Optional.)  When HAWQ is integrated with YARN and has no workload, HAWQ does not acquire any resources right away. HAWQ's resource manager only requests resources from YARN when HAWQ receives its first query request. In order to guarantee optimal resource allocation for subsequent queries and to avoid frequent YARN resource negotiation, you can adjust `hawq_rm_min_resource_perseg` so HAWQ receives at least some number of YARN containers per segment regardless of the size of the initial query. The default value is 2, which means HAWQ's resource manager acquires at least 2 YARN containers for each segment even if the first query's resource request is small.<br/><br/>This configuration property cannot exceed the capacity of HAWQ's YARN queue. For example, if HAWQ's queue capacity in YARN is no more than 50% of the whole cluster, and each YARN node has a maximum of 64 GB memory and 16 vcores, then `hawq_rm_min_resource_perseg` in HAWQ cannot be set to more than 8, since HAWQ's resource manager acquires YARN containers by vcore. In this case, the HAWQ resource manager acquires a YARN container quota of 4 GB memory and 1 vcore.<br/><br/>To change this parameter, expand **Custom hawq-site** and click **Add Property ...**. Then specify `hawq_rm_min_resource_perseg` as the key and enter the desired value. Click **Add** to add the property definition.
-8.  (Optional.)  If the level of HAWQ's workload is lowered, then HAWQ's resource manager may have some idle YARN resources. You can adjust `hawq_rm_resource_idle_timeout` to let the HAWQ resource manager return idle resources more quickly or more slowly.<br/><br/>For example, when HAWQ's resource manager has to reacquire resources, it can cause latency for query resource requests. To let the HAWQ resource manager retain resources longer in anticipation of an upcoming workload, increase the value of `hawq_rm_resource_idle_timeout`. The default value of `hawq_rm_resource_idle_timeout` is 300 seconds.<br/><br/>To change this parameter, expand **Custom hawq-site** and click **Add Property ...**. Then specify `hawq_rm_resource_idle_timeout` as the key and enter the desired value. Click **Add** to add the property definition.
-9.  Click **Save** to save your configuration changes.
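-
-If you later tune these values from the command line instead of Ambari (note that Ambari-managed clusters overwrite manual configuration edits), an equivalent sketch is:
-
-```shell
-$ hawq config -c hawq_rm_min_resource_perseg -v 4
-$ hawq config -c hawq_rm_resource_idle_timeout -v 600
-$ hawq restart cluster -a
-```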
-
-## <a id="move_yarn_rm"></a>Moving a YARN Resource Manager
-
-If you are using YARN to manage HAWQ resources and need to move a YARN resource manager, then you must update your HAWQ configuration.
-
-### When to Perform
-
-Use one of the following procedures to move YARN resource manager component from one node to another when HAWQ is configured to use YARN as the global resource manager (`hawq_global_rm_type` is `yarn`). The exact procedure you should use depends on whether you have enabled high availability in YARN.
-
-**Note:** In a Kerberos-secured environment, you must update the <code>hadoop.proxyuser.yarn.hosts</code> property in the HDFS <code>core-site.xml</code> file before running a service check. The values should be set to the current YARN Resource Managers.
-
-### Procedure (Single YARN Resource Manager)
-
-1. Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
-1. Click **YARN** in the list of installed services.
-1. Select **Move ResourceManager**, and complete the steps in the Ambari wizard to move the Resource Manager to a new host.
-1. After moving the Resource Manager successfully in YARN, click **HAWQ** in the list of installed services.
-1. On the HAWQ **Configs** page, select the **Advanced** tab.
-1. Under the **Advanced hawq-site** section, update the following HAWQ properties (a quick way to look up the corresponding YARN values is shown after this procedure):
-   - `hawq_rm_yarn_address`. Enter the same value defined in the `yarn.resourcemanager.address` property of `yarn-site.xml`.
-   - `hawq_rm_yarn_scheduler_address`. Enter the same value defined in the `yarn.resourcemanager.scheduler.address` property of `yarn-site.xml`.
-1. Restart all HAWQ components so that the configurations get updated on all HAWQ hosts.
-1. Run HAWQ Service Check, as described in [Performing a HAWQ Service Check](#amb-service-check), to ensure that HAWQ is operating properly.
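-
-To look up the YARN values referenced above, you can inspect the YARN configuration directly (the path assumes a typical HDP layout):
-
-```shell
-$ grep -E -A 1 'yarn\.resourcemanager(\.scheduler)?\.address' /etc/hadoop/conf/yarn-site.xml
-```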
-
-### Procedure (Highly Available YARN Resource Managers)
-
-1. Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
-1. Click **YARN** in the list of installed services.
-1. Select **Move ResourceManager**, and complete the steps in the Ambari wizard to move the Resource Manager to a new host.
-1. After moving the Resource Manager successfully in YARN, click **HAWQ** in the list of installed services.
-1. On the HAWQ **Configs** page, select the **Advanced** tab.
-1. Under the **Custom yarn-client** section, update the HAWQ properties `yarn.resourcemanager.ha` and `yarn.resourcemanager.scheduler.ha`. These parameter values should be updated to match the corresponding parameters for the YARN service. Check the values under **ResourceManager hosts** in the **Resource Manager** section of the **Advanced** configurations for the YARN service.
-1. Restart all HAWQ components so that the configuration change is updated on all HAWQ hosts. You can ignore the warning about the values of `hawq_rm_yarn_address` and `hawq_rm_yarn_scheduler_address` in `hawq-site.xml` not matching the values in `yarn-site.xml`, and click **Proceed Anyway**.
-1. Run HAWQ Service Check, as described in [Performing a HAWQ Service Check](#amb-service-check), to ensure that HAWQ is operating properly.
-
-
-## <a id="amb-service-check"></a>Performing a HAWQ Service Check
-
-A HAWQ Service check uses the `hawq state` command to display the configuration and status of segment hosts in a HAWQ Cluster. It also performs tests to ensure that HAWQ can write to and read from tables, and to ensure that HAWQ can write to and read from HDFS external tables using PXF.
-
-### When to Perform
-* Execute this procedure immediately after any common maintenance operations, such as adding, activating, or removing the HAWQ Master Standby.
-* Execute this procedure as a first step in troubleshooting problems in accessing HDFS data.
-
-### Procedure
-1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
-2.  Click **HAWQ** in the list of installed services.
-3. Select **Service Actions > Run Service Check**, then click **OK** to perform the service check.
-
-    Ambari displays the **HAWQ Service Check** task in the list of background operations. If any test fails, then Ambari displays a red error icon next to the task.  
-4. Click the **HAWQ Service Check** task to view the actual log messages that are generated while performing the task. The log messages display the basic configuration and status of HAWQ segments, as well as the results of the HAWQ and PXF tests (if PXF is installed).
-
-5. Click **OK** to dismiss the log messages or list of background tasks.
-
-## <a id="amb-config-check"></a>Performing a Configuration Check
-
-A configuration check determines if operating system parameters on the HAWQ host machines match their recommended settings. You can also perform this procedure from the command line using the `hawq check` command. The `hawq check` command is run against all HAWQ hosts.
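-
-For example, the command-line form might look like this (a sketch; the host file name and Hadoop home path are placeholders for your environment):
-
-```shell
-$ hawq check -f hawq_hosts --hadoop /usr/hdp/current/hadoop-client
-```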
-
-### Procedure
-1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
-2.  Click **HAWQ** in the list of installed services.
-3. (Optional) Perform this step if you want to view or modify the host configuration parameters that are evaluated during the HAWQ config check:
-   1. Select the **Configs** tab, then select the **Advanced** tab in the settings.
-   1. Expand **Advanced Hawq Check** to view or change the list of parameters that are checked with a `hawq check` command or with the Ambari HAWQ Config check.
-
-         **Note:** All parameter entries are stored in the `/usr/local/hawq/etc/hawq_check.cnf` file. Click the **Set Recommended** button if you want to restore the file to its original contents.
-4. Select **Service Actions > Run HAWQ Config Check**, then click **OK** to perform the configuration check.
-
-    Ambari displays the **Run HAWQ Config Check** task in the list of background operations. If any parameter does not meet the specification defined in `/usr/local/hawq/etc/hawq_check.cnf`, then Ambari displays a red error icon next to the task.  
-5. Click the **Run HAWQ Config Check** task to view the actual log messages that are generated while performing the task. Address any configuration errors on the indicated host machines.
-
-6. Click **OK** to dismiss the log messages or list of background tasks.
-
-## <a id="amb-restart"></a>Performing a Rolling Restart
-Ambari provides the ability to restart a HAWQ cluster by restarting one or more segments at a time until all segments (or all segments with stale configurations) restart. You can specify a delay between restarting segments, and Ambari can stop the process if a specified number of segments fail to restart. Performing a rolling restart in this manner can help ensure that some HAWQ segments are available to service client requests.
-
-**Note:** If you do not need to preserve client connections, you can instead perform a full restart of the entire HAWQ cluster using **Service Actions > Restart All**.
-
-### Procedure
-1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
-2.  Click **HAWQ** in the list of installed services.
-3.  Select **Service Actions > Restart HAWQ Segments**.
-4. In the Restart HAWQ Segments page:
-   * Specify the number of segments that you want Ambari to restart at a time.
-   * Specify the number of seconds Ambari should wait before restarting the next batch of HAWQ segments.
-   * Specify the number of restart failures that may occur before Ambari stops the rolling restart process.
-   * Select **Only restart HAWQ Segments with stale configs** if you want to limit the restart process to those hosts.
-   * Select **Turn On Maintenance Mode for HAWQ** to enable maintenance mode before starting the rolling restart process. This suppresses alerts that are normally generated when a segment goes offline.
-5. Click **Trigger Rolling Restart** to begin the restart process.
-
-   Ambari displays the **Rolling Restart of HAWQ segments** task in the list of background operations, and indicates the current batch of segments that it is restarting. Click the name of the task to view the log messages generated during the restart. If any segment fails to restart, Ambari displays a red warning icon next to the task.
-
-## <a id="bulk-lifecycle"></a>Performing Host-Level Actions on HAWQ Segment and PXF Hosts
-
-Ambari host-level actions enable you to perform actions on one or more hosts in the cluster at once. With HAWQ clusters, you can apply the **Start**, **Stop**, or **Restart** actions to one or more HAWQ segment hosts or PXF hosts. Using the host-level actions saves you the trouble of accessing individual hosts in Ambari and applying service actions one-by-one.
-
-### When to Perform
-*  Use the Ambari host-level actions when you have a large number of hosts in your cluster and you want to start, stop, or restart all HAWQ segment hosts or all PXF hosts as part of regularly-scheduled maintenance.
-
-### Procedure
-1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
-2.  Select the **Hosts** tab at the top of the screen to display a list of all hosts in the cluster.
-3.  To apply a host-level action to all HAWQ segment hosts or PXF hosts, select an action using the applicable menu:
-    *  **Actions > Filtered Hosts > HAWQ Segments >** [ **Start** | **Stop** |  **Restart** ]
-    *  **Actions > Filtered Hosts > PXF Hosts >** [ **Start** | **Stop** |  **Restart** ]
-4.  To apply a host-level action to a subset of HAWQ segments or PXF hosts:
-    1.  Filter the list of available hosts using one of the filter options:
-        *  **Filter > HAWQ Segments**
-        *  **Filter > PXF Hosts**
-    2.  Use the check boxes to select the hosts to which you want to apply the action.
-    3.  Select **Actions > Selected Hosts >** [ **Start** | **Stop** |  **Restart** ] to apply the action to your selected hosts.
-
-
-## <a id="amb-expand"></a>Expanding the HAWQ Cluster
-
-Apache HAWQ supports dynamic node expansion. You can add segment nodes while HAWQ is running without having to suspend or terminate cluster operations.
-
-### Guidelines for Cluster Expansion
-
-Keep the following recommendations in mind when modifying the size of your running HAWQ cluster:
-
--  When you add a new node, install both a DataNode and a HAWQ segment on the new node.  If you are using YARN to manage HAWQ resources, you must also configure a YARN NodeManager on the new node.
--  After adding a new node, you should always rebalance HDFS data to maintain cluster performance.
--  Adding or removing a node also necessitates an update to the HDFS metadata cache. This update will happen eventually, but can take some time. To speed the update of the metadata cache, select the **Service Actions > Clear HAWQ's HDFS Metadata Cache** option in Ambari.
--  Note that for hash-distributed tables, expanding the cluster does not immediately improve performance, because hash-distributed tables use a fixed number of virtual segments. To obtain better performance with hash-distributed tables, you must redistribute each table to the expanded cluster using either the [ALTER TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE AS](../reference/sql/CREATE-TABLE-AS.html) command \(see the example following this list\).
--  If you are using hash tables, consider updating the `default_hash_table_bucket_number` server configuration parameter to a larger value after expanding the cluster but before redistributing the hash tables.
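-
-As an example of the redistribution mentioned above, here is a minimal sketch that rebuilds one hash-distributed table with `psql`; the database `mydb`, table `sales`, and distribution column `id` are hypothetical placeholders:
-
-```shell
-# Rebuild the table so its rows are redistributed across the expanded cluster
-$ psql -d mydb -c "CREATE TABLE sales_new AS SELECT * FROM sales DISTRIBUTED BY (id);"
-# Swap the rebuilt table into place (indexes and privileges must be recreated separately)
-$ psql -d mydb -c "DROP TABLE sales;"
-$ psql -d mydb -c "ALTER TABLE sales_new RENAME TO sales;"
-```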
-
-### Procedure
-First ensure that the new node(s) have been configured per the instructions found in [Apache HAWQ System Requirements](../requirements/system-requirements.html) and [Select HAWQ Host Machines](../install/select-hosts.html).
-
-1.  If you have any user-defined function (UDF) libraries installed in your existing HAWQ cluster, install them on the new node(s) that you want to add to the HAWQ cluster.
-2.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
-3.  Click **HAWQ** in the list of installed services.
-4.  Select the **Configs** tab, then select the **Advanced** tab in the settings.
-5.  Expand the **General** section, and ensure that the **Exchange SSH Keys** property (`hawq_ssh_keys`) is set to `true`.  Change this property to `true` if needed, and click **Save** to continue. Ambari must be able to exchange SSH keys with any hosts that you add to the cluster in the following steps.
-6.  Select the **Hosts** tab at the top of the screen to display the Hosts summary.
-7.  If the host(s) that you want to add are not currently listed in the Hosts summary page, follow these steps:
-    1. Select **Actions > Add New Hosts** to start the Add Host Wizard.
-    2. Follow the initial steps of the Add Host Wizard to identify the new host, specify SSH keys or manually register the host, and confirm the new host(s) to add.
-
-         See [Set Up Password-less SSH](http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.1/bk_Installing_HDP_AMB/content/_set_up_password-less_ssh.html) in the HDP documentation if you need more information about performing these tasks.
-    3. When you reach the Assign Slaves and Clients page, ensure that the **DataNode**, **HAWQ Segment**, and **PXF** (if the PXF service is installed) components are selected. Select additional components as necessary for your cluster.
-    4. Complete the wizard to add the new host and install the selected components.
-8. If the host(s) that you want to add already appear in the Hosts summary, follow these steps:
-   1. From the list of hosts, click the hostname that you want to add to the HAWQ cluster.
-   2. In the Components summary, ensure that the host already runs the DataNode component. If it does not, select **Add > DataNode** and then click **Confirm Add**.  Click **OK** when the task completes.
-   3. In the Components summary, select **Add > HAWQ Segment**.
-   4. Click **Confirm Add** to acknowledge the component to add. Click **OK** when the task completes.
-   5. In the Components summary, select **Add > PXF**.
-   6. Click **Confirm Add** to acknowledge the component to add. Click **OK** when the task completes.
-9. (Optional) If you are using hash tables, adjust the **Default buckets for Hash Distributed tables** setting (`default_hash_table_bucket_number`) on the HAWQ service's **Configs > Settings** tab. Update the property's value by multiplying the new number of nodes in the cluster by the appropriate factor from the table below.
-
-    |Number of Nodes After Expansion|Suggested default\_hash\_table\_bucket\_number value|
-    |---------------|------------------------------------------|
-    |<= 85|6 \* \#nodes|
-    |\> 85 and <= 102|5 \* \#nodes|
-    |\> 102 and <= 128|4 \* \#nodes|
-    |\> 128 and <= 170|3 \* \#nodes|
-    |\> 170 and <= 256|2 \* \#nodes|
-    |\> 256 and <= 512|1 \* \#nodes|
-    |\> 512|512|
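-
-    For example, if expansion grows the cluster to 100 nodes, the \> 85 and <= 102 row applies, and the suggested value is 5 \* 100 = 500.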
-10.  Ambari requires a restart of the HAWQ service to apply the configuration changes. If you need to apply the configuration *without* restarting HAWQ (for dynamic cluster expansion), use the HAWQ CLI commands described in [Manually Updating the HAWQ Configuration](#manual-config-steps) *instead* of following this step.
-    <br/><br/>Stop and then start the HAWQ service to apply your configuration changes via Ambari. Select **Service Actions > Stop**, followed by **Service Actions > Start**, to ensure that the HAWQ Master starts before the newly added segments. During HAWQ startup, Ambari exchanges SSH keys for the `gpadmin` user and applies the new configuration.
-    >**Note:** Do not use the **Restart All** service action to complete this step.
-11.  Consider the impact of rebalancing HDFS on other components, such as HBase, before you complete this step.
-    <br/><br/>Rebalance your HDFS data by selecting the **HDFS** service and then choosing **Service Actions > Rebalance HDFS**. Follow the Ambari instructions to complete the rebalance action.
-12.  Speed up the clearing of the metadata cache by first selecting the **HAWQ** service and then selecting **Service Actions > Clear HAWQ's HDFS Metadata Cache**.
-13.  If you are using hash-distributed tables and wish to take advantage of the performance benefits of a larger cluster, redistribute the data in all hash-distributed tables by using either the [ALTER TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE AS](../reference/sql/CREATE-TABLE-AS.html) command \(see the example under the expansion guidelines above\). You should redistribute the table data if you modified the `default_hash_table_bucket_number` configuration parameter.
-
-    **Note:** The redistribution of table data can take a significant amount of time.
-14.  (Optional.) If you changed the **Exchange SSH Keys** property value before adding the host(s), change the value back to `false` after Ambari exchanges keys with the new hosts. This prevents Ambari from exchanging keys with all hosts every time the HAWQ master is started or restarted.
-
-15.  (Optional.) If you enabled temporary password-based authentication while preparing/configuring your HAWQ host systems, turn off password-based authentication as described in [Apache HAWQ System Requirements](../requirements/system-requirements.html#topic_pwdlessssh).
-
-#### <a id="manual-config-steps"></a>Manually Updating the HAWQ Configuration
-If you need to expand your HAWQ cluster without restarting the HAWQ service, follow these steps to manually apply the new HAWQ configuration. (Use these steps *instead* of following Step 10 in the procedure above.)
-
-1.  Update your configuration to use the new `default_hash_table_bucket_number` value that you calculated:
-    1. SSH into the HAWQ master host as the `gpadmin` user:
-
-        ```shell
-        $ ssh gpadmin@<HAWQ_MASTER_HOST>
-        ```
-    2. Source the `greenplum_path.sh` file to update the shell environment:
-
-        ```shell
-        $ source /usr/local/hawq/greenplum_path.sh
-        ```
-    3. Verify the current value of `default_hash_table_bucket_number`:
-
-        ```shell
-        $ hawq config -s default_hash_table_bucket_number
-        ```
-    4. Update `default_hash_table_bucket_number` to the new value that you calculated:
-
-        ```shell
-        $ hawq config -c default_hash_table_bucket_number -v <new_value>
-        ```
-    5. Reload the configuration without restarting the cluster:
-
-        ```shell
-        $ hawq stop cluster -u
-        ```
-    6. Verify that the `default_hash_table_bucket_number` value was updated:
-
-        ```shell
-        $ hawq config -s default_hash_table_bucket_number
-        ```
-2.  Edit the `/usr/local/hawq/etc/slaves` file and add the new HAWQ hostname(s) to the end of the file, one hostname per line. For example, after adding host4 and host5 to a cluster that already contains hosts 1-3, the updated file contents would be:
-
-     ```
-     host1
-     host2
-     host3
-     host4
-     host5
-     ```
-3.  Continue with Step 11 in the previous procedure, [Expanding the HAWQ Cluster](#amb-expand). The next time the HAWQ service is restarted via Ambari, Ambari refreshes the new configuration.
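-
-    After Ambari restarts the service, a quick check confirms that the new segments registered with the HAWQ Master; the column names below are assumed from the HAWQ `gp_segment_configuration` catalog, so verify them against your release:
-
-    ```shell
-    $ psql -d template1 -c "SELECT hostname, status, description FROM gp_segment_configuration;"
-    ```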
-
-## <a id="amb-activate-standby"></a>Activating the HAWQ Standby Master
-Activating the HAWQ Standby Master promotes the standby host to become the new HAWQ Master host. The previous HAWQ Master configuration is automatically removed from the cluster.
-
-### When to Perform
-* Execute this procedure immediately if the HAWQ Master fails or becomes unreachable.
-* If you want to take the current HAWQ Master host offline for maintenance, execute this procedure during a scheduled maintenance period. This procedure requires a restart of the HAWQ service.
-
-### Procedure
-1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
-2.  Click **HAWQ** in the list of installed services.
-3.  Select **Service Actions > Activate HAWQ Standby Master** to start the Activate HAWQ Standby Master Wizard.
-4.  Read the description of the Wizard and click **Next** to review the tasks that will be performed.
-5.  Ambari displays the host name of the current HAWQ Master that will be removed from the cluster, as well as the HAWQ Standby Master host that will be activated. The information is provided only for review and cannot be edited on this page. Click **Next** to confirm the operation.
-6. Click **OK** to confirm that you want to perform the procedure, as it is not possible to roll back the operation using Ambari.
-
-   Ambari displays a list of tasks that are performed to activate the standby server and remove the previous HAWQ Master host. Click on any of the tasks to view progress or to view the actual log messages that are generated while performing the task.
-7. Click **Complete** after the Wizard finishes all tasks.
-
-   **Important:** After the Wizard completes, your HAWQ cluster no longer includes a HAWQ Standby Master host. As a best practice, follow the instructions in [Adding a HAWQ Standby Master](#amb-add-standby) to configure a new one.
-
-## <a id="amb-add-standby"></a>Adding a HAWQ Standby Master
-
-The HAWQ Standby Master serves as a backup of the HAWQ Master host, and is an important part of providing high availability for the HAWQ cluster. When your cluster uses a standby master, you can activate the standby if the active HAWQ Master host fails or becomes unreachable.
-
-### When to Perform
-* Execute this procedure during a scheduled maintenance period, because it requires a restart of the HAWQ service.
-* Adding a HAWQ standby master is recommended as a best practice for all new clusters to provide high availability.
-* Add a new standby master soon after you activate an existing standby master to ensure that the cluster has a backup master service.
-
-### Procedure
-
-1.  Select an existing host in the cluster to run the HAWQ standby master. You cannot run the standby master on the same host that runs the HAWQ master. Also, do not run a standby master on the node where you deployed the Ambari server; if the Ambari postgres instance is running on the same port as the HAWQ master postgres instance, initialization fails and leaves the cluster in an inconsistent state.
-1. Log in to the HAWQ host that you chose to run the standby master and determine whether there is an existing HAWQ master directory (for example, `/data/hawq/master`) on the machine. If the directory exists, rename it. For example:
-
-    ```shell
-    $ mv /data/hawq/master /data/hawq/master-old
-    ```
-
-   **Note:**  If a HAWQ master directory exists on the host when you configure the HAWQ standby master, then the standby master may be initialized with stale data. Rename any existing master directory before you proceed.
-   
-1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
-2.  Click **HAWQ** in the list of installed services.
-3.  Select **Service Actions > Add HAWQ Standby Master** to start the Add HAWQ Standby Master Wizard.
-4.  Read the Get Started page for information about the HAWQ standby master and to acknowledge that the procedure requires a service restart. Click **Next** to display the Select Host page.
-5.  Use the dropdown menu to select a host to use for the HAWQ Standby Master. Click **Next** to display the Review page.
-
-    **Note:**
-    * The Current HAWQ Master host is shown only for reference. You cannot change the HAWQ Master host when you configure a standby master.
-    * You cannot place the standby master on the same host as the HAWQ master.
-6. Review the information to verify the host on which the HAWQ Standby Master will be installed. Click **Back** to change your selection or **Next** to continue.
-7. Confirm that you have renamed any existing HAWQ master data directory on the selected host machine, as described earlier in this procedure. If an existing master data directory exists, the new HAWQ Standby Master may be initialized with stale data and can place the cluster in an inconsistent state. Click **Confirm** to continue.
-
-     Ambari displays a list of tasks that are performed to install the standby master server and reconfigure the cluster. Click on any of the tasks to view progress or to view the actual log messages that are generated while performing the task.
-8. Click **Complete** after the Wizard finishes all tasks.
-
-## <a id="amb-remove-standby"></a>Removing the HAWQ Standby Master
-
-This service action enables you to remove the HAWQ Standby Master component in situations where you may need to reinstall the component.
-
-### When to Perform
-* Execute this procedure if you need to decommission or replace the HAWQ Standby Master host.
-* Execute this procedure and then add the HAWQ Standby Master once again, if the HAWQ Standby Master is unable to synchronize with the HAWQ Master and you need to reinitialize the service.
-* Execute this procedure during a scheduled maintenance period, because it requires a restart of the HAWQ service.
-
-### Procedure
-1.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\)
-2.  Click **HAWQ** in the list of installed services.
-3.  Select **Service Actions > Remove HAWQ Standby Master** to start the Remove HAWQ Standby Master Wizard.
-4.  Read the Get Started page for information about the procedure and to acknowledge that the procedure requires a service restart. Click **Next** to display the Review page.
-5.  Ambari displays the HAWQ Standby Master host that will be removed from the cluster configuration. Click **Next** to continue, then click **OK** to confirm.
-
-     Ambari displays a list of tasks that are performed to remove the standby master from the cluster. Click on any of the tasks to view progress or to view the actual log messages that are generated while performing the task.
-
-6. Click **Complete** after the Wizard finishes all tasks.
-
-      **Important:** After the Wizard completes, your HAWQ cluster no longer includes a HAWQ Standby Master host. As a best practice, follow the instructions in [Adding a HAWQ Standby Master](#amb-add-standby) to configure a new one.
-
-## <a id="hdp-upgrade"></a>Upgrading the HDP Stack
-
-If you installed HAWQ using Ambari 2.2.2 with the HDP 2.3 stack, you must use Ambari to change the `dfs.allow.truncate` property to `false` before you attempt to upgrade to HDP 2.4. Ambari displays a configuration warning with this setting, but the change is required to complete the upgrade; choose **Proceed Anyway** when Ambari warns you about the configured value of `dfs.allow.truncate`.
-
-After you complete the upgrade to HDP 2.4, change the value of `dfs.allow.truncate` back to `true` to ensure that HAWQ can operate as intended.
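-
-To double-check the active value after the upgrade, one option is the `hdfs getconf` utility \(this assumes the HDFS client is installed on the host where you run it\):
-
-```shell
-# Print the effective value of dfs.allow.truncate
-$ hdfs getconf -confKey dfs.allow.truncate
-```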
-
-## <a id="gpadmin-password-change"></a>Changing the HAWQ gpadmin Password
-The gpadmin password entered in the Ambari web console is used by the `hawq ssh-exkeys` utility, which runs during the start phase of the HAWQ Master.
-Ambari stores and uses its own copy of the gpadmin password, independently of the host systems. Passwords on the master and slave nodes are not automatically updated and synchronized with Ambari. If you do not update the Ambari copy of the password, Ambari behaves as if the gpadmin password was never changed \(it keeps using the old password\).
-
-If passwordless ssh has not been set up, `hawq ssh-exkeys` attempts to exchange keys by using the password provided in the Ambari web console. If the password on a host machine differs from the HAWQ System User password stored in Ambari, the key exchange with the HAWQ Master fails, and components without passwordless ssh might not be registered with the HAWQ cluster.
-
-### When to Perform
-You should change the gpadmin password when:
-
-* The gpadmin password on the host machines has expired.
-* You want to change passwords as part of normal system security procedures.
-
-When you update the gpadmin password, keep it in sync with the gpadmin user on the HAWQ hosts. This requires manually changing the password on the Master and Slave hosts, and then updating the Ambari password.
-
-### Procedure
-All of the listed steps are mandatory; completing them all ensures that the HAWQ service remains fully functional.
-
-1.  Use a script to manually change the password for the gpadmin user on all HAWQ hosts \(all Master and Slave component hosts\). To update the password manually, you must have ssh access to all host machines as the gpadmin user. Generate a hosts file to use with the `hawq ssh` command to reset the password on all hosts: use a text editor to create a file that lists the hostname of the master node, the standby master node, and each segment node used in the cluster, one hostname per line. For example:
-
-    ```
-    mdw
-    smdw
-    sdw1
-    sdw2
-    sdw3
-    ```
-
-    You can then use a command similar to the following to change the password on all hosts that are listed in the file:
-
-    ```shell
-    $ hawq ssh -f hawq_hosts 'echo "gpadmin:newpassword" | /usr/sbin/chpasswd'
-    ```    
-
-    **Note:** Be sure to make appropriate user and password system administrative changes in order to prevent operational disruption. For example, you may need to disable the password expiration policy for the `gpadmin` account.
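-
-    For example, a hedged sketch of relaxing the expiration policy with `chage`; the flags assume Linux shadow-utils, and sudo access for gpadmin is an assumption \(adjust to your site's security policy\):
-
-    ```shell
-    # Set the maximum password age to 99999 days, effectively disabling expiration
-    $ hawq ssh -f hawq_hosts 'sudo chage -M 99999 gpadmin'
-    ```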
-2.  Access the Ambari web console at http://ambari.server.hostname:8080, and log in as the "admin" user. \(The default password is also "admin".\) Then perform the following steps:
-    1. Click **HAWQ** in the list of installed services.
-    2. On the HAWQ Server Configs page, go to the **Advanced** tab and update the **HAWQ System User Password** to the new password specified in the script.
-    3. Click **Save** to save the updated configuration.
-    4. Restart the HAWQ service to propagate the configuration change to all Ambari agents.
-
-    This will synchronize the password on the host machines with the password that you specified in Ambari.
-
-## <a id="gpadmin-setup-alert"></a>Setting Up Alerts
- 
-Alerts advise you when a HAWQ process is down or not responding, or when certain conditions requiring attention occur.
-Alerts can be created for the Master, Standby Master, Segments, and PXF components. You can also set up custom alert groups to monitor these conditions and send email notifications when they occur.
-
-### When to Perform
-Alerts are enabled by default. You might want to disable alert functions when performing system operations in maintenance mode and then re-enable them after returning to normal operation.
-
-You can configure alerts to display messages for all system status changes or only for conditions of interest, such as warnings or critical conditions. Alerts can advise you if there are communication issues between the HAWQ Master and HAWQ segments, or if the HAWQ Master, Standby Master, a segment, or the PXF service is down or not responding. 
-
-You can configure how often Ambari checks for each alert condition, which service or host it monitors, and the level of criticality (OK, WARNING, or CRITICAL) that triggers an alert.
-
-### Procedure
-Use Ambari both to view alerts and to configure the conditions that trigger them.
-
-#### Viewing Alerts
-To view the current alert information for HAWQ, click the **Alerts** button at the top of the Ambari console, then click the **Groups** button at the top left of the Alerts page and select **HAWQ Default** in the drop-down menu. Ambari displays a list of all available alert functions and their current status.
-
-To check PXF alerts, click the **Groups** dropdown button at the top left of the Alerts page. Select **PXF Default** in the dropdown menu. Alerts are displayed on the PXF Status page.
-
-To view the current Alert settings, click on the name of the alert.
-
-The Alerts you can view are as follows:
-
-* HAWQ Master Process:
-This alert is triggered when the HAWQ Master process is down or not responding. 
-
-* HAWQ Segment Process:
-This alert is triggered when a HAWQ Segment on a node is down or not responding.  
-
-* HAWQ Standby Master Process:
-This alert is triggered when the HAWQ Standby Master process is down or not responding. If no standby is present, the Alert shows as **NONE**. 
-
-* HAWQ Standby Master Sync Status:
-This alert is triggered when the HAWQ Standby Master is not synchronized with the HAWQ Master. Using this Alert eliminates the need to check the gp\_master\_mirroring catalog table to determine whether the Standby Master is fully synchronized \(see the example query after this list\).
-If no Standby Master is present, the status will show as **UNKNOWN**.
-   If this Alert is triggered, go to the HAWQ **Services** tab and click the **Service Action** button to re-sync the HAWQ Standby Master with the HAWQ Master.
-   
-* HAWQ Segment Registration Status:
-This alert is triggered when any of the HAWQ Segments fail to register with the HAWQ Master. It indicates that the set of HAWQ segments with an up status in the gp\_segment\_configuration table does not match the HAWQ Segments listed in the /usr/local/hawq/etc/slaves file on the HAWQ Master.
-
-* Percent HAWQ Segment Status Available:
-This Alert monitors the percentage of HAWQ segments available versus total segments. 
-   Alerts for **WARN** and **CRITICAL** are displayed when the number of unresponsive HAWQ segments in the cluster is greater than the specified threshold. Otherwise, the status will show as **OK**.
-
-* PXF Process Alerts:
-PXF Process alerts are triggered when a PXF process on a node is down or not responding on the network. If PXF Alerts are enabled, the Alert status is shown on the PXF Status page.
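-
-Where a manual check of the standby sync state is still useful, a hedged example query follows; the column names are assumed from the HAWQ gp\_master\_mirroring catalog and should be verified against your release:
-
-```shell
-$ psql -d template1 -c "SELECT summary_state, detail_state FROM gp_master_mirroring;"
-```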
-
-#### Setting the Monitoring Interval
-You can customize how often you wish the system to check for certain conditions. The default interval for checking the HAWQ system is 1 minute. 
-
-To customize the interval, perform the following steps:
-
-1.  Click on the name of the Alert you want to edit. 
-2.  When the Configuration screen appears, click **Edit**. 
-3.  Enter a number for how often to check status for the selected Alert, then click **Save**. The interval must be specified in whole minutes.
-
-
-#### Setting the Available HAWQ Segment Threshold
-HAWQ monitors the percentage of available HAWQ segments and can send an alert when a specified percent of unresponsive segments is reached. 
-
-To set the threshold for the unresponsive segments that will trigger an alert:
-
-   1.  Click on **Percent HAWQ Segments Available**. 
-   2.  Click **Edit**. Enter the percentage of total segments to create a **Warning** alert (default is 10 percent of the total segments) or **Critical** alert (default is 25 percent of total segments).
-   3.  Click **Save** when done.
-   Alerts for **WARN** and **CRITICAL** will be displayed when the number of unresponsive HAWQ segments in the cluster is greater than the specified percentage.
-