Posted to commits@hawq.apache.org by yo...@apache.org on 2016/10/14 23:25:03 UTC

[1/4] incubator-hawq-docs git commit: pxf json plugin - misc updates

Repository: incubator-hawq-docs
Updated Branches:
  refs/heads/develop a819abd79 -> f4bd3ef5a


pxf json plugin - misc updates

- use <> for options
- include link to CREATE EXTERNAL TABLE ref page


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/commit/d1b7d017
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/tree/d1b7d017
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/diff/d1b7d017

Branch: refs/heads/develop
Commit: d1b7d017e50a2d32ad302e2a418138c9e3b4ce3b
Parents: a819abd
Author: Lisa Owen <lo...@pivotal.io>
Authored: Wed Oct 12 08:41:15 2016 -0700
Committer: Lisa Owen <lo...@pivotal.io>
Committed: Wed Oct 12 08:41:15 2016 -0700

----------------------------------------------------------------------
 pxf/JsonPXF.html.md.erb | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/d1b7d017/pxf/JsonPXF.html.md.erb
----------------------------------------------------------------------
diff --git a/pxf/JsonPXF.html.md.erb b/pxf/JsonPXF.html.md.erb
index 5283349..97195ad 100644
--- a/pxf/JsonPXF.html.md.erb
+++ b/pxf/JsonPXF.html.md.erb
@@ -141,25 +141,25 @@ Once loaded to HDFS, JSON data may be queried and analyzed via HAWQ.
 Use the following syntax to create an external table representing JSON data:
 
 ``` sql
-CREATE EXTERNAL TABLE table_name 
-    ( column_name data_type [, ...] | LIKE other_table )
-LOCATION ( 'pxf://host[:port]/path-to-data?PROFILE=Json[&IDENTIFIER=value]' )
+CREATE EXTERNAL TABLE <table_name> 
+    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+LOCATION ( 'pxf://<host>[:<port>]/<path-to-data>?PROFILE=Json[&IDENTIFIER=<value>]' )
       FORMAT 'CUSTOM' ( FORMATTER='pxfwritable_import' );
 ```
 JSON-plug-in-specific keywords and values used in the `CREATE EXTERNAL TABLE` call are described below.
 
 | Keyword  | Value |
 |-------|-------------------------------------|
-| host    | Specify the HDFS NameNode in the `host` field. |
+| \<host\>    | Specify the HDFS NameNode in the \<host\> field. |
 | PROFILE    | The `PROFILE` keyword must specify the value `Json`. |
-| IDENTIFIER  | Include the `IDENTIFIER` keyword and value in the `LOCATION` string only when accessing a JSON file with multi-line records. `value` should identify the member name used to determine the encapsulating JSON object to return.  (If the JSON file is the multi-line record Example 2 above, `&IDENTIFIER=created_at` would be specified.) |  
+| IDENTIFIER  | Include the `IDENTIFIER` keyword and \<value\> in the `LOCATION` string only when accessing a JSON file with multi-line records. \<value\> should identify the member name used to determine the encapsulating JSON object to return.  (If the JSON file is the multi-line record Example 2 above, `&IDENTIFIER=created_at` would be specified.) |  
 | FORMAT    | The `FORMAT` clause must specify `CUSTOM`. |
 | FORMATTER    | The JSON `CUSTOM` format supports only the built-in `pxfwritable_import` `FORMATTER`. |
 
 
 ### Example 1 <a id="jsonexample1"></a>
 
-The following `CREATE EXTERNAL TABLE` SQL call creates a queryable external table based on the data in the single-line-per-record JSON example.
+The following [CREATE EXTERNAL TABLE](../reference/sql/CREATE-EXTERNAL-TABLE.html) SQL call creates a queryable external table based on the data in the single-line-per-record JSON example.
 
 ``` sql 
 CREATE EXTERNAL TABLE sample_json_singleline_tbl(

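The Example 1 diff above is truncated after the opening line of the `CREATE EXTERNAL TABLE` statement. As a companion to the multi-line-record case described in the keyword table, here is a sketch of a complete call using `IDENTIFIER=created_at`; the NameNode host, port, HDFS path, and all column names other than `created_at` are hypothetical placeholders, not values from the commit:

``` sql
-- Sketch only: the host, port, path, and column names beyond created_at
-- are illustrative and must be adapted to your environment.
CREATE EXTERNAL TABLE sample_json_multiline_tbl (
    created_at TEXT,
    id_str     TEXT
)
LOCATION ('pxf://namenode:51200/data/tweets.json?PROFILE=Json&IDENTIFIER=created_at')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```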

[4/4] incubator-hawq-docs git commit: fix typos

fix typos


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/commit/f4bd3ef5
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/tree/f4bd3ef5
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/diff/f4bd3ef5

Branch: refs/heads/develop
Commit: f4bd3ef5adafe09e818fcb9d7a77912cc536f875
Parents: d4d4283
Author: Lisa Owen <lo...@pivotal.io>
Authored: Fri Oct 14 15:20:35 2016 -0700
Committer: Lisa Owen <lo...@pivotal.io>
Committed: Fri Oct 14 15:20:35 2016 -0700

----------------------------------------------------------------------
 clientaccess/disable-kerberos.html.md.erb              | 2 +-
 reference/catalog/gp_configuration_history.html.md.erb | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/f4bd3ef5/clientaccess/disable-kerberos.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/disable-kerberos.html.md.erb b/clientaccess/disable-kerberos.html.md.erb
index b5d7eeb..5646eec 100644
--- a/clientaccess/disable-kerberos.html.md.erb
+++ b/clientaccess/disable-kerberos.html.md.erb
@@ -4,7 +4,7 @@ title: Disabling Kerberos Security
 
 Follow these steps to disable Kerberos security for HAWQ and PXF for manual installations.
 
-**Note:** If you install or manager your cluster using Ambari, then the HAWQ Ambari plug-in automatically disables security for HAWQ and PXF when you disable security for Hadoop. The following instructions are only necessary for manual installations, or when Hadoop security is disabled outside of Ambari.
+**Note:** If you install or manage your cluster using Ambari, then the HAWQ Ambari plug-in automatically disables security for HAWQ and PXF when you disable security for Hadoop. The following instructions are only necessary for manual installations, or when Hadoop security is disabled outside of Ambari.
 
 1.  Disable Kerberos on the Hadoop cluster on which you use HAWQ.
 2.  Disable security for HAWQ:

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/f4bd3ef5/reference/catalog/gp_configuration_history.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/catalog/gp_configuration_history.html.md.erb b/reference/catalog/gp_configuration_history.html.md.erb
index bfc9425..e501d55 100644
--- a/reference/catalog/gp_configuration_history.html.md.erb
+++ b/reference/catalog/gp_configuration_history.html.md.erb
@@ -2,7 +2,7 @@
 title: gp_configuration_history
 ---
 
-The `gp_configuration_history` table contains information about system changes related to fault detection and recovery operations. The HAWQ [fault tolerance service](../../admin/FaultTolerance.html) logs data to this table, as do certain related management utilities such as `hawq init`. For example, when you add a new segment to the system, records for these events are logged to `gp_configuration_history`. If a segment is marked as down by the fault tolerance service in the [gp_segment_configuration](gp_segment_configuration.html) catalog table, then the reason for being marked as down is recorded in this table.
+The `gp_configuration_history` table contains information about system changes related to fault detection and recovery operations. The HAWQ [fault tolerance service](../../admin/FaultTolerance.html) logs data to this table, as do certain related management utilities such as `hawq init`. For example, when you add a new segment to the system, records for these events are logged to `gp_configuration_history`. If a segment is marked as down by the fault tolerance service in the [gp\_segment\_configuration](gp_segment_configuration.html) catalog table, then the reason for being marked as down is recorded in this table.
 
 The event descriptions stored in this table may be helpful for troubleshooting serious system issues in collaboration with HAWQ support technicians.
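As context for the catalog description above, a query an administrator might run to review recent fault-detection events could look like the following; the column names (`time`, `desc`) are assumed from the Greenplum-derived catalog layout and should be verified against the `gp_configuration_history` reference page:

``` sql
-- Assumed columns: "time" (event timestamp) and "desc" (event description);
-- both are quoted because desc is a reserved word.
SELECT "time", "desc"
FROM gp_configuration_history
ORDER BY "time" DESC
LIMIT 10;
```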
 


[3/4] incubator-hawq-docs git commit: cluster expansion - include NodeManager config if using YARN

cluster expansion - include NodeManager config if using YARN


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/commit/d4d42834
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/tree/d4d42834
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/diff/d4d42834

Branch: refs/heads/develop
Commit: d4d42834dd3020fde0ddaa205581ad0be8231bbe
Parents: cb62d8a
Author: Lisa Owen <lo...@pivotal.io>
Authored: Wed Oct 12 09:56:31 2016 -0700
Committer: Lisa Owen <lo...@pivotal.io>
Committed: Wed Oct 12 09:56:31 2016 -0700

----------------------------------------------------------------------
 admin/ClusterExpansion.html.md.erb | 2 +-
 admin/ambari-admin.html.md.erb     | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/d4d42834/admin/ClusterExpansion.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/ClusterExpansion.html.md.erb b/admin/ClusterExpansion.html.md.erb
index d99c760..e4800d3 100644
--- a/admin/ClusterExpansion.html.md.erb
+++ b/admin/ClusterExpansion.html.md.erb
@@ -12,7 +12,7 @@ This topic provides some guidelines around expanding your HAWQ cluster.
 
 There are several recommendations to keep in mind when modifying the size of your running HAWQ cluster:
 
--   When you add a new node, install both a DataNode and a physical segment on the new node.
+-   When you add a new node, install both a DataNode and a physical segment on the new node. If you are using YARN to manage HAWQ resources, you must also configure a YARN NodeManager on the new node.
 -   After adding a new node, you should always rebalance HDFS data to maintain cluster performance.
 -   Adding or removing a node also necessitates an update to the HDFS metadata cache. This update will happen eventually, but can take some time. To speed the update of the metadata cache, execute **`select gp_metadata_cache_clear();`**.
 -   Note that for hash distributed tables, expanding the cluster will not immediately improve performance since hash distributed tables use a fixed number of virtual segments. In order to obtain better performance with hash distributed tables, you must redistribute the table to the updated cluster by either the [ALTER TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE AS](../reference/sql/CREATE-TABLE-AS.html) command.
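The redistribution step in the last bullet can be sketched with `CREATE TABLE AS`; the table name, distribution column, and the choice of CTAS over `ALTER TABLE` are illustrative assumptions, not part of the commit:

``` sql
-- Recreate a hash-distributed table so its rows are redistributed across
-- the expanded cluster; "sales" and "id" are hypothetical names.
CREATE TABLE sales_redistributed AS
    SELECT * FROM sales
    DISTRIBUTED BY (id);
```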

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/d4d42834/admin/ambari-admin.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/ambari-admin.html.md.erb b/admin/ambari-admin.html.md.erb
index e41adc6..a5b2169 100644
--- a/admin/ambari-admin.html.md.erb
+++ b/admin/ambari-admin.html.md.erb
@@ -153,7 +153,7 @@ This topic provides some guidelines around expanding your HAWQ cluster.
 
 There are several recommendations to keep in mind when modifying the size of your running HAWQ cluster:
 
--  When you add a new node, install both a DataNode and a HAWQ segment on the new node.
+-  When you add a new node, install both a DataNode and a HAWQ segment on the new node.  If you are using YARN to manage HAWQ resources, you must also configure a YARN NodeManager on the new node.
 -  After adding a new node, you should always rebalance HDFS data to maintain cluster performance.
 -  Adding or removing a node also necessitates an update to the HDFS metadata cache. This update will happen eventually, but can take some time. To speed the update of the metadata cache, select the **Service Actions > Clear HAWQ's HDFS Metadata Cache** option in Ambari.
 -  Note that for hash distributed tables, expanding the cluster will not immediately improve performance since hash distributed tables use a fixed number of virtual segments. In order to obtain better performance with hash distributed tables, you must redistribute the table to the updated cluster by either the [ALTER TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE AS](../reference/sql/CREATE-TABLE-AS.html) command.


[2/4] incubator-hawq-docs git commit: client load tools - link to gpdb pivnet for windows loader

client load tools - link to gpdb pivnet for windows loader


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/commit/cb62d8a0
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/tree/cb62d8a0
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/diff/cb62d8a0

Branch: refs/heads/develop
Commit: cb62d8a00681454f6edf7558558e67c5e4502e52
Parents: d1b7d01
Author: Lisa Owen <lo...@pivotal.io>
Authored: Wed Oct 12 08:51:48 2016 -0700
Committer: Lisa Owen <lo...@pivotal.io>
Committed: Wed Oct 12 08:51:48 2016 -0700

----------------------------------------------------------------------
 datamgmt/load/client-loadtools.html.md.erb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/cb62d8a0/datamgmt/load/client-loadtools.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/load/client-loadtools.html.md.erb b/datamgmt/load/client-loadtools.html.md.erb
index 9a83e61..66ef8c5 100644
--- a/datamgmt/load/client-loadtools.html.md.erb
+++ b/datamgmt/load/client-loadtools.html.md.erb
@@ -50,7 +50,7 @@ The HAWQ Load Tools for Windows requires that the 32-bit version of Python 2.5 b
 
 ### <a id="installloadrunwin"></a>Running the Windows Installer
 
-1. Download the `greenplum-loaders-4.3.x.x-build-n-WinXP-x86_32.msi` installer package from [Pivotal Network](https://network.pivotal.io/products/pivotal-hdb). Make note of the directory to which it was downloaded.
+1. Download the `greenplum-loaders-4.3.x.x-build-n-WinXP-x86_32.msi` installer package from [Pivotal Network](https://network.pivotal.io/products/pivotal-gpdb). Make note of the directory to which it was downloaded.
  
 2. Double-click the `greenplum-loaders-4.3.x.x-build-n-WinXP-x86_32.msi` file to launch the installer.
 3. Click **Next** on the **Welcome** screen.