Posted to commits@hawq.apache.org by yo...@apache.org on 2016/10/31 19:17:00 UTC

[1/3] incubator-hawq-docs git commit: Revise section on work_mem

Repository: incubator-hawq-docs
Updated Branches:
  refs/heads/develop d98b06d36 -> d99daae87


Revise section on work_mem


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/commit/1014d883
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/tree/1014d883
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/diff/1014d883

Branch: refs/heads/develop
Commit: 1014d883684c963094286b0d19f60729c8391693
Parents: 5edd907
Author: Jane Beckman <jb...@pivotal.io>
Authored: Tue Oct 25 16:11:17 2016 -0700
Committer: Jane Beckman <jb...@pivotal.io>
Committed: Tue Oct 25 16:11:17 2016 -0700

----------------------------------------------------------------------
 bestpractices/querying_data_bestpractices.html.md.erb | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/1014d883/bestpractices/querying_data_bestpractices.html.md.erb
----------------------------------------------------------------------
diff --git a/bestpractices/querying_data_bestpractices.html.md.erb b/bestpractices/querying_data_bestpractices.html.md.erb
index 15fecfe..08a8e02 100644
--- a/bestpractices/querying_data_bestpractices.html.md.erb
+++ b/bestpractices/querying_data_bestpractices.html.md.erb
@@ -16,14 +16,14 @@ If a query performs poorly, examine its query plan and ask the following questio
     If the plan is not choosing the optimal join order, set `join_collapse_limit=1` and use explicit `JOIN` syntax in your SQL statement to force the legacy query optimizer (planner) to the specified join order. You can also collect more statistics on the relevant join columns.
 
 -   **Does the optimizer selectively scan partitioned tables?** If you use table partitioning, is the optimizer selectively scanning only the child tables required to satisfy the query predicates? Scans of the parent tables should return 0 rows since the parent tables do not contain any data. See [Verifying Your Partition Strategy](../ddl/ddl-partition.html#topic74) for an example of a query plan that shows a selective partition scan.
--   **Does the optimizer choose hash aggregate and hash join operations where applicable?** Hash operations are typically much faster than other types of joins or aggregations. Row comparison and sorting is done in memory rather than reading/writing from disk. To enable the query optimizer to choose hash operations, there must be sufficient memory available to hold the estimated number of rows. Try increasing work memory to improve performance for a query. If possible, run an `EXPLAIN  ANALYZE` for the query to show which plan operations spilled to disk, how much work memory they used, and how much memory was required to avoid spilling to disk. For example:
+-   **Does the optimizer choose hash aggregate and hash join operations where applicable?** Hash operations are typically much faster than other types of joins or aggregations. Row comparison and sorting is done in memory rather than reading/writing from disk. To enable the query optimizer to choose hash operations, there must be sufficient memory available to hold the estimated number of rows. You may wish to  run an `EXPLAIN  ANALYZE` for the query to show which plan operations spilled to disk, how much work memory they used, and how much memory was required to avoid spilling to disk. For example:
 
-    `Work_mem used: 23430K bytes avg, 23430K bytes max (seg0). Work_mem               wanted: 33649K bytes avg, 33649K bytes max (seg0) to lessen workfile I/O affecting 2               workers.`
+    `Work_mem used: 23430K bytes avg, 23430K bytes max (seg0). Work_mem wanted: 33649K bytes avg, 33649K bytes max (seg0) to lessen workfile I/O affecting 2               workers.`
+
+The "bytes wanted" (Work_mem) message from `EXPLAIN               ANALYZE` is based on the amount of data written to work files and is not exact.
 
 **Note**
 The *work\_mem* property is not configurable. Use resource queues to manage memory use. For more information on resource queues, see [Configuring Resource Management](../resourcemgmt/ConfigureResourceManagement.html) and [Working with Hierarchical Resource Queues](../resourcemgmt/ResourceQueues.html).
 
 
-    The "bytes wanted" message from `EXPLAIN               ANALYZE` is based on the amount of data written to work files and is not exact. The minimum `work_mem` needed can differ from the suggested value.
-
 
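The revised doc text above tells readers to run `EXPLAIN ANALYZE` to see which plan operations spilled to disk. A minimal sketch of that workflow follows; the table and column names are hypothetical, not taken from the commit:

```sql
-- Hypothetical example: check whether a hash aggregate spills to disk.
-- The table "sales" and its columns are illustrative only.
EXPLAIN ANALYZE
SELECT region, SUM(amount)
FROM sales
GROUP BY region;

-- In the plan output, look for lines of the form shown in the diff, e.g.:
--   Work_mem used: 23430K bytes avg, 23430K bytes max (seg0).
--   Work_mem wanted: 33649K bytes ... to lessen workfile I/O ...
-- "wanted" greater than "used" indicates the operation spilled to work files.
```

As the note in the diff states, `work_mem` itself is not configurable in HAWQ; memory is managed through resource queues, so the "wanted" figure is a diagnostic hint rather than a value to set directly.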


[2/3] incubator-hawq-docs git commit: Edit

Edit


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/commit/eaf883c7
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/tree/eaf883c7
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/diff/eaf883c7

Branch: refs/heads/develop
Commit: eaf883c7fa88dabf6ad8c7a799c2b4cc4cf348b3
Parents: 1014d88
Author: Jane Beckman <jb...@pivotal.io>
Authored: Mon Oct 31 12:04:46 2016 -0700
Committer: Jane Beckman <jb...@pivotal.io>
Committed: Mon Oct 31 12:04:46 2016 -0700

----------------------------------------------------------------------
 bestpractices/querying_data_bestpractices.html.md.erb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/eaf883c7/bestpractices/querying_data_bestpractices.html.md.erb
----------------------------------------------------------------------
diff --git a/bestpractices/querying_data_bestpractices.html.md.erb b/bestpractices/querying_data_bestpractices.html.md.erb
index 08a8e02..ef41fbf 100644
--- a/bestpractices/querying_data_bestpractices.html.md.erb
+++ b/bestpractices/querying_data_bestpractices.html.md.erb
@@ -16,7 +16,7 @@ If a query performs poorly, examine its query plan and ask the following questio
     If the plan is not choosing the optimal join order, set `join_collapse_limit=1` and use explicit `JOIN` syntax in your SQL statement to force the legacy query optimizer (planner) to the specified join order. You can also collect more statistics on the relevant join columns.
 
 -   **Does the optimizer selectively scan partitioned tables?** If you use table partitioning, is the optimizer selectively scanning only the child tables required to satisfy the query predicates? Scans of the parent tables should return 0 rows since the parent tables do not contain any data. See [Verifying Your Partition Strategy](../ddl/ddl-partition.html#topic74) for an example of a query plan that shows a selective partition scan.
--   **Does the optimizer choose hash aggregate and hash join operations where applicable?** Hash operations are typically much faster than other types of joins or aggregations. Row comparison and sorting is done in memory rather than reading/writing from disk. To enable the query optimizer to choose hash operations, there must be sufficient memory available to hold the estimated number of rows. You may wish to  run an `EXPLAIN  ANALYZE` for the query to show which plan operations spilled to disk, how much work memory they used, and how much memory was required to avoid spilling to disk. For example:
+-   **Does the optimizer choose hash aggregate and hash join operations where applicable?** Hash operations are typically much faster than other types of joins or aggregations. Row comparison and sorting is done in memory rather than reading/writing from disk. To enable the query optimizer to choose hash operations, there must be sufficient memory available to hold the estimated number of rows. Run an `EXPLAIN  ANALYZE` for the query to show which plan operations spilled to disk, how much work memory they used, and how much memory was required to avoid spilling to disk. For example:
 
     `Work_mem used: 23430K bytes avg, 23430K bytes max (seg0). Work_mem wanted: 33649K bytes avg, 33649K bytes max (seg0) to lessen workfile I/O affecting 2               workers.`
 
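The context line in the diff above mentions setting `join_collapse_limit=1` with explicit `JOIN` syntax to force the legacy planner to the written join order. A minimal sketch, with hypothetical table names not taken from the commit:

```sql
-- Hypothetical example: pin the join order for the legacy planner.
-- Tables "fact", "dim_small", and "dim_large" are illustrative only.
SET join_collapse_limit = 1;

EXPLAIN
SELECT *
FROM fact f
JOIN dim_small s ON f.small_id = s.id   -- joined first, as written
JOIN dim_large l ON f.large_id = l.id;  -- joined second, as written
```

With `join_collapse_limit=1`, the planner does not reorder explicit `JOIN` clauses, so the statement's textual order becomes the join order; collecting more statistics on the join columns (via `ANALYZE`) is the alternative the doc suggests when you would rather let the optimizer choose.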


[3/3] incubator-hawq-docs git commit: Merge branch 'feature/work_mem' of https://github.com/janebeckman/incubator-hawq-docs into develop

Merge branch 'feature/work_mem' of https://github.com/janebeckman/incubator-hawq-docs into develop


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/commit/d99daae8
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/tree/d99daae8
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/diff/d99daae8

Branch: refs/heads/develop
Commit: d99daae878b0672a8d851d33b5d1bd8d087eae21
Parents: d98b06d eaf883c
Author: David Yozie <yo...@apache.org>
Authored: Mon Oct 31 12:16:54 2016 -0700
Committer: David Yozie <yo...@apache.org>
Committed: Mon Oct 31 12:16:54 2016 -0700

----------------------------------------------------------------------
 bestpractices/querying_data_bestpractices.html.md.erb | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
----------------------------------------------------------------------