Posted to commits@impala.apache.org by ar...@apache.org on 2019/01/09 21:03:39 UTC

impala git commit: IMPALA-8060: [DOCS] Restructured the admission control docs

Repository: impala
Updated Branches:
  refs/heads/master 274e96bd1 -> 56dea8438


IMPALA-8060: [DOCS] Restructured the admission control docs

- Created a new doc category called "Resource Management".
- Moved impala_admission under Resource Management.
- Moved the config steps out of impala_admission.xml and
  created impala_admission_config.xml

Change-Id: Id42ae256b215fb267023197c5052f36bedb052a3
Reviewed-on: http://gerrit.cloudera.org:8080/12191
Reviewed-by: Bikramjeet Vig <bi...@cloudera.com>
Tested-by: Impala Public Jenkins <im...@cloudera.com>
Reviewed-by: Tim Armstrong <ta...@cloudera.com>


Project: http://git-wip-us.apache.org/repos/asf/impala/repo
Commit: http://git-wip-us.apache.org/repos/asf/impala/commit/56dea843
Tree: http://git-wip-us.apache.org/repos/asf/impala/tree/56dea843
Diff: http://git-wip-us.apache.org/repos/asf/impala/diff/56dea843

Branch: refs/heads/master
Commit: 56dea84382e0a4d4a158299555c9c45d36a48d1b
Parents: 274e96b
Author: Alex Rodoni <ar...@cloudera.com>
Authored: Tue Jan 8 18:22:01 2019 -0800
Committer: Alex Rodoni <ar...@cloudera.com>
Committed: Wed Jan 9 20:58:08 2019 +0000

----------------------------------------------------------------------
 docs/impala.ditamap                          |  13 +-
 docs/impala_keydefs.ditamap                  |   4 -
 docs/topics/impala_admission.xml             | 734 ++++++----------------
 docs/topics/impala_admission_config.xml      | 357 +++++++++++
 docs/topics/impala_dedicated_coordinator.xml |   8 +-
 docs/topics/impala_resource_management.xml   | 200 +-----
 6 files changed, 572 insertions(+), 744 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/impala/blob/56dea843/docs/impala.ditamap
----------------------------------------------------------------------
diff --git a/docs/impala.ditamap b/docs/impala.ditamap
index 06bcda5..746cd7e 100644
--- a/docs/impala.ditamap
+++ b/docs/impala.ditamap
@@ -65,12 +65,6 @@ under the License.
   </topicref>
   <topicref href="topics/impala_tutorial.xml"/>
   <topicref href="topics/impala_admin.xml">
-    <topicref audience="standalone" href="topics/impala_admission.xml"/>
-<!-- Removing below as it does not have useful content. AR 12/20/2018 -->
-    <!-- <topicref audience="standalone" href="topics/impala_resource_management.xml"/> -->
-
-<!--<topicref href="topics/impala_howto_rm.xml"/>-->
-
     <topicref href="topics/impala_timeouts.xml"/>
     <topicref href="topics/impala_proxy.xml"/>
     <topicref href="topics/impala_disk_space.xml"/>
@@ -298,8 +292,13 @@ under the License.
     <topicref href="topics/impala_perf_skew.xml"/>
     <topicref audience="hidden" href="topics/impala_perf_ddl.xml"/>
   </topicref>
-  <topicref href="topics/impala_scalability.xml"/>
+  <topicref href="topics/impala_scalability.xml">
     <topicref href="topics/impala_dedicated_coordinator.xml"/>
+  </topicref>
+  <topicref href="topics/impala_resource_management.xml">
+    <topicref href="topics/impala_admission.xml"/>
+    <topicref href="topics/impala_admission_config.xml"/>
+  </topicref>
   <topicref href="topics/impala_partitioning.xml"/>
   <topicref href="topics/impala_file_formats.xml">
     <topicref href="topics/impala_txtfile.xml"/>

http://git-wip-us.apache.org/repos/asf/impala/blob/56dea843/docs/impala_keydefs.ditamap
----------------------------------------------------------------------
diff --git a/docs/impala_keydefs.ditamap b/docs/impala_keydefs.ditamap
index a25f4d1..878e99d 100644
--- a/docs/impala_keydefs.ditamap
+++ b/docs/impala_keydefs.ditamap
@@ -10649,10 +10649,6 @@ under the License.
   <keydef href="topics/impala_admin.xml" keys="admin"/>
   <keydef href="topics/impala_admission.xml" keys="admission_control"/>
   <keydef href="topics/impala_resource_management.xml" keys="resource_management"/>
-  <keydef href="topics/impala_resource_management.xml#rm_enforcement" keys="rm_enforcement"/>
-  <keydef href="topics/impala_resource_management.xml#rm_query_options" keys="rm_query_options"/>
-  <keydef href="topics/impala_resource_management.xml#rm_limitations" keys="rm_limitations"/>
-<!--<keydef href="topics/impala_howto_rm.xml" keys="howto_impala_rm"/>-->
   <keydef href="topics/impala_timeouts.xml" keys="timeouts"/>
   <keydef href="topics/impala_proxy.xml" keys="proxy"/>
   <keydef href="topics/impala_disk_space.xml" keys="disk_space"/>

http://git-wip-us.apache.org/repos/asf/impala/blob/56dea843/docs/topics/impala_admission.xml
----------------------------------------------------------------------
diff --git a/docs/topics/impala_admission.xml b/docs/topics/impala_admission.xml
index 7c13215..b424f8b 100644
--- a/docs/topics/impala_admission.xml
+++ b/docs/topics/impala_admission.xml
@@ -29,105 +29,129 @@ under the License.
       <data name="Category" value="Resource Management"/>
     </metadata>
   </prolog>
-
   <conbody>
-
     <p id="admission_control_intro"> Admission control is an Impala feature that
-      limits the number of concurrently running queries to avoid resource usage
-      spikes and out-of-memory conditions on busy clusters. New queries are
-      accepted and executed until certain a threshold is reached, such as too
-      many queries or too much total memory used across the cluster. When one of
-      these thresholds is reached, incoming queries are queued and are admitted
-      (that is, begin executing) when the resources become available. </p>
+      imposes limits on concurrent SQL queries, to avoid resource usage spikes
+      and out-of-memory conditions on busy clusters. The admission control
+      feature lets you set an upper limit on the number of concurrent Impala
+      queries and on the memory used by those queries. Any additional queries
+      are queued until the earlier ones finish, rather than being cancelled or
+      running slowly and causing contention. As other queries finish, the queued
+      queries are allowed to proceed. </p>
+    <p rev="2.5.0"> In <keyword keyref="impala25_full"/> and higher, you can
+      specify these limits and thresholds for each pool rather than globally.
+      That way, you can balance the resource usage and throughput between steady
+      well-defined workloads, rare resource-intensive queries, and ad hoc
+      exploratory queries. </p>
     <p> In addition to the threshold values for currently executing queries, you
-      can place limits on the maximum number of queries that are queued,
-      waiting, and a limit on the amount of time they might wait before
+      can place limits on the maximum number of queries that are queued
+      (waiting) and on the amount of time they might wait before
       returning with an error. These queue settings let you ensure that queries
       do not wait indefinitely so that you can detect and correct
         <q>starvation</q> scenarios. </p>
-    <p>
-      Queries, DML statements, and some DDL statements, including
+    <p> Queries, DML statements, and some DDL statements, including
         <codeph>CREATE TABLE AS SELECT</codeph> and <codeph>COMPUTE
-        STATS</codeph> are affected by admission control.
-    </p>
-    <p>
-      Enable this feature if your cluster is
-      underutilized at some times and overutilized at others. Overutilization is indicated by performance
-      bottlenecks and queries being cancelled due to out-of-memory conditions, when those same queries are
-      successful and perform well during times with less concurrent load. Admission control works as a safeguard to
-      avoid out-of-memory conditions during heavy concurrent usage.
-    </p>
-
-    <note conref="../shared/impala_common.xml#common/impala_llama_obsolete"/>
-
+        STATS</codeph>, are affected by admission control. </p>
+    <p> On a busy cluster, you might find there is an optimal number of Impala
+      queries that run concurrently. For example, when the I/O capacity is fully
+      utilized by I/O-intensive queries, you might not find any throughput
+      benefit in running more concurrent queries. By allowing some queries to
+      run at full speed while others wait, rather than having all queries
+      contend for resources and run slowly, admission control can result in
+      higher overall throughput. </p>
+    <p> For another example, consider a memory-bound workload such as many large
+      joins or aggregation queries. Each such query could briefly use many
+      gigabytes of memory to process intermediate results. Because Impala by
+      default cancels queries that exceed the specified memory limit, running
+      multiple large-scale queries at once might require re-running some queries
+      that are cancelled. In this case, admission control improves the
+      reliability and stability of the overall workload by only allowing as many
+      concurrent queries as the overall memory of the cluster can accommodate. </p>
     <p outputclass="toc inpage"/>
   </conbody>
 
-  <concept id="admission_intro">
-
-    <title>Overview of Impala Admission Control</title>
-  <prolog>
-    <metadata>
-      <data name="Category" value="Concepts"/>
-    </metadata>
-  </prolog>
-
-    <conbody>
-
-      <p>
-        On a busy cluster, you might find there is an optimal number of Impala queries that run concurrently.
-        For example, when the I/O capacity is fully utilized by I/O-intensive queries,
-        you might not find any throughput benefit in running more concurrent queries.
-        By allowing some queries to run at full speed while others wait, rather than having
-        all queries contend for resources and run slowly, admission control can result in higher overall throughput.
-      </p>
-
-      <p> For another example, consider a memory-bound workload such as many
-        large joins or aggregation queries. Each such query could briefly use
-        many gigabytes of memory to process intermediate results. Because Impala
-        by default cancels queries that exceed the specified memory limit,
-        running multiple large-scale queries at once might require re-running
-        some queries that are cancelled. In this case, admission control
-        improves the reliability and stability of the overall workload by only
-        allowing as many concurrent queries as the overall memory of the cluster
-        can accommodate. </p>
-
-      <p>
-        The admission control feature lets you set an upper limit on the number of concurrent Impala
-        queries and on the memory used by those queries. Any additional queries are queued until the earlier ones
-        finish, rather than being cancelled or running slowly and causing contention. As other queries finish, the
-        queued queries are allowed to proceed.
-      </p>
-
-      <p rev="2.5.0">
-        In <keyword keyref="impala25_full"/> and higher, you can specify these limits and thresholds for each
-        pool rather than globally. That way, you can balance the resource usage and throughput
-        between steady well-defined workloads, rare resource-intensive queries, and ad hoc
-        exploratory queries.
-      </p>
-
-      <p>
-        For details on the internal workings of admission control, see
-        <xref href="impala_admission.xml#admission_architecture"/>.
-      </p>
-    </conbody>
-  </concept>
-
   <concept id="admission_concurrency">
     <title>Concurrent Queries and Admission Control</title>
     <conbody>
-      <p>
-        One way to limit resource usage through admission control is to set an upper limit
-        on the number of concurrent queries. This is the initial technique you might use
-        when you do not have extensive information about memory usage for your workload.
-        This setting can be specified separately for each dynamic resource pool.
-      </p>
-      <p>
-        You can combine this setting with the memory-based approach described in
-        <xref href="impala_admission.xml#admission_memory"/>. If either the maximum number of
-        or the expected memory usage of the concurrent queries is exceeded, subsequent queries
-        are queued until the concurrent workload falls below the threshold again.
-      </p>
+      <p> One way to limit resource usage through admission control is to set an
+        upper limit on the number of concurrent queries. This is the initial
+        technique you might use when you do not have extensive information about
+        memory usage for your workload. The settings can be specified separately
+        for each dynamic resource pool. </p>
+      <dl>
+        <dlentry>
+          <dt> Max Running Queries </dt>
+          <dd><p>Maximum number of concurrently running queries in this pool.
+              The default value is unlimited for Impala 2.5 or higher.
+              (optional)</p> Any
+            queries for this pool that exceed <uicontrol>Max Running
+              Queries</uicontrol> are added to the admission control queue until
+            other queries finish. You can use <uicontrol>Max Running
+              Queries</uicontrol> in the early stages of resource management,
+            when you do not have extensive data about query memory usage, to
+            determine if the cluster performs better overall if throttling is
+            applied to Impala queries. <p> For a workload with many small
+              queries, you typically specify a high value for this setting, or
+              leave the default setting of <q>unlimited</q>. For a workload with
+              expensive queries, where some number of concurrent queries
+              saturate the memory, I/O, CPU, or network capacity of the cluster,
+              set the value low enough that the cluster resources are not
+              overcommitted for Impala. </p><p>Once you have enabled
+              memory-based admission control using other pool settings, you can
+              still use <uicontrol>Max Running Queries</uicontrol> as a
+              safeguard. If queries exceed either the total estimated memory or
+              the maximum number of concurrent queries, they are added to the
+              queue. </p>
+          </dd>
+        </dlentry>
+      </dl>
+      <dl>
+        <dlentry>
+          <dt> Max Queued Queries </dt>
+          <dd> Maximum number of queries that can be queued in this pool. The
+            default value is 200 for Impala 2.1 or higher and 50 for previous
+            versions of Impala. (optional)</dd>
+        </dlentry>
+      </dl>
+      <dl>
+        <dlentry>
+          <dt> Queue Timeout </dt>
+          <dd> The amount of time, in milliseconds, that a query waits in the
+            admission control queue for this pool before being cancelled. The
+            default value is 60,000 milliseconds. <p>In the following cases,
+                <uicontrol>Queue Timeout</uicontrol> is not significant, and you
+              can specify a high value to avoid cancelling queries
+                unexpectedly:<ul id="ul_kzr_rbg_gw">
+                <li>In a low-concurrency workload where few or no queries are
+                  queued</li>
+                <li>In an environment without a strict SLA, where it does not
+                  matter if queries occasionally take longer than usual because
+                  they are held in admission control</li>
+              </ul>You might also need to increase the value to use Impala with
+              some business intelligence tools that have their own timeout
+              intervals for queries. </p><p>In a high-concurrency workload,
+              especially for queries with a tight SLA, long wait times in
+              admission control can cause a serious problem. For example, if a
+              query needs to run in 10 seconds, and you have tuned it so that it
+              runs in 8 seconds, it violates its SLA if it waits in the
+              admission control queue longer than 2 seconds. In a case like
+              this, set a low timeout value and monitor how many queries are
+              cancelled because of timeouts. This technique helps you to
+              discover capacity, tuning, and scaling problems early, and helps
+              avoid wasting resources by running expensive queries that have
+              already missed their SLA. </p><p> If you identify some queries
+              that can have a high timeout value, and others that benefit from a
+              low timeout value, you can create separate pools with different
+              values for this setting. </p>
+          </dd>
+        </dlentry>
+      </dl>
+      <p> You can combine these settings with the memory-based approach
+        described in <xref href="impala_admission.xml#admission_memory"/>. If
+        either the maximum number of or the expected memory usage of the
+        concurrent queries is exceeded, subsequent queries are queued until the
+        concurrent workload falls below the threshold again. </p>
     </conbody>
   </concept>
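The pool settings described above (Max Running Queries, Max Queued Queries) amount to a simple admit/queue/reject decision. The following is an illustrative sketch only, not Impala's implementation; the class and method names are hypothetical:

```python
# Hypothetical model of per-pool admission control: admit up to
# max_running queries, queue up to max_queued more, reject the rest.
# This is NOT Impala source code; names are invented for illustration.
from collections import deque

class AdmissionPool:
    def __init__(self, max_running, max_queued):
        self.max_running = max_running
        self.max_queued = max_queued
        self.running = set()
        self.queue = deque()

    def submit(self, query_id):
        """Decide whether a new query runs, waits, or is rejected."""
        if len(self.running) < self.max_running:
            self.running.add(query_id)
            return "ADMITTED"
        if len(self.queue) < self.max_queued:
            self.queue.append(query_id)
            return "QUEUED"
        return "REJECTED"

    def finish(self, query_id):
        """On completion, promote the longest-waiting queued query."""
        self.running.discard(query_id)
        if self.queue and len(self.running) < self.max_running:
            self.running.add(self.queue.popleft())

pool = AdmissionPool(max_running=2, max_queued=1)
print(pool.submit("q1"))  # ADMITTED
print(pool.submit("q2"))  # ADMITTED
print(pool.submit("q3"))  # QUEUED
print(pool.submit("q4"))  # REJECTED
pool.finish("q1")         # q3 is promoted from the queue
```

A real deployment would also apply the Queue Timeout, cancelling a query that waits longer than the configured number of milliseconds.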
 
@@ -209,17 +233,6 @@ under the License.
               Memory.</p>
             <p>Minimum Query Memory Limit must be less than or equal to Maximum
               Query Memory Limit and Max Memory.</p>
-            <p>You set the settings in <codeph>llama-site.xml</codeph>. For
-              example:</p>
-            <codeblock>
-&lt;property>
-    &lt;name>impala.admission-control.<b>max-query-mem-limit</b>.root.default.regularPool&lt;/name>
-    &lt;value>1610612736&lt;/value>&lt;!--1.5GB-->
-  &lt;/property>
-  &lt;property>
-    &lt;name>impala.admission-control.<b>min-query-mem-limit</b>.root.default.regularPool&lt;/name>
-    &lt;value>52428800&lt;/value>&lt;!--50MB-->
-  &lt;/property></codeblock>
             <p>A user can override Impala’s choice of memory limit by setting
               the <codeph>MEM_LIMIT</codeph> query option. If the Clamp
               MEM_LIMIT Query Option setting is set to <codeph>TRUE</codeph> and
@@ -259,6 +272,18 @@ under the License.
             Memory Limit or Minimum Query Memory Limit is set.</dd>
         </dlentry>
       </dl>
+      <dl>
+        <dlentry>
+          <dt> Clamp MEM_LIMIT Query Option</dt>
+          <dd>If this field is not selected, the <codeph>MEM_LIMIT</codeph>
+            query option will not be bounded by the <b>Maximum Query Memory
+              Limit</b> and the <b>Minimum Query Memory Limit</b> values
+            specified for this resource pool. By default, this field is selected
+            in Impala 3.1 and higher. The field is disabled if both <b>Minimum
+              Query Memory Limit</b> and <b>Maximum Query Memory Limit</b> are
+            not set.</dd>
+        </dlentry>
+      </dl>
       <p
         conref="../shared/impala_common.xml#common/admission_control_mem_limit_interaction"/>
       <p>
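The interaction between a user-supplied <codeph>MEM_LIMIT</codeph> and the pool's Minimum/Maximum Query Memory Limit when the Clamp MEM_LIMIT Query Option is enabled can be sketched as below. This is a simplified illustration, not Impala source; the function name and the behavior when no user limit is set are assumptions. The 50 MB / 1.5 GB values mirror the example limits used elsewhere in these docs:

```python
# Hypothetical sketch of MEM_LIMIT clamping against pool bounds.
# Not Impala code; real behavior also involves per-query estimation.
def effective_mem_limit(user_mem_limit, min_limit, max_limit, clamp=True):
    """Return the memory limit (bytes) a query would run with."""
    if user_mem_limit is None:
        # No user override: assume the pool ceiling applies (simplification).
        return max_limit
    if not clamp:
        return user_mem_limit  # clamping disabled: honor the user's value
    # Clamping enabled: bound the user's value by the pool's min and max.
    return max(min_limit, min(user_mem_limit, max_limit))

MB = 1024 * 1024
print(effective_mem_limit(8192 * MB, 50 * MB, 1536 * MB))   # clamped to 1.5 GB
print(effective_mem_limit(1 * MB, 50 * MB, 1536 * MB))      # raised to 50 MB
print(effective_mem_limit(8192 * MB, 50 * MB, 1536 * MB, clamp=False))
```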
@@ -269,6 +294,26 @@ under the License.
       </p>
     </conbody>
   </concept>
+  <concept id="set_per_query_memory_limits">
+    <title>Setting Per-query Memory Limits</title>
+    <conbody>
+      <p>Use per-query memory limits to prevent queries from consuming excessive
+        memory resources that impact other queries. We recommend that you set
+        the query memory limits whenever possible.</p>
+      <p>If you set the <b>Max Memory</b> for a resource pool, Impala attempts
+        to throttle queries if there is not enough memory to run them within the
+        specified resources.</p>
+      <p>Only use admission control with maximum memory resources if you can
+        ensure there are query memory limits. Setting the pool <b>Maximum Query
+          Memory Limit</b> guarantees this. You can override this setting with the
+          <codeph>MEM_LIMIT</codeph> query option, if necessary.</p>
+      <p>Typically, you set query memory limits using the <codeph>set
+          MEM_LIMIT=Xg;</codeph> query option. When you find the right value for
+        your business case, memory-based admission control works well. The
+        potential downside is that queries that attempt to use more memory might
+        perform poorly or even be cancelled.</p>
+    </conbody>
+  </concept>
 
   <concept id="admission_yarn">
 
@@ -388,18 +433,11 @@ under the License.
 
       </ul>
 
-      <p rev="">
-        In Impala 2.0 and higher, you can submit
-        a SQL <codeph>SET</codeph> statement from the client application
-        to change the <codeph>REQUEST_POOL</codeph> query option.
-        This option lets you submit queries to different resource pools,
-        as described in <xref href="impala_request_pool.xml#request_pool"/>.
-<!-- Commenting out as starting to be too old to mention.
-        Prior to Impala 2.0, that option was only settable
-        for a session through the <cmdname>impala-shell</cmdname> <codeph>SET</codeph> command, or cluster-wide through an
-        <cmdname>impalad</cmdname> startup option.
--->
-      </p>
+      <p rev=""> In Impala 2.0 and higher, you can submit a SQL
+          <codeph>SET</codeph> statement from the client application to change
+        the <codeph>REQUEST_POOL</codeph> query option. This option lets you
+        submit queries to different resource pools, as described in <xref
+          href="impala_request_pool.xml#request_pool"/>.  </p>
 
       <p>
         At any time, the set of queued queries could include queries submitted through multiple different Impala
@@ -463,431 +501,59 @@ under the License.
       </p>
     </conbody>
   </concept>
-
-
-  <concept id="admission_config">
-
-    <title>Configuring Admission Control</title>
-  <prolog>
-    <metadata>
-      <data name="Category" value="Configuring"/>
-    </metadata>
-  </prolog>
-
+  <concept id="admission_guidelines">
+    <title>Guidelines for Using Admission Control</title>
+    <prolog>
+      <metadata>
+        <data name="Category" value="Planning"/>
+        <data name="Category" value="Guidelines"/>
+        <data name="Category" value="Best Practices"/>
+      </metadata>
+    </prolog>
     <conbody>
-
-      <p>
-        The configuration options for admission control range from the simple (a single resource pool with a single
-        set of options) to the complex (multiple resource pools with different options, each pool handling queries
-        for a different set of users and groups).
-      </p>
-
-      <section id="admission_flags">
-
-        <title>Impala Service Flags for Admission Control (Advanced)</title>
-
-        <p>
-          The following Impala configuration options let you adjust the settings of the admission control feature. When supplying the
-          options on the <cmdname>impalad</cmdname> command line, prepend the option name with <codeph>--</codeph>.
-        </p>
-
-        <dl id="admission_control_option_list">
-          <dlentry id="queue_wait_timeout_ms">
-            <dt>
-              <codeph>queue_wait_timeout_ms</codeph>
-            </dt>
-            <dd>
-              <indexterm audience="hidden">--queue_wait_timeout_ms</indexterm>
-              <b>Purpose:</b> Maximum amount of time (in milliseconds) that a
-              request waits to be admitted before timing out.
-              <p>
-                <b>Type:</b> <codeph>int64</codeph>
-              </p>
-              <p>
-                <b>Default:</b> <codeph>60000</codeph>
-              </p>
-            </dd>
-          </dlentry>
-          <dlentry id="default_pool_max_requests">
-            <dt>
-              <codeph>default_pool_max_requests</codeph>
-            </dt>
-            <dd>
-              <indexterm audience="hidden">--default_pool_max_requests</indexterm>
-              <b>Purpose:</b> Maximum number of concurrent outstanding requests
-              allowed to run before incoming requests are queued. Because this
-              limit applies cluster-wide, but each Impala node makes independent
-              decisions to run queries immediately or queue them, it is a soft
-              limit; the overall number of concurrent queries might be slightly
-              higher during times of heavy load. A negative value indicates no
-              limit. Ignored if <codeph>fair_scheduler_config_path</codeph> and
-                <codeph>llama_site_path</codeph> are set. <p>
-                <b>Type:</b>
-                <codeph>int64</codeph>
-              </p>
-              <p>
-                <b>Default:</b>
-                <ph rev="2.5.0">-1, meaning unlimited (prior to <keyword keyref="impala25_full"/> the default was 200)</ph>
-              </p>
-            </dd>
-          </dlentry>
-          <dlentry id="default_pool_max_queued">
-            <dt>
-              <codeph>default_pool_max_queued</codeph>
-            </dt>
-            <dd>
-              <indexterm audience="hidden">--default_pool_max_queued</indexterm>
-              <b>Purpose:</b> Maximum number of requests allowed to be queued
-              before rejecting requests. Because this limit applies
-              cluster-wide, but each Impala node makes independent decisions to
-              run queries immediately or queue them, it is a soft limit; the
-              overall number of queued queries might be slightly higher during
-              times of heavy load. A negative value or 0 indicates requests are
-              always rejected once the maximum concurrent requests are
-              executing. Ignored if <codeph>fair_scheduler_config_path</codeph>
-              and <codeph>llama_site_path</codeph> are set. <p>
-                <b>Type:</b>
-                <codeph>int64</codeph>
-              </p>
-              <p>
-                <b>Default:</b>
-                <ph rev="2.5.0">unlimited</ph>
-              </p>
-            </dd>
-          </dlentry>
-          <dlentry id="default_pool_mem_limit">
-            <dt>
-              <codeph>default_pool_mem_limit</codeph>
-            </dt>
-            <dd>
-              <indexterm audience="hidden">--default_pool_mem_limit</indexterm>
-              <b>Purpose:</b> Maximum amount of memory (across the entire
-              cluster) that all outstanding requests in this pool can use before
-              new requests to this pool are queued. Specified in bytes,
-              megabytes, or gigabytes by a number followed by the suffix
-                <codeph>b</codeph> (optional), <codeph>m</codeph>, or
-                <codeph>g</codeph>, either uppercase or lowercase. You can
-              specify floating-point values for megabytes and gigabytes, to
-              represent fractional numbers such as <codeph>1.5</codeph>. You can
-              also specify it as a percentage of the physical memory by
-              specifying the suffix <codeph>%</codeph>. 0 or no setting
-              indicates no limit. Defaults to bytes if no unit is given. Because
-              this limit applies cluster-wide, but each Impala node makes
-              independent decisions to run queries immediately or queue them, it
-              is a soft limit; the overall memory used by concurrent queries
-              might be slightly higher during times of heavy load. Ignored if
-                <codeph>fair_scheduler_config_path</codeph> and
-                <codeph>llama_site_path</codeph> are set. <note
-                conref="../shared/impala_common.xml#common/admission_compute_stats"/>
-              <p conref="../shared/impala_common.xml#common/type_string"/>
-              <p>
-                <b>Default:</b>
-                <codeph>""</codeph> (empty string, meaning unlimited) </p>
-            </dd>
-          </dlentry>
-          <dlentry id="disable_pool_max_requests">
-            <dt>
-              <codeph>disable_pool_max_requests</codeph>
-            </dt>
-            <dd>
-              <indexterm audience="hidden">--disable_pool_max_requests</indexterm>
-              <b>Purpose:</b> Disables all per-pool limits on the maximum number
-              of running requests. <p>
-                <b>Type:</b> Boolean </p>
-              <p>
-                <b>Default:</b>
-                <codeph>false</codeph>
-              </p>
-            </dd>
-          </dlentry>
-          <dlentry id="disable_pool_mem_limits">
-            <dt>
-              <codeph>disable_pool_mem_limits</codeph>
-            </dt>
-            <dd>
-              <indexterm audience="hidden">--disable_pool_mem_limits</indexterm>
-              <b>Purpose:</b> Disables all per-pool mem limits. <p>
-                <b>Type:</b> Boolean </p>
-              <p>
-                <b>Default:</b>
-                <codeph>false</codeph>
-              </p>
-            </dd>
-          </dlentry>
-          <dlentry id="fair_scheduler_allocation_path">
-            <dt>
-              <codeph>fair_scheduler_allocation_path</codeph>
-            </dt>
-            <dd>
-              <indexterm audience="hidden"
-                >--fair_scheduler_allocation_path</indexterm>
-              <b>Purpose:</b> Path to the fair scheduler allocation file
-                (<codeph>fair-scheduler.xml</codeph>). <p
-                conref="../shared/impala_common.xml#common/type_string"/>
-              <p>
-                <b>Default:</b>
-                <codeph>""</codeph> (empty string) </p>
-              <p>
-                <b>Usage notes:</b> Admission control only uses a small subset
-                of the settings that can go in this file, as described below.
-                For details about all the Fair Scheduler configuration settings,
-                see the <xref
-                  href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html#Configuration"
-                  scope="external" format="html">Apache wiki</xref>. </p>
-            </dd>
-          </dlentry>
-          <dlentry id="llama_site_path">
-            <dt>
-              <codeph>llama_site_path</codeph>
-            </dt>
-            <dd>
-              <indexterm audience="hidden">--llama_site_path</indexterm>
-              <b>Purpose:</b> Path to the configuration file used by admission
-              control (<codeph>llama-site.xml</codeph>). If set,
-                <codeph>fair_scheduler_allocation_path</codeph> must also be
-              set. <p conref="../shared/impala_common.xml#common/type_string"/>
-              <p>
-                <b>Default:</b>
-                <codeph>""</codeph> (empty string) </p>
-              <p>
-                <b>Usage notes:</b> Admission control only uses a few of the
-                settings that can go in this file, as described below. </p>
-            </dd>
-          </dlentry>
-        </dl>
-      </section>
+      <p> The limits imposed by admission control are decentralized
+          <q>soft</q> limits. Each Impala coordinator node makes its own
+        decisions about whether to allow queries to run immediately or to queue
+        them. These decisions rely on information passed back and forth between
+        nodes by the StateStore service. If a sudden surge in requests causes
+        more queries than anticipated to run concurrently, then the throughput
+        could decrease due to queries spilling to disk or contending for
+        resources. Or queries could be cancelled if they exceed the
+          <codeph>MEM_LIMIT</codeph> setting while running. </p>
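A minimal sketch of why these decentralized limits are soft: each coordinator admits queries against its own, possibly stale, view of cluster state, so concurrent admissions on different nodes can briefly overshoot the configured limit. This is a hypothetical model, not Impala code:

```python
# Hypothetical model of decentralized "soft" limits. Each coordinator
# checks only its last-known view of cluster state; with no state yet
# exchanged, concurrent admissions can exceed the cluster-wide limit.
class Coordinator:
    def __init__(self, cluster_max_running):
        self.cluster_max_running = cluster_max_running
        self.known_running = 0  # stale until the next statestore update

    def try_admit(self):
        """Admit if, per this node's view, the cluster limit is not hit."""
        if self.known_running < self.cluster_max_running:
            self.known_running += 1
            return True
        return False

# Two coordinators, cluster-wide limit of 2, no state exchanged yet:
a, b = Coordinator(2), Coordinator(2)
admitted = [c.try_admit() for c in (a, b) for _ in range(2)]
print(sum(admitted))  # 4: each node admitted 2, overshooting the limit
```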
+      <p> In <cmdname>impala-shell</cmdname>, you can also specify which
+        resource pool to direct queries to by setting the
+          <codeph>REQUEST_POOL</codeph> query option. </p>
+      <p> To see how admission control works for particular queries, examine the
+        profile output or the summary output for the query. <ul>
+          <li>Profile<p>The information is available through the
+                <codeph>PROFILE</codeph> statement in
+                <cmdname>impala-shell</cmdname> immediately after running a
+              query in the shell, on the <uicontrol>queries</uicontrol> page of
+              the Impala debug web UI, or in the Impala log file (basic
+              information at log level 1, more detailed information at log level
+              2). </p><p>The profile output contains details about the admission
+              decision, such as whether the query was queued or not and which
+              resource pool it was assigned to. It also includes the estimated
+              and actual memory usage for the query, so you can fine-tune the
+              configuration for the memory limits of the resource pools.
+            </p></li>
+          <li>Summary<p>Starting in <keyword keyref="impala31"/>, the
+              information is available in <cmdname>impala-shell</cmdname> when
+              the <codeph>LIVE_PROGRESS</codeph> or
+                <codeph>LIVE_SUMMARY</codeph> query option is set to
+                <codeph>TRUE</codeph>.</p><p>You can also start an
+                <codeph>impala-shell</codeph> session with the
+                <codeph>--live_progress</codeph> or
+                <codeph>--live_summary</codeph> flags to monitor all queries in
+              that <codeph>impala-shell</codeph> session.</p><p>The summary
+              output includes the queuing status: whether the query was queued
+              and, if so, the most recent reason it was queued.</p></li>
+        </ul></p>
+      <p> For details about all the Fair Scheduler configuration settings, see
+          <xref keyref="FairScheduler">Fair Scheduler Configuration</xref>, in
+        particular the tags such as <codeph>&lt;queue&gt;</codeph> and
+          <codeph>&lt;aclSubmitApps&gt;</codeph> to map users and groups to
+        particular resource pools (queues). </p>
     </conbody>
-
-    <concept id="admission_config_manual">
-
-      <title>Configuring Admission Control Using the Command Line</title>
-
-      <conbody>
-
-        <p>
-          To configure admission control, use a combination of startup options for the Impala daemon and edit
-          or create the configuration files <filepath>fair-scheduler.xml</filepath> and
-            <filepath>llama-site.xml</filepath>.
-        </p>
-
-        <p>
-          For a straightforward configuration using a single resource pool named <codeph>default</codeph>, you can
-          specify configuration options on the command line and skip the <filepath>fair-scheduler.xml</filepath>
-          and <filepath>llama-site.xml</filepath> configuration files.
-        </p>
-
-        <p> For an advanced configuration with multiple resource pools using
-          different settings:<ol>
-            <li>Set up the <filepath>fair-scheduler.xml</filepath> and
-                <filepath>llama-site.xml</filepath> configuration files
-              manually.</li>
-            <li>Provide the paths to each one using the
-                <cmdname>impalad</cmdname> command-line options,
-                <codeph>--fair_scheduler_allocation_path</codeph> and
-                <codeph>--llama_site_path</codeph> respectively. </li>
-          </ol></p>
-
-        <p> The Impala admission control feature uses the Fair Scheduler
-          configuration settings to determine how to map users and groups to
-          different resource pools. For example, you might set up different
-          resource pools with separate memory limits, and maximum number of
-          concurrent and queued queries, for different categories of users
-          within your organization. For details about all the Fair Scheduler
-          configuration settings, see the <xref
-            href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html#Configuration"
-            scope="external" format="html">Apache wiki</xref>. </p>
-
-        <p> The Impala admission control feature uses a small subset of possible
-          settings from the <filepath>llama-site.xml</filepath> configuration
-          file: </p>
-
-<codeblock>llama.am.throttling.maximum.placed.reservations.<varname>queue_name</varname>
-llama.am.throttling.maximum.queued.reservations.<varname>queue_name</varname>
-<ph rev="2.5.0 IMPALA-2538">impala.admission-control.pool-default-query-options.<varname>queue_name</varname>
-impala.admission-control.pool-queue-timeout-ms.<varname>queue_name</varname></ph>
-</codeblock>
-
-        <p rev="2.5.0 IMPALA-2538"> The
-            <codeph>impala.admission-control.pool-queue-timeout-ms</codeph>
-          setting specifies the timeout value for this pool in milliseconds. </p>
-        <p rev="2.5.0 IMPALA-2538"
-            >The<codeph>impala.admission-control.pool-default-query-options</codeph>
-          settings designates the default query options for all queries that run
-          in this pool. Its argument value is a comma-delimited string of
-          'key=value' pairs, <codeph>'key1=val1,key2=val2, ...'</codeph>. For
-          example, this is where you might set a default memory limit for all
-          queries in the pool, using an argument such as
-            <codeph>MEM_LIMIT=5G</codeph>. </p>
-
-        <p rev="2.5.0 IMPALA-2538">
-          The <codeph>impala.admission-control.*</codeph> configuration settings are available in
-          <keyword keyref="impala25_full"/> and higher.
-        </p>
-
-      </conbody>
-    </concept>
-
-    <concept id="admission_examples">
-
-      <title>Example of Admission Control Configuration</title>
-
-      <conbody>
-
-        <p> Here are sample <filepath>fair-scheduler.xml</filepath> and
-            <filepath>llama-site.xml</filepath> files that define resource pools
-            <codeph>root.default</codeph>, <codeph>root.development</codeph>,
-          and <codeph>root.production</codeph>. These files define resource
-          pools for Impala admission control and are separate from the similar
-            <codeph>fair-scheduler.xml</codeph>that defines resource pools for
-          YARN.</p>
-
-        <p>
-          <b>fair-scheduler.xml:</b>
-        </p>
-
-        <p>
-          Although Impala does not use the <codeph>vcores</codeph> value, you must still specify it to satisfy
-          YARN requirements for the file contents.
-        </p>
-
-        <p>
-          Each <codeph>&lt;aclSubmitApps&gt;</codeph> tag (other than the one for <codeph>root</codeph>) contains
-          a comma-separated list of users, then a space, then a comma-separated list of groups; these are the
-          users and groups allowed to submit Impala statements to the corresponding resource pool.
-        </p>
-
-        <p>
-          If you leave the <codeph>&lt;aclSubmitApps&gt;</codeph> element empty for a pool, nobody can submit
-          directly to that pool; child pools can specify their own <codeph>&lt;aclSubmitApps&gt;</codeph> values
-          to authorize users and groups to submit to those pools.
-        </p>
-
-        <codeblock>&lt;allocations>
-
-    &lt;queue name="root">
-        &lt;aclSubmitApps> &lt;/aclSubmitApps>
-        &lt;queue name="default">
-            &lt;maxResources>50000 mb, 0 vcores&lt;/maxResources>
-            &lt;aclSubmitApps>*&lt;/aclSubmitApps>
-        &lt;/queue>
-        &lt;queue name="development">
-            &lt;maxResources>200000 mb, 0 vcores&lt;/maxResources>
-            &lt;aclSubmitApps>user1,user2 dev,ops,admin&lt;/aclSubmitApps>
-        &lt;/queue>
-        &lt;queue name="production">
-            &lt;maxResources>1000000 mb, 0 vcores&lt;/maxResources>
-            &lt;aclSubmitApps> ops,admin&lt;/aclSubmitApps>
-        &lt;/queue>
-    &lt;/queue>
-    &lt;queuePlacementPolicy>
-        &lt;rule name="specified" create="false"/>
-        &lt;rule name="default" />
-    &lt;/queuePlacementPolicy>
-&lt;/allocations>
-
-</codeblock>
-
-        <p>
-          <b>llama-site.xml:</b>
-        </p>
-
-        <codeblock rev="2.5.0 IMPALA-2538">
-&lt;?xml version="1.0" encoding="UTF-8"?>
-&lt;configuration>
-  &lt;property>
-    &lt;name>llama.am.throttling.maximum.placed.reservations.root.default&lt;/name>
-    &lt;value>10&lt;/value>
-  &lt;/property>
-  &lt;property>
-    &lt;name>llama.am.throttling.maximum.queued.reservations.root.default&lt;/name>
-    &lt;value>50&lt;/value>
-  &lt;/property>
-  &lt;property>
-    &lt;name>impala.admission-control.pool-default-query-options.root.default&lt;/name>
-    &lt;value>mem_limit=128m,query_timeout_s=20,max_io_buffers=10&lt;/value>
-  &lt;/property>
-  &lt;property>
-    &lt;name>impala.admission-control.pool-queue-timeout-ms.root.default&lt;/name>
-    &lt;value>30000&lt;/value>
-  &lt;/property>
-  &lt;property>
-    &lt;name>impala.admission-control.max-query-mem-limit.root.default.regularPool&lt;/name>
-    &lt;value>1610612736&lt;/value>&lt;!--1.5GB-->
-  &lt;/property>
-  &lt;property>
-    &lt;name>impala.admission-control.min-query-mem-limit.root.default.regularPool&lt;/name>
-    &lt;value>52428800&lt;/value>&lt;!--50MB-->
-  &lt;/property>
-  &lt;property>
-    &lt;name>impala.admission-control.clamp-mem-limit-query-option.root.default.regularPool&lt;/name>
-    &lt;value>true&lt;/value>
-  &lt;/property>
-</codeblock>
-
-      </conbody>
-    </concept>
-
-<!-- End Config -->
-
-  <concept id="admission_guidelines">
-      <title>Guidelines for Using Admission Control</title>
-      <prolog>
-        <metadata>
-          <data name="Category" value="Planning"/>
-          <data name="Category" value="Guidelines"/>
-          <data name="Category" value="Best Practices"/>
-        </metadata>
-      </prolog>
-      <conbody>
-        <p> The limits imposed by admission control are de-centrally managed
-            <q>soft</q> limits. Each Impala coordinator node makes its own
-          decisions about whether to allow queries to run immediately or to
-          queue them. These decisions rely on information passed back and forth
-          between nodes by the StateStore service. If a sudden surge in requests
-          causes more queries than anticipated to run concurrently, then the
-          throughput could decrease due to queries spilling to disk or
-          contending for resources. Or queries could be cancelled if they exceed
-          the <codeph>MEM_LIMIT</codeph> setting while running. </p>
-        <p>
-          In <cmdname>impala-shell</cmdname>, you can also specify which
-          resource pool to direct queries to by setting the
-            <codeph>REQUEST_POOL</codeph> query option.
-        </p>
-        <p> To see how admission control works for particular queries, examine
-          the profile output or the summary output for the query. <ul>
-            <li>Profile<p>The information is available through the
-                  <codeph>PROFILE</codeph> statement in
-                  <cmdname>impala-shell</cmdname> immediately after running a
-                query in the shell, on the <uicontrol>queries</uicontrol> page
-                of the Impala debug web UI, or in the Impala log file (basic
-                information at log level 1, more detailed information at log
-                level 2). </p><p>The profile output contains details about the
-                admission decision, such as whether the query was queued or not
-                and which resource pool it was assigned to. It also includes the
-                estimated and actual memory usage for the query, so you can
-                fine-tune the configuration for the memory limits of the
-                resource pools. </p></li>
-            <li>Summary<p>Starting in <keyword keyref="impala31"/>, the
-                information is available in <cmdname>impala-shell</cmdname> when
-                the <codeph>LIVE_PROGRESS</codeph> or
-                  <codeph>LIVE_SUMMARY</codeph> query option is set to
-                  <codeph>TRUE</codeph>.</p><p>You can also start an
-                  <codeph>impala-shell</codeph> session with the
-                  <codeph>--live_progress</codeph> or
-                  <codeph>--live_summary</codeph> flags to monitor all queries
-                in that <codeph>impala-shell</codeph> session.</p><p>The summary
-                output includes the queuing status consisting of whether the
-                query was queued and what was the latest queuing
-              reason.</p></li>
-          </ul></p>
-        <p>
-          For details about all the Fair Scheduler configuration settings, see
-            <xref keyref="FairScheduler">Fair Scheduler Configuration</xref>, in
-          particular the tags such as <codeph>&lt;queue&gt;</codeph> and
-            <codeph>&lt;aclSubmitApps&gt;</codeph> to map users and groups to
-          particular resource pools (queues).
-        </p>
-      </conbody>
-    </concept>
-</concept>
+  </concept>
 </concept>

http://git-wip-us.apache.org/repos/asf/impala/blob/56dea843/docs/topics/impala_admission_config.xml
----------------------------------------------------------------------
diff --git a/docs/topics/impala_admission_config.xml b/docs/topics/impala_admission_config.xml
new file mode 100644
index 0000000..97f76f1
--- /dev/null
+++ b/docs/topics/impala_admission_config.xml
@@ -0,0 +1,357 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
+<concept rev="1.3.0" id="admission_control">
+
+  <title>Configuring Admission Control</title>
+  <prolog>
+    <metadata>
+      <data name="Category" value="Impala"/>
+      <data name="Category" value="Querying"/>
+      <data name="Category" value="Admission Control"/>
+      <data name="Category" value="Resource Management"/>
+    </metadata>
+  </prolog>
+  <conbody>
+    <p>Impala includes features that balance and maximize resources in your
+        <keyword keyref="hadoop_distro"/> cluster. This topic describes how you
+      can improve efficiency of your a <keyword keyref="hadoop_distro"/> cluster
+      using those features.</p>
+    <p> The configuration options for admission control range from the simple (a
+      single resource pool with a single set of options) to the complex
+      (multiple resource pools with different options, each pool handling
+      queries for a different set of users and groups). </p>
+  </conbody>
+  <concept id="concept_bz4_vxz_jgb">
+    <title>Configuring Admission Control Using the Command Line</title>
+    <conbody>
+      <p> To configure admission control, use a combination of startup options
+        for the Impala daemon and edit or create the configuration files
+          <filepath>fair-scheduler.xml</filepath> and
+          <filepath>llama-site.xml</filepath>. </p>
+      <p> For a straightforward configuration using a single resource pool named
+          <codeph>default</codeph>, you can specify configuration options on the
+        command line and skip the <filepath>fair-scheduler.xml</filepath> and
+          <filepath>llama-site.xml</filepath> configuration files. </p>
+      <p> For an advanced configuration with multiple resource pools using
+        different settings:<ol>
+          <li>Set up the <filepath>fair-scheduler.xml</filepath> and
+              <filepath>llama-site.xml</filepath> configuration files
+            manually.</li>
+          <li>Provide the paths to each one using the <cmdname>impalad</cmdname>
+            command-line options,
+              <codeph>--fair_scheduler_allocation_path</codeph> and
+              <codeph>--llama_site_path</codeph> respectively. </li>
+        </ol></p>
+      <p> The Impala admission control feature uses the Fair Scheduler
+        configuration settings to determine how to map users and groups to
+        different resource pools. For example, you might set up different
+        resource pools with separate memory limits, and maximum number of
+        concurrent and queued queries, for different categories of users within
+        your organization. For details about all the Fair Scheduler
+        configuration settings, see the <xref
+          href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html#Configuration"
+          scope="external" format="html">Apache wiki</xref>. </p>
+      <p> The Impala admission control feature uses a small subset of possible
+        settings from the <filepath>llama-site.xml</filepath> configuration
+        file: </p>
+      <codeblock>llama.am.throttling.maximum.placed.reservations.<varname>queue_name</varname>
+llama.am.throttling.maximum.queued.reservations.<varname>queue_name</varname>
+<ph rev="2.5.0 IMPALA-2538">impala.admission-control.pool-default-query-options.<varname>queue_name</varname>
+impala.admission-control.pool-queue-timeout-ms.<varname>queue_name</varname></ph>
+</codeblock>
+      <p rev="2.5.0 IMPALA-2538"> The
+          <codeph>impala.admission-control.pool-queue-timeout-ms</codeph>
+        setting specifies the timeout value for this pool in milliseconds. </p>
+      <p rev="2.5.0 IMPALA-2538"
+          >The <codeph>impala.admission-control.pool-default-query-options</codeph>
+        setting designates the default query options for all queries that run
+        in this pool. Its argument value is a comma-delimited string of
+        'key=value' pairs, <codeph>'key1=val1,key2=val2, ...'</codeph>. For
+        example, this is where you might set a default memory limit for all
+        queries in the pool, using an argument such as
+          <codeph>MEM_LIMIT=5G</codeph>. </p>
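As an illustration of the comma-delimited 'key=value' format, here is a small Python sketch that splits such a value into an options dict (illustrative only; Impala parses this setting internally, and the uppercasing of option names here is a presentation choice, not Impala behavior):

```python
def parse_pool_default_query_options(value):
    # Split a comma-delimited 'key1=val1,key2=val2,...' string, as used by
    # impala.admission-control.pool-default-query-options, into a dict.
    # Illustrative only; option names are uppercased for display.
    options = {}
    for pair in value.split(","):
        pair = pair.strip()
        if not pair:
            continue
        key, _, val = pair.partition("=")
        options[key.strip().upper()] = val.strip()
    return options

print(parse_pool_default_query_options("MEM_LIMIT=5G,QUERY_TIMEOUT_S=20"))
# {'MEM_LIMIT': '5G', 'QUERY_TIMEOUT_S': '20'}
```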
+      <p rev="2.5.0 IMPALA-2538"> The
+          <codeph>impala.admission-control.*</codeph> configuration settings are
+        available in <keyword keyref="impala25_full"/> and higher. </p>
+    </conbody>
+    <concept id="concept_cz4_vxz_jgb">
+      <title>Example of Admission Control Configuration</title>
+      <conbody>
+        <p> Here are sample <filepath>fair-scheduler.xml</filepath> and
+            <filepath>llama-site.xml</filepath> files that define resource pools
+            <codeph>root.default</codeph>, <codeph>root.development</codeph>,
+          and <codeph>root.production</codeph>. These files define resource
+          pools for Impala admission control and are separate from the similar
+            <codeph>fair-scheduler.xml</codeph> that defines resource pools for
+          YARN.</p>
+        <p>
+          <b>fair-scheduler.xml:</b>
+        </p>
+        <p> Although Impala does not use the <codeph>vcores</codeph> value, you
+          must still specify it to satisfy YARN requirements for the file
+          contents. </p>
+        <p> Each <codeph>&lt;aclSubmitApps&gt;</codeph> tag (other than the one
+          for <codeph>root</codeph>) contains a comma-separated list of users,
+          then a space, then a comma-separated list of groups; these are the
+          users and groups allowed to submit Impala statements to the
+          corresponding resource pool. </p>
+        <p> If you leave the <codeph>&lt;aclSubmitApps&gt;</codeph> element
+          empty for a pool, nobody can submit directly to that pool; child pools
+          can specify their own <codeph>&lt;aclSubmitApps&gt;</codeph> values to
+          authorize users and groups to submit to those pools. </p>
+        <codeblock>&lt;allocations>
+
+    &lt;queue name="root">
+        &lt;aclSubmitApps> &lt;/aclSubmitApps>
+        &lt;queue name="default">
+            &lt;maxResources>50000 mb, 0 vcores&lt;/maxResources>
+            &lt;aclSubmitApps>*&lt;/aclSubmitApps>
+        &lt;/queue>
+        &lt;queue name="development">
+            &lt;maxResources>200000 mb, 0 vcores&lt;/maxResources>
+            &lt;aclSubmitApps>user1,user2 dev,ops,admin&lt;/aclSubmitApps>
+        &lt;/queue>
+        &lt;queue name="production">
+            &lt;maxResources>1000000 mb, 0 vcores&lt;/maxResources>
+            &lt;aclSubmitApps> ops,admin&lt;/aclSubmitApps>
+        &lt;/queue>
+    &lt;/queue>
+    &lt;queuePlacementPolicy>
+        &lt;rule name="specified" create="false"/>
+        &lt;rule name="default" />
+    &lt;/queuePlacementPolicy>
+&lt;/allocations>
+
+</codeblock>
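The aclSubmitApps format used in the sample above (a comma-separated user list, a space, then a comma-separated group list, with "*" meaning everyone and an empty value meaning nobody) can be sketched in Python. This is a simplified illustration of the format as described, not YARN's actual ACL implementation:

```python
def can_submit(acl, user, groups):
    # Evaluate an <aclSubmitApps> value: "users groups", where each side
    # is comma-separated, "*" allows everyone, and an empty string allows
    # nobody to submit directly to the pool. Simplified illustration.
    if acl.strip() == "*":
        return True
    if not acl.strip():
        return False
    users_part, _, groups_part = acl.partition(" ")
    allowed_users = set(filter(None, users_part.split(",")))
    allowed_groups = set(filter(None, groups_part.split(",")))
    return user in allowed_users or bool(allowed_groups & set(groups))

# Checks against the ACLs in the sample fair-scheduler.xml above:
print(can_submit("user1,user2 dev,ops,admin", "user3", ["dev"]))  # True
print(can_submit(" ops,admin", "user1", ["dev"]))                 # False
print(can_submit("*", "anyone", []))                              # True
print(can_submit("", "someone", ["admin"]))                       # False
```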
+        <p>
+          <b>llama-site.xml:</b>
+        </p>
+        <codeblock rev="2.5.0 IMPALA-2538">
+&lt;?xml version="1.0" encoding="UTF-8"?>
+&lt;configuration>
+  &lt;property>
+    &lt;name>llama.am.throttling.maximum.placed.reservations.root.default&lt;/name>
+    &lt;value>10&lt;/value>
+  &lt;/property>
+  &lt;property>
+    &lt;name>llama.am.throttling.maximum.queued.reservations.root.default&lt;/name>
+    &lt;value>50&lt;/value>
+  &lt;/property>
+  &lt;property>
+    &lt;name>impala.admission-control.pool-default-query-options.root.default&lt;/name>
+    &lt;value>mem_limit=128m,query_timeout_s=20,max_io_buffers=10&lt;/value>
+  &lt;/property>
+  &lt;property>
+    &lt;name>impala.admission-control.<b>pool-queue-timeout-ms</b>.root.default&lt;/name>
+    &lt;value>30000&lt;/value>
+  &lt;/property>
+  &lt;property>
+    &lt;name>impala.admission-control.<b>max-query-mem-limit</b>.root.default.regularPool&lt;/name>
+    &lt;value>1610612736&lt;/value>&lt;!--1.5GB-->
+  &lt;/property>
+  &lt;property>
+    &lt;name>impala.admission-control.<b>min-query-mem-limit</b>.root.default.regularPool&lt;/name>
+    &lt;value>52428800&lt;/value>&lt;!--50MB-->
+  &lt;/property>
+  &lt;property>
+    &lt;name>impala.admission-control.<b>clamp-mem-limit-query-option</b>.root.default.regularPool&lt;/name>
+    &lt;value>true&lt;/value>
+  &lt;/property>
+</codeblock>
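The raw byte values in the llama-site.xml sample above correspond to the sizes given in the inline comments; a quick arithmetic check in Python:

```python
# Verify the byte values used in the llama-site.xml sample above
# against their inline comments (1 GB = 1024**3 bytes, 1 MB = 1024**2).
max_query_mem = int(1.5 * 1024**3)   # 1.5 GB
min_query_mem = 50 * 1024**2         # 50 MB
print(max_query_mem)  # 1610612736
print(min_query_mem)  # 52428800
```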
+      </conbody>
+    </concept>
+  </concept>
+  <concept id="concept_zy4_vxz_jgb">
+    <title>Configuring Cluster-wide Admission Control</title>
+    <prolog>
+      <metadata>
+        <data name="Category" value="Configuring"/>
+      </metadata>
+    </prolog>
+    <conbody>
+      <note type="important"> These settings only apply if you enable admission
+        control but leave dynamic resource pools disabled. In <keyword
+          keyref="impala25_full"/> and higher, we recommend that you set up
+        dynamic resource pools and customize the settings for each pool as
+        described in <xref href="#concept_bz4_vxz_jgb" format="dita"/>.</note>
+      <p> The following Impala configuration options let you adjust the settings
+        of the admission control feature. When supplying the options on the
+          <cmdname>impalad</cmdname> command line, prepend the option name with
+          <codeph>--</codeph>. </p>
+      <dl>
+        <dlentry>
+          <dt>
+            <codeph>queue_wait_timeout_ms</codeph>
+          </dt>
+          <dd>
+            <b>Purpose:</b> Maximum amount of time (in milliseconds) that a
+            request waits to be admitted before timing out. <p>
+              <b>Type:</b>
+              <codeph>int64</codeph>
+            </p>
+            <p>
+              <b>Default:</b>
+              <codeph>60000</codeph>
+            </p>
+          </dd>
+        </dlentry>
+        <dlentry>
+          <dt>
+            <codeph>default_pool_max_requests</codeph>
+          </dt>
+          <dd>
+            <b>Purpose:</b> Maximum number of concurrent outstanding requests
+            allowed to run before incoming requests are queued. Because this
+            limit applies cluster-wide, but each Impala node makes independent
+            decisions to run queries immediately or queue them, it is a soft
+            limit; the overall number of concurrent queries might be slightly
+            higher during times of heavy load. A negative value indicates no
+            limit. Ignored if <codeph>fair_scheduler_config_path</codeph> and
+              <codeph>llama_site_path</codeph> are set. <p>
+              <b>Type:</b>
+              <codeph>int64</codeph>
+            </p>
+            <p>
+              <b>Default:</b>
+              <ph rev="2.5.0">-1, meaning unlimited (prior to <keyword
+                  keyref="impala25_full"/> the default was 200)</ph>
+            </p>
+          </dd>
+        </dlentry>
+        <dlentry>
+          <dt>
+            <codeph>default_pool_max_queued</codeph>
+          </dt>
+          <dd>
+            <b>Purpose:</b> Maximum number of requests allowed to be queued
+            before rejecting requests. Because this limit applies cluster-wide,
+            but each Impala node makes independent decisions to run queries
+            immediately or queue them, it is a soft limit; the overall number of
+            queued queries might be slightly higher during times of heavy load.
+            A negative value or 0 indicates requests are always rejected once
+            the maximum concurrent requests are executing. Ignored if
+              <codeph>fair_scheduler_config_path</codeph> and
+              <codeph>llama_site_path</codeph> are set. <p>
+              <b>Type:</b>
+              <codeph>int64</codeph>
+            </p>
+            <p>
+              <b>Default:</b>
+              <ph rev="2.5.0">unlimited</ph>
+            </p>
+          </dd>
+        </dlentry>
+        <dlentry>
+          <dt>
+            <codeph>default_pool_mem_limit</codeph>
+          </dt>
+          <dd>
+            <b>Purpose:</b> Maximum amount of memory (across the entire cluster)
+            that all outstanding requests in this pool can use before new
+            requests to this pool are queued. Specified in bytes, megabytes, or
+            gigabytes by a number followed by the suffix <codeph>b</codeph>
+            (optional), <codeph>m</codeph>, or <codeph>g</codeph>, either
+            uppercase or lowercase. You can specify floating-point values for
+            megabytes and gigabytes, to represent fractional numbers such as
+              <codeph>1.5</codeph>. You can also specify it as a percentage of
+            the physical memory by specifying the suffix <codeph>%</codeph>. 0
+            or no setting indicates no limit. Defaults to bytes if no unit is
+            given. Because this limit applies cluster-wide, but each Impala node
+            makes independent decisions to run queries immediately or queue
+            them, it is a soft limit; the overall memory used by concurrent
+            queries might be slightly higher during times of heavy load. Ignored
+            if <codeph>fair_scheduler_config_path</codeph> and
+              <codeph>llama_site_path</codeph> are set. <note
+              conref="../shared/impala_common.xml#common/admission_compute_stats"/>
+            <p conref="../shared/impala_common.xml#common/type_string"/>
+            <p>
+              <b>Default:</b>
+              <codeph>""</codeph> (empty string, meaning unlimited) </p>
+          </dd>
+        </dlentry>
+        <dlentry>
+          <dt>
+            <codeph>disable_pool_max_requests</codeph>
+          </dt>
+          <dd>
+            <b>Purpose:</b> Disables all per-pool limits on the maximum number
+            of running requests. <p>
+              <b>Type:</b> Boolean </p>
+            <p>
+              <b>Default:</b>
+              <codeph>false</codeph>
+            </p>
+          </dd>
+        </dlentry>
+        <dlentry>
+          <dt>
+            <codeph>disable_pool_mem_limits</codeph>
+          </dt>
+          <dd>
+            <b>Purpose:</b> Disables all per-pool mem limits. <p>
+              <b>Type:</b> Boolean </p>
+            <p>
+              <b>Default:</b>
+              <codeph>false</codeph>
+            </p>
+          </dd>
+        </dlentry>
+        <dlentry>
+          <dt>
+            <codeph>fair_scheduler_allocation_path</codeph>
+          </dt>
+          <dd>
+            <b>Purpose:</b> Path to the fair scheduler allocation file
+              (<codeph>fair-scheduler.xml</codeph>). <p
+              conref="../shared/impala_common.xml#common/type_string"/>
+            <p>
+              <b>Default:</b>
+              <codeph>""</codeph> (empty string) </p>
+            <p>
+              <b>Usage notes:</b> Admission control only uses a small subset of
+              the settings that can go in this file, as described below. For
+              details about all the Fair Scheduler configuration settings, see
+              the <xref
+                href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html#Configuration"
+                scope="external" format="html">Apache wiki</xref>. </p>
+          </dd>
+        </dlentry>
+        <dlentry>
+          <dt>
+            <codeph>llama_site_path</codeph>
+          </dt>
+          <dd>
+            <b>Purpose:</b> Path to the configuration file used by admission
+            control (<codeph>llama-site.xml</codeph>). If set,
+              <codeph>fair_scheduler_allocation_path</codeph> must also be set.
+              <p conref="../shared/impala_common.xml#common/type_string"/>
+            <p>
+              <b>Default:</b>
+              <codeph>""</codeph> (empty string) </p>
+            <p>
+              <b>Usage notes:</b> Admission control only uses a few of the
+              settings that can go in this file, as described below. </p>
+          </dd>
+        </dlentry>
+      </dl>
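The default_pool_mem_limit value format described above (a number with an optional b/m/g suffix, a percentage of physical memory with "%", bytes if no unit is given, and 0 or empty meaning no limit) can be sketched as a small parser. This is an illustration of the stated rules, not Impala's implementation:

```python
def parse_mem_spec(spec, physical_mem_bytes=None):
    # Parse a memory spec per the default_pool_mem_limit rules above:
    # optional b/m/g suffix (case-insensitive), '%' of physical memory,
    # bytes if no unit, and 0 or empty meaning no limit. Sketch only.
    spec = spec.strip().lower()
    if not spec or spec == "0":
        return 0  # no limit
    if spec.endswith("%"):
        return int(float(spec[:-1]) / 100 * physical_mem_bytes)
    units = {"b": 1, "m": 1024**2, "g": 1024**3}
    if spec[-1] in units:
        return int(float(spec[:-1]) * units[spec[-1]])
    return int(spec)  # defaults to bytes if no unit is given

print(parse_mem_spec("1.5g"))   # 1610612736
print(parse_mem_spec("50m"))    # 52428800
print(parse_mem_spec("25%", physical_mem_bytes=64 * 1024**3))  # 17179869184
```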
+    </conbody>
+  </concept>
+</concept>

http://git-wip-us.apache.org/repos/asf/impala/blob/56dea843/docs/topics/impala_dedicated_coordinator.xml
----------------------------------------------------------------------
diff --git a/docs/topics/impala_dedicated_coordinator.xml b/docs/topics/impala_dedicated_coordinator.xml
index 73aa2cf..414bb95 100644
--- a/docs/topics/impala_dedicated_coordinator.xml
+++ b/docs/topics/impala_dedicated_coordinator.xml
@@ -124,11 +124,9 @@ under the License.
       </li>
     </ul>
 
-    <p>
-      If such scalability bottlenecks occur, in CDH 5.12 / Impala 2.9 and higher, you can assign
-      one dedicated role to each Impala daemon host, either as a coordinator or as an executor,
-      to address the issues.
-    </p>
+    <p> If such scalability bottlenecks occur, in Impala 2.9 and higher, you can
+      assign one dedicated role to each Impala daemon host, either as a
+      coordinator or as an executor, to address the issues. </p>
 
     <ul>
       <li>

http://git-wip-us.apache.org/repos/asf/impala/blob/56dea843/docs/topics/impala_resource_management.xml
----------------------------------------------------------------------
diff --git a/docs/topics/impala_resource_management.xml b/docs/topics/impala_resource_management.xml
index 4a2a58a..38f83bb 100644
--- a/docs/topics/impala_resource_management.xml
+++ b/docs/topics/impala_resource_management.xml
@@ -20,7 +20,7 @@ under the License.
 <!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
 <concept rev="1.2" id="resource_management">
 
-  <title>Resource Management for Impala</title>
+  <title>Resource Management</title>
   <prolog>
     <metadata>
       <data name="Category" value="Impala"/>
@@ -31,199 +31,11 @@ under the License.
       <data name="Category" value="Data Analysts"/>
     </metadata>
   </prolog>
-
   <conbody>
-
-    <note conref="../shared/impala_common.xml#common/impala_llama_obsolete"/>
-
-    <p>
-      You can limit the CPU and memory resources used by Impala, to manage and prioritize workloads on clusters
-      that run jobs from many Hadoop components.
-    </p>
-
-    <p outputclass="toc inpage"/>
+    <p>Impala includes features that balance and maximize resources in your
+        <keyword keyref="hadoop_distro"/> cluster. This topic describes how you
+      can improve the efficiency of your <keyword keyref="hadoop_distro"/>
+      cluster using the admission control feature. See the following topics for
+      an overview and the configuration steps for admission control.</p>
   </conbody>
-
-  <concept id="rm_enforcement">
-
-    <title>How Resource Limits Are Enforced</title>
-  <prolog>
-    <metadata>
-      <data name="Category" value="Concepts"/>
-    </metadata>
-  </prolog>
-
-    <conbody>
-
-      <p> Limits on memory usage are enforced by Impala's process memory limit
-        with the <codeph>MEM_LIMIT</codeph> query option. The admission control
-        feature checks this setting to decide how many queries can be safely run
-        at the same time. Then the Impala daemon enforces the limit by
-        activating the spill-to-disk mechanism when necessary, or cancelling a
-        query altogether if the limit is exceeded at runtime. </p>
-
-    </conbody>
-  </concept>
-
-<!--
-  <concept id="rm_enable">
-
-    <title>Enabling Resource Management for Impala</title>
-  <prolog>
-    <metadata>
-      <data name="Category" value="Configuring"/>
-      <data name="Category" value="Starting and Stopping"/>
-    </metadata>
-  </prolog>
-
-    <conbody>
-
-      <p>
-        To enable resource management for Impala, first you <xref href="#rm_prereqs">set up the YARN
-        service for your cluster</xref>. Then you <xref href="#rm_options">add startup options and customize
-        resource management settings</xref> for the Impala services.
-      </p>
-    </conbody>
-
-    <concept id="rm_prereqs">
-
-      <title>Required Setup for Resource Management with Impala</title>
-
-      <conbody>
-
-        <p>
-          YARN is the general-purpose service that manages resources for many Hadoop components within a
-          <keyword keyref="distro"/> cluster.
-        </p>
-
-      </conbody>
-    </concept>
-
-    <concept id="rm_options">
-
-      <title>impalad Startup Options for Resource Management</title>
-
-      <conbody>
-
-        <p id="resource_management_impalad_options">
-          The following startup options for <cmdname>impalad</cmdname> enable resource management and customize its
-          parameters for your cluster configuration:
-          <ul>
-            <li>
-              <codeph>-enable_rm</codeph>: Whether to enable resource management or not, either
-              <codeph>true</codeph> or <codeph>false</codeph>. The default is <codeph>false</codeph>. None of the
-              other resource management options have any effect unless <codeph>-enable_rm</codeph> is turned on.
-            </li>
-
-            <li>
-              <codeph>-cgroup_hierarchy_path</codeph>: Path where YARN will create cgroups for granted
-              resources. Impala assumes that the cgroup for an allocated container is created in the path
-              '<varname>cgroup_hierarchy_path</varname> + <varname>container_id</varname>'.
-            </li>
-
-            <li rev="1.4.0">
-              <codeph>-rm_always_use_defaults</codeph>: If this Boolean option is enabled, Impala ignores computed
-              estimates and always obtains the default memory and CPU allocation settings at the start of the
-              query. These default estimates are approximately 2 CPUs and 4 GB of memory, possibly varying slightly
-              depending on cluster size, workload, and so on. Where practical, enable
-              <codeph>-rm_always_use_defaults</codeph> whenever resource management is used, and relying on these
-              default values (that is, leaving out the two following options).
-            </li>
-
-            <li rev="1.4.0">
-              <codeph>-rm_default_memory=<varname>size</varname></codeph>: Optionally sets the default estimate for
-              memory usage for each query. You can use suffixes such as M and G for megabytes and gigabytes, the
-              same as with the <xref href="impala_mem_limit.xml#mem_limit">MEM_LIMIT</xref> query option. Only has
-              an effect when <codeph>-rm_always_use_defaults</codeph> is also enabled.
-            </li>
-
-            <li rev="1.4.0">
-              <codeph>-rm_default_cpu_cores</codeph>: Optionally sets the default estimate for number of virtual
-              CPU cores for each query. Only has an effect when <codeph>-rm_always_use_defaults</codeph> is also
-              enabled.
-            </li>
-          </ul>
-        </p>
-
-      </conbody>
-    </concept>
--->
-
-    <concept id="rm_query_options">
-
-      <title>impala-shell Query Options for Resource Management</title>
-  <prolog>
-    <metadata>
-      <data name="Category" value="Impala Query Options"/>
-    </metadata>
-  </prolog>
-
-      <conbody>
-
-        <p>
-          Before issuing SQL statements through the <cmdname>impala-shell</cmdname> interpreter, you can use the
-          <codeph>SET</codeph> command to configure the following parameters related to resource management:
-        </p>
-
-        <ul id="ul_nzt_twf_jp">
-          <li>
-            <xref href="impala_explain_level.xml#explain_level"/>
-          </li>
-
-          <li>
-            <xref href="impala_mem_limit.xml#mem_limit"/>
-          </li>
-
-        </ul>
-      </conbody>
-    </concept>
-
-<!-- Parent topic is going away, so former subtopic is hoisted up a level.
-  </concept>
--->
-
-  <concept id="rm_limitations">
-
-    <title>Limitations of Resource Management for Impala</title>
-
-    <conbody>
-
-<!-- Conditionalizing some content here with audience="hidden" because there are already some XML comments
-     inside the list, so not practical to enclose the whole thing in XML comments. -->
-
-      <p audience="hidden">
-        Currently, Impala has the following limitations for resource management of Impala queries:
-      </p>
-
-      <ul audience="hidden">
-        <li>
-          Table statistics are required, and column statistics are highly valuable, for Impala to produce accurate
-          estimates of how much memory to request from YARN. See
-          <xref href="impala_perf_stats.xml#perf_table_stats"/> and
-          <xref href="impala_perf_stats.xml#perf_column_stats"/> for instructions on gathering both kinds of
-          statistics, and <xref href="impala_explain.xml#explain"/> for the extended <codeph>EXPLAIN</codeph>
-          output where you can check that statistics are available for a specific table and set of columns.
-        </li>
-
-        <li>
-          If the Impala estimate of required memory is lower than is actually required for a query, Impala
-          dynamically expands the amount of requested memory.
-<!--          Impala will cancel the query when it exceeds the requested memory size. -->
-          Queries might still be cancelled if the reservation expansion fails, for example if there are
-          insufficient remaining resources for that pool, or the expansion request takes long enough that it
-          exceeds the query timeout interval, or because of YARN preemption.
-<!--          This could happen in some cases with complex queries, even when table and column statistics are available. -->
-          You can see the actual memory usage after a failed query by issuing a <codeph>PROFILE</codeph> command in
-          <cmdname>impala-shell</cmdname>. Specify a larger memory figure with the <codeph>MEM_LIMIT</codeph>
-          query option and re-try the query.
-        </li>
-      </ul>
-
-      <p rev="2.0.0">
-        The <codeph>MEM_LIMIT</codeph> query option, and the other resource-related query options, are settable
-        through the ODBC or JDBC interfaces in Impala 2.0 and higher. This is a former limitation that is now
-        lifted.
-      </p>
-    </conbody>
-  </concept>
 </concept>