Posted to commits@hawq.apache.org by yo...@apache.org on 2016/11/01 23:23:04 UTC

[3/6] incubator-hawq-docs git commit: uppercase for SQL keywords

uppercase for SQL keywords


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/commit/c40bcad1
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/tree/c40bcad1
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/diff/c40bcad1

Branch: refs/heads/develop
Commit: c40bcad1d8de923ad7c872e6a534bd2b73ea2594
Parents: 015cf58
Author: Lisa Owen <lo...@pivotal.io>
Authored: Tue Nov 1 12:51:49 2016 -0700
Committer: Lisa Owen <lo...@pivotal.io>
Committed: Tue Nov 1 12:51:49 2016 -0700

----------------------------------------------------------------------
 ...ckingUpandRestoringHAWQDatabases.html.md.erb | 24 ++++++++++----------
 admin/ClusterExpansion.html.md.erb              |  6 ++---
 ...esandHighAvailabilityEnabledHDFS.html.md.erb |  8 +++----
 admin/monitor.html.md.erb                       | 10 ++++----
 clientaccess/kerberos.html.md.erb               |  2 +-
 clientaccess/roles_privs.html.md.erb            | 12 +++++-----
 ddl/ddl-database.html.md.erb                    |  2 +-
 ddl/ddl-tablespace.html.md.erb                  |  4 ++--
 plext/builtin_langs.html.md.erb                 |  2 +-
 plext/using_pljava.html.md.erb                  |  4 ++--
 plext/using_plpython.html.md.erb                |  6 ++---
 plext/using_plr.html.md.erb                     |  2 +-
 .../ConfigureResourceManagement.html.md.erb     |  8 +++----
 resourcemgmt/ResourceManagerStatus.html.md.erb  |  6 ++---
 resourcemgmt/ResourceQueues.html.md.erb         |  4 ++--
 15 files changed, 50 insertions(+), 50 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/c40bcad1/admin/BackingUpandRestoringHAWQDatabases.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/BackingUpandRestoringHAWQDatabases.html.md.erb b/admin/BackingUpandRestoringHAWQDatabases.html.md.erb
index e9bd526..78b0dec 100644
--- a/admin/BackingUpandRestoringHAWQDatabases.html.md.erb
+++ b/admin/BackingUpandRestoringHAWQDatabases.html.md.erb
@@ -191,10 +191,10 @@ This example of using `gpfdist` backs up and restores a 1TB `tpch` database. To
     master_host$ psql tpch
     ```
     ```sql
-    tpch=# create writable external table wext_orders (like orders)
-    tpch-# location('gpfdist://sdw1:8080/orders1.csv', 'gpfdist://sdw1:8081/orders2.csv') format 'CSV';
-    tpch=# create writable external table wext_lineitem (like lineitem)
-    tpch-# location('gpfdist://sdw1:8080/lineitem1.csv', 'gpfdist://sdw1:8081/lineitem2.csv') format 'CSV';
+    tpch=# CREATE WRITABLE EXTERNAL TABLE wext_orders (LIKE orders)
+    tpch-# LOCATION('gpfdist://sdw1:8080/orders1.csv', 'gpfdist://sdw1:8081/orders2.csv') FORMAT 'CSV';
+    tpch=# CREATE WRITABLE EXTERNAL TABLE wext_lineitem (LIKE lineitem)
+    tpch-# LOCATION('gpfdist://sdw1:8080/lineitem1.csv', 'gpfdist://sdw1:8081/lineitem2.csv') FORMAT 'CSV';
     ```
 
 The sample shows two tables in the `tpch` database, `orders` and `lineitem`, and creates two corresponding writable external tables. Specify a location for each `gpfdist` instance in the `LOCATION` clause. This sample uses the CSV text format, but you can also choose other delimited text formats. For more information, see the `CREATE EXTERNAL TABLE` SQL command.
@@ -202,10 +202,10 @@ This example of using `gpfdist` backs up and restores a 1TB `tpch` database. To
 4.  Unload data to the external tables:
 
     ```sql
-    tpch=# begin;
-    tpch=# insert into wext_orders select * from orders;
-    tpch=# insert into wext_lineitem select * from lineitem;
-    tpch=# commit;
+    tpch=# BEGIN;
+    tpch=# INSERT INTO wext_orders SELECT * FROM orders;
+    tpch=# INSERT INTO wext_lineitem SELECT * FROM lineitem;
+    tpch=# COMMIT;
     ```
 
 5.  **\(Optional\)** Stop `gpfdist` servers to free ports for other processes:
@@ -242,8 +242,8 @@ This example of using `gpfdist` backs up and restores a 1TB `tpch` database. To
     ```
     
     ```sql
-    tpch2=# create external table rext_orders (like orders) location('gpfdist://sdw1:8080/orders1.csv', 'gpfdist://sdw1:8081/orders2.csv') format 'CSV';
-    tpch2=# create external table rext_lineitem (like lineitem) location('gpfdist://sdw1:8080/lineitem1.csv', 'gpfdist://sdw1:8081/lineitem2.csv') format 'CSV';
+    tpch2=# CREATE EXTERNAL TABLE rext_orders (LIKE orders) LOCATION('gpfdist://sdw1:8080/orders1.csv', 'gpfdist://sdw1:8081/orders2.csv') FORMAT 'CSV';
+    tpch2=# CREATE EXTERNAL TABLE rext_lineitem (LIKE lineitem) LOCATION('gpfdist://sdw1:8080/lineitem1.csv', 'gpfdist://sdw1:8081/lineitem2.csv') FORMAT 'CSV';
     ```
 
     **Note:** The `LOCATION` clause is the same as that of the writable external table above.
@@ -251,8 +251,8 @@ This example of using `gpfdist` backs up and restores a 1TB `tpch` database. To
 4.  Load data back from external tables:
 
     ```sql
-    tpch2=# insert into orders select * from rext_orders;
-    tpch2=# insert into lineitem select * from rext_lineitem;
+    tpch2=# INSERT INTO orders SELECT * FROM rext_orders;
+    tpch2=# INSERT INTO lineitem SELECT * FROM rext_lineitem;
     ```
 
 5.  Run the `ANALYZE` command after data loading:

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/c40bcad1/admin/ClusterExpansion.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/ClusterExpansion.html.md.erb b/admin/ClusterExpansion.html.md.erb
index e4800d3..d3d921b 100644
--- a/admin/ClusterExpansion.html.md.erb
+++ b/admin/ClusterExpansion.html.md.erb
@@ -97,7 +97,7 @@ For example purposes in this procedure, we are adding a new node named `sdw4`.
     ```
     
     ```sql
-    postgres=# select * from gp_segment_configuration;
+    postgres=# SELECT * FROM gp_segment_configuration;
     ```
     
     ```
@@ -164,7 +164,7 @@ For example purposes in this procedure, we are adding a new node named `sdw4`.
     ```
     
     ```sql
-    postgres=# select * from gp_segment_configuration ;
+    postgres=# SELECT * FROM gp_segment_configuration;
     ```
     
     ```
@@ -203,7 +203,7 @@ For example purposes in this procedure, we are adding a new node named `sdw4`.
     ```
     
     ```sql
-    postgres=# select gp_metadata_cache_clear();
+    postgres=# SELECT gp_metadata_cache_clear();
     ```
 
 16. After expansion, if the new size of your cluster is greater than or equal to 4 \(#nodes >= 4\), change the value of the `output.replace-datanode-on-failure` HDFS parameter in `hdfs-client.xml` to `false`.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/c40bcad1/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb b/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb
index 3147033..b725207 100644
--- a/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb
+++ b/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb
@@ -34,9 +34,9 @@ To move the filespace location to a HA-enabled HDFS location, you must move the
     SELECT
         fsname, fsedbid, fselocation
     FROM
-        pg_filespace as sp, pg_filespace_entry as entry, pg_filesystem as fs
+        pg_filespace AS sp, pg_filespace_entry AS entry, pg_filesystem AS fs
     WHERE
-        sp.fsfsys = fs.oid and fs.fsysname = 'hdfs' and sp.oid = entry.fsefsoid
+        sp.fsfsys = fs.oid AND fs.fsysname = 'hdfs' AND sp.oid = entry.fsefsoid
     ORDER BY
         entry.fsedbid;
     ```
@@ -91,7 +91,7 @@ When you enable HA HDFS, you are changing the HAWQ catalog and persistent table
 1.  Disconnect all workload connections. Check the active connection with:
 
     ```shell
-    $ psql -p ${PGPORT} -c "select * from pg_catalog.pg_stat_activity" -d template1
+    $ psql -p ${PGPORT} -c "SELECT * FROM pg_catalog.pg_stat_activity" -d template1
     ```
     where `${PGPORT}` corresponds to the port number you optionally customized for HAWQ master. 
     
@@ -99,7 +99,7 @@ When you enable HA HDFS, you are changing the HAWQ catalog and persistent table
 2.  Issue a checkpoint:
 
     ```shell
-    $ psql -p ${PGPORT} -c "checkpoint" -d template1
+    $ psql -p ${PGPORT} -c "CHECKPOINT" -d template1
     ```
 
 3.  Shut down the HAWQ cluster:

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/c40bcad1/admin/monitor.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/monitor.html.md.erb b/admin/monitor.html.md.erb
index d1fbf31..8395b99 100644
--- a/admin/monitor.html.md.erb
+++ b/admin/monitor.html.md.erb
@@ -57,7 +57,7 @@ The *hawq\_toolkit* administrative schema contains several views for checking th
 ```sql
 => SELECT relname AS name, sotdsize AS size, sotdtoastsize
 AS toast, sotdadditionalsize AS other
-FROM hawq_size_of_table_disk as sotd, pg_class
+FROM hawq_size_of_table_disk AS sotd, pg_class
 WHERE sotd.sotdoid=pg_class.oid ORDER BY relname;
 ```
 
@@ -66,7 +66,7 @@ WHERE sotd.sotdoid=pg_class.oid ORDER BY relname;
 The *hawq\_toolkit* administrative schema contains a number of views for checking index sizes. To see the total size of all index\(es\) on a table, use the *hawq\_size\_of\_all\_table\_indexes* view. To see the size of a particular index, use the *hawq\_size\_of\_index* view. The index sizing views list tables and indexes by object ID \(not by name\). To check the size of an index by name, you must look up the relation name \(`relname`\) in the *pg\_class* table. For example:
 
 ```sql
-=> SELECT soisize, relname as indexname
+=> SELECT soisize, relname AS indexname
 FROM pg_class, hawq_size_of_index
 WHERE pg_class.oid=hawq_size_of_index.soioid
 AND pg_class.relkind='i';
@@ -81,9 +81,9 @@ HAWQ tracks various metadata information in its system catalogs about the object
 You can use the system views *pg\_stat\_operations* and *pg\_stat\_partition\_operations* to look up actions performed on an object, such as a table. For example, to see the actions performed on a table, such as when it was created and when it was last analyzed:
 
 ```sql
-=> SELECT schemaname as schema, objname as table,
-usename as role, actionname as action,
-subtype as type, statime as time
+=> SELECT schemaname AS schema, objname AS table,
+usename AS role, actionname AS action,
+subtype AS type, statime AS time
 FROM pg_stat_operations
 WHERE objname='cust';
 ```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/c40bcad1/clientaccess/kerberos.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/kerberos.html.md.erb b/clientaccess/kerberos.html.md.erb
index a609471..75f583e 100644
--- a/clientaccess/kerberos.html.md.erb
+++ b/clientaccess/kerberos.html.md.erb
@@ -212,7 +212,7 @@ After you have set up Kerberos on the HAWQ master, you can configure HAWQ to use
 1.  Create a HAWQ administrator role in the database `template1` for the Kerberos principal that is used as the database administrator. The following example uses `gpadmin/kerberos-gpdb`.
 
     ``` bash
-    $ psql template1 -c 'create role "gpadmin/kerberos-gpdb" login superuser;'
+    $ psql template1 -c 'CREATE ROLE "gpadmin/kerberos-gpdb" LOGIN SUPERUSER;'
 
     ```
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/c40bcad1/clientaccess/roles_privs.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/roles_privs.html.md.erb b/clientaccess/roles_privs.html.md.erb
index 4503951..2738dd3 100644
--- a/clientaccess/roles_privs.html.md.erb
+++ b/clientaccess/roles_privs.html.md.erb
@@ -203,21 +203,21 @@ To set the `password_hash_algorithm` server parameter for an individual session:
 2.  Set the `password_hash_algorithm` to `SHA-256` \(or `SHA-256-FIPS` to use the FIPS-compliant libraries for SHA-256\):
 
     ``` sql
-    # set password_hash_algorithm = 'SHA-256'
+    # SET password_hash_algorithm = 'SHA-256'
     SET
     ```
 
     or:
 
     ``` sql
-    # set password_hash_algorithm = 'SHA-256-FIPS'
+    # SET password_hash_algorithm = 'SHA-256-FIPS'
     SET
     ```
 
 3.  Verify the setting:
 
     ``` sql
-    # show password_hash_algorithm;
+    # SHOW password_hash_algorithm;
     password_hash_algorithm
     ```
 
@@ -240,7 +240,7 @@ To set the `password_hash_algorithm` server parameter for an individual session:
 4.  Log in as a superuser and verify the password hash algorithm setting:
 
     ``` sql
-    # show password_hash_algorithm
+    # SHOW password_hash_algorithm
     password_hash_algorithm
     -------------------------------
     SHA-256-FIPS
@@ -249,7 +249,7 @@ To set the `password_hash_algorithm` server parameter for an individual session:
 5.  Create a new role with a password that has login privileges.
 
     ``` sql
-    create role testdb with password 'testdb12345#' LOGIN;
+    CREATE ROLE testdb WITH PASSWORD 'testdb12345#' LOGIN;
     ```
 
 6.  Change the client authentication method to allow for storage of SHA-256 encrypted passwords:
@@ -276,7 +276,7 @@ To set the `password_hash_algorithm` server parameter for an individual session:
     2.  Execute the following:
 
         ``` sql
-        # select rolpassword from pg_authid where rolname = 'testdb';
+        # SELECT rolpassword FROM pg_authid WHERE rolname = 'testdb';
         Rolpassword
         -----------
        sha256<64 hexadecimal characters>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/c40bcad1/ddl/ddl-database.html.md.erb
----------------------------------------------------------------------
diff --git a/ddl/ddl-database.html.md.erb b/ddl/ddl-database.html.md.erb
index 6f1be26..2ef9f9f 100644
--- a/ddl/ddl-database.html.md.erb
+++ b/ddl/ddl-database.html.md.erb
@@ -45,7 +45,7 @@ By default, a new database is created by cloning the standard system database te
 If you are working in the `psql` client program, you can use the `\l` meta-command to show the list of databases and templates in your HAWQ system. If using another client program and you are a superuser, you can query the list of databases from the `pg_database` system catalog table. For example:
 
 ``` sql
-=> SELECT datname from pg_database;
+=> SELECT datname FROM pg_database;
 ```
 
 ## <a id="topic7"></a>Altering a Database 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/c40bcad1/ddl/ddl-tablespace.html.md.erb
----------------------------------------------------------------------
diff --git a/ddl/ddl-tablespace.html.md.erb b/ddl/ddl-tablespace.html.md.erb
index 8ead2f0..8720665 100644
--- a/ddl/ddl-tablespace.html.md.erb
+++ b/ddl/ddl-tablespace.html.md.erb
@@ -134,8 +134,8 @@ These tablespaces use the system default filespace, `pg_system`, the data direct
 To see filespace information, look in the *pg\_filespace* and *pg\_filespace\_entry* catalog tables. You can join these tables with *pg\_tablespace* to see the full definition of a tablespace. For example:
 
 ``` sql
-=# SELECT spcname as tblspc, fsname as filespc,
-          fsedbid as seg_dbid, fselocation as datadir
+=# SELECT spcname AS tblspc, fsname AS filespc,
+          fsedbid AS seg_dbid, fselocation AS datadir
    FROM   pg_tablespace pgts, pg_filespace pgfs,
           pg_filespace_entry pgfse
    WHERE  pgts.spcfsoid=pgfse.fsefsoid

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/c40bcad1/plext/builtin_langs.html.md.erb
----------------------------------------------------------------------
diff --git a/plext/builtin_langs.html.md.erb b/plext/builtin_langs.html.md.erb
index e98486b..01891e8 100644
--- a/plext/builtin_langs.html.md.erb
+++ b/plext/builtin_langs.html.md.erb
@@ -22,7 +22,7 @@ gpadmin=# CREATE FUNCTION count_orders() RETURNS bigint AS $$
  SELECT count(*) FROM orders;
 $$ LANGUAGE SQL;
 CREATE FUNCTION
-gpadmin=# select count_orders();
+gpadmin=# SELECT count_orders();
  my_count 
 ----------
    830513

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/c40bcad1/plext/using_pljava.html.md.erb
----------------------------------------------------------------------
diff --git a/plext/using_pljava.html.md.erb b/plext/using_pljava.html.md.erb
index d19fbbe..bab31dc 100644
--- a/plext/using_pljava.html.md.erb
+++ b/plext/using_pljava.html.md.erb
@@ -136,7 +136,7 @@ Perform the following steps as the `gpadmin` user:
     To affect only the *current* database session, set the `pljava_classpath` configuration parameter at the `psql` prompt:
 	
 	 ``` sql
-	 psql> set pljava_classpath='myclasses.jar';
+	 psql> SET pljava_classpath='myclasses.jar';
 	 ```
 
     To affect *all* sessions, set the `pljava_classpath` server configuration parameter and restart the HAWQ cluster:
@@ -635,7 +635,7 @@ $ hawq restart cluster
 From the `psql` command line, run the following command to show the installed JAR files.
 
 ```shell
-psql# show pljava_classpath
+psql# SHOW pljava_classpath
 ```
 
 The following SQL commands create a table and define a Java function to test the method in the JAR file:

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/c40bcad1/plext/using_plpython.html.md.erb
----------------------------------------------------------------------
diff --git a/plext/using_plpython.html.md.erb b/plext/using_plpython.html.md.erb
index 5a9123c..970f966 100644
--- a/plext/using_plpython.html.md.erb
+++ b/plext/using_plpython.html.md.erb
@@ -144,7 +144,7 @@ In terms of performance, importing a Python module is an expensive operation and
 
 ```sql
 psql=#
-   CREATE FUNCTION pytest() returns text as $$ 
+   CREATE FUNCTION pytest() RETURNS text AS $$
       if 'mymodule' not in GD:
         import mymodule
         GD['mymodule'] = mymodule
@@ -434,8 +434,8 @@ This PL/Python UDF imports the NumPy module. The function returns SUCCESS if the
 
 ```sql
 CREATE OR REPLACE FUNCTION plpy_test(x int)
-returns text
-as $$
+RETURNS text
+AS $$
   try:
       from numpy import *
       return 'SUCCESS'

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/c40bcad1/plext/using_plr.html.md.erb
----------------------------------------------------------------------
diff --git a/plext/using_plr.html.md.erb b/plext/using_plr.html.md.erb
index 49d207f..367a1d0 100644
--- a/plext/using_plr.html.md.erb
+++ b/plext/using_plr.html.md.erb
@@ -28,7 +28,7 @@ The following `CREATE TABLE` command uses the `r_norm` function to populate the
 
 ```sql
 CREATE TABLE test_norm_var
-  AS SELECT id, r_norm(10,0,1) as x
+  AS SELECT id, r_norm(10,0,1) AS x
   FROM (SELECT generate_series(1,30:: bigint) AS ID) foo
   DISTRIBUTED BY (id);
 ```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/c40bcad1/resourcemgmt/ConfigureResourceManagement.html.md.erb
----------------------------------------------------------------------
diff --git a/resourcemgmt/ConfigureResourceManagement.html.md.erb b/resourcemgmt/ConfigureResourceManagement.html.md.erb
index 23fe860..1b66068 100644
--- a/resourcemgmt/ConfigureResourceManagement.html.md.erb
+++ b/resourcemgmt/ConfigureResourceManagement.html.md.erb
@@ -89,13 +89,13 @@ However, the changed resource quota for the virtual segment cannot exceed the re
 In the following example, when executing the next query statement, the HAWQ resource manager will attempt to allocate 10 virtual segments, each with a 256MB memory quota.
 
 ``` sql
-postgres=# set hawq_rm_stmt_vseg_memory='256mb';
+postgres=# SET hawq_rm_stmt_vseg_memory='256mb';
 SET
-postgres=# set hawq_rm_stmt_nvseg=10;
+postgres=# SET hawq_rm_stmt_nvseg=10;
 SET
-postgres=# create table t(i integer);
+postgres=# CREATE TABLE t(i integer);
 CREATE TABLE
-postgres=# insert into t values(1);
+postgres=# INSERT INTO t VALUES(1);
 INSERT 0 1
 ```
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/c40bcad1/resourcemgmt/ResourceManagerStatus.html.md.erb
----------------------------------------------------------------------
diff --git a/resourcemgmt/ResourceManagerStatus.html.md.erb b/resourcemgmt/ResourceManagerStatus.html.md.erb
index 07762a4..4029642 100644
--- a/resourcemgmt/ResourceManagerStatus.html.md.erb
+++ b/resourcemgmt/ResourceManagerStatus.html.md.erb
@@ -12,7 +12,7 @@ Any query execution requiring resource allocation from HAWQ resource manager has
 The following is an example query to obtain connection track status:
 
 ``` sql
-postgres=# select * from dump_resource_manager_status(1);
+postgres=# SELECT * FROM dump_resource_manager_status(1);
 ```
 
 ``` pre
@@ -59,7 +59,7 @@ Besides the information provided in pg\_resqueue\_status, you can also get YARN
 The following is a query to obtain resource queue status:
 
 ``` sql
-postgres=# select * from dump_resource_manager_status(2);
+postgres=# SELECT * FROM dump_resource_manager_status(2);
 ```
 
 ``` pre
@@ -104,7 +104,7 @@ QUEUSE(alloc=(0 MB,0.000000 CORE):request=(0 MB,0.000000 CORE):inuse=(0 MB,0.000
 Use the following query to obtain the status of a HAWQ segment.
 
 ``` sql
-postgres=# select * from dump_resource_manager_status(3);
+postgres=# SELECT * FROM dump_resource_manager_status(3);
 ```
 
 ``` pre

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/c40bcad1/resourcemgmt/ResourceQueues.html.md.erb
----------------------------------------------------------------------
diff --git a/resourcemgmt/ResourceQueues.html.md.erb b/resourcemgmt/ResourceQueues.html.md.erb
index 2c9ea48..cd019c6 100644
--- a/resourcemgmt/ResourceQueues.html.md.erb
+++ b/resourcemgmt/ResourceQueues.html.md.erb
@@ -54,7 +54,7 @@ The minimum value that can be configured is 3, and the maximum is 1024.
 To check the currently configured limit, you can execute the following command:
 
 ``` sql
-postgres=# show hawq_rm_nresqueue_limit;
+postgres=# SHOW hawq_rm_nresqueue_limit;
 ```
 
 ``` pre
@@ -164,7 +164,7 @@ The query displays all the attributes and their values of the selected resource
 You can also check the runtime status of existing resource queues by querying the `pg_resqueue_status` view:
 
 ``` sql
-postgres=# select * from pg_resqueue_status;
+postgres=# SELECT * FROM pg_resqueue_status;
 ```