Posted to commits@trafodion.apache.org by gt...@apache.org on 2016/11/03 06:05:31 UTC

[01/15] incubator-trafodion git commit: Major reorganization of the Client Installation Guide.

Repository: incubator-trafodion
Updated Branches:
  refs/heads/master 6862bf724 -> e26b20601


http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/sql_reference/src/asciidoc/index.adoc
----------------------------------------------------------------------
diff --git a/docs/sql_reference/src/asciidoc/index.adoc b/docs/sql_reference/src/asciidoc/index.adoc
index f9ffad6..03aeaf2 100644
--- a/docs/sql_reference/src/asciidoc/index.adoc
+++ b/docs/sql_reference/src/asciidoc/index.adoc
@@ -1,67 +1,70 @@
-////
-* @@@ START COPYRIGHT @@@                                                         
-*
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*   http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing,
-* software distributed under the License is distributed on an
-* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-* KIND, either express or implied.  See the License for the
-* specific language governing permissions and limitations
-* under the License.
-*
-* @@@ END COPYRIGHT @@@ 
-////
-
-= SQL Reference Manual
-:doctype: book
-:numbered:
-:toc: left
-:toclevels: 3
-:toc-title: Table of Contents
-:icons: font
-:iconsdir: icons
-:experimental:
-:source-language: text
-:revnumber: {project-version}
-:title-logo-image: {project-logo}
-:project-name: {project-name}
-
-:images: ../images
-:leveloffset: 1
-
-// The directory is called _chapters because asciidoctor skips direct
-// processing of files found in directories starting with an _. This
-// prevents each chapter being built as its own book.
-
-include::../../shared/license.txt[]
-<<<
-include::../../shared/acknowledgements.txt[]
-
-<<<
-include::../../shared/revisions.txt[]
-
-include::asciidoc/_chapters/about.adoc[]
-include::asciidoc/_chapters/introduction.adoc[]
-include::asciidoc/_chapters/sql_statements.adoc[]
-include::asciidoc/_chapters/sql_utilities.adoc[]
-include::asciidoc/_chapters/sql_language_elements.adoc[]
-include::asciidoc/_chapters/sql_clauses.adoc[]
-include::asciidoc/_chapters/sql_functions_and_expressions.adoc[]
-include::asciidoc/_chapters/olap_functions.adoc[]
-include::asciidoc/_chapters/runtime_stats.adoc[]
-include::asciidoc/_chapters/reserved_words.adoc[]
-include::asciidoc/_chapters/limits.adoc[]
-
-
-
-
-
+////
+* @@@ START COPYRIGHT @@@                                                         
+*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*   http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing,
+* software distributed under the License is distributed on an
+* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+* KIND, either express or implied.  See the License for the
+* specific language governing permissions and limitations
+* under the License.
+*
+* @@@ END COPYRIGHT @@@ 
+////
+
+= SQL Reference Manual
+:doctype: book
+:numbered:
+:toc: left
+:toclevels: 3
+:toc-title: Table of Contents
+:icons: font
+:iconsdir: icons
+:experimental:
+:source-language: text
+:revnumber: {project-version}
+:title-logo-image: {project-logo}
+:project-name: {project-name}
+
+:images: ../images
+:leveloffset: 1
+
+// The directory is called _chapters because asciidoctor skips direct
+// processing of files found in directories starting with an _. This
+// prevents each chapter being built as its own book.
+
+include::../../shared/license.txt[]
+
+<<<
+
+include::../../shared/acknowledgements.txt[]
+
+<<<
+
+include::../../shared/revisions.txt[]
+
+include::asciidoc/_chapters/about.adoc[]
+include::asciidoc/_chapters/introduction.adoc[]
+include::asciidoc/_chapters/sql_statements.adoc[]
+include::asciidoc/_chapters/sql_utilities.adoc[]
+include::asciidoc/_chapters/sql_language_elements.adoc[]
+include::asciidoc/_chapters/sql_clauses.adoc[]
+include::asciidoc/_chapters/sql_functions_and_expressions.adoc[]
+include::asciidoc/_chapters/olap_functions.adoc[]
+include::asciidoc/_chapters/runtime_stats.adoc[]
+include::asciidoc/_chapters/reserved_words.adoc[]
+include::asciidoc/_chapters/limits.adoc[]
+
+
+
+
+


[04/15] incubator-trafodion git commit: Major reorganization of the Client Installation Guide.

Posted by gt...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/sql_reference/src/asciidoc/_chapters/sql_language_elements.adoc
----------------------------------------------------------------------
diff --git a/docs/sql_reference/src/asciidoc/_chapters/sql_language_elements.adoc b/docs/sql_reference/src/asciidoc/_chapters/sql_language_elements.adoc
index e00218d..4bd94e8 100644
--- a/docs/sql_reference/src/asciidoc/_chapters/sql_language_elements.adoc
+++ b/docs/sql_reference/src/asciidoc/_chapters/sql_language_elements.adoc
@@ -1,4088 +1,4088 @@
-////
-/**
-* @@@ START COPYRIGHT @@@
-*
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*   http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing,
-* software distributed under the License is distributed on an
-* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-* KIND, either express or implied.  See the License for the
-* specific language governing permissions and limitations
-* under the License.
-*
-* @@@ END COPYRIGHT @@@
-*/
-////
-
-[[sql_language_elements]]
-= SQL Language Elements
-
-{project-name} SQL language elements, which include data types, expressions, functions, identifiers, literals, and
-predicates, occur within the syntax of SQL statements. The statement and command topics support the syntactical
-and semantic descriptions of the language elements in this section.
-
-[[_authorization_ids]]
-== Authorization IDs
-
-An authorization ID is used for an authorization operation. Authorization is the process of validating that a
-database user has permission to perform a specified SQL operation. Externally, the authorization ID is a regular
-or delimited case-insensitive identifier that can have a maximum of 128 characters. See
-<<case_insensitive_delimited_identifiers,Case-Insensitive Delimited Identifiers>>.
-Internally, the authorization ID is associated with a 32-bit number that the database generates and uses for
-efficient access and storage.
-
-All authorization IDs share the same name space. An authorization ID can be a database user name or a role name.
-Therefore, a database user and a role cannot share the same name.
-
-An authorization ID can be the PUBLIC authorization ID, which represents all present and future authorization IDs.
-An authorization ID cannot be SYSTEM, which is the implicit grantor of privileges to the creator of objects.
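-
-As an illustration, here is a minimal sketch (the role and user names
-are hypothetical): because users and roles share one name space, a role
-cannot reuse a name that is already registered for a database user, and
-vice versa.
-
-```
-CREATE ROLE sales;          -- SALES now occupies the shared name space
-GRANT ROLE sales TO user1;  -- grant the role to a hypothetical user
--- An attempt to register a database user named SALES would now fail.
-```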
-
-[[character_sets]]
-== Character Sets
-
-You can specify ISO88591 or UTF8 for a character column definition. The use of UTF8 permits you to store characters
-from many different languages.
-
-<<<
-[[columns]]
-== Columns
-
-A column is a vertical component of a table and is the relational representation of a field in a record. A column
-contains one data value for each row of the table.
-
-A column value is the smallest unit of data that can be selected from or updated in a table. Each column has a name
-that is an SQL identifier and is unique within the table or view that contains the column.
-
-[[column_references]]
-=== Column References
-
-A qualified column name, or column reference, is a column name qualified by the name of the table or view to which
-the column belongs, or by a correlation name.
-
-If a query refers to columns that have the same name but belong to different tables, you must use a qualified column
-name to refer to the columns within the query. You must also refer to a column by a qualified column name if you join
-a table with itself within a query to compare one row of the table with other rows in the same table.
-
-The syntax of a column reference or qualified column name is:
-
-```
-{table-name | view-name | correlation-name}.column-name
-```
-
-If you define a correlation name for a table in the FROM clause of a statement, you must use that correlation name if
-you need to qualify the column name within the statement.
-
-If you do not define an explicit correlation name in the FROM clause, you can qualify the column name with the name of
-the table or view that contains the column. See <<correlation_names,Correlation Names>>.
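-
-For example, assuming that the ORDERS and CUSTOMER tables used later in
-this section both contain a CUSTNUM column, each reference to that
-column must be qualified:
-
-```
-SELECT orders.ordernum, customer.custname
-FROM orders, customer
-WHERE orders.custnum = customer.custnum;
-```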
-
-<<<
-[[derived_column_names]]
-=== Derived Column Names
-
-A derived column is an SQL value expression that appears as an item in the select list of a SELECT statement. An explicit
-name for a derived column is an SQL identifier associated with the derived column. The syntax of a derived column name is:
-
-```
-column-expression [[AS] column-name]
-```
-
-The column expression can simply be a column reference. The expression is optionally followed by the AS keyword and the
-name of the derived column.
-
-If you do not assign a name to derived columns, the headings for unnamed columns in query result tables appear as (EXPR).
-Use the AS clause to assign names that are meaningful to you, which is important if you have more than one derived column
-in your select list.
-
-[[examples_of_derived_column_names]]
-==== Examples of Derived Column Names
-
-These two examples show how to use names for derived columns.
-
-* The first example shows (EXPR) as the column heading of the SELECT result table:
-+
-```
-SELECT AVG (salary) FROM persnl.employee;
-
-(EXPR)
-----------------
-49441.52
-
---- 1 row(s) selected.
-```
-
-* The second example shows AVERAGE SALARY as the column heading:
-+
-```
-SELECT AVG (salary) AS "AVERAGE SALARY"
-FROM persnl.employee;
-
-"AVERAGE SALARY"
-----------------
-49441.52
-
---- 1 row(s) selected.
-```
-
-[[column_default_settings]]
-=== Column Default Settings
-
-You can define specific default settings for columns when the table is created. The CREATE TABLE statement defines the
-default settings for columns within tables. The default setting for a column is the value inserted in a row when an INSERT
-statement omits a value for a particular column.
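-
-For example, in this sketch (the table and column names are
-hypothetical), the second and third columns receive their default
-values when the INSERT statement omits them:
-
-```
-CREATE TABLE persnl.device
-( device_id INT NOT NULL PRIMARY KEY
-, status    CHAR(8) DEFAULT 'ACTIVE'
-, created   DATE DEFAULT CURRENT_DATE
-);
-
--- STATUS and CREATED are omitted, so their defaults are inserted:
-INSERT INTO persnl.device (device_id) VALUES (100);
-```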
-
-[[constraints]]
-== Constraints
-
-An SQL constraint is an object that protects the integrity of data in a table by specifying a condition that all the
-values in a particular column or set of columns of the table must satisfy.
-
-{project-name} SQL enforces these constraints on SQL tables:
-
-[cols="20%,80%"]
-|===
-| CHECK       | Column or table constraint specifying a condition that must be satisfied for each row in the table.
-| FOREIGN KEY | Column or table constraint that specifies a referential constraint for the table, declaring that a
-column or set of columns (called a foreign key) in a table can contain only values that match those in a column or
-set of columns in the table specified in the REFERENCES clause.
-| NOT NULL    | Column constraint specifying the column cannot contain nulls.
-| PRIMARY KEY | Column or table constraint specifying the column or set of columns as the primary key for the table.
-| UNIQUE      | Column or table constraint that specifies that the column or set of columns cannot contain more than
-one occurrence of the same value or set of values.
-|=== 
-
-[[creating_or_adding_constraints_on_sql_tables]]
-=== Creating or Adding Constraints on SQL Tables
-
-To create constraints on an SQL table when you create the table, use the NOT NULL, UNIQUE, CHECK, FOREIGN KEY, or
-PRIMARY KEY clause of the CREATE TABLE statement.
-
-For more information on {project-name} SQL commands, see <<create_table_statement,CREATE TABLE Statement>> and
-<<alter_table_statement,ALTER TABLE Statement>>.
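-
-For example, this sketch (the table and column names are hypothetical)
-creates a table with several of these constraint clauses:
-
-```
-CREATE TABLE persnl.project2
-( projcode NUMERIC(4) NOT NULL PRIMARY KEY
-, projdesc VARCHAR(30) UNIQUE
-, empnum   NUMERIC(4) REFERENCES persnl.employee
-, CONSTRAINT valid_code CHECK (projcode > 0)
-);
-```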
-
-[[constraint_names]]
-=== Constraint Names
-
-When you create a constraint, you can specify a name for it or allow a name to be generated by {project-name} SQL.
-You can optionally specify both column and table constraint names. Constraint names are ANSI logical names.
-See <<database_object_names,Database Object Names>>. Constraint names are in the same name space as tables and
-views, so a constraint name cannot have the same name as a table or view.
-
-The name you specify can be fully qualified or not. If you specify the schema parts of the name, they must match
-those parts of the affected table and must be unique among table, view, and constraint names in that schema. If you
-omit the schema portion of the name you specify, {project-name} SQL expands the name by using the schema for the table.
-
-If you do not specify a constraint name, {project-name} SQL constructs an SQL identifier as the name for the constraint
-and qualifies it with the schema of the table. The identifier consists of the table name concatenated with a
-system-generated unique identifier.
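-
-For example, this sketch adds a named check constraint to the sample
-EMPLOYEE table (the constraint name is hypothetical); because the name
-is not fully qualified, it is expanded with the schema of the table:
-
-```
-ALTER TABLE persnl.employee
-ADD CONSTRAINT salary_not_negative CHECK (salary >= 0);
-```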
-
-[[correlation_names]]
-== Correlation Names
-
-A correlation name is a name you can associate with a table reference that is a table, view, or subquery in a SELECT
-statement to:
-
-* Distinguish a table or view from another table or view referred to in a statement
-* Distinguish different uses of the same table
-* Make the query shorter
-
-
-A correlation name can be explicit or implicit.
-
-[[explicit_correlation_names]]
-=== Explicit Correlation Names
-
-An explicit correlation name for a table reference is an SQL identifier associated with the table reference in the FROM
-clause of a SELECT statement. See <<identifiers,Identifiers>>. The correlation name must be unique within the FROM clause.
-For more information about the FROM clause, table references, and correlation names, see <<select_statement,SELECT Statement>>.
-
-The syntax of a correlation name for the different forms of a table reference within a FROM clause is the same:
-
-```
-{table | view | (query-expression)} [AS] correlation-name
-```
-
-A table or view is optionally followed by the AS keyword and the correlation name. A derived table, resulting from the
-evaluation of a query expression, must be followed by the AS keyword and the correlation name. An explicit correlation
-name is known only to the statement in which you define it. You can use the same identifier as a correlation name in
-another statement.
-
-[[implicit_correlation_names]]
-=== Implicit Correlation Names
-
-A table or view reference that has no explicit correlation name has an implicit correlation name. The implicit correlation
-name is the table or view name qualified with the schema names.
-
-You cannot use an implicit correlation name for a reference that has an explicit correlation name within the statement.
-
-[[examples_of_correlation_names]]
-=== Examples of Correlation Names
-
-This query refers to two tables, ORDERS and CUSTOMER, that contain columns named CUSTNUM. In the WHERE clause, one column
-reference is qualified by an implicit correlation name (ORDERS) and the other by an explicit correlation name (C):
-
-```
-SELECT ordernum, custname FROM orders, customer c
-WHERE orders.custnum = c.custnum AND orders.custnum = 543;
-```
-
-[[database_objects]]
-== Database Objects
-
-A database object is an SQL entity that exists in a name space. SQL statements can access {project-name} SQL database objects.
-The subsections listed below describe these {project-name} SQL database objects.
-
-* <<constraints,Constraints>>
-* <<indexes,Indexes>>
-* <<tables,Tables>>
-* <<views,Views>>
-
-[[ownership]]
-=== Ownership
-
-In {project-name} SQL, the creator of an object owns the object defined in the schema and has all privileges on the object.
-In addition, you can use the GRANT and REVOKE statements to grant access privileges for a table or view to specified users.
-
-For more information, see the <<grant_statement,GRANT Statement>> and <<revoke_statement,REVOKE Statement>>. For
-information on privileges on tables and views, see <<create_table_statement,CREATE TABLE Statement>> and
-<<create_view_statement,CREATE VIEW Statement>>.
-
-[[database_object_names]]
-== Database Object Names
-
-DML statements can refer to {project-name} SQL database objects. To refer to a database object in a statement, use an appropriate
-database object name. For information on the types of database objects see <<database_objects,Database Objects>>.
-
-<<<
-[[logical_names_for_sql_objects]]
-=== Logical Names for SQL Objects
-
-You may refer to an SQL table, view, constraint, library, function, or procedure by using a one-part, two-part, or three-part
-logical name, also called an ANSI name:
-
-```
-catalog-name.schema-name.object-name
-```
-
-In this three-part name, _catalog-name_ is the name of the catalog, which is TRAFODION for {project-name} SQL objects that map to
-HBase tables. _schema-name_ is the name of the schema, and _object-name_ is the simple name of the table, view, constraint,
-library, function, or procedure. Each of the parts is an SQL identifier. See <<identifiers,Identifiers>>.
-
-{project-name} SQL automatically qualifies an object name with a schema name unless you explicitly specify schema names with the
-object name. If you do not set a schema name for the session using a SET SCHEMA statement, the default schema is SEABASE,
-which exists in the TRAFODION catalog. See <<set_schema_statement,SET SCHEMA Statement>>. A one-part name _object-name_ is
-qualified implicitly with the default schema.
-
-You can qualify a column name in a {project-name} SQL statement by using a three-part, two-part, or one-part object name, or a
-correlation name.
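-
-For example, a sketch using the sample PERSNL schema (the column names
-are assumed):
-
-```
-SET SCHEMA trafodion.persnl;
-
--- The one-part name EMPLOYEE now resolves to TRAFODION.PERSNL.EMPLOYEE:
-SELECT last_name, salary FROM employee;
-
--- A fully qualified name resolves the same way from any schema:
-SELECT last_name, salary FROM trafodion.persnl.employee;
-```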
-
-[[sql_object_namespaces]]
-=== SQL Object Namespaces
-
-{project-name} SQL objects are organized in a hierarchical manner. Database objects exist in schemas, which are themselves
-contained in a catalog called TRAFODION. A catalog is a collection of schemas. Schema names must be unique within the catalog.
-
-Multiple objects with the same name can exist provided that each belongs to a different name space. {project-name} SQL supports these
-namespaces:
-
-* Index
-* Functions and procedures
-* Library
-* Schema label
-* Table value object (table, view, constraint)
-
-Objects in one schema can refer to objects in a different schema. Objects of a given name space are required to have
-unique names within a given schema.
-
-<<<
-[[data_types]]
-== Data Types
-
-{project-name} SQL data types are character, datetime, interval, or numeric (exact or approximate):
-
-[cols="2*"]
-|===
-| <<character_string_data_types,Character String Data Types>> | Fixed-length and variable-length character data types.
-| <<datetime_data_types,Datetime Data Types>>                 | DATE, TIME, and TIMESTAMP data types.
-| <<interval_data_types,Interval Data Types>>                 | Year-month intervals (years and months) and day-time intervals (days,
-hours, minutes, seconds, and fractions of a second).
-| <<numeric_data_types,Numeric Data Types>>                   | Exact and approximate numeric data types.
-|===
-
-Each column in a table is associated with a data type. You can use the CAST expression to convert data to the data type that you specify. For
-more information, see <<cast_expression,CAST Expression>>.
-
-The following table summarizes the {project-name} SQL data types:
-
-[cols="13%,29%,29%,29%",options="header"]
-|===
-| Type | SQL Designation | Description | Size or Range^1^
-| Fixed-length character | CHAR[ACTER]          | Fixed-length character data            | 1 to 32707 characters^2^
-|                        | NCHAR                | Fixed-length character data in predefined national character set | 1 to 32707 bytes^3^ ^6^
-|                        | NATIONAL CHAR[ACTER] | Fixed-length character data in predefined national character set | 1 to 32707 bytes^3^ ^6^
-| Variable-length character | VARCHAR                      | Variable-length ASCII character string | 1 to 32703 characters^4^
-|                           | CHAR[ACTER] VARYING          | Variable-length ASCII character string | 1 to 32703 characters^4^
-|                           | NCHAR VARYING                | Variable-length character data in predefined national character set | 1 to 32703 bytes^4^ ^7^
-|                           | NATIONAL CHAR[ACTER] VARYING | Variable-length character data in predefined national character set | 1 to 32703 characters^4^ ^7^
-| Numeric
-| NUMERIC (1,_scale_) to +
-NUMERIC (128,_scale_)
-| Binary number with optional scale; signed or unsigned for 1 to 9 digits
-| 1 to 128 digits; stored: +
-1 to 4 digits in 2 bytes +
- +
-5 to 9 digits in 4 bytes +
- +
-10 to 128 digits in 8-64 bytes, depending on precision
-|                           | SMALLINT                      | Binary integer; signed or unsigned    | 0 to 65535 unsigned, -32768 to +32767 signed; stored in 2 bytes
-|                           | INTEGER                       | Binary integer; signed or unsigned    | 0 to 4294967295 unsigned, -2147483648 to +2147483647 signed; stored in 4 bytes
-|                           | LARGEINT                      | Binary integer; signed only           | -2**63 to +(2**63)-1; stored in 8 bytes
-| Numeric (extended numeric precision) | NUMERIC (precision 19 to 128) | Binary integer; signed or unsigned    | Stored as multiple chunks of 16-bit integers, with a minimum storage
-length of 8 bytes.
-| Floating point number
-| FLOAT[(_precision_)]
-| Floating point number; precision designates from 1 through 52 bits of precision
-| +/- 2.2250738585072014e-308 through +/-1.7976931348623157e+308; stored in 8 bytes
-|                                      | REAL                          | Floating point number (32 bits)        | +/- 1.17549435e-38 through +/- 3.40282347e+38; stored in 4 bytes
-|
-| DOUBLE PRECISION
-| Floating-point numbers (64 bits) with 1 through 52 bits of precision (52 bits of binary precision and 11 bits of exponent)
-| +/- 2.2250738585072014e-308 through +/-1.7976931348623157e+308; stored in 8 bytes
-| Decimal number
-| DECIMAL (1,_scale_) to DECIMAL (18,_scale_)
-| Decimal number with optional scale; stored as ASCII characters; signed or unsigned for 1 to 9 digits; signed required for 10 or more digits
-| 1 to 18 digits. Byte length equals the number of digits. Sign is stored as the first bit of the leftmost byte.
-| Date-Time
-|
-| Point in time, using the Gregorian calendar and a 24 hour clock system. The five supported designations are listed below.
-| YEAR 0001-9999 +
-MONTH 1-12 +
-DAY 1-31 +
- +
-DAY constrained by MONTH and YEAR +
- +
-HOUR 0-23 +
-MINUTE 0-59 +
-SECOND 0-59 +
-FRACTION(n) 0-999999 +
- +
-in which n is the number of significant digits, from 1 to 6
-(default is 6; minimum is 1; maximum is 6). Actual database storage is
-incremental, as follows:
- +
-YEAR in 2 bytes +
-MONTH in 1 byte +
-DAY in 1 byte +
-HOUR in 1 byte +
-MINUTE in 1 byte +
-SECOND in 1 byte +
-FRACTION in 4 bytes
-| | DATE                         | Date                                   | Format as YYYY-MM-DD; actual database storage size is 4 bytes
-| | TIME                         | Time of day, 24 hour clock, no time precision | Format as HH:MM:SS; actual database storage size is 3 bytes
-| | TIME (with time precision)   | Time of day, 24 hour clock, with time precision | Format as HH:MM:SS.FFFFFF; actual database storage size is 7 bytes
-| | TIMESTAMP                    | Point in time, no time precision | Format as YYYY-MM-DD HH:MM:SS; actual database storage size is 7 bytes
-| | TIMESTAMP (with time precision) | Point in time, with time precision | Format as YYYY-MM-DD HH:MM:SS.FFFFFF; actual database storage size is 11 bytes
-| Interval | INTERVAL | Duration of time; value is in the YEAR/MONTH range or the DAY/HOUR/MINUTE/SECOND/FRACTION range
-| YEAR no constraint^5^ +
-MONTH 0-11 +
-DAY no constraint +
-HOUR 0-23 +
-MINUTE 0-59 +
-SECOND 0-59 +
-FRACTION(n) 0-999999 +
-in which n is the number of significant digits (default is 6; minimum is 1; maximum is 6); +
-stored in 2, 4, or 8 bytes depending on number of digits^2^
-|===
-
-* _scale_ is the number of digits to the right of the decimal point.
-* _precision_ specifies the allowed number of decimal digits.
-
-
-1. The size of a column that allows null values is 2 bytes larger than the size for the defined data type.
-2.  The maximum row size is 32708 bytes, but the actual row size is less than that because of bytes used by
-null indicators, varchar column length indicators, and actual data encoding.
-3.  Storage size is the same as that required by the CHAR data type, but the column stores only half as many characters,
-depending on the character set selected.
-4.  Storage size is reduced by 4 bytes for storage of the varying character length.
-5.  The maximum number of digits in an INTERVAL value is 18, including the digits in all INTERVAL fields of the value.
-Any INTERVAL field that is a starting field can have up to 18 digits minus the number of other digits in the INTERVAL value.
-6.  The maximum is 32707 if the national character set was specified at installation time to be ISO88591.
-The maximum is 16353 if the national character set was specified at installation time as UTF8.
-7.  The maximum is 32703 if the national character set was specified at installation time to be ISO88591.
-The maximum is 16351 if the national character set was specified at installation time as UTF8.
-
-
-<<<
-[[comparable_and_compatible_data_types]]
-=== Comparable and Compatible Data Types
-
-Two data types are comparable if a value of one data type can be compared to a value of the other data type.
-
-Two data types are compatible if a value of one data type can be assigned to a column of the other data type, and if
-columns of the two data types can be combined using arithmetic operations. Compatible data types are also comparable.
-
-Assignment and comparison are the basic operations of {project-name} SQL. Assignment operations are performed during the
-execution of INSERT and UPDATE statements. Comparison operations are performed during the execution of statements that
-include predicates, aggregate (or set) functions, and GROUP BY, HAVING, and ORDER BY clauses.
-
-The basic rule for both assignment and comparison is that the operands have compatible data types. Data types with
-different character sets cannot be compared without converting one character set to the other. However, the SQL compiler
-will usually generate the necessary code to do this conversion automatically.
-
-[[character_data_types]]
-==== Character Data Types
-
-Values of fixed and variable length character data types of the same character set are all character strings and are
-all mutually comparable and mutually assignable.
-
-When two strings are compared, the comparison is made with a temporary copy of the shorter string that has been padded
-on the right with blanks to have the same length as the longer string.
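-
-For example, in this sketch (the table is hypothetical), the literal is
-blank-padded to the length of the CHAR(10) value before the comparison,
-so the predicate is true:
-
-```
-CREATE TABLE t1 (c CHAR(10));
-INSERT INTO t1 VALUES ('JONES');
-
-SELECT * FROM t1 WHERE c = 'JONES';
-```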
-
-[[datetime_data_types]]
-==== Datetime Data Types
-
-Values of type datetime are mutually comparable and mutually assignable only if the types have the same datetime fields.
-A DATE, TIME, or TIMESTAMP value can be compared with another value only if the other value has the same data type.
-
-All comparisons are chronological. For example, this predicate is true:
-
-```
-TIMESTAMP '2008-09-28 00:00:00' > TIMESTAMP '2008-06-26 00:00:00'
-```
-
-
-<<<
-[[interval_data_types]]
-==== Interval Data Types
-
-Values of type INTERVAL are mutually comparable and mutually assignable only if the types are either both year-month
-intervals or both day-time intervals.
-
-For example, this predicate is true:
-
-```
-INTERVAL '02-01' YEAR TO MONTH > INTERVAL '00-01' YEAR TO MONTH
-```
-
-The field components of the INTERVAL do not have to be the same. For example, this predicate is also true:
-
-```
-INTERVAL '02-01' YEAR TO MONTH > INTERVAL '01' YEAR
-```
-
-[[numeric_data_types]]
-==== Numeric Data Types
-
-Values of the approximate data types FLOAT, REAL, and DOUBLE PRECISION, and values of the exact data types NUMERIC,
-DECIMAL, INTEGER, SMALLINT, and LARGEINT, are all numbers and are all mutually comparable and mutually assignable.
-
-When an approximate data type value is assigned to a column with exact data type, rounding might occur, and the
-fractional part might be truncated. When an exact data type value is assigned to a column with approximate data type,
-the result might not be identical to the original number.
-
-When two numbers are compared, the comparison is made with a temporary copy of one of the numbers, according to defined
-rules of conversion. For example, if one number is INTEGER and the other is DECIMAL, the comparison is made with a
-temporary copy of the integer converted to a decimal.
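-
-For example, in this sketch (the table is hypothetical), the INTEGER
-value is compared with a temporary DECIMAL copy of itself:
-
-```
-CREATE TABLE t2 (qty INTEGER, price DECIMAL(9,2));
-INSERT INTO t2 VALUES (100, 12.50);
-
-SELECT * FROM t2 WHERE qty > 99.5;
-```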
-
-[[extended_numeric_precision]]
-===== Extended Numeric Precision
-
-{project-name} SQL provides support for an extended numeric precision data type. Extended numeric precision is an extension to
-the NUMERIC(x,y) data type where no theoretical limit exists on precision. It is a software data type, which means that
-the underlying hardware does not support it and all computations are performed by software. Computations using this data
-type may not match the performance of other hardware supported data types.
-
-<<<
-[[considerations_for_extended_numeric_precision_data_type]]
-===== Considerations for Extended NUMERIC Precision Data Type
-
-Consider these points and limitations for extended NUMERIC precision data type:
-
-
-* May cost more than other data type options.
-* Is a software data type.
-* Cannot be compared to data types that are supported by hardware.
-* If your application requires extended NUMERIC precision arithmetic
-expressions, specify the required precision in the table DDL or as
-explicit extended precision type casts of your select list items. The
-default system behavior is to treat user-specified extended precision
-expressions as extended precision values. Conversely, non-user-specified
-(that is, temporary, intermediate) extended precision expressions may
-lose precision. In the following example, the precision appears to lose
-one digit because the system treats the sum of two NUMERIC(18,4) type
-columns as NUMERIC(18,4). NUMERIC(18) is the longest non-extended
-precision numeric type. NUMERIC(19) is the shortest extended precision
-numeric type. The system actually computes the sum of 2 NUMERIC(18,4)
-columns as an extended precision NUMERIC(19,4) sum. But because no
-user-specified extended precision columns exist, the system casts the
-sum back to the user-specified type of NUMERIC(18,4).
-+    
-```
-CREATE TABLE T(a NUMERIC(18,4), B NUMERIC(18,4));
-INSERT INTO T VALUES (1.1234, 2.1234);
-
->> SELECT A+B FROM T;
-
-(EXPR)
---------------
-3.246
-```
-+
-If this behavior is not acceptable, you can use one of these options:
-+
-** Specify the column type as NUMERIC(19,4). For example, CREATE TABLE T(A NUMERIC(19,4), B NUMERIC(19,4)); or
-** Cast the sum as NUMERIC(19,4). For example, SELECT CAST(A+B AS NUMERIC(19,4)) FROM T; or
-** Use an extended precision literal in the expression. For example, SELECT A+B*1.00000000000000000000 FROM T;.
-+
-Note the result for the previous example when changing to NUMERIC(19,4):
-+
-```
-SELECT CAST(A+B AS NUMERIC(19,4)) FROM T;
-
-(EXPR)
-------------
-3.2468
-```
-+
-When displaying output results in the command interface of a
-client-based tool, casting a select list item to an extended precision
-numeric type is acceptable. However, when retrieving an extended
-precision select list item into an application program's host variable,
-you must first convert the extended precision numeric type into a string
-data type. For example:
-+
-```
-SELECT CAST(CAST(A+B AS NUMERIC(19,4)) AS CHAR(24)) FROM T;
-
-(EXPR)
-
-------------
-3.2468
-```
-+
-NOTE: An application program can convert an externalized extended
-precision value in string form into a numeric value it can handle. But,
-an application program cannot correctly interpret an extended precision
-value in internal form.
-
-[[rules_for_extended_numeric_precision_data_type]]
-===== Rules for Extended NUMERIC Precision Data Type
-
-These rules apply:
-
-* No limit on maximum precision.
-* Supported in all DDL and DML statements where regular NUMERIC data type is supported.
-* Allowed as part of key columns for hash partitioned tables only.
-* NUMERIC type with precision 10 through 18.
-** UNSIGNED is supported as extended NUMERIC precision data type
-** SIGNED is supported as 64-bit integer
-* CAST function allows conversion between regular NUMERIC and extended NUMERIC precision data type.
-* Parameters in SQL queries support extended NUMERIC precision data type.
-
-<<<
-[[example_of_extended_numeric_precision_data_type]]
-===== Example of Extended NUMERIC Precision Data Type
-
-```
->>CREATE TABLE t( n NUMERIC(128,30));
-
---- SQL operation complete.
-
->>SHOWDDL TABLE t;
-CREATE TABLE SCH.T
-  (
-      N NUMERIC(128, 30) DEFAULT NULL
-  )
-;
-
---- SQL operation complete.
-
->>
-```
-
-<<<
-[[character_string_data_types]]
-=== Character String Data Types
-
-{project-name} SQL includes both fixed-length character data and variable-length character data. You cannot compare character data to
-numeric, datetime, or interval data.
-
-* `_character-type_` is:
-+
-```
-CHAR[ACTER] [(_length_ [CHARACTERS])] [_char-set_] [UPSHIFT] [[NOT]CASESPECIFIC]
-| CHAR[ACTER] VARYING (_length_) [CHARACTERS] [_char-set_] [UPSHIFT] [[NOT]CASESPECIFIC]
-| VARCHAR(_length_) [CHARACTERS] [_char-set_] [UPSHIFT] [[NOT]CASESPECIFIC]
-| NCHAR [(_length_)] [CHARACTERS] [UPSHIFT] [[NOT]CASESPECIFIC]
-| NCHAR VARYING (_length_) [CHARACTERS] [UPSHIFT] [[NOT]CASESPECIFIC]
-| NATIONAL CHAR[ACTER] [(_length_)] [CHARACTERS] [UPSHIFT] [[NOT]CASESPECIFIC]
-| NATIONAL CHAR[ACTER] VARYING (_length_) [CHARACTERS] [UPSHIFT] [[NOT]CASESPECIFIC]
-```
-
-* `_char-set_` is:
-+
-```
-CHARACTER SET char-set-name
-```
-
-CHAR, NCHAR, and NATIONAL CHAR are fixed-length character types. CHAR
-VARYING, VARCHAR, NCHAR VARYING, and NATIONAL CHAR VARYING are
-varying-length character types.
-
-* `_length_`
-+
-is a positive integer that specifies the number of characters allowed in
-the column. You must specify a value for _length_.
-
-* `_char-set-name_`
-+
-is the character set name, which can be ISO88591 or UTF8.
-
-* `CHAR[ACTER] [(_length_ [CHARACTERS])] [_char-set_] [UPSHIFT] [[NOT]CASESPECIFIC]`
-+
-specifies a column with fixed-length character data.
-
-* `CHAR[ACTER] VARYING (_length_) [CHARACTERS] [_char-set_] [UPSHIFT] [[NOT]CASESPECIFIC]`
-+
-specifies a column with varying-length character data. VARYING specifies
-that the number of characters stored in the column can be fewer than the
-_length_.
-+
-<<<
-+
-Values in a column declared as VARYING can be logically and physically
-shorter than the maximum length, but the maximum internal size of a
-VARYING column is actually four bytes larger than the size required for
-an equivalent column that is not VARYING.
-
-* `VARCHAR (_length_) [_char-set_] [UPSHIFT] [[NOT]CASESPECIFIC]`
-+
-specifies a column with varying-length character data. VARCHAR is
-equivalent to data type CHAR[ACTER] VARYING.
-
-* `NCHAR [(_length_)] [UPSHIFT] [[NOT]CASESPECIFIC], NATIONAL CHAR[ACTER] [(_length_)] [UPSHIFT] [[NOT]CASESPECIFIC]`
-+
-specifies a column with data in the predefined national character set.
-
-* `NCHAR VARYING [(_length_)] [UPSHIFT] [[NOT]CASESPECIFIC], NATIONAL CHAR[ACTER] VARYING (_length_) [UPSHIFT] [[NOT]CASESPECIFIC]`
-+
-specifies a column with varying-length data in the predefined national character set.
-
-[[considerations_for_character_string_data_types]]
-==== Considerations for Character String Data Types
-
-[[difference_between_char_and_varchar]]
-===== Difference Between CHAR and VARCHAR
-
-You can specify a fixed-length character column as CHAR(_n_), where
-_n_ is the number of characters you want to store. However, if you store
-five characters into a column specified as CHAR(10), ten characters are
-stored where the rightmost five characters are blank.
-
-If you do not want to have blanks added to your character string, you
-can specify a variable-length character column as VARCHAR(_n_), where
-_n_ is the maximum number of characters you want to store. If you store
-five characters in a column specified as VARCHAR(10), only the five
-characters are stored logically, without blank padding.
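-
-A minimal sketch of the difference (the table name is hypothetical):
-
-```
-CREATE TABLE t3 (c CHAR(10), v VARCHAR(10));
-INSERT INTO t3 VALUES ('abcde', 'abcde');
-
--- The CHAR column stores five trailing blanks; the VARCHAR column
--- does not, so this query returns 10 and 5:
-SELECT CHAR_LENGTH(c), CHAR_LENGTH(v) FROM t3;
-```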
-
-<<<
-[[nchar_columns_in_sql_tables]]
-===== NCHAR Columns in SQL Tables
-
-In {project-name} SQL, the NCHAR type specification is equivalent to:
-
-
-* NATIONAL CHARACTER
-* NATIONAL CHAR
-* CHAR &#8230; CHARACTER SET &#8230;, where the character set is the character set for NCHAR
-
-Similarly, you can use NCHAR VARYING, NATIONAL CHARACTER VARYING, NATIONAL CHAR
-VARYING, and VARCHAR &#8230; CHARACTER SET &#8230; , where the character set is
-the character set for NCHAR. The character set for NCHAR is determined
-when {project-name} SQL is installed.
-
-<<<
-[[datetime_data_types]]
-=== Datetime Data Types
-
-A value of datetime data type represents a point in time according to
-the Gregorian calendar and a 24-hour clock in local civil time (LCT). A
-datetime item can represent a date, a time, or a date and time.
-
-When a numeric value is added to or subtracted from a date type, the
-numeric value is automatically cast to an INTERVAL DAY value. When a
-numeric value is added to or subtracted from a time type or a timestamp
-type, the numeric value is automatically cast to an INTERVAL SECOND
-value. For information on CAST, see <<cast_expression,CAST Expression>>.
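-
-For example, a sketch using the PROJECT sample table that appears later
-in this chapter:
-
-```
--- The numeric value 7 is cast to INTERVAL '7' DAY:
-SELECT start_date + 7 FROM persnl.project;
-
--- The numeric value 30 is cast to INTERVAL '30' SECOND:
-SELECT ship_timestamp - 30 FROM persnl.project;
-```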
-
-{project-name} SQL accepts dates, such as October 5 to 14, 1582, that were
-omitted from the Gregorian calendar. This functionality is a {project-name}
-SQL extension.
-
-The range of times that a datetime value can represent is:
-
-```
-January 1, 1 A.D., 00:00:00.000000 (low value)
-December 31, 9999, 23:59:59.999999 (high value)
-```
-
-{project-name} SQL has three datetime data types:
-
-* `_datetime-type_` is:
-+
-```
-  DATE
-| TIME [(_time-precision_)]
-| TIMESTAMP [(_timestamp-precision_)]
-```
-
-* `DATE`
-+
-specifies a datetime column that contains a date in the external form
-yyyy-mm-dd and is stored in four bytes.
-
-* `TIME [(_time-precision_)]`
-+
-specifies a datetime column that, without the optional time-precision,
-contains a time in the external form hh:mm:ss and is stored in three
-bytes. _time-precision_ is an unsigned integer that specifies the number
-of digits in the fractional seconds and is stored in four bytes. The
-default for _time-precision_ is 0, and the maximum is 6.
-
-* `TIMESTAMP [(_timestamp-precision_)]`
-+
-specifies a datetime column that, without the optional
-_timestamp-precision_, contains a timestamp in the external form
-yyyy-mm-dd hh:mm:ss and is stored in seven bytes. _timestamp-precision_
-is an unsigned integer that specifies the number of digits in the
-fractional seconds and is stored in four bytes. The default for
-_timestamp-precision_ is 6, and the maximum is 6.
-
-
-[[considerations_for_datetime_data_types]]
-==== Considerations for Datetime Data Types
-
-[[datetime_ranges]]
-===== Datetime Ranges
-
-The range of values for the individual fields in a DATE, TIME, or
-TIMESTAMP column is specified as:
-
-
-[cols=","]
-|===
-| _yyyy_   | Year, from 0001 to 9999
-| _mm_     | Month, from 01 to 12
-| _dd_     | Day, from 01 to 31
-| _hh_     | Hour, from 00 to 23
-| _mm_     | Minute, from 00 to 59
-| _ss_     | Second, from 00 to 59
-| _msssss_ | Microsecond, from 000000 to 999999
-|===
-
-When you specify _datetime_value_ (FORMAT 'string') in a DML statement
-and the specified format is 'mm/dd/yyyy', 'MM/DD/YYYY', 'yyyy/mm/dd',
-or 'yyyy-mm-dd', the datetime type is automatically cast.
-
-<<<
-[[interval_data_types]]
-=== Interval Data Types
-
-Values of interval data type represent durations of time in year-month
-units (years and months) or in day-time units (days, hours, minutes,
-seconds, and fractions of a second).
-
-* `_interval-type_` is:
-+
-```
-INTERVAL[-] { start-field TO end-field | single-field }
-```
-
-* `_start-field_` is:
-+
-```
-{YEAR | MONTH | DAY | HOUR | MINUTE} [(_leading-precision_)]
-```
-
-* `_end-field_` is:
-+
-```
-YEAR | MONTH | DAY | HOUR | MINUTE | SECOND [(_fractional-precision_)]
-```
-
-* `_single-field_` is:
-+
-```
-_start-field_ | SECOND [(_leading-precision_, _fractional-precision_)]
-```
-
-* `INTERVAL[-] { _start-field_ TO _end-field_ | _single-field_ }`
-+
-specifies a column that represents a duration of time as a year-month or
-day-time range or a single-field. The optional sign indicates whether
-the interval is positive or negative. If you omit the sign, it defaults
-to positive.
-+
-If the interval is specified as a range, the _start-field_ and
-_end-field_ must be in one of these categories:
-
-* `{YEAR | MONTH | DAY | HOUR | MINUTE} [(_leading-precision_)]`
-+
-specifies the _start-field_. A _start-field_ can have a
-_leading-precision_ up to 18 digits (the maximum depends on the number
-of fields in the interval). The _leading-precision_ is the number of digits allowed in the
-_start-field_. The default for _leading-precision_ is 2.
-
-* `YEAR | MONTH | DAY | HOUR | MINUTE | SECOND [(_fractional-precision_)]`
-+
-specifies the _end-field_. If the _end-field_ is SECOND, it can have a
-_fractional-precision_ up to 6 digits. The _fractional-precision_ is the
-number of digits of precision after the decimal point. The default for
-_fractional-precision_ is 6.
-
-* `_start-field_ | SECOND [(_leading-precision_, _fractional-precision_)]`
-+
-specifies the _single-field_. If the _single-field_ is SECOND, the
-_leading-precision_ is the number of digits of precision before the
-decimal point, and
-the _fractional-precision_ is the number of digits of precision after
-the decimal point. The default for _leading-precision_ is 2, and the
-default for _fractional-precision_
-is 6. The maximum for _leading-precision_ is 18, and the maximum for
-_fractional-precision_ is 6.
-
-
-[[considerations_for_interval_data_types]]
-==== Considerations for Interval Data Types
-
-[[adding_or_subtracting_imprecise_interval_values]]
-===== Adding or Subtracting Imprecise Interval Values
-
-Adding or subtracting an interval that is any multiple of a MONTH, a
-YEAR, or a combination of these may result in a runtime error. For
-example, adding 1 MONTH to January 31, 2009 will result in an error
-because February 31 does not exist and it is not clear whether the user
-would want rounding back to February 28, 2009, rounding up to March 1,
-2009 or perhaps treating the interval 1 MONTH as if it were 30 days
-resulting in an answer of March 2, 2009. Similarly, subtracting 1 YEAR
-from February 29, 2008 will result in an error. See the descriptions for
-the <<add_months_function,ADD_MONTHS Function>>,
-<<date_add_function,DATE_ADD Function>>,
-<<date_sub_function,DATE_SUB Function>> , and <<dateadd_function,DATEADD Function>> for ways
-to add or subtract such intervals without getting errors at runtime.
-
-[[interval_leading_precision]]
-===== Interval Leading Precision
-
-The maximum for the _leading-precision_ depends on the number of fields
-in the interval and on the _fractional-precision_. The maximum is
-computed as:
-
-```
-_max-leading-precision_ = 18 - _fractional-precision_ - 2 * (_N_ - 1)
-```
-
-where _N_ is the number of fields in the interval.
-
-For example, the maximum number of digits for the _leading-precision_ in
-a column with data type INTERVAL YEAR TO MONTH is computed as:
-18 - 0 - 2 * (2 - 1) = 16
-
-<<<
-[[interval_ranges]]
-===== Interval Ranges
-
-Within the definition of an interval range (other than a single field),
-the _start-field_ and
-_end-field_ can be any of the specified fields with these restrictions:
-
-* An interval range is either year-month or day-time. That is, if the
-_start-field_ is YEAR, the _end-field_ is MONTH; if the _start-field_ is
-DAY, HOUR, or MINUTE, the _end-field_ is also a time field.
-* The _start-field_ must precede the _end-field_ within the hierarchy:
-YEAR, MONTH, DAY, HOUR, MINUTE, and SECOND.
-
-[[signed_intervals]]
-===== Signed Intervals
-
-To include a quoted string in a signed interval data type, the sign must
-be outside the quoted string. It can be before the entire literal or
-immediately before the duration enclosed in quotes.
-
-For example, for the interval "minus (5 years 5 months)", these formats
-are valid:
-
-```
-INTERVAL - '05-05'YEAR TO MONTH
-
-- INTERVAL '05-05' YEAR TO MONTH
-```
-
-[[overflow_conditions]]
-===== Overflow Conditions
-
-When you insert a fractional value into an INTERVAL data type field, if
-the fractional value is 0 (zero), it does not cause an overflow.
-Inserting the value INTERVAL '1.000000' SECOND(6) into a field SECOND(0)
-does not cause a loss of value. Provided that the value fits in the
-target column without a loss of precision, {project-name} SQL does not return
-an overflow error.
-
-However, if the fractional value is > 0, an overflow occurs. Inserting
-the value INTERVAL '1.000001' SECOND(6) causes a loss of value.
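-
-A sketch of both cases, assuming a column declared with a fractional
-precision of 0:
-
-```
-CREATE TABLE tdur (d INTERVAL SECOND(2,0));
-
--- Fractional part is 0: no loss of value, so no overflow error.
-INSERT INTO tdur VALUES (INTERVAL '1.000000' SECOND(6));
-
--- Fractional part is > 0: loss of value, so an overflow error is raised.
-INSERT INTO tdur VALUES (INTERVAL '1.000001' SECOND(6));
-```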
-
-<<<
-[[numeric_data_types]]
-=== Numeric Data Types
-
-Numeric data types are either exact or approximate. A numeric data type
-is compatible with any other numeric data type, but not with character,
-datetime, or interval data types.
-
-* `_exact-numeric-type_` is:
-+
-```
-   NUMERIC [(_precision_ [,_scale_])] [SIGNED|UNSIGNED]
-| SMALLINT [SIGNED|UNSIGNED]
-| INT[EGER] [SIGNED|UNSIGNED]
-| LARGEINT
-| DEC[IMAL] [(_precision_ [,_scale_])] [SIGNED|UNSIGNED]
-```
-
-* `_approximate-numeric-type_` is:
-+
-```
-   FLOAT [(_precision_)]
-| REAL
-| DOUBLE PRECISION
-```
-+
-Exact numeric data types are types that can represent a value exactly:
-NUMERIC, SMALLINT, INTEGER, LARGEINT, and DECIMAL.
-+
-Approximate numeric data types are types that do not necessarily
-represent a value exactly: FLOAT, REAL, and DOUBLE PRECISION.
-+
-A column in a {project-name} SQL table declared with a floating-point data
-type is stored in IEEE floating-point format and all computations on it
-are done assuming that. {project-name} SQL tables can contain only IEEE
-floating-point data.
-
-* `NUMERIC [(_precision_ [,_scale_])] [SIGNED|UNSIGNED]`
-+
-specifies an exact numeric column, a binary number, SIGNED or
-UNSIGNED. _precision_ specifies the total number of digits and cannot
-exceed 128. If _precision_ is between 10 and 18, you must use a signed
-value to obtain the supported hardware data type. If _precision_ is over
-18, you receive the supported software data type. You also receive the
-supported software data type if _precision_ is between 10 and 18 and you
-specify UNSIGNED. _scale_ specifies the number of digits to the right of
-the decimal point.
-+
-The default is NUMERIC (9,0) SIGNED.
-
-* `SMALLINT [SIGNED|UNSIGNED]`
-+
-specifies an exact numeric column, a two-byte binary integer, SIGNED or
-UNSIGNED. The
-column stores integers in the range unsigned 0 to 65535 or signed -32768
-to +32767. The default is SIGNED.
-
-* `INT[EGER] [SIGNED|UNSIGNED]`
-+
-specifies an exact numeric column, a 4-byte binary integer, SIGNED or
-UNSIGNED. The column stores integers in the range unsigned 0 to
-4294967295 or signed -2147483648 to +2147483647.
-+
-The default is SIGNED.
-
-* `LARGEINT`
-+
-specifies an exact numeric column, an 8-byte signed binary integer. The
-column stores integers
-in the range -2^63^ to +2^63^ -1 (approximately 9.223 times 10 to the
-eighteenth power).
-
-* `DEC[IMAL] [(_precision_ [,_scale_])] [SIGNED|UNSIGNED]`
-+
-specifies an exact numeric column, a decimal number, SIGNED or
-UNSIGNED, stored as ASCII characters. _precision_ specifies the total
-number of digits and cannot exceed 18. If _precision_ is 10 or more, the
-value must be SIGNED. The sign is stored as the first bit of the
-leftmost byte. _scale_ specifies the number of digits to the right of
-the decimal point.
-+
-The default is DECIMAL (9,0) SIGNED.
-
-* `FLOAT [( precision )]`
-+
-specifies an approximate numeric column. The column stores
-floating-point numbers and
-designates from 1 through 52 bits of _precision_.
-The range is from +/- 2.2250738585072014e-308 through +/-1.7976931348623157e+308 stored in 8 bytes.
-+
-An IEEE FLOAT _precision_ data type is stored as an IEEE DOUBLE, that is, in 8 bytes, with the specified precision.
-+
-The default _precision_ is 52.
-
-* `REAL`
-+
-specifies a 4-byte approximate numeric column. The column stores 32-bit
-floating-point numbers with 23 bits of binary precision and 8 bits of
-exponent.
-+
-The minimum and maximum range is from +/- 1.17549435e-38 through +/- 3.40282347e+38.
-
-<<<
-* `DOUBLE PRECISION`
-+
-specifies an 8-byte approximate numeric column.
-+
-The column stores 64-bit floating-point numbers and designates from 1
-through 52 bits of _precision_.
-+
-An IEEE DOUBLE PRECISION data type is stored in 8 bytes with 52 bits of
-binary precision and 11 bits of exponent. The minimum and maximum range
-is from +/- 2.2250738585072014e-308 through +/- 1.7976931348623157e+308.
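-
-For example, a sketch of a table declaration that combines several of
-these numeric types (the table and column names are hypothetical):
-
-```
-CREATE TABLE sales.metrics
-( id        LARGEINT NOT NULL PRIMARY KEY
-, qty       INTEGER UNSIGNED
-, unit_cost NUMERIC(9,2)
-, pct       DECIMAL(5,2)
-, reading   REAL
-, total     DOUBLE PRECISION
-);
-```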
-
-<<<
-[[expressions]]
-== Expressions
-
-An SQL value expression, called an expression, evaluates to a value.
-{project-name} SQL supports these types of expressions:
-
-
-[cols="30%,70%"]
-|===
-| <<character_value_expressions,Character Value Expressions>> | Operands can be combined with the concatenation operator (&#124;&#124;). +
- +
-Example: `'HOUSTON,' \|\| ' TEXAS'`
-| <<datetime_value_expressions,Datetime Value Expressions>> |  Operands can be combined in specific ways with arithmetic operators. +
- +
-Example: `CURRENT_DATE + INTERVAL '1' DAY`
-| <<interval_value_expressions,Interval Value Expressions>> | Operands can be combined in specific ways with addition and subtraction operators. +
- +
-Example: `INTERVAL '2' YEAR - INTERVAL '3' MONTH`
-| <<numeric_value_expressions,Numeric Value Expressions>> |  Operands can be combined in specific ways with arithmetic operators. +
- +
-Example: `SALARY * 1.10`
-|===
-
-
-The data type of an expression is the data type of the value of the
-expression.
-
-A value expression can be a character string literal, a numeric literal,
-a dynamic parameter, or a column name that specifies the value of the
-column in a row of a table. A value expression can also include
-functions and scalar subqueries.
-
-<<<
-[[character_value_expressions]]
-=== Character Value Expressions
-
-The operands of a character value expression, called character
-primaries, can be combined with the concatenation operator (||). The
-data type of a character primary is character string.
-
-* `_character-expression_` is:
-+
-```
-   character-primary
-| character-expression || character-primary
-```
-
-* `_character-primary_` is:
-+
-```
-   character-string-literal
-| column-reference
-| character-type-host-variable
-| dynamic parameter
-| character-value-function
-| aggregate-function
-| sequence-function
-| scalar-subquery
-| CASE-expression
-| CAST-expression
-| (character-expression)
-```
-
-Character (or string) value expressions are built from operands that can be:
-
-* Character string literals
-* Character string functions
-* Column references with character values
-* Dynamic parameters
-* Aggregate functions, sequence functions, scalar subqueries, CASE expressions, or CAST expressions that return character values
-
-<<<
-[[examples_of_character_value_expressions]]
-==== Examples of Character Value Expressions
-
-These are examples of character value expressions:
-
-
-[cols="40%,60%",options="header"]
-|===
-| Expression                                | Description
-| 'ABILENE'                                 | Character string literal.
-| 'ABILENE ' \|\| ' TEXAS'                  | The concatenation of two string literals.
-| 'ABILENE ' \|\| ' TEXAS ' \|\| X'55 53 41' | The concatenation of three string literals to form the literal: 'ABILENE TEXAS USA'
-| 'Customer ' \|\| custname                 | The concatenation of a string literal with the value in column CUSTNAME.
-| CAST (order_date AS CHAR(10))             | CAST function applied to a DATE value.
-|===
-
-<<<
-[[datetime_value_expressions]]
-=== Datetime Value Expressions
-
-The operands of a datetime value expression can be combined in specific
-ways with arithmetic operators.
-
-In this syntax diagram, the data type of a datetime primary is DATE,
-TIME, or TIMESTAMP. The data type of an interval term is INTERVAL.
-
-* `_datetime-expression_` is:
-+
-```
-  datetime-primary
-| interval-expression + datetime-primary
-| datetime-expression + interval-term
-| datetime-expression - interval-term
-```
-
-* `_datetime-primary_` is:
-+
-```
-  datetime-literal
-| column-reference
-| datetime-type-host-variable
-| dynamic parameter
-| datetime-value-function
-| aggregate-function
-| sequence-function
-| scalar-subquery
-| CASE-expression
-| CAST-expression
-| (datetime-expression)
-```
-
-* `_interval-term_` is:
-+
-```
-  interval-factor
-| numeric-term * interval-factor
-```
-
-* `_interval-factor_` is:
-+
-```
-[+|-] interval-primary
-```
-
-<<<
-* `_interval-primary_` is:
-+
-```
-  interval-literal
-| column-reference
-| interval-type-host-variable
-| dynamic parameter
-| aggregate-function
-| sequence-function
-| scalar-subquery
-| CASE-expression
-| CAST-expression
-| (interval-expression)
-```
-
-Datetime value expressions are built from operands that can be:
-
-* Interval value expressions
-* Datetime or interval literals
-* Dynamic parameters
-* Column references with datetime or interval values
-* Datetime or interval value functions
-* Any aggregate functions, sequence functions, scalar subqueries, CASE
-expressions, or CAST expressions that return datetime or interval values
-
-[[considerations_for_datetime_value_expressions]]
-==== Considerations for Datetime Value Expressions
-
-[[data_type_of_result]]
-===== Data Type of Result
-
-In general, the data type of the result is the data type of the
-_datetime-primary_ part of the datetime expression. For example,
-datetime value expressions include:
-
-[cols="33%l,33%,33%",options="header"]
-|===
-| Datetime Expression | Description | Result Data Type
-| CURRENT_DATE + INTERVAL '1' DAY | The sum of the current date and an interval value of one day. | DATE
-| CURRENT_DATE + est_complete | The sum of the current date and the interval value in column EST_COMPLETE. | DATE
-| ( SELECT ship_timestamp FROM project WHERE projcode=1000) + INTERVAL '07:04' DAY TO HOUR
-| The sum of the ship timestamp for the specified project and an interval value of seven days, four hours.
-| TIMESTAMP
-|===
-
-The datetime primary in the first expression is CURRENT_DATE, a function
-that returns a value with DATE data type. Therefore, the data type of
-the result is DATE.
-
-In the last expression, the datetime primary is this scalar subquery:
-
-```
-( SELECT ship_timestamp FROM project WHERE projcode=1000 )
-```
- 
-The preceding subquery returns a value with TIMESTAMP data type.
-Therefore, the data type of the result is TIMESTAMP.
-
-[[restrictions_on_operations_with_datetime_or_interval_operands]]
-===== Restrictions on Operations With Datetime or Interval Operands
-
-You can use datetime and interval operands with arithmetic operators in
-a datetime value expression only in these combinations:
-
-[cols="25%,25%l,25%,25%",options="header"]
-|===
-| Operand 1 | Operator | Operand 2 | Result Type
-| Datetime  | + or -   | Interval  | Datetime
-| Interval  | +        | Datetime  | Datetime
-|===
-
-
-When a numeric value is added to or subtracted from a DATE type, the
-numeric value is automatically cast to an INTERVAL DAY value. When a
-numeric value is added to or subtracted from a time type or a timestamp
-type, the numeric value is automatically cast to an INTERVAL SECOND
-value. For information on CAST, see <<cast_expression,CAST Expression>>.
-For more information on INTERVALS, see 
-<<interval_value_expressions,Interval Value Expressions>>
-
-When using these operations, note:
-
-* Adding or subtracting an interval of months to a DATE value results in
-a value of the same day plus or minus the specified number of months.
-Because different months have different lengths, this is an approximate
-result.
-* Datetime and interval arithmetic can yield unexpected results,
-depending on how the fields are used. For example, execution of this
-expression (evaluated left to right) returns an error:
-+
-```
-DATE '2007-01-30' + INTERVAL '1' MONTH + INTERVAL '7' DAY
-```
-+
-In contrast, this expression (which adds the same values as the previous
-expression, but in a different order) correctly generates the value
-2007-03-06:
-+
-```
-DATE '2007-01-30' + INTERVAL '7' DAY + INTERVAL '1' MONTH
-```
-
-You can avoid these unexpected results by using the <<add_months_function,ADD_MONTHS Function>>.
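-
-For illustration, a minimal sketch (ADD_MONTHS adjusts to the last day of the
-month rather than producing an invalid date):
-
-```
-SELECT ADD_MONTHS(DATE '2007-01-30', 1) FROM (VALUES(1)) t;
-```
-
-This returns 2007-02-28 instead of failing on the nonexistent date 2007-02-30.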
-
-[[examples_of_datetime_value_expressions]]
-==== Examples of Datetime Value Expressions
-
-The PROJECT table consists of five columns that use the data types
-NUMERIC, VARCHAR, DATE, TIMESTAMP, and INTERVAL DAY. Suppose that you
-have inserted values into the PROJECT table. For example:
-
-```
-INSERT INTO persnl.project
-VALUES (1000,'SALT LAKE CITY',DATE '2007-04-10',
-TIMESTAMP '2007-04-21:08:15:00.00',INTERVAL '15' DAY);
-```
-
-The next examples use these values in the PROJECT table:
-
-[cols="4*",options="header"]
-|===
-| PROJCODE | START_DATE | SHIP_TIMESTAMP         | EST_COMPLETE
-| 1000     | 2007-04-10 | 2007-04-21 08:15:00.00 | 15
-| 945      | 2007-10-20 | 2007-12-21 08:15:00.00 | 30
-| 920      | 2008-02-21 | 2008-03-12 09:45:00.00 | 20
-| 134      | 2007-11-20 | 2008-01-01 00:00:00.00 | 30
-|===
-
-* Add an interval value qualified by YEAR to a datetime value:
-+
-```
-SELECT start_date + INTERVAL '1' YEAR FROM persnl.project
-WHERE projcode = 1000;
-
-(EXPR)
-----------
-2008-04-10
-
---- 1 row(s) selected.
-```
-
-* Subtract an interval value qualified by MONTH from a datetime value:
-+
-```
-SELECT ship_timestamp - INTERVAL '1' MONTH FROM persnl.project
-WHERE projcode = 134;
-
-(EXPR)
---------------------------
-2007-12-01 00:00:00.000000
-
---- 1 row(s) selected.
-```
-+
-The result is 2007-12-01 00:00:00.00. The YEAR value is decremented by 1
-because subtracting a month from January 1 causes the date to be in the
-previous year.
-
-<<<
-* Add a column whose value is an interval qualified by DAY to a datetime
-value:
-+
-```
-SELECT start_date + est_complete FROM persnl.project
-WHERE projcode = 920;
-
-(EXPR)
-----------
-2008-03-12
-
---- 1 row(s) selected.
-```
-+
-The result of adding 20 days to 2008-02-21 is 2008-03-12. {project-name} SQL
-correctly handles 2008 as a leap year.
-
-* Subtract an interval value qualified by HOUR TO MINUTE from a datetime
-value:
-+
-```
-SELECT ship_timestamp - INTERVAL '15:30' HOUR TO MINUTE
-FROM persnl.project WHERE projcode = 1000;
-
-(EXPR)
---------------------------
-2007-04-20 16:45:00.000000
-```
-+
-The result of subtracting 15 hours and 30 minutes from 2007-04-21
-08:15:00.00 is 2007-04-20 16:45:00.00.
-
-<<<
-[[interval_value_expressions]]
-=== Interval Value Expressions
-
-The operands of an interval value expression can be combined in specific
-ways with addition and subtraction operators. In this syntax diagram,
-the data type of a datetime expression is DATE, TIME, or TIMESTAMP; the
-data type of an interval term or expression is INTERVAL.
-
-* `_interval-expression_` is:
-+
-```
-  interval-term
-| interval-expression + interval-term
-| interval-expression - interval-term
-| (datetime-expression - datetime-primary)
-     [interval-qualifier]
-```
-
-* `_interval-term_` is:
-+
-```
-  interval-factor
-| interval-term * numeric-factor
-| interval-term / numeric-factor
-| numeric-term * interval-factor
-```
-
-* `_interval-factor_` is:
-+
-```
-[+|-] interval-primary
-```
-
-* `_interval-primary_` is:
-+
-```
-interval-literal
-| column-reference
-| interval-type-host-variable
-| dynamic-parameter
-| aggregate-function
-| sequence-function
-| scalar-subquery
-| CASE-expression
-| CAST-expression
-| (interval-expression)
-```
-
-* `_numeric-factor_` is:
-+
-```
-  [+|-] numeric-primary
-| [+|-] numeric-primary ** numeric-factor
-```
-
-Interval value expressions are built from operands that can be:
-
-* Integers
-* Datetime value expressions
-* Interval literals
-* Column references with datetime or interval values
-* Dynamic parameters
-* Datetime or interval value functions
-* Aggregate functions, sequence functions, scalar subqueries, CASE expressions, or CAST expressions that return interval values
-
-
-For _interval-term_, _datetime-expression_, and _datetime-primary_, see <<datetime_value_expressions,Datetime Value Expressions>>.
-
-If the interval expression is the difference of two datetime expressions, by default, the result is expressed in the least
-significant unit of measure for that interval. For date differences, the interval is expressed in days. For timestamp differences, the interval
-is expressed in fractional seconds.
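-
-A quick way to see the default units is a sketch such as this (the dates are
-arbitrary):
-
-```
-SELECT DATE '2008-03-01' - DATE '2008-01-30',
-       TIMESTAMP '2008-01-30 12:00:00' - TIMESTAMP '2008-01-30 11:59:00'
-FROM (VALUES(1)) t;
-```
-
-The first difference is an interval of 31 days; the second is an interval of
-60 seconds, expressed with fractional-second precision.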
-
-If the interval expression is the difference or sum of interval
-operands, the interval qualifiers of the operands are either year-month
-or day-time. If you are updating or inserting a value that is the result
-of adding or subtracting two interval qualifiers, the interval qualifier
-of the result depends on the interval qualifier of the target column.
-
-<<<
-[[considerations_for_interval_value_expressions]]
-==== Considerations for Interval Value Expressions
-
-[[start_and_end_fields]]
-===== Start and End Fields
-
-Within the definition of an interval range, the _start-field_ and
-_end-field_ can be any of the specified fields with these restrictions:
-
-
-* An interval is either year-month or day-time. If the _start-field_ is
-YEAR, the _end-field_ is MONTH; if the _start-field_ is DAY, HOUR, or
-MINUTE, the _end-field_ is also a time field.
-* The _start-field_ must precede the _end-field_ within the hierarchy
-YEAR, MONTH, DAY, HOUR, MINUTE, and SECOND.
-
-
-Within the definition of an interval expression, the _start-field_ and
-_end-field_ of all operands in the expression must be either year-month
-or day-time.
-
-[[interval_qualifier]]
-===== Interval Qualifier
-
-The rules for determining the interval qualifier of the result
-expression vary. For example, interval value expressions include:
-
-[cols="40%l,40%,20%l",options="header"]
-|===
-| Interval Expression                    | Description                                                              | Result Data Type
-| CURRENT_DATE - start_date
-| By default, the interval difference between the current date and the value in column START_DATE is expressed
-in days. You are not required to specify the interval qualifier.
-| INTERVAL DAY (12)
-| INTERVAL '3' DAY - INTERVAL '2' DAY    | The difference of two interval literals. The result is 1 day.            | INTERVAL DAY (3)
-| INTERVAL '3' DAY + INTERVAL '2' DAY    | The sum of two interval literals. The result is 5 days.                  | INTERVAL DAY (3)
-| INTERVAL '2' YEAR - INTERVAL '3' MONTH | The difference of two interval literals. The result is 1 year, 9 months. | INTERVAL YEAR (3) TO MONTH
-|===
-
-
-[[restrictions_on_operations]]
-===== Restrictions on Operations
-
-You can use datetime and interval operands with arithmetic operators in
-an interval value expression only in these combinations:
-
-
-[cols="4*",options="header"]
-|===
-| Operand 1 | Operator | Operand 2 | Result Type
-| Datetime  | -        | Datetime  | Interval
-| Interval  | + or -   | Interval  | Interval
-| Interval  | * or /   | Numeric   | Interval
-| Numeric   | *        | Interval  | Interval
-|===
-
-<<<
-This table lists valid combinations of datetime and interval arithmetic operators, and the data type of the result:
-
-
-[cols="2*",options="header"]
-|===
-| Operands                                      | Result Type
-| Date + Interval or Interval + Date            | Date
-| Date + Numeric or Numeric + Date              | Date
-| Date - Numeric                                | Date
-| Date - Interval                               | Date
-| Date - Date                                   | Interval
-| Time + Interval or Interval + Time            | Time
-| Time + Numeric or Numeric + Time              | Time
-| Time - Numeric                                | Time
-| Time - Interval                               | Time
-| Timestamp + Interval or Interval + Timestamp  | Timestamp
-| Timestamp + Numeric or Numeric + Timestamp    | Timestamp
-| Timestamp - Numeric                           | Timestamp
-| Timestamp - Interval                          | Timestamp
-| year-month Interval + year-month Interval     | year-month Interval
-| day-time Interval + day-time Interval         | day-time Interval
-| year-month Interval - year-month Interval     | year-month Interval
-| day-time Interval - day-time Interval         | day-time Interval
-| Time - Time                                   | Interval
-| Timestamp - Timestamp                         | Interval
-| Interval * Numeric or Numeric * Interval      | Interval
-| Interval / Numeric                            | Interval
-| Interval - Interval or Interval + Interval    | Interval
-|===
-
-
-When using these operations, note:
-
-
-* If you subtract a datetime value from another datetime value, both
-values must have the same data type. To get this result, use the CAST
-expression. For example:
-+
-```
-CAST (ship_timestamp AS DATE) - start_date
-```
-
-* If you subtract a datetime value from another datetime value, and you
-specify the interval qualifier, you must allow for the maximum number of
-digits in the result for the precision. For example:
-+
-```
-(CURRENT_TIMESTAMP - ship_timestamp) DAY(4) TO SECOND(6)
-```
-
-<<<
-* If you are updating a value that is the result of adding or
-subtracting two interval values, an SQL error occurs if the source value
-does not fit into the target column's range of interval fields. For
-example, this expression cannot replace an INTERVAL DAY column:
-+
-```
-INTERVAL '1' MONTH + INTERVAL '7' DAY
-```
-
-* If you multiply or divide an interval value by a numeric value
-expression, {project-name} SQL converts the interval value to its least
-significant subfield and then multiplies or divides it by the numeric
-value expression. The result has the same fields as the interval that
-was multiplied or divided. For example, this expression returns the
-value 5-02:
-+
-```
-INTERVAL '2-7' YEAR TO MONTH * 2
-```
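-+
-As a runnable check of the same computation (a sketch using the single-row
-VALUES idiom used elsewhere in this manual):
-+
-```
-SELECT INTERVAL '2-7' YEAR TO MONTH * 2 FROM (VALUES(1)) t;
-```
-+
-{project-name} SQL converts 2-7 to 31 months, doubles it to 62 months, and
-returns the result as 5-02.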
-
-[[examples_of_interval_value_expressions]]
-==== Examples of Interval Value Expressions
-
-The PROJECT table consists of five columns using the data types NUMERIC,
-VARCHAR, DATE, TIMESTAMP, and INTERVAL DAY. Suppose that you have
-inserted values into the PROJECT table. For example:
-
-```
-INSERT INTO persnl.project
-VALUES (1000,'SALT LAKE CITY',DATE '2007-04-10',
-        TIMESTAMP '2007-04-21:08:15:00.00',INTERVAL '15' DAY);
-```
-
-The next example uses these values in the PROJECT table:
-
-[cols="4*",options="header"]
-|===
-| PROJCODE | START_DATE | SHIP_TIMESTAMP           | EST_COMPLETE
-| 1000     | 2007-04-10 | 2007-04-21:08:15:00.0000 | 15
-| 2000     | 2007-06-10 | 2007-07-21:08:30:00.0000 | 30
-| 2500     | 2007-10-10 | 2007-12-21:09:00:00.0000 | 60
-| 3000     | 2007-08-21 | 2007-10-21:08:10:00.0000 | 60
-| 4000     | 2007-09-21 | 2007-10-21:10:15:00.0000 | 30
-| 5000     | 2007-09-28 | 2007-10-28:09:25:01.111  | 30
-|===
-
-<<<
-* Suppose that the CURRENT_TIMESTAMP is 2000-01-06 11:14:41.748703. Find
-the number of days, hours, minutes, seconds, and fractional seconds in
-the difference of the current timestamp and the SHIP_TIMESTAMP in the
-PROJECT table:
-+
-```
-SELECT projcode,
-   (CURRENT_TIMESTAMP - ship_timestamp) DAY(4) TO SECOND(6)
-FROM samdbcat.persnl.project;
-
-Project/Code (EXPR)
------------- ---------------------
-        1000 1355 02:58:57.087086
-        2000 1264 02:43:57.087086
-        2500 1111 02:13:57.087086
-        3000 1172 03:03:57.087086
-        4000 1172 00:58:57.087086
-        5000 1165 01:48:55.975986
-
---- 6 row(s) selected.
-```
-
-<<<
-[[numeric_value_expressions]]
-=== Numeric Value Expressions
-
-The operands of a numeric value expression can be combined in specific
-ways with arithmetic operators. In this syntax diagram, the data type of
-a term, factor, or numeric primary is numeric.
-
-```
-numeric-expression is:
-  numeric-term
-| numeric-expression + numeric-term
-| numeric-expression - numeric-term
-
-numeric-term is:
-  numeric-factor
-| numeric-term * numeric-factor
-| numeric-term / numeric-factor
-
-numeric-factor is:
-  [+|-] numeric-primary
-| [+|-] numeric-primary ** numeric-factor
-
-numeric-primary is:
-  unsigned-numeric-literal
-| column-reference
-| numeric-type-host-variable
-| dynamic parameter
-| numeric-value-function
-| aggregate-function
-| sequence-function
-| scalar-subquery
-| CASE-expression
-| CAST-expression
-| (numeric-expression)
-```
-
-As shown in the preceding syntax diagram, numeric value expressions are
-built from operands that can be:
-
-
-* Numeric literals
-* Column references with numeric values
-* Dynamic parameters
-* Numeric value functions
-* Aggregate functions, sequence functions, scalar subqueries, CASE expressions, or CAST expressions that return numeric values
-
-<<<
-[[considerations_for_numeric_value_expressions]]
-==== Considerations for Numeric Value Expressions
-
-[[order_of_evaluation]]
-===== Order of Evaluation
-
-1.  Expressions within parentheses
-2.  Unary operators
-3.  Exponentiation
-4.  Multiplication and division
-5.  Addition and subtraction
-
-
-Operators at the same level are evaluated from left to right for all
-operators except exponentiation. Exponentiation operators at the same
-level are evaluated from right to left. For example,
-`X + Y + Z` is evaluated as `(X + Y) + Z`, whereas `X ** Y &#42;&#42; Z` is evaluated as `X &#42;&#42; (Y &#42;&#42; Z)`.
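-
-A minimal sketch that makes the difference visible:
-
-```
-SELECT 2 ** 3 ** 2, (2 ** 3) ** 2 FROM (VALUES(1)) t;
-```
-
-The first expression evaluates right to left as 2 ** (3 ** 2) and returns 512;
-the parenthesized form returns 64.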
-
-[[additional_rules_for_arithmetic_operations]]
-===== Additional Rules for Arithmetic Operations
-
-Numeric expressions are evaluated according to these additional rules:
-
-* An expression with a numeric operator evaluates to null if any of the operands is null.
-* Dividing by 0 causes an error.
-* Exponentiation is allowed only with numeric data types. If the first
-operand is 0 (zero), the second operand must be greater than 0, and the
-result is 0. If the second operand is 0, the
-first operand cannot be 0, and the result is 1. If the first operand is
-negative, the second operand must be a value with an exact numeric data
-type and a scale of zero.
-* Exponentiation is subject to rounding error. In general, results of
-exponentiation should be considered approximate.
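-
-For example, under these rules (a minimal sketch):
-
-```
-SELECT 0 ** 3, 5 ** 0, (-2) ** 3 FROM (VALUES(1)) t;
-```
-
-The query returns 0, 1, and -8. An expression such as (-2) ** 0.5 is rejected
-because a negative first operand requires an exact numeric exponent with a
-scale of zero.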
-
-[[precision_magnitude_and_scale_of_arithmetic_results]]
-===== Precision, Magnitude, and Scale of Arithmetic Results
-
-The precision, magnitude, and scale are computed during the evaluation
-of an arithmetic expression. Precision is the maximum number of digits
-in the expression. Magnitude is the number of digits to the left of the
-decimal point. Scale is the number of digits to the right of the decimal point.
-
-For example, a column declared as NUMERIC (18, 5) has a precision of 18,
-a magnitude of 13, and a scale of 5. As another example, the literal
-12345.6789 has a precision of 9, a magnitude of 5, and a scale of 4.
-
-The maximum precision for exact numeric data types is 128 digits. The
-maximum precision for the REAL data type is approximately 7 decimal
-digits, and the maximum precision for the DOUBLE PRECISION data type is
-approximately 16 digits.
-
-When {project-name} SQL encounters an arithmetic operator in an expression,
-it applies these rules (with the restriction that if the computed precision
-becomes greater than 18, the resulting precision is set to 18 and the
-resulting scale is the maximum of 0 and 18 - (_computed precision_ -
-_computed scale_)).
-
-
-* If the operator is + or -, the resulting scale is the maximum of the
-scales of the operands. The resulting precision is the maximum of the
-magnitudes of the operands, plus the scale of the result, plus 1.
-* If the operator is *, the resulting scale is the sum of the scales of
-the operands. The resulting precision is the sum of the magnitudes of
-the operands and the scale of the result.
-* If the operator is /, the resulting scale is the sum of the scale of
-the numerator and the magnitude of the denominator. The resulting
-magnitude is the sum of the magnitude of the numerator and the scale of
-the denominator.
-
-
-For example, if the numerator is NUMERIC (7, 3) and the denominator is
-NUMERIC (7, 5), the resulting scale is 3 plus 2 (or 5), and the
-resulting magnitude is 4 plus 5 (or 9). The expression result is NUMERIC
-(14, 5).
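-
-To see the division rule in action, a sketch with explicit casts:
-
-```
-SELECT CAST(1234.500 AS NUMERIC(7,3)) / CAST(12.34500 AS NUMERIC(7,5))
-FROM (VALUES(1)) t;
-```
-
-The quotient is typed NUMERIC (14, 5) and evaluates to 100.00000.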
-
-[[conversion_of_numeric_types_for_arithmetic_operations]]
-===== Conversion of Numeric Types for Arithmetic Operations
-
-{project-name} SQL automatically converts between floating-point numeric
-types (REAL and DOUBLE PRECISION) and other numeric types. All numeric
-values in the expression are first converted to binary, with the maximum
-precision needed anywhere in the evaluation.
-
-
-[[examples_of_numeric_value_expressions]]
-==== Examples of Numeric Value Expressions
-
-
-These are examples of numeric value expressions:
-
-[cols="40%l,60%"]
-|===
-| -57                      | Numeric literal.
-| salary * 1.10            | The product of the values in the SALARY column and a numeric literal.
-| unit_price * qty_ordered | The product of the values in the UNIT_PRICE and QTY_ORDERED columns.
-| 12 * (7 - 4)             | An expression whose operands are numeric literals.
-| COUNT (DISTINCT city)    | Function applied to the values in a column.
-|===
-
-
-<<<
-[[identifiers]]
-== Identifiers
-
-SQL identifiers are names used to identify tables, views, columns, and
-other SQL entities. The two basic types of identifiers are regular and
-delimited. A delimited identifier is enclosed in double quotes (").
-A special form, the case-insensitive delimited identifier, is used only
-for user names and role names. Regular, delimited, and case-insensitive
-delimited identifiers can contain up to 128 characters.
-
-[[regular_identifiers]]
-=== Regular Identifiers
-
-Regular identifiers begin with a letter (A through Z and a through z),
-but can also contain digits (0 through 9) or underscore characters (_).
-Regular identifiers are not case-sensitive. You cannot use a reserved
-word as a regular identifier.
-
-[[delimited_identifiers]]
-=== Delimited Identifiers
-
-Delimited identifiers are character strings that appear within double
-quote characters (") and consist of alphanumeric characters, including
-the underscore character (_) or a dash (-). Unlike regular identifiers,
-delimited identifiers are case-sensitive. {project-name} SQL does not support
-spaces or special characters in delimited identifiers given the
-constraints of the underlying HBase file system. You can use reserved
-words as delimited identifiers.
-
-[[case_insensitive_delimited_identifiers]]
-=== Case-Insensitive Delimited Identifiers
-
-Case-insensitive delimited identifiers, which are used for user names and
-roles, are character strings that appear within double quote characters
-(") and consist of alphanumeric characters
-(A through Z and a through z), digits (0 through 9), underscores (_), dashes (-), periods (.), at
-symbols (@), and forward slashes (/), except for the leading at sign (@)
-or leading forward slash (/) character.
-
-Unlike other delimited identifiers, case-insensitive-delimited
-identifiers are case-insensitive. Identifiers are up-shifted before
-being inserted into the SQL metadata. Thus, whether you specify a user's
-name as `"Penelope.Quan@company.com"`, `"PENELOPE.QUAN@company.com"`, or
-`"penelope.quan@company.com"`, the value stored in the metadata will be the
-same: `PENELOPE.QUAN@COMPANY.COM`.
-
-You can use reserved words as case-insensitive delimited identifiers.
-
-<<<
-[[examples_of_identifiers]]
-=== Examples of Identifiers
-
-* These are regular identifiers:
-+
-```
-mytable
-SALES2006
-Employee_Benefits_Selections
-CUSTOMER_BILLING_INFORMATION
-```
-+
-Because regular identifiers are case-insensitive, SQL treats all these
-identifiers as alternative representations of mytable:
-+
-```
-mytable     MYTABLE     MyTable     mYtAbLe
-```
-
-* These are delimited identifiers:
-+
-```
-"mytable"
-"table"
-"CUSTOMER-BILLING-INFORMATION"
-```
-+
-Because delimited identifiers are case-sensitive, SQL treats the
-identifier "mytable" as different from the identifiers "MYTABLE" or
-"MyTable".
-+
-You can use reserved words as delimited identifiers. For example, table
-is not allowed as a regular identifier, but "table" is allowed as a
-delimited identifier.
-
-
-<<<
-[[indexes]]
-== Indexes
-
-An index is an ordered set of pointers to rows of a table. Each index is
-based on the values in one or more columns. Indexes are transparent to
-DML syntax.
-
-A one-to-one correspondence always exists between index rows and base
-table rows.
-
-[[sql_indexes]]
-=== SQL Indexes
-
-Each row in a {project-name} SQL index contains:
-
-* The columns specified in the CREATE INDEX statement
-* The clustering key of the underlying table (the user-defined
-clustering key)
-
-An index name is an SQL identifier. Indexes have their own name space
-within a schema, so an index name might be the same as a table or
-constraint name. However, no two indexes in a schema can have the same
-name.
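-
-For instance, this pair of statements is legal because the index name INVENT
-occupies a different name space than the table name INVENT (a sketch; the
-names are illustrative):
-
-```
-CREATE TABLE invent (item_id INT NOT NULL PRIMARY KEY, qty INT);
-CREATE INDEX invent ON invent (qty);
-```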
-
-See <<create_index_statement,CREATE INDEX Statement>>.
-
-<<<
-[[keys]]
-== Keys
-
-[[clustering_keys]]
-=== Clustering Keys
-
-Every table has a clustering key, which is the set of columns that
-determine the order of the rows on disk. {project-name} SQL organizes records
-of a table or index by using a b-tree based on this clustering key.
-Therefore, the values of the clustering key act as logical row-ids.
-
-[[syskey]]
-=== SYSKEY
-
-When the STORE BY clause is specified with the _key-column-list_ clause,
-an additional column, called the SYSKEY, is appended to the
-_key-column-list_.
-
-A SYSKEY (or system-defined clustering key) is a clustering key column
-which is defined by {project-name} SQL rather than by the user. Its type is
-LARGEINT SIGNED. When you insert a record in a table, {project-name} SQL
-automatically generates a value for the SYSKEY column. You cannot supply
-the value.
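-
-For example, this sketch creates the table T4 used below; because STORE BY
-names a _key-column-list_, a SYSKEY column is appended to the clustering key:
-
-```
-CREATE TABLE t4 (a INT NOT NULL, b INT) STORE BY (a);
-INSERT INTO t4 VALUES (1, 2);
-```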
-
-You cannot specify a SYSKEY at insert time and you cannot update it
-after it has been generated. To see the value of the generated SYSKEY,
-include the SYSKEY column in the select list:
-
-```
-SELECT *, SYSKEY FROM t4;
-```
-
-[[index_keys]]
-=== Index Keys
-
-A one-to-one correspondence always exists between index rows and base
-table rows. Each row in a {project-name} SQL index contains:
-
-
-* The columns specified in the CREATE INDEX statement
-* The clustering (primary) key of the underlying table (the user-defined clustering key)
-
-
-For a non-unique index, the clustering key of the index is composed of
-both items. The clustering key cannot exceed 2048 bytes. Because the
-clustering key includes all the columns in the index, each index row is
-also limited to 2048 bytes.
-
-For varying-length character columns, the length referred to in these
-byte limits is the defined column length, not the stored length. (The
-stored length is the expanded length, which includes two extra bytes for
-storing the data length of the item.)
-
-See <<create_index_statement,CREATE INDEX Statement>>.
-
-[[primary_keys]]
-=== Primary Keys
-
-A primary key is the column or set of columns that define the uniqueness
-constraint for a table. The columns cannot contain nulls, and only one
-primary key constraint can exist on a table.
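-
-For example (a minimal sketch; the table name is illustrative):
-
-```
-CREATE TABLE orders
-( order_id INT NOT NULL PRIMARY KEY
-, qty INT
-);
-```
-
-ORDER_ID can never contain nulls, and ORDERS cannot be given a second
-primary key constraint.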
-
-<<<
-[[literals]]
-== Literals
-
-A literal is a constant you can use in an expression, in a statement, or
-as a parameter value. An SQL literal can be one of these data types:
-
-[cols="40%,60%"]
-|===
-| <<character_string_literals,Character String Literals>> | A series of characters enclosed in single quotes. +
- +
-Example: 'Planning'
-| <<datetime_literals,Datetime Literals>> | Begins with the keyword DATE, TIME, or TIMESTAMP, followed by a character string. +
- +
-Example: DATE '1990-01-22'
-| <<interval_literals,Interval Literals>> | Begins with the keyword INTERVAL, followed by a character string and an interval qualifier. +
- +
-Example: INTERVAL '2-7' YEAR TO MONTH
-| <<numeric_literals,Numeric Literals>> | A simple numeric literal (one without an exponent) or a numeric literal in scientific notation. +
- +
-Example: 99E-2
-|===
-
-[[character_string_literals]]
-=== Character String Literals
-
-A character string literal is a series of characters enclosed in single
-quotes.
-
-You can specify either a string of characters or a set of hexadecimal
-code values representing the characters in the string.
-
-```
-  [character-set | N]'string'
-| [character-set | N] X'hex-code-value. . . '
-| [character-set | N]
-    X'[space. . .]hex-code-value[[space. . .]hex-code-value. . .][space. . .]'
-```
-
-* `_character-set_`
-+
-specifies the character set ISO88591 or UTF8. The _character-set_
-specification of the string literal should correspond with the character
-set of the column definition, which is either ISO88591 or UTF8. If you
-omit the _character-set_ specification, {project-name} SQL initially assumes
-the ISO88591 character set if the string literal consists entirely of
-7-bit ASCII characters and UTF8 otherwise. (However, the initial
-assumption will later be changed if the string literal is used in a
-context that requires a character set different from the initial
-assumption.)
-
-* `N`
-+
-associates the string literal with the character set of the NATIONAL
-CHARACTER (NCHAR) data type. The character set for NCHAR is determined
-during the installation of {project-name} SQL. This value can be either UTF8
-(the default) or ISO88591.
-
-<<<
-* `'_string_'`
-+
-is a series of any input characters enclosed in single quotes. A single
-quote within a string is represented by two single quotes (''). A string
-can have a length of zero if you specify two single quotes ('') without
-a space in between.
-
-* `X`
-+
-indicates the hexadecimal string.
-
-* `'_hex-code-value_'`
-+
-represents the code value of a character in hexadecimal form enclosed in
-single quotes. It must contain an even number of hexadecimal digits. For
-ISO88591, each value must be two digits long. For UTF8, each value can
-be 2, 4, 6, or 8 hexadecimal digits long. If _hex-code-value_ is
-improperly formatted (for example, it contains an invalid hexadecimal
-digit or an odd number of hexadecimal digits), an error is returned.
-
-* `_space_`
-+
-is space sequences that can be added before or after _hex-code-value_
-for readability. The encoding for _space_ must be the TERMINAL_CHARSET
-for an interactive interface and the SQL module character set for the
-programmatic interface.
-
-[[considerations_for_character_string_literals]]
-==== Considerations for Character String Literals
-
-[[using_string_literals]]
-===== Using String Literals
-
-A string literal can be as long as a character column. See
-<<character_string_data_types,Character String Data Types>>.
-
-You can also use string literals in string value expressions, for
-example, in expressions that use the concatenation operator (||) or in
-expressions that use functions returning string values.
-
-When specifying string literals:
-
-* Do not put a space between the character set qualifier and the
-character string literal. If you use this character string literal in a
-statement, {project-name} SQL returns an error.
-* To specify a single quotation mark within a string literal, use two
-consecutive single quotation marks.
-* To specify a string literal whose length is more than one line,
-separate the literal into several smaller string literals, and use the
-concatenation operator (||) to concatenate them.
-* Case is significant in string literals. Lowercase letters are not
-equivalent to the corresponding uppercase letters.
-* Leading and trailing spaces within a string literal are significant.
-* Alternately, a string whose length is more than one line can be
-written as a literal followed by a space, CR, or tab character, followed
-by another string literal.
-
-[[examples_of_character_string_literals]]
-==== Examples of Character String Literals
-
-* These data type column specifications are shown with examples of
-literals that can be stored in the columns.
-+
-[cols="50%l,50%l",options="header"]
-|===
-| Character String Data Type | Character String Literal Example
-| CHAR (12) UPSHIFT          | 'PLANNING'
-| VARCHAR (18)               | 'NEW YORK'
-|===
-
-* These are string literals:
-+
-```
-'This is a string literal.'
-'abc^&*'
-'1234.56'
-'This literal contains '' a single quotation mark.'
-```
-
-* This is a string literal concatenated over three lines:
-+
-```
-'This literal is' || '
-in three parts,' ||
-'specified over three lines.'
-```
-
-* This is a hexadecimal string literal representing the VARCHAR pattern
-of the ISO88591 string 'Strauß':
-+
-```
-_ISO88591 X'53 74 72 61 75 DF'
-```
-
-<<<
-[[datetime_literals]]
-=== Datetime Literals
-
-A datetime literal is a DATE, TIME, or TIMESTAMP constant you can use in
-an expression, in a statement, or as a parameter value. Datetime
-literals have the same range of valid values as the corresponding
-datetime data types. You cannot use leading or trailing spaces within a
-datetime string (within the single quotes).
-
-A datetime literal begins with the DATE, TIME, or TIMESTAMP keyword and
-can appear in default, USA, or European format.
-
-```
-DATE 'date' | TIME 'time' | TIMESTAMP 'timestamp'
-
-date is:
-  yyyy-mm-dd                              Default
-| mm/dd/yyyy                              USA
-| dd.mm.yyyy                              European
-
-time is:
-  hh:mm:ss.msssss                         Default
-| hh:mm:ss.msssss [am | pm]               USA
-| hh.mm.ss.msssss                         European
-
-timestamp is:
-  yyyy-mm-dd hh:mm:ss.msssss              Default
-| mm/dd/yyyy hh:mm:ss.msssss [am | pm]    USA
-| dd.mm.yyyy hh.mm.ss.msssss              European
-```
-
-* `_date,time,timestamp_`
-+
-specify the datetime literal strings whose component fields are:
-+
-[cols="30%l,70%"]
-|===
-| yyyy   | Year, from 0001 to 9999
-| mm     | Month, from 01 to 12
-| dd     | Day, from 01 to 31
-| hh     | Hour, from 00 to 23
-| mm     | Minute, from 00 to 59
-| ss     | Second, from 00 to 59
-| msssss | Microsecond, from 000000 to 999999
-| am     | AM or am, indicating time from midnight to before noon
-| pm     | PM or pm, indicating time from noon to before midnight
-|===
-
-[[examples_of_datetime_literals]]
-==== Examples of Datetime Literals
-
-* These are DATE literals in default, USA, and European formats, respectively:
-+
-```
-DATE '2008-01-22' DATE '01/22/2008' DATE '22.01.2008'
-```
-
-* These are TIME literals in default, USA, and European formats, respectively:
-+
-```
-TIME '13:40:05'
-TIME '01:40:05 PM'
-TIME '13.40.05'
-```
-
-* These are TIMESTAMP literals in default, USA, and European formats, respectively:
-+
-```
-TIMESTAMP '2008-01-22 13:40:05'
-TIMESTAMP '01/22/2008 01:40:05 PM'
-TIMESTAMP '22.01.2008 13.40.05'
-```
-
-<<<
-[[interval_literals]]
-=== Interval Literals
-
-An interval literal is a constant of data type INTERVAL that represents
-a positive or negative duration of time as a year-month or day-time
-interval; it begins with the keyword INTERVAL optionally preceded or
-followed by a minus sign (for negative duration). You cannot include
-leading or trailing spaces within an interval string (within single
-quotes).
-
-```
-[-]INTERVAL [-]{'year-month' | 'day:time'} interval-qualifier
-
-year-month is:
-  years [-months] | months
-
-day:time is:
-  days [[:]hours [:minutes [:seconds [.fraction]]]]
-| hours [:minutes [:seconds [.fraction]]]
-| minutes [:seconds [.fraction]]
-| seconds [.fraction]
-
-interval-qualifier is:
-  start-field TO end-field | single-field
-
-start-field is:
-  {YEAR | MONTH | DAY | HOUR | MINUTE} [(leading-precision)]
-
-end-field is:
-  YEAR | MONTH | DAY | HOUR | MINUTE | SECOND [(fractional-precision)]
-
-single-field is:
-  start-field | SECOND [(leading-precision,fractional-precision)]
-```
-
-* `_start-field_ TO _end-field_`
-+
-must be year-month or day-time. The _start-field_ you specify must
-precede the _end-field_ you specify in the list of field names.
-
-* `{YEAR &#124; MONTH &#124; DAY &#124; HOUR &#124; MINUTE} [(_leading-precision_)]`
-+
-specifies the _start-field_. A _start-field_ can have a
-_leading-precision_ up to 18 digits (the maximum depends on the number
-of fields in the interval). The
-_leading-precision_ is the number of digits allowed in the
-_start-field_. The default for _leading-precision_ is 2.
-
-* `YEAR &#124; MONTH &#124; DAY &#124; HOUR &#124; MINUTE &#124; SECOND [(_fractional-precision_)]`
-+
-specifies the _end-field_. If the _end-field_ is SECOND, it can have a
-_fractional-precision_ up to 6 digits. The _fractional-precision_ is the
-number
-of digits of precision after the decimal point. The default for
-_fractional-precision_ is 6.
-
-* `_start-field_ &#124; SECOND [(_leading-precision_, _fractional-precision_)]`
-+
-specifies the _single-field_. If the _single-field_ is SECOND, the
-_leading-precision_ is the number of digits of precision before the
-decimal point, and the _fractional-precision_ is the number of digits of
-precision after the decimal point.
-+
-The default for _leading-precision_ is 2, and the default for
-_fractional-precision_ is 1.  The maximum for _leading-precision_ is 18,
-and the maximum for _fractional-precision_ is 6.
-+
-See <<interval_data_types,Interval Data Types>> and
-<<interval_value_expressions,Interval Value Expressions>>.
-
-* `'_year-month_' &#124; '_day:time_'`
-+
-specifies the date and time components of an interval literal. The day
-and hour fields can be separated by a space or a colon. The interval
-literal strings are:
-+
-[cols="15%l,85%"]
-|===
-| years | Unsigned integer that specifies a number of years. _years_ can be up to 18 digits, or 16 digits if _months_
-is the end-field. The maximum for the _leading-precision_ is specified within the interval qualifier by either YEAR(18)
-or YEAR(16) TO MONTH.
-| months | Unsigned integer that specifies a number of months. Used as a starting field, _months_ can have up to 18
-digits. The maximum for the _leading-precision_ is specified by MONTH(18). Used as an ending field, the value of _months_
-must be in the range 0 to 11.
-| days | Unsigned integer that specifies number of days. _days_ can have up to 18 digits if no end-field exists; 16 digits
-if _hours_ is the end-field; 14 digits if _minutes_ is the end-field; and 13-_f_ digits if _seconds_ is the end-field, where
-_f_ is the _fraction_ less than or equal to 6. These maximums are specified by DAY(18), DAY(16) TO HOUR, DAY(14) TO
-MINUTE, and DAY(13-_f_) TO SECOND(_f_).
-| hours | Unsigned integer that specifies a number of hours. Used as a starting field, _hours_ can have up to 18 digits if
-no end-field exists; 16 digits if _minutes_ is the end-field; and 14-_f_ digits if _seconds_ is the end-field, where _f_ is
-the _fraction_ less than or equal to 6. These maximums are specified by HOUR(18), HOUR(16) TO MINUTE, and HOUR(14-_f_) TO
-SECOND(_f_). Used as an ending field, the value of _hours_ must be in the range 0 to 23.
-| minutes | Unsigned integer that specifies a number of minutes. Used as a starting field, _minutes_ can have up to 18 digits
-if no end-field exists; and 16-_f_ digits if _seconds_ is the end-field, where _f_ is the _fraction_ less than or equal to 6.
-These maximums are specified by MINUTE(18), and MINUTE(16-_f_) TO SECOND(_f_). Used as an ending field, the value of _minutes_
-must be in the range 0 to 59.
-| seconds | Unsigned integer that specifies a number of seconds. Used as a starting field, _seconds_ can have up to 18 digits,
-minus the number of digits _f_ in the _fraction_ less than or equal to 6. This maximum is specified by SECOND(18-_f_, _f_). The
-value of _seconds_ must be in the range 0 to 59.9(_n_), where _n_ is the number of digits specified for seconds precision.
-| fraction | Unsigned integer that specifies a fraction of a second. When _seconds_ is used as an ending field, _fraction_ is
-limited to the number of digits specified by the _fractional-precision_ field following the SECOND keyword.
-|===
-
-<<<
-[[considerations_for_interval_literals]]
-==== Considerations for Interval Literals
-
-[[length_of_year_month_and_day_time_strings]]
-===== Length of Year-Month and Day-Time Strings
-
-An interval literal can contain a maximum of 18 digits, in the string
-following the INTERVAL keyword, plus a hyphen (-) that separates the
-year-month fields, and colons (:) that separate the day-time fields. You
-can also separate day and hour with a space.
-
-[[examples_of_interval_literals]]
-==== Examples of Interval Literals
-
-[cols="50%l,50%"]
-|===
-| INTERVAL '1' MONTH                       | Interval of 1 month
-| INTERVAL '7' DAY                         | Interval of 7 days
-| INTERVAL '2-7' YEAR TO MONTH             | Interval of 2 years, 7 months
-| INTERVAL '5:2:15:36.33' DAY TO SECOND(2) | Interval of 5 days, 2 hours, 15 minutes, and 36.33 seconds
-| INTERVAL - '5' DAY                       | Interval that subtracts 5 days
-| INTERVAL '100' DAY(3)                    | Interval of 100 days. This 

<TRUNCATED>


[06/15] incubator-trafodion git commit: Major reorganization of the Client Installation Guide.

Posted by gt...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/sql_reference/src/asciidoc/_chapters/sql_clauses.adoc
----------------------------------------------------------------------
diff --git a/docs/sql_reference/src/asciidoc/_chapters/sql_clauses.adoc b/docs/sql_reference/src/asciidoc/_chapters/sql_clauses.adoc
index dbe39a3..450dd9f 100644
--- a/docs/sql_reference/src/asciidoc/_chapters/sql_clauses.adoc
+++ b/docs/sql_reference/src/asciidoc/_chapters/sql_clauses.adoc
@@ -1,1432 +1,1432 @@
-////
-/**
-* @@@ START COPYRIGHT @@@
-*
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*   http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing,
-* software distributed under the License is distributed on an
-* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-* KIND, either express or implied.  See the License for the
-* specific language governing permissions and limitations
-* under the License.
-*
-* @@@ END COPYRIGHT @@@
-*/
-////
-
-[[sql_clauses]]
-=  SQL Clauses
-
-Clauses are used by {project-name} SQL statements to specify default values,
-ways to sample or sort data, how to store physical data, and other
-details.
-
-This section describes:
-
-* <<default_clause,DEFAULT Clause>> specifies a default value for a column being created.
-* <<format_clause,FORMAT Clause>> specifies the format to use.
-* <<sample_clause,SAMPLE Clause>> specifies the sampling method used to select a subset of the intermediate result table of a SELECT statement.
-* <<sequence_by_clause,SEQUENCE BY Clause>> specifies the order in which to sort rows of the intermediate result table for calculating sequence functions.
-* <<transpose_clause,TRANSPOSE Clause>> generates, for each row of the SELECT source table, a row for each item in the transpose item list.
- 
-[[default_clause]]
-== DEFAULT Clause
-
-The DEFAULT option of the CREATE TABLE or ALTER TABLE _table-name_ ADD
-COLUMN statement specifies a default value for a column being created.
-
-The default value is used when a row is inserted in the table without a value for the column.
-
-```
-DEFAULT default | NO DEFAULT
-
-default is:
-  literal
-| NULL
-| CURRENT_DATE
-| CURRENT_TIME
-| CURRENT_TIMESTAMP
-```
-
-* `NO DEFAULT`
-+
-specifies the column has no default value. You cannot specify NO DEFAULT
-in an ALTER TABLE statement. See <<alter_table_statement,ALTER TABLE Statement>>.
-
-[[syntax_for_default_clause]]
-=== Syntax for Default Clause
-
-* `DEFAULT _literal_`
-+
-is a literal of a data type compatible with the data type of the
-associated column.
-+
-For a character column, _literal_ must be a string literal of no more
-than 240 characters or the length of the column, whichever is less. The
-maximum length of a default value for a character column is 240 bytes
-(minus control characters) or the length of the column, whichever is
-less. Control characters consist of the character set prefix and the
-single quote delimiters found in the text itself.
-+
-For a numeric column, _literal_ must be a numeric literal that does not
-exceed the defined length of the column. The number of digits to the
-right of the decimal point must not exceed the scale of the column, and
-the number of digits to the left of the decimal point must not exceed
-the number in the length (or length minus scale, if you specified scale
-for the column).
-+
-For a datetime column, _literal_ must be a datetime literal with a
-precision that matches the precision of the column.
-+
-For an INTERVAL column, _literal_ must be an INTERVAL literal that has
-the range of INTERVAL fields defined for the column.
-
-* `DEFAULT NULL`
-+
-specifies NULL as the default. This default can occur only with a column
-that allows null.
-
-* `DEFAULT CURRENT_DATE`
-+
-specifies the default value for the column as the value returned by the
-CURRENT_DATE function at the time of the operation that assigns a value
-to the column. This default can occur only with a column whose data type
-is DATE.
-
-* `DEFAULT CURRENT_TIME`
-+
-specifies the default value for the column as the value returned by the
-CURRENT_TIME function at the time of the operation that assigns a value
-to the column. This default can occur only with a column whose data type
-is TIME.
-
-* `DEFAULT CURRENT_TIMESTAMP`
-+
-specifies the default value for the column as the value returned by the
-CURRENT_TIMESTAMP function at the time of the operation that assigns a
-value to the column. This default can occur only with a column whose
-data type is TIMESTAMP.
-
-[[examples_of_default]]
-=== Examples of DEFAULT
-
-* This example uses DEFAULT clauses on CREATE TABLE to specify default column values:
-+
-```
-CREATE TABLE items
-( item_id CHAR(12) NO DEFAULT
-, description CHAR(50) DEFAULT NULL
-, num_on_hand INTEGER DEFAULT 0 NOT NULL
-) ;
-```
-
-* This example uses DEFAULT clauses on CREATE TABLE to specify default column values:
-+
-```
-CREATE TABLE persnl.project
-( projcode NUMERIC (4) UNSIGNED NO DEFAULT NOT NULL
-, empnum NUMERIC (4) UNSIGNED NO DEFAULT NOT NULL
-, projdesc VARCHAR (18) DEFAULT NULL
-, start_date DATE DEFAULT CURRENT_DATE
-, ship_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
-, est_complete INTERVAL DAY DEFAULT INTERVAL '30' DAY
-, PRIMARY KEY (projcode)
-) ;
-```
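-
-With the second table above, an INSERT that omits columns picks up the
-declared defaults (the inserted values are illustrative):
-
-```
-INSERT INTO persnl.project (projcode, empnum) VALUES (2000, 23);
-```
-
-PROJDESC is set to NULL, START_DATE to the current date, SHIP_TIMESTAMP to the
-current timestamp, and EST_COMPLETE to a 30-day interval. Omitting PROJCODE or
-EMPNUM would raise an error because those columns specify NO DEFAULT.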
-
-<<<
-[[format_clause]]
-== FORMAT Clause
-
-The FORMAT clause specifies the output format for DATE values. It can
-also be used to specify the length of character output or to specify
-separating the digits of integer output with colons.
-
-* Date Formats:
-+
-```
-(FORMAT 'format-string') |
-
-(DATE, FORMAT 'format-string')
-
-format-string for Date Formats is:
-  YYYY-MM-DD
-  MM/DD/YYYY
-  YY/MM/DD
-  YYYY/MM/DD
-  YYYYMMDD
-  DD.MM.YYYY
-  DD-MM-YYYY
-  DD-MMM-YYYY
-```
-
-* Other Formats:
-+
-```
-(FORMAT 'format-string')
-
-format-string for other formats is:
-  XXX
-  99:99:99:99
- -99:99:99:99
-```
-
-* `YYYY-MM-DD`
-+
-specifies that the FORMAT clause output format is _year-month-day_.
-
-* `MM/DD/YYYY`
-+
-specifies that the FORMAT clause output format is _month/day/year_.
-
-* `YY/MM/DD`
-+
-specifies that the FORMAT clause output format is _year/month/day_.
-
-* `YYYY/MM/DD`
-+
-specifies that the FORMAT clause output format is _year/month/day_.
-
-* `YYYYMMDD`
-+
-specifies that the FORMAT clause output format is _yearmonthday_.
-
-* `DD.MM.YYYY`
-+
-specifies that the FORMAT clause output format is _day.month.year_.
-
-* `DD-MM-YYYY`
-+
-specifies that the FORMAT clause output format is _day-month-year_.
-
-* `DD-MMM-YYYY`
-+
-specifies that the FORMAT clause output format is _day-month-year_, with the month as a three-letter abbreviation.
-
-* `XXX`
-+
-specifies that the FORMAT clause output format is a string format. The
-input must be a numeric or string value.
-
-* `99:99:99:99`
-+
-specifies that the FORMAT clause output format is a timestamp. The input
-must be a numeric value.
-
-* `-99:99:99:99`
-+
-specifies that the FORMAT clause output format is a timestamp. The input
-must be a numeric value.
-
-[[considerations_for_date_formats]]
-=== Considerations for Date Formats
-
-The expression preceding the (FORMAT '_format-string_') clause must be
-a DATE value.
-
-The expression preceding the (DATE, FORMAT _'format-string_') clause
-must be a quoted string in the USA, EUROPEAN, or DEFAULT date format.
-
-[[considerations_for_other_formats]]
-=== Considerations for Other Formats
-
-For XXX, the expression preceding the (FORMAT _'format-string_')
-clause must be a numeric value or a string value.
-
-For 99:99:99:99 and -99:99:99:99, the expression preceding the (FORMAT
-_'format-string_') clause must be a numeric value.
-
-[[examples_of_format]]
-=== Examples of FORMAT
-
-* The format string 'XXX' in this example will yield a sample result of abc:
-+
-```
-SELECT 'abcde' (FORMAT 'XXX') FROM (VALUES(1)) t;
-```
-
-* The format string 'YYYY-MM-DD' in this example will yield a sample result of 2008-07-17.
-+
-```
-SELECT CAST('2008-07-17' AS DATE) (FORMAT 'YYYY-MM-DD') FROM (VALUES(1)) t;
-```
-
-* The format string 'MM/DD/YYYY' in this example will yield a sample result of 07/17/2008.
-+
-```
-SELECT '2008-07-17' (DATE, FORMAT 'MM/DD/YYYY') FROM (VALUES(1)) t;
-```
-
-* The format string 'YY/MM/DD' in this example will yield a sample result of 08/07/17.
-+
-```
-SELECT '2008-07-17'(DATE, FORMAT 'YY/MM/DD') FROM (VALUES(1)) t;
-```
-
-* The format string 'YYYY/MM/DD' in this example will yield a sample result of 2008/07/17.
-+
-```
-SELECT '2008-07-17' (DATE, FORMAT 'YYYY/MM/DD') FROM (VALUES(1)) t;
-```
-
-* The format string 'YYYYMMDD' in this example will yield a sample result of 20080717.
-+
-```
-SELECT '2008-07-17' (DATE, FORMAT 'YYYYMMDD') FROM (VALUES(1)) t;
-```
-
-* The format string 'DD.MM.YYYY' in this example will yield a sample result of 17.07.2008.
-+
-```
-SELECT '2008-07-17' (DATE, FORMAT 'DD.MM.YYYY') FROM (VALUES(1)) t;
-```
-
-* The format string 'DD-MMM-YYYY' in this example will yield a sample result of 17-JUL-2008.
-+
-```
-SELECT '2008-07-17' (DATE, FORMAT 'DD-MMM-YYYY') FROM (VALUES(1)) t;
-```
-
-* The format string '99:99:99:99' in this example will yield a sample result of 12:34:56:78.
-+
-```
-SELECT 12345678 (FORMAT '99:99:99:99') FROM (VALUES(1)) t;
-```
-
-* The format string '-99:99:99:99' in this example will yield a sample result of -12:34:56:78.
-+
-```
-SELECT (-12345678) (FORMAT '-99:99:99:99') FROM (VALUES(1)) t;
-```
-
-<<<
-[[sample_clause]]
-== SAMPLE Clause
-
-The SAMPLE clause of the SELECT statement specifies the sampling method
-used to select a subset of the intermediate result table of a SELECT
-statement. The intermediate result table consists of the rows returned
-by a WHERE clause or, if no WHERE clause exists, the FROM clause. See
-<<select_statement,SELECT Statement>>.
-
-SAMPLE is a {project-name} SQL extension.
-
-```
-SAMPLE sampling-method
-
-sampling-method is:
-  RANDOM percent-size
-| FIRST rows-size
-        [SORT BY colname [ASC[ENDING]|DESC[ENDING]]
-          [,colname [ASC[ENDING] | DESC[ENDING]]]...]
-| PERIODIC rows-size EVERY number-rows ROWS
-           [SORT BY colname [ASC[ENDING] | DESC[ENDING]] 
-             [,colname [ASC[ENDING] | DESC[ENDING]]]...]
-
-percent-size is:
-  percent-result PERCENT [ROWS]
-| BALANCE WHEN condition
-    THEN percent-result PERCENT [ROWS]
-    [WHEN condition THEN percent-result PERCENT [ROWS]]... 
-    [ELSE percent-result PERCENT [ROWS]] END
-
-rows-size is:
-  number-rows ROWS
-| BALANCE WHEN condition THEN number-rows ROWS 
-          [WHEN condition THEN number-rows ROWS]... 
-          [ELSE number-rows ROWS] END
-```
-
-* `RANDOM _percent-size_`
-+
-directs {project-name} SQL to choose rows randomly (each row having an
-unbiased probability of being chosen) without replacement from the
-result table. The sampling size is determined by the _percent-size_,
-defined as:
-
-* `_percent-result_ PERCENT [ROWS] | BALANCE WHEN _condition_ THEN
-_percent-result_ PERCENT [ROWS] [WHEN _condition_ THEN _percent-result_
-PERCENT [ROWS]]&#8230; [ELSE _percent-result_ PERCENT [ROWS]] END`
-+
-specifies the value of the size for RANDOM sampling by using a percent
-of the result table. The value _percent-result_ must be a numeric
-literal.
-+
-You can determine the actual size of the sample. Suppose that _N_ rows
-exist in the intermediate result table. Each row is picked with a
-probability of _r_%, where _r_ is the sample size in PERCENT.
-Therefore, the actual size of the resulting sample is approximately _r_% of _N_. 
-The number of rows picked follows a binomial distribution with
-mean equal to _r_ * _N_ / 100.
-+
-If you specify a sample size greater than 100 PERCENT, {project-name} SQL
-returns all the rows in the result table plus duplicate rows. The
-duplicate rows are picked from the result table according to the
-specified sampling method. This technique is called oversampling.
-
-** `ROWS`
-+
-specifies row sampling. Row sampling is the default.
-
-** `BALANCE`
-+
-If you specify a BALANCE expression, {project-name} SQL performs stratified
-sampling. The intermediate result table is divided into disjoint strata
-based on the WHEN conditions.
-+
-Each stratum is sampled independently by using the sampling size. For a
-given row, the stratum to which it belongs is determined by the first
-WHEN condition that is true for that row, if a true condition exists. If
-no true condition exists, the row belongs to the ELSE stratum.
-
-* `FIRST _rows-size_ [SORT BY _colname_ [ASC[ENDING] | DESC[ENDING]]
-[,_colname_ [ASC[ENDING] | DESC[ENDING]]]&#8230;]`
-+
-directs {project-name} SQL to choose the first rows from the result table.
-You can specify the order of the rows to sample. Otherwise, {project-name}
-SQL chooses an arbitrary order. The sampling size is determined by the
-_rows-size_, defined as:
-
-* `_number-rows_ ROWS | BALANCE WHEN _condition_ THEN _number-rows_ ROWS
-[WHEN _condition_ THEN _number-rows_ ROWS]&#8230; [ELSE _number-rows_ ROWS] END`
-+
-specifies the value of the size for FIRST sampling by using the number
-of rows intended in the sample. The value _number-rows_ must be an
-integer literal.
-+
-You can determine the actual size of the sample. Suppose that _N_ rows
-exist in the intermediate result table. If the size _s_ of the sample is
-specified as a number of rows, the actual size of the resulting sample
-is the minimum of _s_ and _N_.
-
-* `PERIODIC _rows-size_ EVERY _number-rows_ ROWS [SORT BY _colname_
-[ASC[ENDING] | DESC[ENDING]] [,_colname_ [ASC[ENDING] |
-DESC[ENDING]]]&#8230;]`
-+
-directs {project-name} SQL to choose the first rows from each block (or
-period) of contiguous rows. This sampling method is equivalent to a
-separate FIRST sampling for each period, and the _rows-size_ is defined
-as in FIRST sampling.
-+
-The size of the period is specified as a number of rows. You can specify
-the order of the rows to sample. Otherwise, {project-name} SQL chooses an
-arbitrary order.
-+
-<<<
-+
-You can determine the actual size of the sample. Suppose that _N_ rows
-exist in the intermediate result table. If the size _s_ of the sample is
-specified as a number of rows and the size _p_ of the period is
-specified as a number of rows, the actual size of the resulting sample
-is calculated as:
-+
-```
-FLOOR (N/p) * s + _minimum_ (MOD (N, p), s)
-```
-+
-_minimum_ in this expression is used simply as the mathematical
-minimum of two values.
-
-[[considerations_for_sample]]
-=== Considerations for SAMPLE
-
-[[sample_rows]]
-==== Sample Rows
-
-In general, when you use the SAMPLE clause, the same query returns
-different sets of rows for each execution. The same set of rows is
-returned only when you use the FIRST and PERIODIC sampling methods with
-the SORT BY option, where no duplicates exist in the specified column
-combination for the sort.
-
+////
+/**
+* @@@ START COPYRIGHT @@@
+*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*   http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing,
+* software distributed under the License is distributed on an
+* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+* KIND, either express or implied.  See the License for the
+* specific language governing permissions and limitations
+* under the License.
+*
+* @@@ END COPYRIGHT @@@
+*/
+////
+
+[[sql_clauses]]
+=  SQL Clauses
+
+Clauses are used by {project-name} SQL statements to specify default values,
+ways to sample or sort data, how to store physical data, and other
+details.
+
+This section describes:
+
+* <<default_clause,DEFAULT Clause>> specifies a default value for a column being created.
+* <<format_clause,FORMAT Clause>> specifies the output format for date, string, or numeric values.
+* <<sample_clause,SAMPLE Clause>> specifies the sampling method used to select a subset of the intermediate result table of a SELECT statement.
+* <<sequence_by_clause,SEQUENCE BY Clause>> specifies the order in which to sort rows of the intermediate result table for calculating sequence functions.
+* <<transpose_clause,TRANSPOSE Clause>> generates, for each row of the SELECT source table, a row for each item in the transpose item list.
+ 
+[[default_clause]]
+== DEFAULT Clause
+
+The DEFAULT option of the CREATE TABLE or ALTER TABLE _table-name_ ADD
+COLUMN statement specifies a default value for a column being created.
+
+The default value is used when a row is inserted in the table without a value for the column.
+
+```
+DEFAULT default | NO DEFAULT
+
+default is:
+  literal
+| NULL
+| CURRENT_DATE
+| CURRENT_TIME
+| CURRENT_TIMESTAMP
+```
+
+* `NO DEFAULT`
++
+specifies the column has no default value. You cannot specify NO DEFAULT
+in an ALTER TABLE statement. See <<alter_table_statement,ALTER TABLE Statement>>.
+
+[[syntax_for_default_clause]]
+=== Syntax for Default Clause
+
+* `DEFAULT _literal_`
++
+is a literal of a data type compatible with the data type of the
+associated column.
++
+For a character column, _literal_ must be a string literal of no more
+than 240 characters or the length of the column, whichever is less. The
+maximum length of a default value for a character column is 240 bytes
+(minus control characters) or the length of the column, whichever is
+less. Control characters consist of character set prefixes and the
+single quote delimiters found in the text itself.
++
+For a numeric column, _literal_ must be a numeric literal that does not
+exceed the defined length of the column. The number of digits to the
+right of the decimal point must not exceed the scale of the column, and
+the number of digits to the left of the decimal point must not exceed
+the declared length (or the length minus the scale, if you specified
+scale for the column).
++
+For a datetime column, _literal_ must be a datetime literal with a
+precision that matches the precision of the column.
++
+For an INTERVAL column, _literal_ must be an INTERVAL literal that has
+the range of INTERVAL fields defined for the column.
+
+* `DEFAULT NULL`
++
+specifies NULL as the default. This default can occur only with a column
+that allows null.
+
+* `DEFAULT CURRENT_DATE`
++
+specifies the default value for the column as the value returned by the
+CURRENT_DATE function at the time of the operation that assigns a value
+to the column. This default can occur only with a column whose data type
+is DATE.
+
+* `DEFAULT CURRENT_TIME`
++
+specifies the default value for the column as the value returned by the
+CURRENT_TIME function at the time of the operation that assigns a value
+to the column. This default can occur only with a column whose data type
+is TIME.
+
+* `DEFAULT CURRENT_TIMESTAMP`
++
+specifies the default value for the column as the value returned by the
+CURRENT_TIMESTAMP function at the time of the operation that assigns a
+value to the column. This default can occur only with a column whose
+data type is TIMESTAMP.
+
+[[examples_of_default]]
+=== Examples of DEFAULT
+
+* This example uses DEFAULT clauses on CREATE TABLE to specify default column values:
++
+```
+CREATE TABLE items
+( item_id CHAR(12) NO DEFAULT
+, description CHAR(50) DEFAULT NULL
+, num_on_hand INTEGER DEFAULT 0 NOT NULL
+) ;
+```
+
+* This example uses DEFAULT clauses on CREATE TABLE to specify datetime and interval default values:
++
+```
+CREATE TABLE persnl.project
+( projcode NUMERIC (4) UNSIGNED NO DEFAULT NOT NULL
+, empnum NUMERIC (4) UNSIGNED NO DEFAULT NOT NULL
+, projdesc VARCHAR (18) DEFAULT NULL
+, start_date DATE DEFAULT CURRENT_DATE
+, ship_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
+, est_complete INTERVAL DAY DEFAULT INTERVAL '30' DAY
+, PRIMARY KEY (projcode)
+) ;
+```
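++
+For example (a minimal sketch, assuming the PERSNL.PROJECT table above),
+an INSERT that omits the columns with defaults lets those defaults fill in:
++
+```
+INSERT INTO persnl.project (projcode, empnum)
+VALUES (1000, 23);
+
+-- PROJDESC is set to NULL, START_DATE to CURRENT_DATE,
+-- SHIP_TIMESTAMP to CURRENT_TIMESTAMP, and EST_COMPLETE
+-- to INTERVAL '30' DAY.
+```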
+
+<<<
+[[format_clause]]
+== FORMAT Clause
+
+The FORMAT clause specifies the output format for DATE values. It can
+also be used to specify the length of character output or to specify
+separating the digits of integer output with colons.
+
+* Date Formats:
++
+```
+  (FORMAT 'format-string')
+| (DATE, FORMAT 'format-string')
+
+format-string for Date Formats is:
+  YYYY-MM-DD
+  MM/DD/YYYY
+  YY/MM/DD
+  YYYY/MM/DD
+  YYYYMMDD
+  DD.MM.YYYY
+  DD-MM-YYYY
+  DD-MMM-YYYY
+```
+
+* Other Formats:
++
+```
+(FORMAT 'format-string')
+
+format-string for other formats is:
+  XXX
+  99:99:99:99
+ -99:99:99:99
+```
+
+* `YYYY-MM-DD`
++
+specifies that the FORMAT clause output format is _year-month-day_.
+
+* `MM/DD/YYYY`
++
+specifies that the FORMAT clause output format is _month/day/year_.
+
+* `YY/MM/DD`
++
+specifies that the FORMAT clause output format is _year/month/day_.
+
+* `YYYY/MM/DD`
++
+specifies that the FORMAT clause output format is _year/month/day_.
+
+* `YYYYMMDD`
++
+specifies that the FORMAT clause output format is _yearmonthday_.
+
+* `DD.MM.YYYY`
++
+specifies that the FORMAT clause output format is _day.month.year_.
+
+* `DD-MM-YYYY`
++
+specifies that the FORMAT clause output format is _day-month-year_.
+
+* `DD-MMM-YYYY`
++
+specifies that the FORMAT clause output format is _day-month-year_, with the month as a three-letter abbreviation.
+
+* `XXX`
++
+specifies that the FORMAT clause output format is a string format. The
+input must be a numeric or string value.
+
+* `99:99:99:99`
++
+specifies that the FORMAT clause output format is a timestamp. The input
+must be a numeric value.
+
+* `-99:99:99:99`
++
+specifies that the FORMAT clause output format is a timestamp. The input
+must be a numeric value.
+
+[[considerations_for_date_formats]]
+=== Considerations for Date Formats
+
+The expression preceding the (FORMAT '_format-string_') clause must be
+a DATE value.
+
+The expression preceding the (DATE, FORMAT '_format-string_') clause
+must be a quoted string in the USA, EUROPEAN, or DEFAULT date format.
+
+[[considerations_for_other_formats]]
+=== Considerations for Other Formats
+
+For XXX, the expression preceding the (FORMAT '_format-string_')
+clause must be a numeric value or a string value.
+
+For 99:99:99:99 and -99:99:99:99, the expression preceding the (FORMAT
+'_format-string_') clause must be a numeric value.
+
+[[examples_of_format]]
+=== Examples of FORMAT
+
+* The format string 'XXX' in this example will yield a sample result of abc:
++
+```
+SELECT 'abcde' (FORMAT 'XXX') FROM (VALUES(1)) t;
+```
+
+* The format string 'YYYY-MM-DD' in this example will yield a sample result of 2008-07-17.
++
+```
+SELECT CAST('2008-07-17' AS DATE) (FORMAT 'YYYY-MM-DD') FROM (VALUES(1)) t;
+```
+
+* The format string 'MM/DD/YYYY' in this example will yield a sample result of 07/17/2008.
++
+```
+SELECT '2008-07-17' (DATE, FORMAT 'MM/DD/YYYY') FROM (VALUES(1)) t;
+```
+
+* The format string 'YY/MM/DD' in this example will yield a sample result of 08/07/17.
++
+```
+SELECT '2008-07-17'(DATE, FORMAT 'YY/MM/DD') FROM (VALUES(1)) t;
+```
+
+* The format string 'YYYY/MM/DD' in this example will yield a sample result of 2008/07/17.
++
+```
+SELECT '2008-07-17' (DATE, FORMAT 'YYYY/MM/DD') FROM (VALUES(1)) t;
+```
+
+* The format string 'YYYYMMDD' in this example will yield a sample result of 20080717.
++
+```
+SELECT '2008-07-17' (DATE, FORMAT 'YYYYMMDD') FROM (VALUES(1)) t;
+```
+
+* The format string 'DD.MM.YYYY' in this example will yield a sample result of 17.07.2008.
++
+```
+SELECT '2008-07-17' (DATE, FORMAT 'DD.MM.YYYY') FROM (VALUES(1)) t;
+```
+
+* The format string 'DD-MMM-YYYY' in this example will yield a sample result of 17-JUL-2008.
++
+```
+SELECT '2008-07-17' (DATE, FORMAT 'DD-MMM-YYYY') FROM (VALUES(1)) t;
+```
+
+* The format string '99:99:99:99' in this example will yield a sample result of 12:34:56:78.
++
+```
+SELECT 12345678 (FORMAT '99:99:99:99') FROM (VALUES(1)) t;
+```
+
+* The format string '-99:99:99:99' in this example will yield a sample result of -12:34:56:78.
++
+```
+SELECT (-12345678) (FORMAT '-99:99:99:99') FROM (VALUES(1)) t;
+```
+
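+FORMAT can also be applied to a column rather than a literal. For
+instance (a minimal sketch, assuming the PERSNL.PROJECT table from the
+DEFAULT examples above), this query displays START_DATE in
+_day.month.year_ format:
+
+```
+SELECT start_date (FORMAT 'DD.MM.YYYY') FROM persnl.project;
+```
+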
+<<<
+[[sample_clause]]
+== SAMPLE Clause
+
+The SAMPLE clause of the SELECT statement specifies the sampling method
+used to select a subset of the intermediate result table of a SELECT
+statement. The intermediate result table consists of the rows returned
+by a WHERE clause or, if no WHERE clause exists, the FROM clause. See
+<<select_statement,SELECT Statement>>.
+
+SAMPLE is a {project-name} SQL extension.
+
+```
+SAMPLE sampling-method
+
+sampling-method is:
+  RANDOM percent-size
+| FIRST rows-size
+        [SORT BY colname [ASC[ENDING]|DESC[ENDING]]
+          [,colname [ASC[ENDING] | DESC[ENDING]]]...]
+| PERIODIC rows-size EVERY number-rows ROWS
+           [SORT BY colname [ASC[ENDING] | DESC[ENDING]] 
+             [,colname [ASC[ENDING] | DESC[ENDING]]]...]
+
+percent-size is:
+  percent-result PERCENT [ROWS]
+| BALANCE WHEN condition
+    THEN percent-result PERCENT [ROWS]
+    [WHEN condition THEN percent-result PERCENT [ROWS]]... 
+    [ELSE percent-result PERCENT [ROWS]] END
+
+rows-size is:
+  number-rows ROWS
+| BALANCE WHEN condition THEN number-rows ROWS 
+          [WHEN condition THEN number-rows ROWS]... 
+          [ELSE number-rows ROWS] END
+```
+
+* `RANDOM _percent-size_`
++
+directs {project-name} SQL to choose rows randomly (each row having an
+unbiased probability of being chosen) without replacement from the
+result table. The sampling size is determined by the _percent-size_,
+defined as:
+
+* `_percent-result_ PERCENT [ROWS] | BALANCE WHEN _condition_ THEN
+_percent-result_ PERCENT [ROWS] [WHEN _condition_ THEN _percent-result_
+PERCENT [ROWS]]&#8230; [ELSE _percent-result_ PERCENT [ROWS]] END`
++
+specifies the value of the size for RANDOM sampling by using a percent
+of the result table. The value _percent-result_ must be a numeric
+literal.
++
+You can determine the actual size of the sample. Suppose that _N_ rows
+exist in the intermediate result table. Each row is picked with a
+probability of _r_%, where _r_ is the sample size in PERCENT.
+Therefore, the actual size of the resulting sample is approximately _r_% of _N_. 
+The number of rows picked follows a binomial distribution with
+mean equal to _r_ * _N_ / 100.
++
+If you specify a sample size greater than 100 PERCENT, {project-name} SQL
+returns all the rows in the result table plus duplicate rows. The
+duplicate rows are picked from the result table according to the
+specified sampling method. This technique is called oversampling.
+
+** `ROWS`
++
+specifies row sampling. Row sampling is the default.
+
+** `BALANCE`
++
+If you specify a BALANCE expression, {project-name} SQL performs stratified
+sampling. The intermediate result table is divided into disjoint strata
+based on the WHEN conditions.
++
+Each stratum is sampled independently by using the sampling size. For a
+given row, the stratum to which it belongs is determined by the first
+WHEN condition that is true for that row, if a true condition exists. If
+no true condition exists, the row belongs to the ELSE stratum.
+
+* `FIRST _rows-size_ [SORT BY _colname_ [ASC[ENDING] | DESC[ENDING]]
+[,_colname_ [ASC[ENDING] | DESC[ENDING]]]&#8230;]`
++
+directs {project-name} SQL to choose the first rows from the result table.
+You can specify the order of the rows to sample. Otherwise, {project-name}
+SQL chooses an arbitrary order. The sampling size is determined by the
+_rows-size_, defined as:
+
+* `_number-rows_ ROWS | BALANCE WHEN _condition_ THEN _number-rows_ ROWS
+[WHEN _condition_ THEN _number-rows_ ROWS]&#8230; [ELSE _number-rows_ ROWS] END`
++
+specifies the value of the size for FIRST sampling by using the number
+of rows intended in the sample. The value _number-rows_ must be an
+integer literal.
++
+You can determine the actual size of the sample. Suppose that _N_ rows
+exist in the intermediate result table. If the size _s_ of the sample is
+specified as a number of rows, the actual size of the resulting sample
+is the minimum of _s_ and _N_.
+
+* `PERIODIC _rows-size_ EVERY _number-rows_ ROWS [SORT BY _colname_
+[ASC[ENDING] | DESC[ENDING]] [,_colname_ [ASC[ENDING] |
+DESC[ENDING]]]&#8230;]`
++
+directs {project-name} SQL to choose the first rows from each block (or
+period) of contiguous rows. This sampling method is equivalent to a
+separate FIRST sampling for each period, and the _rows-size_ is defined
+as in FIRST sampling.
++
+The size of the period is specified as a number of rows. You can specify
+the order of the rows to sample. Otherwise, {project-name} SQL chooses an
+arbitrary order.
++
+<<<
++
+You can determine the actual size of the sample. Suppose that _N_ rows
+exist in the intermediate result table. If the size _s_ of the sample is
+specified as a number of rows and the size _p_ of the period is
+specified as a number of rows, the actual size of the resulting sample
+is calculated as:
++
+```
+FLOOR (N/p) * s + _minimum_ (MOD (N, p), s)
+```
++
+_minimum_ in this expression is used simply as the mathematical
+minimum of two values.
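++
+As a worked check of this formula, with _N_ = 62 rows in the
+intermediate result, a sample size of 5 ROWS, and a period of 20 ROWS,
+the sample contains FLOOR(62/20) * 5 + minimum(MOD(62,20), 5) =
+3*5 + minimum(2,5) = 17 rows, which matches the PERIODIC example below.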
+
+[[considerations_for_sample]]
+=== Considerations for SAMPLE
+
+[[sample_rows]]
+==== Sample Rows
+
+In general, when you use the SAMPLE clause, the same query returns
+different sets of rows for each execution. The same set of rows is
+returned only when you use the FIRST and PERIODIC sampling methods with
+the SORT BY option, where no duplicates exist in the specified column
+combination for the sort.
+
+[[examples_of_sample]]
+=== Examples of SAMPLE
+
+* Suppose that the data-mining tables SALESPERSON, SALES, and DEPARTMENT
+have been created as:
++
+```
+CREATE TABLE trafodion.mining.salesperson
+( empid NUMERIC (4) UNSIGNED NOT NULL
+, dnum NUMERIC (4) UNSIGNED NOT NULL
+, salary NUMERIC (8,2) UNSIGNED
+, age INTEGER
+, sex CHAR (6)
+, PRIMARY KEY (empid) );
+
+CREATE TABLE trafodion.mining.sales
+( empid NUMERIC (4) UNSIGNED NOT NULL
+, product VARCHAR (20)
+, region CHAR (4)
+, amount NUMERIC (9,2) UNSIGNED
+, PRIMARY KEY (empid) );
+
+CREATE TABLE trafodion.mining.department
+( dnum NUMERIC (4) UNSIGNED NOT NULL
+, name VARCHAR (20)
+, PRIMARY KEY (dnum) );
+```
++
+Suppose, too, that sample data is inserted into this database.
+
+
+* Return the SALARY of the youngest 50 sales people:
++
+```
+SELECT salary 
+FROM salesperson
+SAMPLE FIRST 50 ROWS 
+SORT BY age;
+
+SALARY
+----------- 
+   90000.00
+   90000.00
+   28000.00
+   27000.12
+  136000.00
+   37000.40
+...
+
+--- 50 row(s) selected.
+```
+
+* Return the SALARY of 50 sales people. In this case, the table is
+clustered on EMPID. If the optimizer chooses a plan to access rows using
+the primary access path, the result consists of salaries of the 50 sales
+people with the smallest employee identifiers.
++
+```
+SELECT salary 
+FROM salesperson
+SAMPLE FIRST 50 ROWS;
+
+SALARY
+----------- 
+  175500.00
+  137000.10
+  136000.00
+  138000.40
+   75000.00
+   90000.00
+...
+
+--- 50 row(s) selected.
+```
+
+<<<
+* Return the SALARY of the youngest five sales people, skip the next 15
+rows, and repeat this process until no more rows exist in the
+intermediate result table. You cannot specify periodic sampling with the
+sample size larger than the period.
++
+```
+SELECT salary 
+FROM salesperson
+SAMPLE PERIODIC 5 ROWS 
+EVERY 20 ROWS 
+SORT BY age;
+
+SALARY
+----------- 
+   90000.00
+   90000.00
+   28000.00
+   27000.12
+  136000.00
+   36000.00
+...
+
+--- 17 row(s) selected.
+```
++
+In this example, 62 rows exist in the SALESPERSON table. For each set of
+20 rows, the first five rows are selected. The last set consists of two
+rows, both of which are selected.
+
+* Compute the average salary of a random 10 percent of the sales people.
+You will get a different result each time you run this query because it
+is based on a random sample.
++
+```
+SELECT AVG(salary) 
+FROM salesperson
+SAMPLE RANDOM 10 PERCENT;
+
+(EXPR)
+--------------------
+            61928.57
+
+--- 1 row(s) selected.
+```
+
+<<<
+* This query illustrates sampling after execution of the WHERE clause
+has chosen the qualifying rows. The query computes the average salary of
+a random 10 percent of the sales people over 35 years of age. You will
+get a different result each time you run this query because it
+is based on a random sample.
++
+```
+SELECT AVG(salary) 
+FROM salesperson 
+WHERE age > 35
+SAMPLE RANDOM 10 PERCENT;
+
+(EXPR)
+--------------------
+            58000.00
+
+--- 1 row(s) selected.
+```
+
+* Compute the average salary of a random 10 percent of sales people
+belonging to the CORPORATE department. The sample is taken from the join
+of the SALESPERSON and DEPARTMENT tables. You will get a different
+result each time you run this query because it is based on a random
+sample.
++
+```
+SELECT AVG(salary)
+FROM salesperson S, department D 
+WHERE S.DNUM = D.DNUM AND D.NAME = 'CORPORATE' 
+SAMPLE RANDOM 10 PERCENT;
+
+(EXPR)
+---------------------
+           106250.000
+
+--- 1 row(s) selected.
+```
+
+<<<
+* In this example, the SALESPERSON table is first sampled and then
+joined with the DEPARTMENT table. This query computes the average salary
+of all the sales people belonging to the CORPORATE department in a
+random sample of 10 percent of the sales employees.
++
+```
+SELECT AVG(salary)
+FROM 
+  ( SELECT salary, dnum FROM salesperson SAMPLE RANDOM 10 PERCENT ) AS S
+  , department D 
+WHERE S.DNUM = D.DNUM
+  AND D.NAME = 'CORPORATE';
+
+(EXPR)
+--------------------
+           37000.000
+
+--- 1 row(s) selected.
+```
++
+This query, like some of the previous random-sample queries, might
+return null:
++
+```
+SELECT AVG(salary)
+FROM 
+  ( SELECT salary, dnum FROM salesperson SAMPLE RANDOM 10 PERCENT ) AS S
+  , department D 
+WHERE S.DNUM = D.DNUM AND D.NAME = 'CORPORATE';
+
+(EXPR)
+--------------------
+                   ?
+
+--- 1 row(s) selected.
+```
++
+For this query execution, the number of rows returned by the embedded
+query is limited by the total number of rows in the SALESPERSON table.
+Therefore, it is possible that no rows satisfy the search condition in
+the WHERE clause.
+
+
+<<<
+* In this example, both the tables are sampled first and then joined.
+This query computes the average salary and the average sale amount
+generated from a random 10 percent of all the sales people and 20
+percent of all the sales transactions.
++
+```
+SELECT AVG(salary), AVG(amount) 
+FROM ( SELECT salary, empid
+       FROM salesperson
+       SAMPLE RANDOM 10 PERCENT ) AS S,
+  ( SELECT amount, empid FROM sales
+    SAMPLE RANDOM 20 PERCENT ) AS T
+WHERE S.empid = T.empid;
+
+(EXPR)    (EXPR)
+--------- --------- 
+ 45000.00  31000.00
+
+--- 1 row(s) selected.
+```
+
+* This example illustrates oversampling. This query retrieves 150
+percent of the sales transactions where the amount exceeds $1000. The
+result contains every row at least once, and 50 percent of the rows,
+picked randomly, occur twice.
++
+```
+SELECT *
+FROM sales
+WHERE amount > 1000
+SAMPLE RANDOM 150 PERCENT;
+
+EMPID PRODUCT              REGION AMOUNT
+----- -------------------- ------ ----------- 
+    1 PCGOLD, 30MB         E         30000.00
+   23 PCDIAMOND, 60MB      W         40000.00
+   23 PCDIAMOND, 60MB      W         40000.00
+   29 GRAPHICPRINTER, M1   N         11000.00
+   32 GRAPHICPRINTER, M2   S         15000.00
+   32 GRAPHICPRINTER, M2   S         15000.00
+  ... ...                  ...       ...
+
+--- 88 row(s) selected.
+```
+
+<<<
+* The BALANCE option enables stratified sampling. Retrieve the age and
+salary of 30 sales people such that 50 percent of the result are male
+and 50 percent are female.
++
+```
+SELECT age, sex, salary 
+FROM salesperson
+SAMPLE FIRST
+BALANCE 
+  WHEN sex = 'male' THEN 15 ROWS
+  WHEN sex = 'female' THEN 15 ROWS
+  END 
+ORDER BY age;
+
+AGE         SEX    SALARY
+----------- ------ -----------
+         22 male      28000.00
+         22 male      90000.00
+         22 female   136000.00
+         22 male      37000.40
+        ... ...            ...
+
+--- 30 row(s) selected.
+```
+
+* Retrieve all sales records with the amount exceeding $10000 and a
+random sample of 10 percent of the remaining records:
++
+```
+SELECT *
+FROM sales SAMPLE RANDOM
+BALANCE 
+  WHEN amount > 10000 
+  THEN 100 PERCENT 
+  ELSE 10 PERCENT
+END;
+
+PRODUCT              REGION AMOUNT
+-------------------- ------ -----------
+PCGOLD, 30MB         E         30000.00
+PCDIAMOND, 60MB      W         40000.00
+GRAPHICPRINTER, M1   N         11000.00
+GRAPHICPRINTER, M2   S         15000.00
+...                  ...       ...
+MONITORCOLOR, M2     N         10500.00
+...                  ...       ...
+
+--- 32 row(s) selected.
+```
+
+<<<
+* This query shows an example of stratified sampling where the
+conditions are not mutually exclusive:
++
+```
+SELECT *
+FROM sales SAMPLE RANDOM
+BALANCE 
+  WHEN amount > 10000 THEN 100 PERCENT
+  WHEN product = 'PCGOLD, 30MB' THEN 25 PERCENT 
+  WHEN region = 'W' THEN 40 PERCENT
+  ELSE 10 PERCENT END;
+
+PRODUCT              REGION AMOUNT
+-------------------- ------ -----------
+PCGOLD, 30MB         E         30000.00
+PCDIAMOND, 60MB      W         40000.00
+GRAPHICPRINTER, M1   N         11000.00
+GRAPHICPRINTER, M2   S         15000.00
+GRAPHICPRINTER, M3   S         20000.00
+LASERPRINTER, X1     W         42000.00
+...                  ...       ...
+
+--- 30 row(s) selected.
+```
+
+<<<
+[[sequence_by_clause]]
+== SEQUENCE BY Clause
+
+The SEQUENCE BY clause of the SELECT statement specifies the order in
+which to sort the rows of the intermediate result table for calculating
+sequence functions. This option is used for processing time-sequenced
+rows in data mining applications. See <<select_statement,SELECT Statement>>.
+
+SEQUENCE BY is a {project-name} SQL extension.
+
+```
+SEQUENCE BY colname [ASC[ENDING] | DESC[ENDING]]
+   [,colname [ASC[ENDING] | DESC[ENDING]]]...
+```
+
+* `_colname_`
++
+names a column in _select-list_ or a column in a table reference in the
+FROM clause of the SELECT statement. _colname_ is optionally qualified
+by a table, view, or correlation name; for example, CUSTOMER.CITY.
+
+* `ASC | DESC`
++
+specifies the sort order. ASC is the default. For ordering an
+intermediate result table on a column that can contain null, nulls are
+considered equal to one another but greater than all other non-null
+values.
++
+You must include a SEQUENCE BY clause if you include a sequence function
+in the select list of the SELECT statement. Otherwise, {project-name} SQL
+returns an error. Further, you cannot include a SEQUENCE BY clause if no
+sequence function exists in the select list. See
+<<sequence_functions,Sequence Functions>>.
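++
+For instance (a minimal sketch, assuming the ODETAIL table used in the
+examples below), this statement is rejected because its select list
+contains no sequence function:
++
+```
+SELECT ordernum, partnum
+FROM odetail
+SEQUENCE BY partnum;     -- returns an error
+```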
+
+[[considerations_for_sequence_by]]
+=== Considerations for SEQUENCE BY
+
+* Sequence functions behave differently from set (or aggregate)
+functions and mathematical (or scalar) functions.
+* If you include both SEQUENCE BY and GROUP BY clauses in the same
+SELECT statement, the values of the sequence functions must be evaluated
+first and then become input for aggregate functions in the statement.
+** For a SELECT statement that contains both SEQUENCE BY and GROUP BY
+clauses, you can nest the sequence function in the aggregate function:
++
+```
+SELECT 
+  ordernum
+, MAX(MOVINGSUM(qty_ordered, 3)) AS maxmovsum_qty
+, AVG(unit_price) AS avg_price
+FROM odetail 
+SEQUENCE BY partnum 
+GROUP BY ordernum;
+```
+
+* To use a sequence function as a grouping column, you must use a
+derived table for the SEQUENCE BY query and use the derived column in
+the GROUP BY clause:
++
+```
+SELECT 
+  ordernum
+, movsum_qty
+, AVG(unit_price) 
+FROM
+  ( SELECT ordernum, MOVINGSUM(qty_ordered, 3), unit_price 
+    FROM odetail SEQUENCE BY partnum ) 
+  AS tab2 (ordernum, movsum_qty, unit_price) 
+GROUP BY ordernum, movsum_qty;
+```
+
+* To use an aggregate function as the argument to a sequence function,
+you must also use a derived table:
++
+```
+SELECT MOVINGSUM(avg_price,2) 
+FROM
+  ( SELECT ordernum, AVG(unit_price) FROM odetail
+    GROUP BY ordernum)
+AS tab2 (ordernum, avg_price) 
+SEQUENCE BY ordernum;
+```
+
+* Like aggregate functions, sequence functions generate an intermediate
+result. If the query has a WHERE clause, its search condition is applied
+during the generation of the intermediate result. Therefore, you cannot
+use sequence functions in the WHERE clause of a SELECT statement.
+
+** This query returns an error:
++
+```
+SELECT ordernum, partnum, RUNNINGAVG(unit_price) 
+FROM odetail
+WHERE ordernum > 800000 AND RUNNINGAVG(unit_price) > 350 
+SEQUENCE BY qty_ordered;
+```
+
+** To apply a search condition to the result of a sequence function, use a
+derived table for the SEQUENCE BY query, and use the derived column in
+the WHERE clause:
++
+```
+SELECT ordernum, partnum, runavg_price 
+FROM
+  ( SELECT ordernum, partnum, RUNNINGAVG(unit_price) 
+    FROM odetail SEQUENCE BY qty_ordered)
+AS tab2 (ordernum, partnum, runavg_price) 
+WHERE ordernum > 800000 AND
+runavg_price > 350;
+```
+
+[[examples_of_sequence_by]]
+=== Examples of SEQUENCE BY
+
+* Sequentially number each row for the entire result and also number the
+rows for each part number:
++
+```
+SELECT 
+  RUNNINGCOUNT(*) AS RCOUNT
+, MOVINGCOUNT(*,ROWS SINCE (d.partnum<>THIS(d.partnum))) AS MCOUNT
+, d.partnum
+FROM orders o, odetail d 
+WHERE o.ordernum=d.ordernum
+SEQUENCE BY d.partnum, o.order_date, o.ordernum 
+ORDER BY d.partnum, o.order_date, o.ordernum;
+
+RCOUNT               MCOUNT                Part/Num
+-------------------- --------------------- --------
+                   1                     1      212
+                   2                     2      212
+                   3                     1      244
+                   4                     2      244
+                   5                     3      244
+                 ...                   ...      ...
+                  67                     1     7301
+                  68                     2     7301
+                  69                     3     7301
+                  70                     4     7301
+
+--- 70 row(s) selected.
+```
+
+<<<
+* Show the orders for each date, the amount for each order item and the
+moving total for each order, and the running total of all the orders.
+The query sequences orders by date, order number, and part number. (The
+CAST function is used for readability only.)
++
+```
+SELECT 
+  o.ordernum
+, CAST (MOVINGCOUNT(*,ROWS SINCE(THIS(o.ordernum) <> o.ordernum)) AS INT) AS MCOUNT
+, d.partnum
+, o.order_date
+, (d.unit_price * d.qty_ordered) AS AMOUNT
+, MOVINGSUM (d.unit_price * d.qty_ordered, ROWS SINCE(THIS(o.ordernum)<>o.ordernum)) AS ORDER_TOTAL
+, RUNNINGSUM (d.unit_price * d.qty_ordered) AS TOTAL_SALES
+FROM orders o, odetail d 
+WHERE o.ordernum=d.ordernum
+SEQUENCE BY o.order_date, o.ordernum, d.partnum 
+ORDER BY o.order_date, o.ordernum, d.partnum;
+
+Order/Num  MCOUNT      Part/Num Order/Date AMOUNT     ORDER_TOTAL    TOTAL_SALES
+---------- ----------- -------- ---------- ---------- -------------- --------------
+    100250           1      244 2008-01-23   14000.00       14000.00       14000.00
+    100250           2     5103 2008-01-23    4000.00       18000.00       18000.00
+    100250           3     6500 2008-01-23     950.00       18950.00       18950.00
+    200300           1      244 2008-02-06   28000.00       28000.00       46950.00
+    200300           2     2001 2008-02-06   10000.00       38000.00       56950.00
+    200300           3     2002 2008-02-06   14000.00       52000.00       70950.00
+       ...         ...      ... ...          ...            ...                 ...
+    800660          18     7102 2008-10-09    1650.00      187360.00     1113295.00
+    800660          19     7301 2008-10-09    5100.00      192460.00     1118395.00
+
+--- 69 row(s) selected.
+```
++
+For example, for order number 200300, the ORDER_TOTAL is a moving sum
+within the order date 2008-02-06, and the TOTAL_SALES is a running sum
+for all orders. The current window for the moving sum is defined as ROWS
+SINCE (THIS(o.ordernum)<>o.ordernum), which restricts the ORDER_TOTAL to
+the current order number.
+
+<<<
+* Show the amount of time between orders by calculating the interval between two dates:
++
+```
+SELECT RUNNINGCOUNT(*),o.order_date,DIFF1(o.order_date) 
+FROM orders o
+SEQUENCE BY o.order_date, o.ordernum 
+ORDER BY o.order_date, o.ordernum ;
+
+
+(EXPR)               Order/Date (EXPR)
+-------------------- ---------- -------------
+                   1 2008-01-23             ?
+                   2 2008-02-06            14
+                   3 2008-02-17            11
+                   4 2008-03-03            14
+                   5 2008-03-19            16
+                   6 2008-03-19             0
+                   7 2008-03-27             8
+                   8 2008-04-10            14
+                   9 2008-04-20            10
+                  10 2008-05-12            22
+                  11 2008-06-01            20
+                  12 2008-07-21            50
+                  13 2008-10-09            80
+
+--- 13 row(s) selected.
+```
+
+<<<
+[[transpose_clause]]
+== TRANSPOSE Clause
+
+The TRANSPOSE clause of the SELECT statement generates, for each row of
+the SELECT source table, a row for each item in the transpose item list.
+The result table of the TRANSPOSE clause has all the columns of the
+source table plus, for each transpose item list, a value column or
+columns and an optional key column.
+
+TRANSPOSE is a {project-name} SQL extension.
+
+```
+TRANSPOSE transpose-set [transpose-set]... 
+  [KEY BY key-colname]
+
+transpose-set is:
+   transpose-item-list AS transpose-col-list
+
+transpose-item-list is:
+  expression-list
+| (expression-list) [,(expression-list)]...
+
+expression-list is:
+  expression [,expression]...
+
+transpose-col-list is:
+  colname | (colname-list)
+
+colname-list is:
+  colname [,colname]...
+```
+
+* `_transpose-item-list_ AS _transpose-col-list_`
++
+specifies a _transpose-set_, which correlates a _transpose-item-list_
+with a _transpose-col-list_. The _transpose-item-list_ can be a list
+of expressions or a list of expression lists enclosed in parentheses.
+The _transpose-col-list_ can be a single column name or a list of column
+names enclosed in parentheses.
++
+For example, in the _transpose-set_ TRANSPOSE (A,X),(B,Y),(C,Z) AS
+(V1,V2), the items in the _transpose-item-list_ are (A,X),(B,Y), and
+(C,Z), and the _transpose-col-list_ is (V1,V2). The number of
+expressions in each item must be the same as the number of value columns
+in the column list.
++
+In the example TRANSPOSE A,B,C AS V, the items are A,B, and C, and the
+value column is V. This form can be thought of as a shorter way of writing TRANSPOSE
+(A),(B),(C) AS (V).
+
+* `_transpose-item-list_`
++
+specifies a list of items. An item is a value expression or a list of
+value expressions enclosed in parentheses.
+
+** `_expression-list_`
++
+specifies a list of SQL value expressions, separated by commas. The
+expressions must have compatible data types.
++
+For example, in the transpose set TRANSPOSE A,B,C AS V, the expressions
+A,B, and C have compatible data types.
+
+** `(_expression-list_) [,(_expression-list_)]...`
++
+specifies a list of expressions enclosed in parentheses, followed by
+another list of expressions enclosed in parentheses, and so on. The
+number of expressions within parentheses must be equal for each list.
+The expressions in the same ordinal position within the parentheses must
+have compatible data types.
++
+For example, in the transpose set TRANSPOSE (A,X),(B,Y),(C,Z) AS
+(V1,V2), the expressions A,B, and C have compatible data types, and the
+expressions X,Y, and Z have compatible data types.
+
+* `_transpose-col-list_`
++
+specifies the columns that contain the values of the item-list
+expressions as those expressions are applied to the rows of the source
+table.
+
+** `_colname_`
++
+is an SQL identifier that specifies a column name. It identifies the
+column that contains the values of the expressions in _expression-list_.
++
+For example, in the transpose set TRANSPOSE A,B,C AS V, the column V
+corresponds to the values of the expressions A,B, and C.
+
+** `(_colname-list_)`
++
+specifies a list of column names enclosed in parentheses. Each column
+consists of the values of the expressions in the same ordinal position
+within the parentheses in the transpose item list.
++
+For example, in the transpose set TRANSPOSE (A,X),(B,Y),(C,Z) AS
+(V1,V2), the column V1 corresponds to the expressions A,B, and C, and
+the column V2 corresponds to the expressions X,Y, and Z.
+
+* `KEY BY _key-colname_`
++
+optionally specifies a key column that identifies, by ordinal position
+in the item list, which expression produced the value in the transpose
+column list. _key-colname_ is an SQL identifier. The data type of the
+key column is exact numeric, and its values are NOT NULL.
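++
+For example (a sketch, assuming integer columns A, B, and C), in
++
+```
+TRANSPOSE a, b, c AS v KEY BY k
+```
++
+the key column K contains 1 for values taken from A, 2 for values taken
+from B, and 3 for values taken from C.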
+
+[[considerations_for_transpose]]
+=== Considerations for TRANSPOSE
+
+[[multiple_transpose_clauses_and_sets]]
+==== Multiple TRANSPOSE Clauses and Sets
+
+* Multiple TRANSPOSE clauses can be used in the same query. For example:
++
+```
+SELECT keycol1, valcol1, keycol2, valcol2 
+FROM mytable 
+TRANSPOSE a, b, c AS valcol1 KEY BY keycol1
+TRANSPOSE d, e, f AS valcol2 KEY BY keycol2
+```
+
+* A TRANSPOSE clause can contain multiple transpose sets. For example:
++
+```
+SELECT keycol, valcol1, valcol2 
+FROM mytable 
+TRANSPOSE a, b, c AS valcol1
+          d, e, f AS valcol2 
+KEY BY keycol
+```
+
+[[degree_and_column_order_of_the_transpose_result]]
+==== Degree and Column Order of the TRANSPOSE Result
+
+The degree of the TRANSPOSE result is the degree of the source table
+(the result table derived from the table reference or references in the
+FROM clause and a WHERE clause if specified), plus one if the key column
+is specified, plus the cardinalities of all the transpose column lists.
+
+The columns of the TRANSPOSE result are ordered beginning with the
+columns of the source table, followed by the key column if specified,
+and then followed by the list of column names in the order in which they
+are specified.
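+
+For example, with a six-column source table, TRANSPOSE a, b, c AS valcol
+KEY BY keycol produces a result of degree 6 + 1 + 1 = 8: the six source
+columns, followed by KEYCOL, followed by VALCOL.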
+
+[[data_type_of_the_transpose_result]]
+==== Data Type of the TRANSPOSE Result
+
+The data type of each of the value columns is the union compatible data
+type of the corresponding expressions in the _transpose-item-list_.
+You cannot have expressions with data types that are not compatible in a
+_transpose-item-list_.
+
+For example, in TRANSPOSE (A,X),(B,Y),(C,Z) AS (V1,V2), the data type of
+V1 is the union compatible type for A, B, and C, and the data type of V2
+is the union compatible type for X, Y, and Z.
+
+See <<comparable_and_compatible_data_types,Comparable and Compatible Data Types>>.
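+
+For example (a sketch, assuming columns A and B are INTEGER and column D
+is CHAR), TRANSPOSE a, b AS v is valid, but TRANSPOSE a, d AS v is
+rejected because INTEGER and CHAR are not compatible data types.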
+
+[[cardinality_of_the_transpose_result]]
+==== Cardinality of the TRANSPOSE Result
+
+The items in each _transpose-item-list_ are enumerated from 1 to N,
+where N is the total number of items in all the item lists in the
+transpose sets.
+
+In this example with a single transpose set, the value of N is 3:
+
+```
+TRANSPOSE (a,x),(b,y),(c,z) AS (v1,v2)
+```
+
+In this example with two transpose sets, the value of N is 5:
+
+```
+TRANSPOSE (a,x),(b,y),(c,z) AS (v1,v2) l,m AS v3
+```
+
+The values 1 to N are the key values _k_i. The items in each
+_transpose-item-list_ are the expression values _v_i.
+
+The cardinality of the result of the TRANSPOSE clause is the cardinality
+of the source table times N, the total number of items in all the
+transpose item lists.
+
+For each row of the source table and for each value in the key values
+_k_i, the TRANSPOSE result contains a row with all the attributes of
+the source table, the key value _k_i in the key column, the expression
+values _v_i in the value columns of the corresponding transpose set, and
+NULL in the value columns of other transpose sets.
+
+For example, consider this TRANSPOSE clause:
+
+```
+TRANSPOSE (a,x),(b,y),(c,z) AS (v1,v2) 
+           l,m AS v3
+KEY BY k
+```
+
+The value of N is 5. One row of the SELECT source table produces this
+TRANSPOSE result:
+
+[cols="5*",options="header"]
+|===
+| _columns-of-source_ | K | V1           | V2 | V3
+| _source-row_        | 1 | _value-of-A_ | _value-of-X_ | NULL
+| _source-row_        | 2 | _value-of-B_ | _value-of-Y_ | NULL
+| _source-row_        | 3 | _value-of-C_ | _value-of-Z_ | NULL
+| _source-row_        | 4 | NULL         | NULL         | _value-of-L_
+| _source-row_        | 5 | NULL         | NULL         | _value-of-M_
+|===
+
+<<<
+[[examples_of_transpose]]
+=== Examples of TRANSPOSE
+
+* Suppose that MYTABLE has been created as:
++
+```
+CREATE TABLE mining.mytable
+( A INTEGER, B INTEGER, C INTEGER, D CHAR(2), E CHAR(2), F CHAR(2) );
+```
++
+The table MYTABLE has columns A, B, C, D, E, and F with related data.
+The columns A, B, and C are type INTEGER, and columns D, E, and F are
+type CHAR.
++
+[cols="6*",options="header"]
+|====
+| A | B  | C   | D  | E  | F
+| 1 | 10 | 100 | d1 | e1 | f1
+| 2 | 20 | 200 | d2 | e2 | f2
+|====
+
+* The result of the following TRANSPOSE clause has three times as many
+rows (because three items exist in the transpose item list) as rows
+exist in MYTABLE:
++
+```
+SELECT * FROM mytable 
+TRANSPOSE a, b, c AS valcol KEY BY keycol;
+```
++
+The result table of the TRANSPOSE query is:
++
+[cols="8*",options="header"]
+|===
+| A | B  | C   | D  | E  | F  | KEYCOL | VALCOL
+| 1 | 10 | 100 | d1 | e1 | f1 | 1      | 1
+| 1 | 10 | 100 | d1 | e1 | f1 | 2      | 10
+| 1 | 10 | 100 | d1 | e1 | f1 | 3      | 100
+| 2 | 20 | 200 | d2 | e2 | f2 | 1      | 2
+| 2 | 20 | 200 | d2 | e2 | f2 | 2      | 20
+| 2 | 20 | 200 | d2 | e2 | f2 | 3      | 200
+|===
+
+<<<
+* This query shows that the items in the transpose item list can be any
+valid scalar expressions:
++
+```
+SELECT keycol, valcol, a, b, c FROM mytable 
+TRANSPOSE a + b, c + 3, 6 AS valcol KEY BY keycol;
+```
++
+The result table of the TRANSPOSE query is:
++
+[cols="5*",options="header"]
+|=====
+| KEYCOL | VALCOL | A | B  | C
+| 1      | 1      | 1 | 10 | 100
+| 2      | 103    | 1 | 10 | 100
+| 3      | 6      | 1 | 10 | 100
+| 1      | 22     | 2 | 20 | 200
+| 2      | 203    | 2 | 20 | 200
+| 3      | 6      | 2 | 20 | 200
+|=====
+
+* This query shows how the TRANSPOSE clause can be used with a GROUP BY
+clause. This query is typical of queries used to obtain cross-table
+information, where A, B, and C are the independent variables, and D is
+the dependent variable.
++
+```
+SELECT keycol, valcol, d, COUNT(*) 
+FROM mytable 
+TRANSPOSE a, b, c AS valcol 
+KEY BY keycol 
+GROUP BY keycol, valcol, d;
+```
++
+The result table of the TRANSPOSE query is:
++
+[cols="4*",options="header"]
+|===
+| KEYCOL | VALCOL | D  | COUNT(*)
+| 1      | 1      | d1 | 1
+| 2      | 10     | d1 | 1
+| 3      | 100    | d1 | 1
+| 1      | 2      | d2 | 1
+| 2      | 20     | d2 | 1
+| 3      | 200    | d2 | 1
+|===
+
+<<< 
+* This query shows how to use COUNT applied to VALCOL. The result table
+shows the number of distinct values in VALCOL for each value of KEYCOL.
++
+```
+SELECT COUNT(DISTINCT valcol) FROM mytable 
+TRANSPOSE a, b, c AS valcol KEY BY keycol 
+GROUP BY keycol;
+
+(EXPR)
+--------------------
+                   2
+                   2
+                   2
+
+--- 3 row(s) selected.
+```
+
+* This query shows how multiple TRANSPOSE clauses can be used in the
+same query. The result table from this query has nine times as many rows
+as rows exist in MYTABLE:
++
+```
+SELECT keycol1, valcol1, keycol2, valcol2 FROM mytable 
+TRANSPOSE a, b, c AS valcol1 KEY BY keycol1
+TRANSPOSE d, e, f AS valcol2 KEY BY keycol2;
+```
++
+The result table of the TRANSPOSE query is:
++
+[cols=",,,",options="header"]
+|===
+| KEYCOL1 | VALCOL1 | KEYCOL2 | VALCOL2
+| 1       | 1       | 1       | d1
+| 1       | 1       | 2       | e1
+| 1       | 1       | 3       | f1
+| 2       | 10      | 1       | d1
+| 2       | 10      | 2       | e1
+| 2       | 10      | 3       | f1
+| 3       | 100     | 1       | d1
+| 3       | 100     | 2       | e1
+| 3       | 100     | 3       | f1
+| 1       | 2       | 1       | d2
+| 1       | 2       | 2       | e2
+| 1       | 2       | 3       | f2
+| 2       | 20      | 1       | d2
+| 2       | 20      | 2       | e2
+| 2       | 20      | 3       | f2
+| 3       | 200     | 1       | d2
+| 3       | 200     | 2       | e2
+| 3       | 200     | 3       | f2
+|===
+
+* This query shows how a TRANSPOSE clause can contain multiple transpose
+sets, that is, multiple _transpose-item-list_ AS _transpose-col-list_.
+The expressions A, B, and C are of type integer, and expressions D, E,
+and F are of type character.
++
+```
+SELECT keycol, valcol1, valcol2 
+FROM mytable 
+TRANSPOSE a, b, c AS valcol1
+          d, e, f AS valcol2 
+KEY BY keycol;
+```
++
+The result table of the TRANSPOSE query is:
++
+[cols="3*",options="header"]
+|===
+| KEYCOL | VALCOL1 | VALCOL2
+| 1      | 1       | ?
+| 2      | 10      | ?
+| 3      | 100     | ?
+| 4      | ?       | d1
+| 5      | ?       | e1
+| 6      | ?       | f1
+| 1      | 2       | ?
+| 2      | 20      | ?
+| 3      | 200     | ?
+| 4      | ?       | d2
+| 5      | ?       | e2
+| 6      | ?       | f2
+|===
++
+A question mark (?) in a value column indicates no value for the given KEYCOL.
+
+* This query shows how the preceding query can include a GROUP BY clause:
++
+```
+SELECT keycol, valcol1, valcol2, COUNT(*) 
+FROM mytable 
+TRANSPOSE a, b, c AS valcol1
+          d, e, f AS valcol2 
+KEY BY keycol
+GROUP BY keycol, valcol1, valcol2;
+```
++
+The result table of the TRANSPOSE query is:
++
+[cols="4*",options="header"]
+|===
+| KEYCOL | VALCOL1 | VALCOL2 | (EXPR)
+| 1      | 1       | ?       | 1
+| 2      | 10      | ?       | 1
+| 3      | 100     | ?       | 1
+| 1      | 2       | ?       | 1
+| 2      | 20      | ?       | 1
+| 3      | 200     | ?       | 1
+| 4      | ?       | d2      | 1
+| 5      | ?       | e2      | 1
+| 6      | ?       | f2      | 1
+| 4      | ?       | d1      | 1
+| 5      | ?       | e1      | 1
+| 6      | ?       | f1      | 1
+|===
+
+* This query shows how an item in the transpose item list can con

<TRUNCATED>


[10/15] incubator-trafodion git commit: Major reorganization of the Client Installation Guide.

Posted by gt...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/resources/source/basicsql.cpp
----------------------------------------------------------------------
diff --git a/docs/client_install/src/resources/source/basicsql.cpp b/docs/client_install/src/resources/source/basicsql.cpp
index 9215a5d..aa4f2d8 100644
--- a/docs/client_install/src/resources/source/basicsql.cpp
+++ b/docs/client_install/src/resources/source/basicsql.cpp
@@ -1,394 +1,456 @@
-// @@@ START COPYRIGHT @@@
-//
-// Licensed to the Apache Software Foundation (ASF) under one
-// or more contributor license agreements.  See the NOTICE file
-// distributed with this work for additional information
-// regarding copyright ownership.  The ASF licenses this file
-// to you under the Apache License, Version 2.0 (the
-// "License"); you may not use this file except in compliance
-// with the License.  You may obtain a copy of the License at
-//
-//   http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing,
-// software distributed under the License is distributed on an
-// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-// KIND, either express or implied.  See the License for the
-// specific language governing permissions and limitations
-// under the License.
-//
-// @@@ END COPYRIGHT @@@
-
-#ifdef __linux
-    #include <unistd.h>
-#else
-    #include <windows.h>
-    #include <tchar.h>
-#endif
-
-//#include <stdarg.h>
-
-#include <stdio.h>
-#include <string.h>
-#include <stdlib.h>
-#include <assert.h>
-
-#include <sql.h>
-#include <sqlext.h>
-
-SQLHENV henv;
-SQLHDBC hdbc;
-SQLHSTMT hstmt;
-SQLHWND hWnd;
-
-#define MAX_SQLSTRING_LEN       1000
-#define STATE_SIZE              6
-#define MAX_CONNECT_STRING      256
-#define TRUE                    1
-#define FALSE                   0
-#define ARGS                    "d:u:p:"
-
-const char *SqlRetText(int rc)
-{
-  static char buffer[80];
-  switch (rc)
-  {
-    case SQL_SUCCESS:
-      return("SQL_SUCCESS");
-    case SQL_SUCCESS_WITH_INFO:
-      return("SQL_SUCCESS_WITH_INFO");
-    case SQL_NO_DATA:
-      return("SQL_NO_DATA");
-    case SQL_ERROR:
-      return("SQL_ERROR");
-    case SQL_INVALID_HANDLE:
-      return("SQL_INVALID_HANDLE");
-    case SQL_STILL_EXECUTING:
-      return("SQL_STILL_EXECUTING");
-    case SQL_NEED_DATA:
-     return("SQL_NEED_DATA");
-  }
-  sprintf(buffer,"SQL Error %d",rc);
-  return(buffer);
-}
-
-void CleanUp()
-{
-  printf("\nConnect Test FAILED!!!\n");
-  if(hstmt != SQL_NULL_HANDLE)
-    SQLFreeHandle(SQL_HANDLE_STMT,hstmt);
-    if(hdbc != SQL_NULL_HANDLE)
-    {
-      SQLDisconnect(hdbc);
-      SQLFreeHandle(SQL_HANDLE_DBC,hdbc);
-    }
-    if(henv != SQL_NULL_HANDLE)
-      SQLFreeHandle(SQL_HANDLE_ENV,henv);
-    exit(EXIT_FAILURE);
-}
-
-void LogDiagnostics(const char *sqlFunction, SQLRETURN rc, bool exitOnError=true)
-{             
-  SQLRETURN diagRC = SQL_SUCCESS;
-  SQLSMALLINT recordNumber;
-  SQLINTEGER nativeError;
-  SQLCHAR messageText[SQL_MAX_MESSAGE_LENGTH];
-  SQLCHAR sqlState[6];
-  int diagsPrinted = 0;
-  bool printedErrorLogHeader = false;
-        
-  printf("Function %s returned %s\n", sqlFunction, SqlRetText(rc));
-
-  /* Log any henv Diagnostics */
-  recordNumber = 1;
-  do
-  {
-    diagRC = SQLGetDiagRec( SQL_HANDLE_ENV
-                          , henv
-                          , recordNumber
-                          , sqlState
-                          , &nativeError
-                          , messageText
-                          , sizeof(messageText)
-                          , NULL
-                          );
-    if(diagRC==SQL_SUCCESS)
-    {
-      if(!printedErrorLogHeader)
-      {
-        printf("Diagnostics associated with environment handle:\n");
-        printedErrorLogHeader = true;
-      }
-      printf("\n\tSQL Diag %d\n\tNative Error: %ld\n\tSQL State:    %s\n\tMessage:      %s\n",
-             recordNumber,nativeError,sqlState,messageText);
-    }
-    recordNumber++;
-  } while (diagRC==SQL_SUCCESS);
-        
-  /* Log any hdbc Diagnostics */
-  recordNumber = 1;
-  printedErrorLogHeader = false;
-  do
-  {
-    diagRC = SQLGetDiagRec( SQL_HANDLE_DBC
-                          , hdbc
-                          , recordNumber
-                          , sqlState
-                          , &nativeError
-                          , messageText
-                          , sizeof(messageText)
-                          , NULL
-                          );
-    if(diagRC==SQL_SUCCESS)
-    {
-      if(!printedErrorLogHeader)
-      {
-        printf("Diagnostics associated with connection handle:\n");
-        printedErrorLogHeader = true;
-      }
-      printf("\n\tSQL Diag %d\n\tNative Error: %ld\n\tSQL State:    %s\n\tMessage:      %s\n",
-             recordNumber,nativeError,sqlState,messageText);
-    }
-    recordNumber++;
-  } while (diagRC==SQL_SUCCESS);
-
-  /* Log any hstmt Diagnostics */
-  recordNumber = 1;
-  printedErrorLogHeader = false;
-  do
-  {
-    diagRC = SQLGetDiagRec( SQL_HANDLE_STMT
-                          , hstmt
-                          , recordNumber
-                          , sqlState
-                          , &nativeError
-                          , messageText
-                          , sizeof(messageText)
-                          , NULL
-                          );
-    if(diagRC==SQL_SUCCESS)
-    {
-      if(!printedErrorLogHeader)
-      {
-        printf("Diagnostics associated with statmement handle:\n");
-        printedErrorLogHeader = true;
-      }
-      printf("\n\tSQL Diag %d\n\tNative Error: %ld\n\tSQL State:    %s\n\tMessage:      %s\n",
-             recordNumber,nativeError,sqlState,messageText);
-    }
-    recordNumber++;
-  } while (diagRC==SQL_SUCCESS);
-
-  if(exitOnError && rc!=SQL_SUCCESS_WITH_INFO)
-  CleanUp();
-}                     
-
-// Main Program
-int main (int argc, char *argv[])
-{
-  unsigned char dsnName[20];
-  unsigned char user[20];
-  unsigned char password[20];
-  SQLRETURN       returnCode;
-  bool            testPassed = true;
-  SQLCHAR         InConnStr[MAX_CONNECT_STRING];
-  SQLCHAR         OutConnStr[MAX_CONNECT_STRING];
-  SQLSMALLINT     ConnStrLength;
-  int errflag = 0;
-        
-  //optarg = NULL;
-  if (argc != 4)
-     errflag++;
-
-  if (!errflag )
-  {
-    strcpy ((char *)dsnName, argv[1]);
-    strcpy ((char *)dsnName, argv[1]);
-    strcpy ((char *)user, argv[2]);
-    strcpy ((char *)password, argv[3]);
-  }
-  
-  if (errflag) 
-  {
-    printf("Command line error.\n");
-    printf("Usage: %s <datasource> <userid> <password>\n", argv[0] );
-    return FALSE;
-  }
-
-  // Initialize handles to NULL
-  henv = SQL_NULL_HANDLE;
-  hstmt = SQL_NULL_HANDLE;
-  hdbc = SQL_NULL_HANDLE;
-
-  // Allocate Environment Handle
-  returnCode = SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv);
-  if (!SQL_SUCCEEDED(returnCode))
-     LogDiagnostics("SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv)",returnCode);
-
-  // Set ODBC version to 3.0
-  returnCode = SQLSetEnvAttr(henv, SQL_ATTR_ODBC_VERSION, (void*)SQL_OV_ODBC3, 0); 
-  if (!SQL_SUCCEEDED(returnCode))
-     LogDiagnostics( "SQLSetEnvAttr(henv, SQL_ATTR_ODBC_VERSION, (void*)SQL_OV_ODBC3, 0)"
-                   , returnCode
-                   , false
-                   );
-
-  // Allocate Connection handle
-  returnCode = SQLAllocHandle(SQL_HANDLE_DBC, henv, &hdbc);
-  if (!SQL_SUCCEEDED(returnCode))
-     LogDiagnostics("SQLAllocHandle(SQL_HANDLE_DBC, henv, &hdbc)", returnCode);
-
-  //Connect to the database
-  sprintf((char*)InConnStr,"DSN=%s;UID=%s;PWD=%s;%c",(char*)dsnName, (char*)user, (char*)password,'\0');
-  printf("Using Connect String: %s\n", InConnStr);
-  returnCode = SQLDriverConnect( hdbc
-                               , hWnd
-                               , InConnStr
-                               , SQL_NTS
-                               , OutConnStr
-                               , sizeof(OutConnStr)
-                               , &ConnStrLength
-                               , SQL_DRIVER_NOPROMPT
-                               );
-  if (!SQL_SUCCEEDED(returnCode))
-     LogDiagnostics("SQLDriverConnect",returnCode);
-  printf("Successfully connected using SQLDriverConnect.\n");
-
-  //Allocate Statement handle
-  returnCode = SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);
-  if (!SQL_SUCCEEDED(returnCode))
-     LogDiagnostics("SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt)", returnCode);
-
-  printf("Drop sample table if it exists...\n");
-  //Drop the test table if it exists
-  //DROP IF EXISTS TASKS;
-  returnCode = SQLExecDirect(hstmt, (SQLCHAR*)"DROP TABLE IF EXISTS TASKS", SQL_NTS);
-  if (!SQL_SUCCEEDED(returnCode))
-     LogDiagnostics("SQLExecDirect of DROP", returnCode);
-
-  printf("Creating sample table TASKS...\n");
-  //Create a test table in default schema
-  //CREATE TABLE TASKS (ID INT NOT NULL, TASK VARCHAR(10), LAST_UPDATE TIMESTAMP, PRIMARY KEY (C1));
-  returnCode =
-    SQLExecDirect
-    ( hstmt
-    , (SQLCHAR*)"CREATE TABLE TASKS (ID INT NOT NULL, TASK CHAR(20), COMPLETED DATE, PRIMARY KEY (ID))"
-    , SQL_NTS
-    );
-  if (!SQL_SUCCEEDED(returnCode))
-     LogDiagnostics("SQLExecDirect of CREATE", returnCode);
-  printf("Table TASKS created using SQLExecDirect.\n");
-
-  printf("Inserting data using SQLBindParameter, SQLPrepare, SQLExecute\n");
-  //Insert few rows into test table using bound parameters
-  //INSERT INTO TASKS VALUES (?, ?, ?);
-  SQLINTEGER intID;
-  SQLLEN cbID = 0, cbTask = SQL_NTS, cbCompleted = 0;
-  SQLCHAR strTask[200];
-  SQL_DATE_STRUCT dsCompleted;
-
-  returnCode = SQLBindParameter( hstmt
-			       , 1
-			       , SQL_PARAM_INPUT
-			       , SQL_C_SHORT
-			       , SQL_INTEGER
-			       , 0
-			       , 0
-			       , &intID
-			       , 0
-			       , &cbID
-			       );
-  if (!SQL_SUCCEEDED(returnCode))
-     LogDiagnostics("SQLBindParameter 1", returnCode);
-
-  returnCode = SQLBindParameter(hstmt, 2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR, 0, 0, &strTask, 0, &cbTask);
-  if (!SQL_SUCCEEDED(returnCode))
-     LogDiagnostics("SQLBindParameter 2", returnCode);
-
-  returnCode = SQLBindParameter( hstmt
-			       , 3
-			       , SQL_PARAM_INPUT
-			       , SQL_C_TYPE_DATE
-			       , SQL_DATE
-			       , sizeof(dsCompleted)
-			       , 0
-			       , &dsCompleted
-			       , 0
-			       , &cbCompleted
-			       );
-  if (!SQL_SUCCEEDED(returnCode))
-     LogDiagnostics("SQLBindParameter 3", returnCode);
-
-  returnCode = SQLPrepare(hstmt, (SQLCHAR*)"INSERT INTO TASKS VALUES (?, ?, ?)", SQL_NTS);
-  if (!SQL_SUCCEEDED(returnCode))
-     LogDiagnostics("SQLPrepare of INSERT", returnCode);
-
-  intID = 1000;
-  strcpy ((char*)strTask, "CREATE REPORTS");
-  dsCompleted.year = 2014;
-  dsCompleted.month = 3;
-  dsCompleted.day = 22;
-
-  returnCode = SQLExecute(hstmt);
-  if (!SQL_SUCCEEDED(returnCode))
-     LogDiagnostics("SQLExecute", returnCode);
-  printf("Data inserted.\n");
-
-  //Select rows from test table and fetch the data
-  //SELECT * from TASKS WHERE TASK LIKE '%REPORT%'
-  printf("Fetching data using SQLExecDirect, SQLFetch, SQLGetData\n");
-  returnCode = SQLExecDirect(hstmt, (SQLCHAR*)"SELECT ID, TASK, COMPLETED FROM TASKS", SQL_NTS);
-  if (!SQL_SUCCEEDED(returnCode))
-  LogDiagnostics("SQLExecDirect of SELECT", returnCode);
-        
-  //loop thru resultset
-  while (TRUE) 
-  {
-    returnCode = SQLFetch(hstmt);
-    if (returnCode == SQL_ERROR || returnCode == SQL_SUCCESS_WITH_INFO) 
-    {
-      LogDiagnostics("SQLFetch", returnCode);
-    }
-    if (returnCode == SQL_SUCCESS || returnCode == SQL_SUCCESS_WITH_INFO)
-    {
-      SQLGetData(hstmt, 1, SQL_C_SHORT, &intID, 0, &cbID);
-      SQLGetData(hstmt, 2, SQL_C_CHAR, strTask, 20, &cbTask);
-      SQLGetData(hstmt, 3, SQL_C_TYPE_DATE, &dsCompleted, sizeof(dsCompleted), &cbCompleted);
-      printf( "Data selected: %d %s %d-%d-%d\n"
-	    , intID
-	    , strTask
-	    , dsCompleted.year
-	    , dsCompleted.month
-	    , dsCompleted.day
-	    );
-    } 
-    else 
-      break;
-  }
-  
-  //Free Statement handle
-  returnCode = SQLFreeHandle(SQL_HANDLE_STMT, hstmt);
-  if (!SQL_SUCCEEDED(returnCode))
-     LogDiagnostics("SQLFreeHandle(SQL_HANDLE_STMT, hstmt)", returnCode);
-  hstmt = SQL_NULL_HANDLE;
-
-  //Disconnect
-  returnCode = SQLDisconnect(hdbc);
-  if (!SQL_SUCCEEDED(returnCode))
-     LogDiagnostics("SQLDisconnect(hdbc)", returnCode);
-
-  //Free Connection handle
-  returnCode = SQLFreeHandle(SQL_HANDLE_DBC, hdbc);
-  if (!SQL_SUCCEEDED(returnCode))
-     LogDiagnostics("SQLFreeHandle(SQL_HANDLE_DBC, hdbc)", returnCode);
-  hdbc = SQL_NULL_HANDLE;
-
-  //Free Environment handle
-  returnCode = SQLFreeHandle(SQL_HANDLE_ENV, henv);
-  if (!SQL_SUCCEEDED(returnCode))
-     LogDiagnostics("SQLFreeHandle(SQL_HANDLE_ENV, henv)", returnCode);
-  henv = SQL_NULL_HANDLE;
-
-  printf("Basic SQL ODBC Test Passed!\n");
-  exit(EXIT_SUCCESS);
-}
+// @@@ START COPYRIGHT @@@
+//
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+//
+// @@@ END COPYRIGHT @@@
+
+#ifdef __linux
+    #include <unistd.h>
+#else
+    #include <windows.h>
+    #include <tchar.h>
+#endif
+
+//#include <stdarg.h>
+
+#include <stdio.h>
+#include <string.h>
+#include <stdlib.h>
+#include <assert.h>
+
+#include <sql.h>
+#include <sqlext.h>
+
+SQLHENV henv ;
+SQLHDBC hdbc ;
+SQLHSTMT hstmt ;
+SQLHWND hWnd ;
+
+#define MAX_SQLSTRING_LEN       1000
+#define STATE_SIZE              6
+#define MAX_CONNECT_STRING      256
+#define TRUE                    1
+#define FALSE                   0
+#define ARGS                    "d:u:p:"
+
+const char *SqlRetText( int rc )
+{
+  static char buffer[80] ;
+  switch ( rc )
+  {
+    case SQL_SUCCESS:           return( "SQL_SUCCESS" ) ;
+    case SQL_SUCCESS_WITH_INFO: return( "SQL_SUCCESS_WITH_INFO" ) ;
+    case SQL_NO_DATA:           return( "SQL_NO_DATA" ) ;
+    case SQL_ERROR:             return( "SQL_ERROR" ) ;
+    case SQL_INVALID_HANDLE:    return( "SQL_INVALID_HANDLE" ) ;
+    case SQL_STILL_EXECUTING:   return( "SQL_STILL_EXECUTING" ) ;
+    case SQL_NEED_DATA:         return( "SQL_NEED_DATA" ) ;
+  }
+  
+  sprintf( buffer, "SQL Error %d", rc ) ;
+  return( buffer ) ;
+}
+
+void CleanUp()
+{
+  printf( "\nConnect Test FAILED!!!\n" ) ;
+
+  if ( hstmt != SQL_NULL_HANDLE )
+     SQLFreeHandle( SQL_HANDLE_STMT,hstmt ) ;
+
+  if( hdbc != SQL_NULL_HANDLE )
+  {
+    SQLDisconnect( hdbc ) ;
+    SQLFreeHandle( SQL_HANDLE_DBC, hdbc ) ;
+  }
+  
+  if ( henv != SQL_NULL_HANDLE )
+     SQLFreeHandle( SQL_HANDLE_ENV, henv ) ;
+
+  exit( EXIT_FAILURE ) ;
+}
+
+void LogDiagnostics( const char *sqlFunction
+		   , SQLRETURN rc
+		   , bool exitOnError=true
+		   )
+{             
+  SQLRETURN diagRC = SQL_SUCCESS ;
+  SQLSMALLINT recordNumber ;
+  SQLINTEGER nativeError ;
+  SQLCHAR messageText[SQL_MAX_MESSAGE_LENGTH] ;
+  SQLCHAR sqlState[6] ;
+  int diagsPrinted = 0 ;
+  bool printedErrorLogHeader = false ;
+        
+  printf( "Function %s returned %s\n"
+	, sqlFunction
+	, SqlRetText( rc )
+	) ;
+
+  /* Log any henv Diagnostics */
+  recordNumber = 1 ;
+  do
+  {
+    diagRC = SQLGetDiagRec( SQL_HANDLE_ENV
+                          , henv
+                          , recordNumber
+                          , sqlState
+                          , &nativeError
+                          , messageText
+                          , sizeof(messageText)
+                          , NULL
+                          ) ;
+    if ( diagRC==SQL_SUCCESS )
+    {
+      if( ! printedErrorLogHeader )
+      {
+        printf( "Diagnostics associated with environment handle:\n" ) ;
+        printedErrorLogHeader = true ;
+      }
+      
+      printf( "\n\tSQL Diag %d\n\tNative Error: %ld\n\tSQL State:    %s\n\tMessage:      %s\n"
+	    , recordNumber
+	    , nativeError
+	    , sqlState
+	    , messageText
+	    ) ;
+    }
+    
+    recordNumber++ ;
+
+  } while ( diagRC==SQL_SUCCESS ) ;
+        
+  /* Log any hdbc Diagnostics */
+  recordNumber = 1 ;
+  printedErrorLogHeader = false ;
+  do
+  {
+    diagRC = SQLGetDiagRec( SQL_HANDLE_DBC
+                          , hdbc
+                          , recordNumber
+                          , sqlState
+                          , &nativeError
+                          , messageText
+                          , sizeof(messageText)
+                          , NULL
+                          ) ;
+    if ( diagRC==SQL_SUCCESS )
+    {
+      if( !printedErrorLogHeader )
+      {
+        printf( "Diagnostics associated with connection handle:\n" ) ;
+        printedErrorLogHeader = true ;
+      }
+      
+      printf( "\n\tSQL Diag %d\n\tNative Error: %ld\n\tSQL State:    %s\n\tMessage:      %s\n"
+	    , recordNumber
+	    , nativeError
+	    , sqlState
+	    , messageText
+	    ) ;
+    }
+    
+    recordNumber++ ;
+
+  } while (diagRC==SQL_SUCCESS) ;
+
+  /* Log any hstmt Diagnostics */
+  recordNumber = 1 ;
+  printedErrorLogHeader = false ;
+  do
+  {
+    diagRC = SQLGetDiagRec( SQL_HANDLE_STMT
+                          , hstmt
+                          , recordNumber
+                          , sqlState
+                          , &nativeError
+                          , messageText
+                          , sizeof(messageText)
+                          , NULL
+                          ) ;
+    if (diagRC == SQL_SUCCESS )
+    {
+      if ( !printedErrorLogHeader )
+      {
+        printf( "Diagnostics associated with statmement handle:\n" ) ;
+        printedErrorLogHeader = true ;
+      }
+      
+      printf( "\n\tSQL Diag %d\n\tNative Error: %ld\n\tSQL State:    %s\n\tMessage:      %s\n"
+	    , recordNumber
+	    , nativeError
+	    , sqlState
+	    , messageText
+	    ) ;
+    }
+    
+    recordNumber++ ;
+
+  } while ( diagRC==SQL_SUCCESS ) ;
+
+  if ( exitOnError && rc != SQL_SUCCESS_WITH_INFO )
+     CleanUp() ;
+}                     
+
+// Main Program
+int main (int argc, char *argv[])
+{
+  unsigned char dsnName[20] ;
+  unsigned char user[20] ;
+  unsigned char password[20] ;
+  SQLRETURN     returnCode ;
+  bool          testPassed = true ;
+  SQLCHAR       InConnStr[MAX_CONNECT_STRING] ;
+  SQLCHAR       OutConnStr[MAX_CONNECT_STRING] ;
+  SQLSMALLINT   ConnStrLength ;
+  int           errflag = 0 ;
+        
+  //optarg = NULL ;
+  if ( argc != 4 )
+     errflag++ ;
+
+  if ( !errflag )
+  {
+    strcpy ( (char *) dsnName, argv[1] ) ;
+    strcpy ( (char *) user, argv[2] ) ;
+    strcpy ( (char *) password, argv[3] ) ;
+  }
+  
+  if ( errflag ) 
+  {
+    printf( "Command line error.\n" ) ;
+    printf( "Usage: %s <datasource> <userid> <password>\n", argv[0] ) ;
+    return FALSE ;
+  }
+
+  // Initialize handles to NULL
+  henv = SQL_NULL_HANDLE ;
+  hstmt = SQL_NULL_HANDLE ;
+  hdbc = SQL_NULL_HANDLE ;
+
+  // Allocate Environment Handle
+  returnCode = SQLAllocHandle( SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv ) ;
+  if ( ! SQL_SUCCEEDED( returnCode ) )
+     LogDiagnostics( "SQLAllocHandle( SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv )"
+		   , returnCode
+		   ) ;
+
+  // Set ODBC version to 3.0
+  returnCode = SQLSetEnvAttr( henv, SQL_ATTR_ODBC_VERSION, (void*) SQL_OV_ODBC3, 0 ) ; 
+  if ( ! SQL_SUCCEEDED( returnCode ) )
+     LogDiagnostics( "SQLSetEnvAttr( henv, SQL_ATTR_ODBC_VERSION, (void*)SQL_OV_ODBC3, 0 )"
+                   , returnCode
+                   , false
+                   ) ;
+
+  // Allocate Connection handle
+  returnCode = SQLAllocHandle( SQL_HANDLE_DBC, henv, &hdbc ) ;
+  if ( ! SQL_SUCCEEDED( returnCode ) )
+     LogDiagnostics( "SQLAllocHandle( SQL_HANDLE_DBC, henv, &hdbc )", returnCode ) ;
+
+  // Connect to the database
+  sprintf( (char*) InConnStr
+	 , "DSN=%s;UID=%s;PWD=%s;%c"
+	 , (char*) dsnName
+	 , (char*) user
+	 , (char*) password
+	 , '\0'
+	 ) ;
+
+  printf( "Using Connect String: %s\n", InConnStr ) ;
+  returnCode = SQLDriverConnect( hdbc
+                               , hWnd
+                               , InConnStr
+                               , SQL_NTS
+                               , OutConnStr
+                               , sizeof( OutConnStr )
+                               , &ConnStrLength
+                               , SQL_DRIVER_NOPROMPT
+                               ) ;
+  if ( ! SQL_SUCCEEDED( returnCode ) )
+     LogDiagnostics( "SQLDriverConnect", returnCode ) ;
+
+  printf( "Successfully connected using SQLDriverConnect.\n" ) ;
+
+  // Allocate Statement handle
+  returnCode = SQLAllocHandle( SQL_HANDLE_STMT, hdbc, &hstmt ) ;
+  if ( ! SQL_SUCCEEDED( returnCode ) )
+     LogDiagnostics( "SQLAllocHandle( SQL_HANDLE_STMT, hdbc, &hstmt )", returnCode ) ;
+
+  printf( "Drop sample table if it exists...\n" ) ;
+  // Drop the test table if it exists
+  // DROP IF EXISTS TASKS ;
+  returnCode = SQLExecDirect( hstmt, (SQLCHAR*) "DROP TABLE IF EXISTS TASKS", SQL_NTS ) ;
+  if ( ! SQL_SUCCEEDED( returnCode ) )
+     LogDiagnostics( "SQLExecDirect of DROP", returnCode ) ;
+
+  printf( "Creating sample table TASKS...\n" ) ;
+
+  // Create a test table in default schema
+  // CREATE TABLE TASKS (ID INT NOT NULL, TASK VARCHAR(10), LAST_UPDATE TIMESTAMP, PRIMARY KEY (C1)) ;
+  returnCode =
+    SQLExecDirect
+    ( hstmt
+    , (SQLCHAR*) "CREATE TABLE TASKS (ID INT NOT NULL, TASK CHAR(20), COMPLETED DATE, PRIMARY KEY (ID))"
+    , SQL_NTS
+    ) ;
+  if ( ! SQL_SUCCEEDED( returnCode ) )
+     LogDiagnostics( "SQLExecDirect of CREATE", returnCode ) ;
+
+  printf( "Table TASKS created using SQLExecDirect.\n" ) ;
+
+  printf( "Inserting data using SQLBindParameter, SQLPrepare, SQLExecute\n" ) ;
+
+  // Insert few rows into test table using bound parameters
+  // INSERT INTO TASKS VALUES (?, ?, ?) ;
+  SQLINTEGER intID ;
+  SQLLEN cbID = 0, cbTask = SQL_NTS, cbCompleted = 0 ;
+  SQLCHAR strTask[200] ;
+  SQL_DATE_STRUCT dsCompleted ;
+
+  returnCode = SQLBindParameter( hstmt
+			       , 1
+			       , SQL_PARAM_INPUT
+			       , SQL_C_LONG
+			       , SQL_INTEGER
+			       , 0
+			       , 0
+			       , &intID
+			       , 0
+			       , &cbID
+			       ) ;
+  if ( ! SQL_SUCCEEDED( returnCode ) )
+     LogDiagnostics( "SQLBindParameter 1", returnCode ) ;
+
+  returnCode = SQLBindParameter( hstmt
+			       , 2
+			       , SQL_PARAM_INPUT
+			       , SQL_C_CHAR
+			       , SQL_CHAR
+			       , 0
+			       , 0
+			       , &strTask
+			       , 0
+			       , &cbTask
+			       ) ;
+  if ( ! SQL_SUCCEEDED( returnCode ) )
+     LogDiagnostics( "SQLBindParameter 2", returnCode ) ;
+
+  returnCode = SQLBindParameter( hstmt
+			       , 3
+			       , SQL_PARAM_INPUT
+			       , SQL_C_TYPE_DATE
+			       , SQL_DATE
+			       , sizeof(dsCompleted)
+			       , 0
+			       , &dsCompleted
+			       , 0
+			       , &cbCompleted
+			       ) ;
+  if ( ! SQL_SUCCEEDED( returnCode ) )
+     LogDiagnostics( "SQLBindParameter 3", returnCode ) ;
+
+  returnCode = SQLPrepare( hstmt
+			 , (SQLCHAR*) "INSERT INTO TASKS VALUES (?, ?, ?)"
+			 , SQL_NTS
+			 ) ;
+  if ( ! SQL_SUCCEEDED( returnCode ) )
+     LogDiagnostics( "SQLPrepare of INSERT", returnCode ) ;
+
+  intID = 1000 ;
+  strcpy ( (char*) strTask, "CREATE REPORTS" ) ;
+  dsCompleted.year = 2014 ;
+  dsCompleted.month = 3 ;
+  dsCompleted.day = 22 ;
+
+  returnCode = SQLExecute( hstmt ) ;
+  if ( ! SQL_SUCCEEDED( returnCode ) )
+     LogDiagnostics( "SQLExecute", returnCode ) ;
+
+  printf( "Data inserted.\n" ) ;
+
+  // Select rows from test table and fetch the data
+  // SELECT * from TASKS WHERE TASK LIKE '%REPORT%'
+  printf( "Fetching data using SQLExecDirect, SQLFetch, SQLGetData\n" ) ;
+
+  returnCode = SQLExecDirect( hstmt
+			    , (SQLCHAR*) "SELECT ID, TASK, COMPLETED FROM TASKS"
+			    , SQL_NTS
+			    ) ;
+  if ( ! SQL_SUCCEEDED( returnCode ) )
+     LogDiagnostics( "SQLExecDirect of SELECT", returnCode ) ;
+        
+  //loop thru resultset
+  while ( TRUE ) 
+  {
+    returnCode = SQLFetch( hstmt ) ;
+    if ( returnCode == SQL_ERROR || returnCode == SQL_SUCCESS_WITH_INFO ) 
+    {
+      LogDiagnostics( "SQLFetch", returnCode ) ;
+    }
+    
+    if ( returnCode == SQL_SUCCESS || returnCode == SQL_SUCCESS_WITH_INFO )
+    {
+      SQLGetData( hstmt, 1, SQL_C_LONG, &intID, 0, &cbID ) ;
+      SQLGetData( hstmt, 2, SQL_C_CHAR, strTask, 20, &cbTask ) ;
+      SQLGetData( hstmt
+		, 3
+		, SQL_C_TYPE_DATE
+		, &dsCompleted
+		, sizeof( dsCompleted )
+		, &cbCompleted
+		) ;
+      printf( "Data selected: %d %s %d-%d-%d\n"
+	    , intID
+	    , strTask
+	    , dsCompleted.year
+	    , dsCompleted.month
+	    , dsCompleted.day
+	    ) ;
+    } 
+    else 
+      break ;
+  }
+  
+  // Free Statement handle
+  returnCode = SQLFreeHandle( SQL_HANDLE_STMT, hstmt ) ;
+  if ( ! SQL_SUCCEEDED( returnCode ) )
+     LogDiagnostics( "SQLFreeHandle( SQL_HANDLE_STMT, hstmt )", returnCode ) ;
+  hstmt = SQL_NULL_HANDLE ;
+
+  // Disconnect
+  returnCode = SQLDisconnect( hdbc ) ;
+  if ( ! SQL_SUCCEEDED( returnCode ) )
+     LogDiagnostics( "SQLDisconnect( hdbc )", returnCode ) ;
+
+  // Free Connection handle
+  returnCode = SQLFreeHandle( SQL_HANDLE_DBC, hdbc ) ;
+  if ( ! SQL_SUCCEEDED( returnCode ) )
+     LogDiagnostics( "SQLFreeHandle( SQL_HANDLE_DBC, hdbc )", returnCode ) ;
+  hdbc = SQL_NULL_HANDLE ;
+
+  // Free Environment handle
+  returnCode = SQLFreeHandle( SQL_HANDLE_ENV, henv ) ;
+  if ( ! SQL_SUCCEEDED( returnCode ) )
+     LogDiagnostics( "SQLFreeHandle( SQL_HANDLE_ENV, henv )", returnCode ) ;
+  henv = SQL_NULL_HANDLE ;
+
+  printf( "Basic SQL ODBC Test Passed!\n" ) ;
+  exit( EXIT_SUCCESS ) ;
+}

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/resources/source/build.bat
----------------------------------------------------------------------
diff --git a/docs/client_install/src/resources/source/build.bat b/docs/client_install/src/resources/source/build.bat
index d8ecdda..8fde588 100644
--- a/docs/client_install/src/resources/source/build.bat
+++ b/docs/client_install/src/resources/source/build.bat
@@ -1,25 +1,25 @@
-@echo off
-REM @@@ START COPYRIGHT @@@
-REM
-REM Licensed to the Apache Software Foundation (ASF) under one
-REM or more contributor license agreements.  See the NOTICE file
-REM distributed with this work for additional information
-REM regarding copyright ownership.  The ASF licenses this file
-REM to you under the Apache License, Version 2.0 (the
-REM "License"); you may not use this file except in compliance
-REM with the License.  You may obtain a copy of the License at
-REM
-REM   http://www.apache.org/licenses/LICENSE-2.0
-REM
-REM Unless required by applicable law or agreed to in writing,
-REM software distributed under the License is distributed on an
-REM "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-REM KIND, either express or implied.  See the License for the
-REM specific language governing permissions and limitations
-REM under the License.
-REM
-REM @@@ END COPYRIGHT @@@
-
-CL /c /Zi /nologo /W3 /WX- /O2 /D "NDEBUG" /D "_CRT_SECURE_NO_DEPRECATE" /D "_MBCS" /Gm /EHsc /GS /fp:precise /Zc:wchar_t /Zc:forScope /Fo"./" /Gd /errorReport:queue basicsql.cpp 
-
-link /OUT:"./basicsql.exe" /NOLOGO "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib" "comdlg32.lib" "advapi32.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "odbc32.lib" "odbccp32.lib" /TLBID:1 /DYNAMICBASE /NXCOMPAT /MACHINE:X64 /ERRORREPORT:QUEUE basicsql.obj
+@echo off
+REM @@@ START COPYRIGHT @@@
+REM
+REM Licensed to the Apache Software Foundation (ASF) under one
+REM or more contributor license agreements.  See the NOTICE file
+REM distributed with this work for additional information
+REM regarding copyright ownership.  The ASF licenses this file
+REM to you under the Apache License, Version 2.0 (the
+REM "License"); you may not use this file except in compliance
+REM with the License.  You may obtain a copy of the License at
+REM
+REM   http://www.apache.org/licenses/LICENSE-2.0
+REM
+REM Unless required by applicable law or agreed to in writing,
+REM software distributed under the License is distributed on an
+REM "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+REM KIND, either express or implied.  See the License for the
+REM specific language governing permissions and limitations
+REM under the License.
+REM
+REM @@@ END COPYRIGHT @@@
+
+CL /c /Zi /nologo /W3 /WX- /O2 /D "NDEBUG" /D "_CRT_SECURE_NO_DEPRECATE" /D "_MBCS" /Gm /EHsc /GS /fp:precise /Zc:wchar_t /Zc:forScope /Fo"./" /Gd /errorReport:queue basicsql.cpp 
+
+link /OUT:"./basicsql.exe" /NOLOGO "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib" "comdlg32.lib" "advapi32.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "odbc32.lib" "odbccp32.lib" /TLBID:1 /DYNAMICBASE /NXCOMPAT /MACHINE:X64 /ERRORREPORT:QUEUE basicsql.obj

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/resources/source/run.bat
----------------------------------------------------------------------
diff --git a/docs/client_install/src/resources/source/run.bat b/docs/client_install/src/resources/source/run.bat
index 98997b9..3b695d1 100644
--- a/docs/client_install/src/resources/source/run.bat
+++ b/docs/client_install/src/resources/source/run.bat
@@ -1,23 +1,23 @@
-@echo off
-REM @@@ START COPYRIGHT @@@
-REM
-REM Licensed to the Apache Software Foundation (ASF) under one
-REM or more contributor license agreements.  See the NOTICE file
-REM distributed with this work for additional information
-REM regarding copyright ownership.  The ASF licenses this file
-REM to you under the Apache License, Version 2.0 (the
-REM "License"); you may not use this file except in compliance
-REM with the License.  You may obtain a copy of the License at
-REM
-REM   http://www.apache.org/licenses/LICENSE-2.0
-REM
-REM Unless required by applicable law or agreed to in writing,
-REM software distributed under the License is distributed on an
-REM "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-REM KIND, either express or implied.  See the License for the
-REM specific language governing permissions and limitations
-REM under the License.
-REM
-REM @@@ END COPYRIGHT @@@
-
-basicsql.exe Default_Datasource user1 pwd1
+@echo off
+REM @@@ START COPYRIGHT @@@
+REM
+REM Licensed to the Apache Software Foundation (ASF) under one
+REM or more contributor license agreements.  See the NOTICE file
+REM distributed with this work for additional information
+REM regarding copyright ownership.  The ASF licenses this file
+REM to you under the Apache License, Version 2.0 (the
+REM "License"); you may not use this file except in compliance
+REM with the License.  You may obtain a copy of the License at
+REM
+REM   http://www.apache.org/licenses/LICENSE-2.0
+REM
+REM Unless required by applicable law or agreed to in writing,
+REM software distributed under the License is distributed on an
+REM "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+REM KIND, either express or implied.  See the License for the
+REM specific language governing permissions and limitations
+REM under the License.
+REM
+REM @@@ END COPYRIGHT @@@
+
+basicsql.exe Default_Datasource user1 pwd1

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/resources/tableau/trafodion.tdc
----------------------------------------------------------------------
diff --git a/docs/client_install/src/resources/tableau/trafodion.tdc b/docs/client_install/src/resources/tableau/trafodion.tdc
new file mode 100644
index 0000000..bfb1e1e
--- /dev/null
+++ b/docs/client_install/src/resources/tableau/trafodion.tdc
@@ -0,0 +1,16 @@
+<?xml version='1.0' encoding='utf-8' ?>
+<connection-customization class='genericodbc' enabled='true' version='9.3' dbname='TRAFODION' odbc-native-protocol='yes' odbc-use-connection-pooling='yes'> 
+  <vendor name='Trafodion' /> 
+  <driver name='TRAF ODBC 2.1' /> 
+  <customizations> 
+    <customization name='CAP_ISOLATION_LEVEL_SERIALIZABLE' value='no'/> 
+    <customization name='CAP_ISOLATION_LEVEL_READ_UNCOMMITTED' value='yes' /> 
+    <customization name='CAP_SET_ISOLATION_LEVEL_VIA_ODBC_API' value='no' /> 
+    <customization name='CAP_CREATE_TEMP_TABLES' value='no' />
+    <customization name='CAP_SUPPRESS_DISCOVERY_QUERIES' value='yes' />
+    <customization name='CAP_ODBC_METADATA_SUPPRESS_PREPARED_QUERY' value='yes' />
+    <customization name='CAP_ODBC_METADATA_SUPPRESS_SELECT_STAR' value='yes' />
+    <customization name='CAP_ODBC_METADATA_SUPPRESS_EXECUTED_QUERY' value='yes' />
+    <customization name='CAP_ODBC_METADATA_SUPRESS_SQLSTATISTICS_API' value='yes' />
+  </customizations> 
+</connection-customization>

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/resources/tableau/trafodion.tdc.template
----------------------------------------------------------------------
diff --git a/docs/client_install/src/resources/tableau/trafodion.tdc.template b/docs/client_install/src/resources/tableau/trafodion.tdc.template
new file mode 100644
index 0000000..8d71f39
--- /dev/null
+++ b/docs/client_install/src/resources/tableau/trafodion.tdc.template
@@ -0,0 +1,16 @@
+<?xml version='1.0' encoding='utf-8' ?>
+<connection-customization class='genericodbc' enabled='true' version='<tableau-version>' dbname='TRAFODION' odbc-native-protocol='yes' odbc-use-connection-pooling='yes'> 
+  <vendor name='Trafodion' /> 
+  <driver name='<trafodion-driver-name>' /> 
+  <customizations> 
+    <customization name='CAP_ISOLATION_LEVEL_SERIALIZABLE' value='no'/> 
+    <customization name='CAP_ISOLATION_LEVEL_READ_UNCOMMITTED' value='yes' /> 
+    <customization name='CAP_SET_ISOLATION_LEVEL_VIA_ODBC_API' value='no' /> 
+    <customization name='CAP_CREATE_TEMP_TABLES' value='no' />
+    <customization name='CAP_SUPPRESS_DISCOVERY_QUERIES' value='yes' />
+    <customization name='CAP_ODBC_METADATA_SUPPRESS_PREPARED_QUERY' value='yes' />
+    <customization name='CAP_ODBC_METADATA_SUPPRESS_SELECT_STAR' value='yes' />
+    <customization name='CAP_ODBC_METADATA_SUPPRESS_EXECUTED_QUERY' value='yes' />
+    <customization name='CAP_ODBC_METADATA_SUPRESS_SQLSTATISTICS_API' value='yes' />
+  </customizations> 
+</connection-customization>

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/messages_guide/src/asciidoc/_chapters/binder_msgs.adoc
----------------------------------------------------------------------
diff --git a/docs/messages_guide/src/asciidoc/_chapters/binder_msgs.adoc b/docs/messages_guide/src/asciidoc/_chapters/binder_msgs.adoc
index a597202..b66b3d3 100644
--- a/docs/messages_guide/src/asciidoc/_chapters/binder_msgs.adoc
+++ b/docs/messages_guide/src/asciidoc/_chapters/binder_msgs.adoc
@@ -2257,11 +2257,11 @@ statement.
 == SQL 4169
 
 ```
-Embedded delete statements are not allowed when using DECLARE ...
+Embedded delete statements are not allowed when using DECLARE . . .
 FOR UPDATE clause.
 ```
 
-*Cause:* You attempted to perform a DECLARE... FOR UPDATE clause that
+*Cause:* You attempted to perform a DECLARE . . . FOR UPDATE clause that
 included an embedded DELETE statement.
 
 *Effect:* {project-name} is unable to compile the
@@ -2797,7 +2797,7 @@ A CALL statement is not allowed within a compound statement.
 ```
 
 *Cause:* In the {project-name} database software statement being compiled, a
-CALL statement was present within a BEGIN...END block.
+CALL statement was present within a BEGIN . . . END block.
 
 *Effect:* {project-name} statement is not compiled.
 

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/odb_user/src/asciidoc/_chapters/install.adoc
----------------------------------------------------------------------
diff --git a/docs/odb_user/src/asciidoc/_chapters/install.adoc b/docs/odb_user/src/asciidoc/_chapters/install.adoc
index f4ce867..5606daa 100644
Binary files a/docs/odb_user/src/asciidoc/_chapters/install.adoc and b/docs/odb_user/src/asciidoc/_chapters/install.adoc differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/spj_guide/src/resources/source/partLocations.java
----------------------------------------------------------------------
diff --git a/docs/spj_guide/src/resources/source/partLocations.java b/docs/spj_guide/src/resources/source/partLocations.java
new file mode 100644
index 0000000..88cdaae
--- /dev/null
+++ b/docs/spj_guide/src/resources/source/partLocations.java
@@ -0,0 +1,42 @@
+// The PARTLOCATIONS procedure accepts a part number and quantity and returns a
+// set of location codes that have the exact quantity and a set of location
+// codes that have more than that quantity.
+//
+// See http://trafodion.incubator.apache.org/docs/spj_guide/index.html#partlocations-procedure
+// for more documentation.
+public static void partLocations( int partNum
+				, int quantity
+				, ResultSet exactly[]
+				, ResultSet moreThan[]
+				) throws SQLException
+
+{
+   Connection conn =
+      DriverManager.getConnection( "jdbc:default:connection" ) ;
+
+   PreparedStatement getLocationsExact =
+      conn.prepareStatement( "SELECT L.loc_code, L.partnum, L.qty_on_hand "
+			   + "FROM trafodion.invent.partloc L "
+			   + "WHERE L.partnum = ? "
+			   + "  AND L.qty_on_hand = ? "
+			   + "ORDER BY L.partnum "
+			   ) ;
+
+   getLocationsExact.setInt( 1, partNum ) ;
+   getLocationsExact.setInt( 2, quantity ) ;
+
+   PreparedStatement getLocationsMoreThan =
+      conn.prepareStatement( "SELECT L.loc_code, L.partnum, L.qty_on_hand "
+			   + "FROM trafodion.invent.partloc L "
+			   + "WHERE L.partnum = ? "
+			   + "  AND L.qty_on_hand > ? "
+			   + "ORDER BY L.partnum "
+			   ) ;
+
+   getLocationsMoreThan.setInt( 1, partNum ) ;
+   getLocationsMoreThan.setInt( 2, quantity ) ;
+
+   exactly[0]  = getLocationsExact.executeQuery() ;
+   moreThan[0] = getLocationsMoreThan.executeQuery() ;
+
+} 

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/spj_guide/src/resources/source/partlocations.java
----------------------------------------------------------------------
diff --git a/docs/spj_guide/src/resources/source/partlocations.java b/docs/spj_guide/src/resources/source/partlocations.java
deleted file mode 100644
index 88cdaae..0000000
--- a/docs/spj_guide/src/resources/source/partlocations.java
+++ /dev/null
@@ -1,42 +0,0 @@
-// The PARTLOCATIONS procedure accepts a part number and quantity and returns a
-// set of location codes that have the exact quantity and a set of location
-// codes that have more than that quantity.
-//
-// See http://trafodion.incubator.apache.org/docs/spj_guide/index.html#partlocations-procedure
-// for more documentation.
-public static void partLocations( int partNum
-				, int quantity
-				, ResultSet exactly[]
-				, ResultSet moreThan[]
-				) throws SQLException
-
-{
-   Connection conn =
-      DriverManager.getConnection( "jdbc:default:connection" ) ;
-
-   PreparedStatement getLocationsExact =
-      conn.prepareStatement( "SELECT L.loc_code, L.partnum, L.qty_on_hand "
-			   + "FROM trafodion.invent.partloc L "
-			   + "WHERE L.partnum = ? "
-			   + "  AND L.qty_on_hand = ? "
-			   + " ORDER BY L.partnum "
-			   ) ;
-
-   getLocationsExact.setInt( 1, partNum ) ;
-   getLocationsExact.setInt( 2, quantity) ;
-
-   PreparedStatement getLocationsMoreThan =
-      conn.prepareStatement( "SELECT L.loc_code, L.partnum, L.qty_on_hand "
-			   + "FROM trafodion.invent.partloc L "
-			   + "WHERE L.partnum = ? "
-			   + "  AND L.qty_on_hand > ? "
-			   + "ORDER BY L.partnum "
-			   ) ;
-
-   getLocationsMoreThan.setInt( 1, partNum ) ;
-   getLocationsMoreThan.setInt( 2, quantity) ;
-
-   exactly[0]  = getLocationsExact.executeQuery() ;
-   moreThan[0] = getLocationsMoreThan.executeQuery() ;
-
-} 

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/spj_guide/src/resources/source/supplierInfo.java
----------------------------------------------------------------------
diff --git a/docs/spj_guide/src/resources/source/supplierInfo.java b/docs/spj_guide/src/resources/source/supplierInfo.java
new file mode 100644
index 0000000..c98a392
--- /dev/null
+++ b/docs/spj_guide/src/resources/source/supplierInfo.java
@@ -0,0 +1,38 @@
+// The SUPPLIERINFO procedure accepts a supplier number and returns the
+// supplier's name, street, city, state, and post code to separate output
+// parameters.
+//
+// See http://trafodion.incubator.apache.org/docs/spj_guide/index.html#supplierinfo-procedure
+// for more documentation.
+public static void supplierInfo( BigDecimal suppNum
+			       , String[] suppName
+			       , String[] streetAddr
+			       , String[] cityName
+			       , String[] stateName
+			       , String[] postCode
+			       ) throws SQLException
+{
+   Connection conn =
+      DriverManager.getConnection( "jdbc:default:connection" ) ;
+
+   PreparedStatement getSupplier =
+      conn.prepareStatement( "SELECT suppname, street, city, "
+			   + "       state, postcode "
+			   + "FROM trafodion.invent.supplier "
+			   + "WHERE suppnum = ?"  
+			   ) ;
+
+   getSupplier.setBigDecimal( 1, suppNum ) ;
+   ResultSet rs = getSupplier.executeQuery() ;
+   rs.next() ;
+
+   suppName[0]   = rs.getString( 1 ) ;
+   streetAddr[0] = rs.getString( 2 ) ;
+   cityName[0]   = rs.getString( 3 ) ;
+   stateName[0]  = rs.getString( 4 ) ;
+   postCode[0]   = rs.getString( 5 ) ;
+
+   rs.close() ;
+   conn.close() ;
+
+} 

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/spj_guide/src/resources/source/supplierinfo.java
----------------------------------------------------------------------
diff --git a/docs/spj_guide/src/resources/source/supplierinfo.java b/docs/spj_guide/src/resources/source/supplierinfo.java
deleted file mode 100644
index c98a392..0000000
--- a/docs/spj_guide/src/resources/source/supplierinfo.java
+++ /dev/null
@@ -1,38 +0,0 @@
-// The SUPPLIERINFO procedure accepts a supplier number and returns the
-// supplier's name, street, city, state, and post code to separate output
-// parameters.
-//
-// See http://trafodion.incubator.apache.org/docs/spj_guide/index.html#supplierinfo-procedure
-// for more documentation.
-public static void supplierInfo( BigDecimal suppNum
-			       , String[] suppName
-			       , String[] streetAddr
-			       , String[] cityName
-			       , String[] stateName
-			       , String[] postCode
-			       ) throws SQLException
-{
-   Connection conn =
-      DriverManager.getConnection( "jdbc:default:connection" ) ;
-
-   PreparedStatement getSupplier =
-      conn.prepareStatement( "SELECT suppname, street, city, "
-			   + "       state, postcode "
-			   + "FROM trafodion.invent.supplier "
-			   + "WHERE suppnum = ?"  
-			   ) ;
-
-   getSupplier.setBigDecimal( 1, suppNum ) ;
-   ResultSet rs = getSupplier.executeQuery() ;
-   rs.next() ;
-
-   suppName[0]   = rs.getString( 1 ) ;
-   streetAddr[0] = rs.getString( 2 ) ;
-   cityName[0]   = rs.getString( 3 ) ;
-   stateName[0]  = rs.getString( 4 ) ;
-   postCode[0]   = rs.getString( 5 ) ;
-
-   rs.close() ;
-   conn.close() ;
-
-} 

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/spj_guide/src/resources/source/supplyQuantities.java
----------------------------------------------------------------------
diff --git a/docs/spj_guide/src/resources/source/supplyQuantities.java b/docs/spj_guide/src/resources/source/supplyQuantities.java
new file mode 100644
index 0000000..59a6911
--- /dev/null
+++ b/docs/spj_guide/src/resources/source/supplyQuantities.java
@@ -0,0 +1,32 @@
+// The SUPPLYQUANTITIES procedure returns the average, minimum, and maximum
+// quantities of available parts in inventory to separate output
+// parameters.
+//
+// See http://trafodion.incubator.apache.org/docs/spj_guide/index.html#supplyquantities-procedure
+// for more documentation.
+public static void supplyQuantities( int[] avgQty
+				   , int[] minQty
+				   , int[] maxQty
+				   ) throws SQLException
+{
+   Connection conn =
+      DriverManager.getConnection( "jdbc:default:connection" ) ;
+
+   PreparedStatement getQty =
+      conn.prepareStatement( "SELECT AVG(qty_on_hand), "
+			   + "       MIN(qty_on_hand), "
+			   + "       MAX(qty_on_hand) "
+			   + "FROM trafodion.invent.partloc"
+			   ) ;
+
+   ResultSet rs = getQty.executeQuery() ;
+   rs.next() ;
+
+   avgQty[0] = rs.getInt( 1 ) ;
+   minQty[0] = rs.getInt( 2 ) ;
+   maxQty[0] = rs.getInt( 3 ) ;
+
+   rs.close() ;
+   conn.close() ;
+
+} 
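
Side note: the int[] parameters here surface as INTEGER OUT parameters in the
same way. A sketch that registers and then calls the procedure over JDBC,
assuming a library was already created with CREATE LIBRARY; the library name,
class name, URL, and credentials below are placeholders, and the CREATE
PROCEDURE clause set follows the pattern documented in the SPJ guide.

import java.sql.CallableStatement ;
import java.sql.Connection ;
import java.sql.DriverManager ;
import java.sql.Statement ;
import java.sql.Types ;

public class CallSupplyQuantities
{
   public static void main( String[] args ) throws Exception
   {
      Class.forName( "org.trafodion.jdbc.t4.T4Driver" ) ;
      Connection conn = DriverManager.getConnection(
         "jdbc:t4jdbc://localhost:23400/:", "user", "password" ) ;

      // Register the SPJ (run once; skip if it already exists).
      // Inventory and trafodion.invent.invlib are placeholder class and
      // library names.
      Statement stmt = conn.createStatement() ;
      stmt.executeUpdate( "CREATE PROCEDURE trafodion.invent.supplyquantities"
                        + " (OUT avgqty INTEGER, OUT minqty INTEGER, OUT maxqty INTEGER)"
                        + " EXTERNAL NAME 'Inventory.supplyQuantities'"
                        + " LIBRARY trafodion.invent.invlib"
                        + " LANGUAGE JAVA PARAMETER STYLE JAVA READS SQL DATA" ) ;
      stmt.close() ;

      CallableStatement call =
         conn.prepareCall( "{call trafodion.invent.supplyquantities(?,?,?)}" ) ;
      for ( int i = 1 ; i <= 3 ; i++ )
         call.registerOutParameter( i, Types.INTEGER ) ;

      call.execute() ;
      System.out.println( "avg=" + call.getInt( 1 )
                        + " min=" + call.getInt( 2 )
                        + " max=" + call.getInt( 3 ) ) ;

      call.close() ;
      conn.close() ;
   }
}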

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/spj_guide/src/resources/source/supplyquantities.java
----------------------------------------------------------------------
diff --git a/docs/spj_guide/src/resources/source/supplyquantities.java b/docs/spj_guide/src/resources/source/supplyquantities.java
deleted file mode 100644
index 59a6911..0000000
--- a/docs/spj_guide/src/resources/source/supplyquantities.java
+++ /dev/null
@@ -1,32 +0,0 @@
-// The SUPPLYQUANTITIES procedure returns the average, minimum, and maximum
-// quantities of available parts in inventory to separate output
-// parameters.
-//
-// See http://trafodion.incubator.apache.org/docs/spj_guide/index.html#supplyquantities-procedure
-// for more documentation.
-public static void supplyQuantities( int[] avgQty
-				   , int[] minQty
-				   , int[] maxQty
-				   ) throws SQLException
-{
-   Connection conn =
-      DriverManager.getConnection( "jdbc:default:connection" ) ;
-
-   PreparedStatement getQty =
-      conn.prepareStatement( "SELECT AVG(qty_on_hand), "
-			   + "       MIN(qty_on_hand), "
-			   + "       MAX(qty_on_hand) "
-			   + "FROM trafodion.invent.partloc"
-			   ) ;
-
-   ResultSet rs = getQty.executeQuery() ;
-   rs.next() ;
-
-   avgQty[0] = rs.getInt( 1 ) ;
-   minQty[0] = rs.getInt( 2 ) ;
-   maxQty[0] = rs.getInt( 3 ) ;
-
-   rs.close() ;
-   conn.close() ;
-
-} 

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/sql_reference/pom.xml
----------------------------------------------------------------------
diff --git a/docs/sql_reference/pom.xml b/docs/sql_reference/pom.xml
index c79a6a0..c5ebf3c 100644
--- a/docs/sql_reference/pom.xml
+++ b/docs/sql_reference/pom.xml
@@ -1,301 +1,301 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
- <!-- 
-* @@@ START COPYRIGHT @@@                                                       
-*
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*   http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing,
-* software distributed under the License is distributed on an
-* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-* KIND, either express or implied.  See the License for the
-* specific language governing permissions and limitations
-* under the License.
-*
-* @@@ END COPYRIGHT @@@
--->
-  <modelVersion>4.0.0</modelVersion>
-  <groupId>org.apache.trafodion</groupId>
-  <artifactId>sql-reference-manual</artifactId>
-  <version>${env.TRAFODION_VER}</version>
-  <packaging>pom</packaging>
-  <name>Trafodion SQL Reference Manual</name>
-  <description>This manual describes reference information about the syntax of SQL statements, 
-               functions, and other SQL language elements supported by the Trafodion project\u2019s 
-               database software.
-  </description>
-  <url>http://trafodion.incubator.apache.org</url>
-  <inceptionYear>2015</inceptionYear>
-  <parent>
-    <groupId>org.apache.trafodion</groupId>
-    <artifactId>trafodion</artifactId>
-    <relativePath>../../pom.xml</relativePath>
-    <version>1.3.0</version>
-  </parent>
-
-
-  <licenses>
-    <license>
-      <name>The Apache Software License, Version 2.0</name>
-      <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
-      <distribution>repo</distribution>
-      <comments>A business-friendly OSS license</comments>
-    </license>
-  </licenses>
-
-  <organization>
-    <name>Apache Software Foundation</name>
-    <url>http://www.apache.org</url>
-  </organization>
-
-  <issueManagement>
-    <system>JIRA</system>
-    <url>http://issues.apache.org/jira/browse/TRAFODION</url>
-  </issueManagement>
-
-  <scm>
-    <connection>scm:git:http://git-wip-us.apache.org/repos/asf/incubator-trafodion.git</connection>
-    <developerConnection>scm:git:https://git-wip-us.apache.org/repos/asf/incubator-trafodion.git</developerConnection>
-    <url>https://git-wip-us.apache.org/repos/asf?p=incubator-trafodion.git</url>
-    <tag>HEAD</tag>
-  </scm>
-
-  <ciManagement>
-    <system>Jenkins</system>
-    <url>https://jenkins.esgyn.com</url>
-  </ciManagement>
-
-  <properties>
-    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
-    <asciidoctor.maven.plugin.version>1.5.2.1</asciidoctor.maven.plugin.version>
-    <asciidoctorj.pdf.version>1.5.0-alpha.11</asciidoctorj.pdf.version>
-    <asciidoctorj.version>1.5.4</asciidoctorj.version>
-    <rubygems.prawn.version>2.0.2</rubygems.prawn.version>
-    <jruby.version>9.0.4.0</jruby.version>
-    <dependency.locations.enabled>false</dependency.locations.enabled>
-  </properties>
-
-  <repositories>
-    <repository>
-      <id>rubygems-proxy-releases</id>
-      <name>RubyGems.org Proxy (Releases)</name>
-      <url>http://rubygems-proxy.torquebox.org/releases</url>
-      <releases>
-        <enabled>true</enabled>
-      </releases>
-      <snapshots>
-        <enabled>false</enabled>
-      </snapshots>
-    </repository>
-  </repositories>
-  
-  <dependencies>
-    <dependency>
-      <groupId>rubygems</groupId>
-      <artifactId>prawn</artifactId>
-      <version>${rubygems.prawn.version}</version>
-      <type>gem</type>
-      <scope>provided</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.jruby</groupId>
-      <artifactId>jruby-complete</artifactId>
-      <version>${jruby.version}</version>
-    </dependency>
-    <dependency>
-      <groupId>org.asciidoctor</groupId>
-      <artifactId>asciidoctorj</artifactId>
-      <version>${asciidoctorj.version}</version>
-    </dependency>
-  </dependencies>
-
-  <build>
-    <plugins>
-      <plugin>
-        <groupId>de.saumya.mojo</groupId>
-        <artifactId>gem-maven-plugin</artifactId>
-        <version>1.0.10</version>
-        <configuration>
-          <!-- align JRuby version with AsciidoctorJ to avoid redundant downloading -->
-          <jrubyVersion>${jruby.version}</jrubyVersion>
-          <gemHome>${project.build.directory}/gems</gemHome>
-          <gemPath>${project.build.directory}/gems</gemPath>
-        </configuration>
-        <executions>
-          <execution>
-            <goals>
-              <goal>initialize</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-resources-plugin</artifactId>
-        <version>2.7</version>
-        <configuration>
-          <encoding>UTF-8</encoding>
-          <attributes>
-            <generateReports>false</generateReports>
-          </attributes>
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>org.asciidoctor</groupId>
-        <artifactId>asciidoctor-maven-plugin</artifactId>
-        <version>${asciidoctor.maven.plugin.version}</version> 
-        <dependencies>
-          <dependency>
-            <groupId>org.asciidoctor</groupId>
-            <artifactId>asciidoctorj-pdf</artifactId>
-            <version>${asciidoctorj.pdf.version}</version>
-          </dependency>
-          <dependency>
-            <groupId>org.asciidoctor</groupId>
-            <artifactId>asciidoctorj</artifactId>
-            <version>${asciidoctorj.version}</version>
-          </dependency>
-        </dependencies>
-        <configuration>
-          <sourceDirectory>${basedir}/src</sourceDirectory>
-          <gemPath>${project.build.directory}/gems-provided</gemPath>
-        </configuration>
-        <executions>
-          <execution>
-            <id>generate-html-doc</id> 
-            <goals>
-              <goal>process-asciidoc</goal> 
-            </goals>
-            <phase>site</phase>
-            <configuration>
-              <doctype>book</doctype>
-              <backend>html5</backend>
-              <sourceHighlighter>coderay</sourceHighlighter>
-              <outputDirectory>${basedir}/target/site</outputDirectory>
-              <requires>
-                <require>${basedir}/../shared/google-analytics-postprocessor.rb</require>
-              </requires>
-              <attributes>
-                <!-- Location of centralized stylesheet -->
-                <stylesheet>${basedir}/../shared/trafodion-manuals.css</stylesheet>
-                <project-version>${env.TRAFODION_VER}</project-version>
-                <project-name>Trafodion</project-name>
-                <project-logo>${basedir}/../shared/trafodion-logo.jpg</project-logo>
-                <project-support>user@trafodion.incubator.apache.org</project-support>
-                <docs-url>http://trafodion.incubator.apache.org/docs</docs-url>
-                <build-date>${maven.build.timestamp}</build-date>
-                <google-analytics-account>UA-72491210-1</google-analytics-account>
-              </attributes>
-            </configuration>
-          </execution>
-          <execution>
-            <id>generate-pdf-doc</id>
-            <phase>site</phase>
-            <goals>
-              <goal>process-asciidoc</goal>
-            </goals>
-            <configuration>
-              <doctype>book</doctype>
-              <backend>pdf</backend>
-              <sourceHighlighter>coderay</sourceHighlighter>
-              <outputDirectory>${basedir}/target</outputDirectory>
-              <attributes>
-                <project-version>${env.TRAFODION_VER}</project-version>
-                <project-name>Trafodion</project-name>
-                <project-logo>${basedir}/../shared/trafodion-logo.jpg</project-logo>
-                <project-support>user@trafodion.incubator.apache.org</project-support>
-                <docs-url>http://trafodion.incubator.apache.org/docs</docs-url>
-                <build-date>${maven.build.timestamp}</build-date>
-                <pdf-stylesdir>${basedir}/../shared</pdf-stylesdir>
-                <pdf-style>trafodion</pdf-style>
-                <icons>font</icons>
-                <pagenums/>
-                <toc/>
-                <idprefix/>
-                <idseparator>-</idseparator>
-              </attributes>
-            </configuration>
-          </execution>
-        </executions>
-      </plugin> 
-      <!-- Copy files to the web-site end destinations. -->
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-antrun-plugin</artifactId>
-        <version>1.8</version>
-        <inherited>false</inherited>
-        <executions>
-          <execution>
-            <id>populate-release-directories</id>
-            <phase>post-site</phase>
-            <configuration>
-              <target name="Populate Release Directories">
-                <!-- The website uses the following organization for the docs/target/docs directory:
-                  - To ensure a known location, the base directory contains the LATEST version of the web book and the PDF files.
-                  - The know location is docs/target/docs/<document>
-                  - target/docs/<version>/<document> contains version-specific renderings of the documents.
-                  - target/docs/<version>/<document> contains the PDF version and the web book. The web book is named index.html
-                --> 
-                <!-- Copy the PDF file to its target directories -->
-                <copy file="${basedir}/target/index.pdf" tofile="${basedir}/../target/docs/sql_reference/Trafodion_SQL_Reference_Manual.pdf" />
-                <copy file="${basedir}/target/index.pdf" tofile="${basedir}/../target/docs/${project.version}/sql_reference/Trafodion_SQL_Reference_Manual.pdf" />
-                <!-- Copy the Web Book files to their target directories -->
-                <copy todir="${basedir}/../target/docs/sql_reference">
-                  <fileset dir="${basedir}/target/site">
-                    <include name="**/*.*"/>  <!--All sub-directories, too-->
-                  </fileset>
-                </copy>
-                <copy todir="${basedir}/../target/docs/${project.version}/sql_reference">
-                  <fileset dir="${basedir}/target/site">
-                    <include name="**/*.*"/>  <!--All sub-directories, too-->
-                  </fileset>
-                </copy>
-              </target>
-            </configuration>
-            <goals>
-              <goal>run</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-
-  <!-- Included because this is required. No reports are generated. -->
-  <reporting>
-    <excludeDefaults>true</excludeDefaults>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-project-info-reports-plugin</artifactId>
-        <version>2.8</version>
-        <reportSets>
-          <reportSet>
-            <reports>
-            </reports>
-          </reportSet>
-        </reportSets>
-      </plugin>
-    </plugins>
-  </reporting>
-
-  <distributionManagement>
-    <site>
-      <id>trafodion.incubator.apache.org</id>
-      <name>Trafodion Website at incubator.apache.org</name>
-      <!-- On why this is the tmp dir and not trafodion.incubator.apache.org, see
-      https://issues.apache.org/jira/browse/HBASE-7593?focusedCommentId=13555866&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13555866
-      -->
-      <url>file:///tmp</url>
-    </site>
-  </distributionManagement>
-</project>
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+ <!-- 
+* @@@ START COPYRIGHT @@@                                                       
+*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*   http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing,
+* software distributed under the License is distributed on an
+* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+* KIND, either express or implied.  See the License for the
+* specific language governing permissions and limitations
+* under the License.
+*
+* @@@ END COPYRIGHT @@@
+-->
+  <modelVersion>4.0.0</modelVersion>
+  <groupId>org.apache.trafodion</groupId>
+  <artifactId>sql-reference-manual</artifactId>
+  <version>${env.TRAFODION_VER}</version>
+  <packaging>pom</packaging>
+  <name>Trafodion SQL Reference Manual</name>
+  <description>This manual describes reference information about the syntax of SQL statements, 
+               functions, and other SQL language elements supported by the Trafodion project's 
+               database software.
+  </description>
+  <url>http://trafodion.incubator.apache.org</url>
+  <inceptionYear>2015</inceptionYear>
+  <parent>
+    <groupId>org.apache.trafodion</groupId>
+    <artifactId>trafodion</artifactId>
+    <relativePath>../../pom.xml</relativePath>
+    <version>1.3.0</version>
+  </parent>
+
+
+  <licenses>
+    <license>
+      <name>The Apache Software License, Version 2.0</name>
+      <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
+      <distribution>repo</distribution>
+      <comments>A business-friendly OSS license</comments>
+    </license>
+  </licenses>
+
+  <organization>
+    <name>Apache Software Foundation</name>
+    <url>http://www.apache.org</url>
+  </organization>
+
+  <issueManagement>
+    <system>JIRA</system>
+    <url>http://issues.apache.org/jira/browse/TRAFODION</url>
+  </issueManagement>
+
+  <scm>
+    <connection>scm:git:http://git-wip-us.apache.org/repos/asf/incubator-trafodion.git</connection>
+    <developerConnection>scm:git:https://git-wip-us.apache.org/repos/asf/incubator-trafodion.git</developerConnection>
+    <url>https://git-wip-us.apache.org/repos/asf?p=incubator-trafodion.git</url>
+    <tag>HEAD</tag>
+  </scm>
+
+  <ciManagement>
+    <system>Jenkins</system>
+    <url>https://jenkins.esgyn.com</url>
+  </ciManagement>
+
+  <properties>
+    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+    <asciidoctor.maven.plugin.version>1.5.2.1</asciidoctor.maven.plugin.version>
+    <asciidoctorj.pdf.version>1.5.0-alpha.11</asciidoctorj.pdf.version>
+    <asciidoctorj.version>1.5.4</asciidoctorj.version>
+    <rubygems.prawn.version>2.0.2</rubygems.prawn.version>
+    <jruby.version>9.0.4.0</jruby.version>
+    <dependency.locations.enabled>false</dependency.locations.enabled>
+  </properties>
+
+  <repositories>
+    <repository>
+      <id>rubygems-proxy-releases</id>
+      <name>RubyGems.org Proxy (Releases)</name>
+      <url>http://rubygems-proxy.torquebox.org/releases</url>
+      <releases>
+        <enabled>true</enabled>
+      </releases>
+      <snapshots>
+        <enabled>false</enabled>
+      </snapshots>
+    </repository>
+  </repositories>
+  
+  <dependencies>
+    <dependency>
+      <groupId>rubygems</groupId>
+      <artifactId>prawn</artifactId>
+      <version>${rubygems.prawn.version}</version>
+      <type>gem</type>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.jruby</groupId>
+      <artifactId>jruby-complete</artifactId>
+      <version>${jruby.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.asciidoctor</groupId>
+      <artifactId>asciidoctorj</artifactId>
+      <version>${asciidoctorj.version}</version>
+    </dependency>
+  </dependencies>
+
+  <build>
+    <plugins>
+      <plugin>
+        <groupId>de.saumya.mojo</groupId>
+        <artifactId>gem-maven-plugin</artifactId>
+        <version>1.0.10</version>
+        <configuration>
+          <!-- align JRuby version with AsciidoctorJ to avoid redundant downloading -->
+          <jrubyVersion>${jruby.version}</jrubyVersion>
+          <gemHome>${project.build.directory}/gems</gemHome>
+          <gemPath>${project.build.directory}/gems</gemPath>
+        </configuration>
+        <executions>
+          <execution>
+            <goals>
+              <goal>initialize</goal>
+            </goals>
+          </execution>
+        </executions>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-resources-plugin</artifactId>
+        <version>2.7</version>
+        <configuration>
+          <encoding>UTF-8</encoding>
+          <attributes>
+            <generateReports>false</generateReports>
+          </attributes>
+        </configuration>
+      </plugin>
+      <plugin>
+        <groupId>org.asciidoctor</groupId>
+        <artifactId>asciidoctor-maven-plugin</artifactId>
+        <version>${asciidoctor.maven.plugin.version}</version> 
+        <dependencies>
+          <dependency>
+            <groupId>org.asciidoctor</groupId>
+            <artifactId>asciidoctorj-pdf</artifactId>
+            <version>${asciidoctorj.pdf.version}</version>
+          </dependency>
+          <dependency>
+            <groupId>org.asciidoctor</groupId>
+            <artifactId>asciidoctorj</artifactId>
+            <version>${asciidoctorj.version}</version>
+          </dependency>
+        </dependencies>
+        <configuration>
+          <sourceDirectory>${basedir}/src</sourceDirectory>
+          <gemPath>${project.build.directory}/gems-provided</gemPath>
+        </configuration>
+        <executions>
+          <execution>
+            <id>generate-html-doc</id> 
+            <goals>
+              <goal>process-asciidoc</goal> 
+            </goals>
+            <phase>site</phase>
+            <configuration>
+              <doctype>book</doctype>
+              <backend>html5</backend>
+              <sourceHighlighter>coderay</sourceHighlighter>
+              <outputDirectory>${basedir}/target/site</outputDirectory>
+              <requires>
+                <require>${basedir}/../shared/google-analytics-postprocessor.rb</require>
+              </requires>
+              <attributes>
+                <!-- Location of centralized stylesheet -->
+                <stylesheet>${basedir}/../shared/trafodion-manuals.css</stylesheet>
+                <project-version>${env.TRAFODION_VER}</project-version>
+                <project-name>Trafodion</project-name>
+                <project-logo>${basedir}/../shared/trafodion-logo.jpg</project-logo>
+                <project-support>user@trafodion.incubator.apache.org</project-support>
+                <docs-url>http://trafodion.incubator.apache.org/docs</docs-url>
+                <build-date>${maven.build.timestamp}</build-date>
+                <google-analytics-account>UA-72491210-1</google-analytics-account>
+              </attributes>
+            </configuration>
+          </execution>
+          <execution>
+            <id>generate-pdf-doc</id>
+            <phase>site</phase>
+            <goals>
+              <goal>process-asciidoc</goal>
+            </goals>
+            <configuration>
+              <doctype>book</doctype>
+              <backend>pdf</backend>
+              <sourceHighlighter>coderay</sourceHighlighter>
+              <outputDirectory>${basedir}/target</outputDirectory>
+              <attributes>
+                <project-version>${env.TRAFODION_VER}</project-version>
+                <project-name>Trafodion</project-name>
+                <project-logo>${basedir}/../shared/trafodion-logo.jpg</project-logo>
+                <project-support>user@trafodion.incubator.apache.org</project-support>
+                <docs-url>http://trafodion.incubator.apache.org/docs</docs-url>
+                <build-date>${maven.build.timestamp}</build-date>
+                <pdf-stylesdir>${basedir}/../shared</pdf-stylesdir>
+                <pdf-style>trafodion</pdf-style>
+                <icons>font</icons>
+                <pagenums/>
+                <toc/>
+                <idprefix/>
+                <idseparator>-</idseparator>
+              </attributes>
+            </configuration>
+          </execution>
+        </executions>
+      </plugin> 
+      <!-- Copy files to the web-site end destinations. -->
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-antrun-plugin</artifactId>
+        <version>1.8</version>
+        <inherited>false</inherited>
+        <executions>
+          <execution>
+            <id>populate-release-directories</id>
+            <phase>post-site</phase>
+            <configuration>
+              <target name="Populate Release Directories">
+                <!-- The website uses the following organization for the docs/target/docs directory:
+                  - To ensure a known location, the base directory contains the LATEST version of the web book and the PDF files.
+                  - The known location is docs/target/docs/<document>
+                  - target/docs/<version>/<document> contains version-specific renderings of the documents.
+                  - target/docs/<version>/<document> contains the PDF version and the web book. The web book is named index.html
+                --> 
+                <!-- Copy the PDF file to its target directories -->
+                <copy file="${basedir}/target/index.pdf" tofile="${basedir}/../target/docs/sql_reference/Trafodion_SQL_Reference_Manual.pdf" />
+                <copy file="${basedir}/target/index.pdf" tofile="${basedir}/../target/docs/${project.version}/sql_reference/Trafodion_SQL_Reference_Manual.pdf" />
+                <!-- Copy the Web Book files to their target directories -->
+                <copy todir="${basedir}/../target/docs/sql_reference">
+                  <fileset dir="${basedir}/target/site">
+                    <include name="**/*.*"/>  <!--All sub-directories, too-->
+                  </fileset>
+                </copy>
+                <copy todir="${basedir}/../target/docs/${project.version}/sql_reference">
+                  <fileset dir="${basedir}/target/site">
+                    <include name="**/*.*"/>  <!--All sub-directories, too-->
+                  </fileset>
+                </copy>
+              </target>
+            </configuration>
+            <goals>
+              <goal>run</goal>
+            </goals>
+          </execution>
+        </executions>
+      </plugin>
+    </plugins>
+  </build>
+
+  <!-- Included because this is required. No reports are generated. -->
+  <reporting>
+    <excludeDefaults>true</excludeDefaults>
+    <plugins>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-project-info-reports-plugin</artifactId>
+        <version>2.8</version>
+        <reportSets>
+          <reportSet>
+            <reports>
+            </reports>
+          </reportSet>
+        </reportSets>
+      </plugin>
+    </plugins>
+  </reporting>
+
+  <distributionManagement>
+    <site>
+      <id>trafodion.incubator.apache.org</id>
+      <name>Trafodion Website at incubator.apache.org</name>
+      <!-- On why this is the tmp dir and not trafodion.incubator.apache.org, see
+      https://issues.apache.org/jira/browse/HBASE-7593?focusedCommentId=13555866&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13555866
+      -->
+      <url>file:///tmp</url>
+    </site>
+  </distributionManagement>
+</project>

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/sql_reference/src/asciidoc/_chapters/about.adoc
----------------------------------------------------------------------
diff --git a/docs/sql_reference/src/asciidoc/_chapters/about.adoc b/docs/sql_reference/src/asciidoc/_chapters/about.adoc
index b3b9ef6..7dc74ad 100644
--- a/docs/sql_reference/src/asciidoc/_chapters/about.adoc
+++ b/docs/sql_reference/src/asciidoc/_chapters/about.adoc
@@ -1,212 +1,212 @@
-////
-/**
-* @@@ START COPYRIGHT @@@
-*
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*   http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing,
-* software distributed under the License is distributed on an
-* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-* KIND, either express or implied.  See the License for the
-* specific language governing permissions and limitations
-* under the License.
-*
-* @@@ END COPYRIGHT @@@
-*/
-////
-
-[[About_This_Document]]
-= About This Document
-This manual describes reference information about the syntax of SQL statements, functions, and other
-SQL language elements supported by the {project-name} project\u2019s database software.
-
-{project-name} SQL statements and utilities are entered interactively or from script files using a client-based tool,
-such as the Trafodion Command Interface (TrafCI). To install and configure a client application that enables you
-to connect to and use a {project-name} database, see the
-{docs-url}/client_install/index.html[_{project-name} Client Installation Guide_].
-
-NOTE: In this manual, SQL language elements, statements, and clauses within statements are based on the
-ANSI SQL:1999 standard.
-
-[[Intended_Audience]]
-== Intended Audience
-This manual is intended for database administrators and application programmers who are using SQL to read, update,
-and create {project-name} SQL tables, which map to HBase tables, and to access native HBase and Hive tables.
-
-You should be familiar with structured query language (SQL) and with the American National Standard Database Language SQL:1999.
-
-<<<
-[[New_and_Changed_Information]]
-== New and Changed Information
-This edition includes updates for these new features:
-
-[cols="50%,50%",options="header"]
-|===
-| New Feature                                           | Location in the Manual
-| Incremental UPDATE STATISTICS                         | <<update_statistics_statement,UPDATE STATISTICS Statement>>
-|===
-
-<<<
-[[Document_Organization]]
-== Document Organization
-
-[cols="50%,50%",options="header"]
-|===
-|Chapter or Appendix                                              | Description
-| <<Introduction,Introduction>>                                   | Introduces {project-name} SQL and covers topics such as data consistency,
-transaction management, and ANSI compliance.
-| <<SQL_Statements,SQL Statements>>                               | Describes the SQL statements supported by {project-name} SQL.
-| <<SQL_Utilities,SQL Utilities>>                                 | Describes the SQL utilities supported by {project-name} SQL.
-| <<SQL_Language Elements,SQL Language Elements>>                 | Describes parts of the language, such as database objects, data types,
-expressions, identifiers, literals, and predicates, which occur within the syntax of {project-name} SQL statements.
-| <<SQL_Clauses,SQL Clauses>>                                     | Describes clauses used by {project-name} SQL statements.
-| <<SQL_Functions_and_Expressions,SQL Functions and Expressions>> | Describes specific functions and expressions that you can use in
-{project-name} SQL statements.
-| <<SQL_Runtime_Statistics,SQL Runtime Statistics>>               | Describes how to gather statistics for active queries or for the Runtime
-Management System (RMS) and describes the RMS counters that are returned.
-| <<OLAP_Functions,OLAP Functions>>                               | Describes specific on line analytical processing functions.
-| <<Reserved_Words,Appendix A: Reserved Words>>                   | Lists the words that are reserved in {project-name} SQL.
-| <<Limits,Appendix B: Limits>>                                  | Describes limits in {project-name} SQL.
-|===
-
-
-<<<
-== Notation Conventions
-This list summarizes the notation conventions for syntax presentation in this manual.
-
-* UPPERCASE LETTERS
-+
-Uppercase letters indicate keywords and reserved words. Type these items exactly as shown. Items not enclosed in brackets are required. 
-+
-```
-SELECT
-```
-
-* lowercase letters
-+
-Lowercase letters, regardless of font, indicate variable items that you supply. Items not enclosed in brackets are required.
-+
-```
-file-name
-```
-
-* &#91; &#93; Brackets 
-+
-Brackets enclose optional syntax items.
-+
-```
-DATETIME [start-field TO] end-field
-```
-+
-A group of items enclosed in brackets is a list from which you can choose one item or none.
-+
-The items in the list can be arranged either vertically, with aligned brackets on each side of the list, or horizontally, enclosed in a pair of brackets and separated by vertical lines.
-+
-For example: 
-+
-```
-DROP SCHEMA schema [CASCADE]
-DROP SCHEMA schema [ CASCADE | RESTRICT ]
-```
-
-<<<
-* { } Braces 
-+
-Braces enclose required syntax items.
-+
-```
-FROM { grantee [, grantee ] ... }
-```
-+ 
-A group of items enclosed in braces is a list from which you are required to choose one item.
-+
-The items in the list can be arranged either vertically, with aligned braces on each side of the list, or horizontally, enclosed in a pair of braces and separated by vertical lines.
-+
-For example:
-+
-```
-INTERVAL { start-field TO end-field }
-{ single-field } 
-INTERVAL { start-field TO end-field | single-field }
-``` 
-* | Vertical Line 
-+
-A vertical line separates alternatives in a horizontal list that is enclosed in brackets or braces.
-+
-```
-{expression | NULL} 
-```
-* &#8230; Ellipsis
-+
-An ellipsis immediately following a pair of brackets or braces indicates that you can repeat the enclosed sequence of syntax items any number of times.
-+
-```
-ATTRIBUTE[S] attribute [, attribute] ...
-{, sql-expression } ...
-```
-+ 
-An ellipsis immediately following a single syntax item indicates that you can repeat that syntax item any number of times.
-+
-For example:
-+
-```
-expression-n ...
-```
-
-<<<
-* Punctuation
-+
-Parentheses, commas, semicolons, and other symbols not previously described must be typed as shown.
-+
-```
-DAY (datetime-expression)
-@script-file 
-```
-+
-Quotation marks around a symbol such as a bracket or brace indicate the symbol is a required character that you must type as shown.
-+
-For example:
-+
-```
-"{" module-name [, module-name] ... "}"
-```
-
-* Item Spacing
-+
-Spaces shown between items are required unless one of the items is a punctuation symbol such as a parenthesis or a comma.
-+
-```
-DAY (datetime-expression) DAY(datetime-expression)
-```
-+
-If there is no space between two items, spaces are not permitted. In this example, no spaces are permitted between the period and any other items:
-+
-```
-myfile.sh
-```
-
-* Line Spacing
-+
-If the syntax of a command is too long to fit on a single line, each continuation line is indented three spaces and is separated from the preceding line by a blank line.
-+
-This spacing distinguishes items in a continuation line from items in a vertical list of selections. 
-+
-```
-match-value [NOT] LIKE _pattern
-   [ESCAPE esc-char-expression] 
-```
-
-<<<
-== Comments Encouraged
-We encourage your comments concerning this document. We are committed to providing documentation that meets your
-needs. Send any errors found, suggestions for improvement, or compliments to {project-support}.
-
-Include the document title and any comment, error found, or suggestion for improvement you have concerning this document.
+////
+/**
+* @@@ START COPYRIGHT @@@
+*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*   http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing,
+* software distributed under the License is distributed on an
+* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+* KIND, either express or implied.  See the License for the
+* specific language governing permissions and limitations
+* under the License.
+*
+* @@@ END COPYRIGHT @@@
+*/
+////
+
+[[About_This_Document]]
+= About This Document
+This manual describes reference information about the syntax of SQL statements, functions, and other
+SQL language elements supported by the {project-name} project's database software.
+
+{project-name} SQL statements and utilities are entered interactively or from script files using a client-based tool,
+such as the Trafodion Command Interface (TrafCI). To install and configure a client application that enables you
+to connect to and use a {project-name} database, see the
+{docs-url}/client_install/index.html[_{project-name} Client Installation Guide_].
+
+NOTE: In this manual, SQL language elements, statements, and clauses within statements are based on the
+ANSI SQL:1999 standard.
+
+[[Intended_Audience]]
+== Intended Audience
+This manual is intended for database administrators and application programmers who are using SQL to read, update,
+and create {project-name} SQL tables, which map to HBase tables, and to access native HBase and Hive tables.
+
+You should be familiar with structured query language (SQL) and with the American National Standard Database Language SQL:1999.
+
+<<<
+[[New_and_Changed_Information]]
+== New and Changed Information
+This edition includes updates for these new features:
+
+[cols="50%,50%",options="header"]
+|===
+| New Feature                                           | Location in the Manual
+| Incremental UPDATE STATISTICS                         | <<update_statistics_statement,UPDATE STATISTICS Statement>>
+|===
+
+<<<
+[[Document_Organization]]
+== Document Organization
+
+[cols="50%,50%",options="header"]
+|===
+|Chapter or Appendix                                              | Description
+| <<Introduction,Introduction>>                                   | Introduces {project-name} SQL and covers topics such as data consistency,
+transaction management, and ANSI compliance.
+| <<SQL_Statements,SQL Statements>>                               | Describes the SQL statements supported by {project-name} SQL.
+| <<SQL_Utilities,SQL Utilities>>                                 | Describes the SQL utilities supported by {project-name} SQL.
+| <<SQL_Language_Elements,SQL Language Elements>>                 | Describes parts of the language, such as database objects, data types,
+expressions, identifiers, literals, and predicates, which occur within the syntax of {project-name} SQL statements.
+| <<SQL_Clauses,SQL Clauses>>                                     | Describes clauses used by {project-name} SQL statements.
+| <<SQL_Functions_and_Expressions,SQL Functions and Expressions>> | Describes specific functions and expressions that you can use in
+{project-name} SQL statements.
+| <<SQL_Runtime_Statistics,SQL Runtime Statistics>>               | Describes how to gather statistics for active queries or for the Runtime
+Management System (RMS) and describes the RMS counters that are returned.
+| <<OLAP_Functions,OLAP Functions>>                               | Describes specific online analytical processing functions.
+| <<Reserved_Words,Appendix A: Reserved Words>>                   | Lists the words that are reserved in {project-name} SQL.
+| <<Limits,Appendix B: Limits>>                                   | Describes limits in {project-name} SQL.
+|===
+
+
+<<<
+== Notation Conventions
+This list summarizes the notation conventions for syntax presentation in this manual.
+
+* UPPERCASE LETTERS
++
+Uppercase letters indicate keywords and reserved words. Type these items exactly as shown. Items not enclosed in brackets are required. 
++
+```
+SELECT
+```
+
+* lowercase letters
++
+Lowercase letters, regardless of font, indicate variable items that you supply. Items not enclosed in brackets are required.
++
+```
+file-name
+```
+
+* &#91; &#93; Brackets 
++
+Brackets enclose optional syntax items.
++
+```
+DATETIME [start-field TO] end-field
+```
++
+A group of items enclosed in brackets is a list from which you can choose one item or none.
++
+The items in the list can be arranged either vertically, with aligned brackets on each side of the list, or horizontally, enclosed in a pair of brackets and separated by vertical lines.
++
+For example: 
++
+```
+DROP SCHEMA schema [CASCADE]
+DROP SCHEMA schema [ CASCADE | RESTRICT ]
+```
+
+<<<
+* { } Braces 
++
+Braces enclose required syntax items.
++
+```
+FROM { grantee [, grantee ] ... }
+```
++ 
+A group of items enclosed in braces is a list from which you are required to choose one item.
++
+The items in the list can be arranged either vertically, with aligned braces on each side of the list, or horizontally, enclosed in a pair of braces and separated by vertical lines.
++
+For example:
++
+```
+INTERVAL { start-field TO end-field }
+{ single-field } 
+INTERVAL { start-field TO end-field | single-field }
+``` 
+* | Vertical Line 
++
+A vertical line separates alternatives in a horizontal list that is enclosed in brackets or braces.
++
+```
+{expression | NULL} 
+```
+* &#8230; Ellipsis
++
+An ellipsis immediately following a pair of brackets or braces indicates that you can repeat the enclosed sequence of syntax items any number of times.
++
+```
+ATTRIBUTE[S] attribute [, attribute] ...
+{, sql-expression } ...
+```
++ 
+An ellipsis immediately following a single syntax item indicates that you can repeat that syntax item any number of times.
++
+For example:
++
+```
+expression-n ...
+```
+
+<<<
+* Punctuation
++
+Parentheses, commas, semicolons, and other symbols not previously described must be typed as shown.
++
+```
+DAY (datetime-expression)
+@script-file 
+```
++
+Quotation marks around a symbol such as a bracket or brace indicate the symbol is a required character that you must type as shown.
++
+For example:
++
+```
+"{" module-name [, module-name] ... "}"
+```
+
+* Item Spacing
++
+Spaces shown between items are required unless one of the items is a punctuation symbol such as a parenthesis or a comma.
++
+```
+DAY (datetime-expression) DAY(datetime-expression)
+```
++
+If there is no space between two items, spaces are not permitted. In this example, no spaces are permitted between the period and any other items:
++
+```
+myfile.sh
+```
+
+* Line Spacing
++
+If the syntax of a command is too long to fit on a single line, each continuation line is indented three spaces and is separated from the preceding line by a blank line.
++
+This spacing distinguishes items in a continuation line from items in a vertical list of selections. 
++
+```
+match-value [NOT] LIKE _pattern
+   [ESCAPE esc-char-expression] 
+```
+
+<<<
+== Comments Encouraged
+We encourage your comments concerning this document. We are committed to providing documentation that meets your
+needs. Send any errors found, suggestions for improvement, or compliments to {project-support}.
+
+Include the document title and any comment, error found, or suggestion for improvement you have concerning this document.


[11/15] incubator-trafodion git commit: Major reorganization of the Client Installation Guide.

Posted by gt...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/asciidoc/_chapters/preparation.adoc
----------------------------------------------------------------------
diff --git a/docs/client_install/src/asciidoc/_chapters/preparation.adoc b/docs/client_install/src/asciidoc/_chapters/preparation.adoc
new file mode 100644
index 0000000..90b9ebb
--- /dev/null
+++ b/docs/client_install/src/asciidoc/_chapters/preparation.adoc
@@ -0,0 +1,273 @@
+////
+/**
+* @@@ START COPYRIGHT @@@
+*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*   http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing,
+* software distributed under the License is distributed on an
+* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+* KIND, either express or implied.  See the License for the
+* specific language governing permissions and limitations
+* under the License.
+*
+* @@@ END COPYRIGHT @@@
+*/
+////
+
+= Preparation
+
+{project-name} provides JDBC and ODBC drivers plus clients that use those drivers.
+In addition, you can configure third-party JDBC- and ODBC-based tools to work
+with {project-name}.
+
+Typically, you install and configure the client software in the following order:
+
+. JDBC and/or ODBC drivers, depending on which clients you plan to use.
+. {project-name} clients. For example, trafci and odb.
+. Third-party clients. For example, DBVisualizer, SQuirreL, and/or Tableau.
+
+If you don't plan to use JDBC-based clients, then please skip ahead to
+<<download-client-software, Download Client Software>>.
+
+[[java-setup]]
+== Java Setup
+
+The {project-name} JDBC Type 4 Driver requires Java 1.7 or higher. You also need to
+ensure that the Java installation is on your search path.
+
+Depending on your planned usage, you install
+the Java Development Kit (JDK, if you plan to develop Java-based applications)
+or the Java Runtime Environment (JRE, if you plan to use packaged JDBC-based
+products only).
+
+[[java-validation]]
+=== Verify Java Version
+
+To display the Java version of the client workstation on the screen, enter:
+
+```
+java -version
+```
+
+.Example 1: Java Installed and PATH Variable Set Correctly
+
+```
+C:\> java -version
+
+java version "1.7.0_45" # This is the version you need to check
+Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
+Java HotSpot(TM) Client VM (build 24.45-b08, mixed mode, sharing)
+C:\>
+```
+
+If the version is not 1.7 or higher, then please upgrade your Java installation.
+See <<java-install, Install Java>>.
+
+If the version is 1.7 or higher, then skip ahead to <<download-client-software, Download Client Software>>.
+
+.Example 2: Path Not Set
+
+```
+'java' is not recognized as an internal or external command, operable program or batch file.
+```
+
+If you have installed Java, then this message indicates that you've not included
+the Java directory in your search path. See <<howto-setup-path, Set Up PATH Variable>>.
+
+[[java-install]]
+=== Install Java
+
+Refer to: http://www.java.com/en/download.
+
+Once installed, follow the instruction in <<howto-setup-path, Set Up PATH Variable>>
+to ensure that your Java environment has been set up properly.
+
+[[download-client-software]]
+== Download Client Software
+
+The {project-name} client software is available from the {download-url}[{project-name} Download] page. There is one
+`{project-name} Clients` package per release listed under *<version> Binaries*.
+
+The `{project-name} Clients` package is a zipped tar file that contains the {project-name} Clients tar file.
+Once unpacked, the {project-name} client binaries are in the `clients` folder, which contains the following files:
+
+[cols="30%,70%", options="header"]
+|===
+| File                               | Usage
+| `DISCLAIMER`                       | {project-name} Apache incubation disclaimer.
+| `JDBCT4.zip`                       | {project-name} JDBC Type 4 Driver.
+| `LICENSE`                          | Apache license.
+| `NOTICE`                           | Apache notice.
+| `odb64_linux.tar.gz`               | {project-name} odb tool.
+| `TRAF_ODBC_Linux_Driver_64.tar.gz` | {project-name} ODBC driver for Linux.
+| `trafci.zip`                       | The {project-name} command interpreter `trafci`.
+| `TFODBC64-*.exe`                   | *[Not included in this release]*^1^ {project-name} ODBC Driver for Windows.
+|===
+
+^1^ License issues prevent us from including the ODBC Driver for Windows in this release. Contact 
+{project-support} for help obtaining the driver.
+
+<<<
+[[download-windows]]
+=== Windows Download
+
+Do the following:
+
+. Create a download folder on the client workstation. For example, `c:\trafodion`.
+
+. Open a Web browser and navigate to the {project-name} downloads site {download-url}.
+
+.  Orient yourself to the binaries for the release you're installing.
+Click on the `{project-name} Clients` link to start downloading the {project-name} clients tar file to your workstation.
+
+.  Place the `apache-trafodion_clients-*.tar.gz` file into the download folder.
+*  Unpack the `apache-trafodion_clients-\*.tar.gz` file using an unzip program of your choice. This creates
+an `apache-trafodion_clients-*.tar` file.
+* Unpack the `apache-trafodion_clients-*.tar` file using an unzip program of your choice.
+
+. Verify the content of the `clients` directory:
++
+```
+DISCLAIMER JDBCT4.zip LICENSE NOTICE odb64_linux.tar.gz trafci.zip TRAF_ODBC_Linux_Driver_64.tar.gz
+```
++
+You use these files to install the different {project-name} clients.
+
+<<<
+[[download-linux]]
+=== Linux Download
+
+Do the following:
+
+. Create a download directory on the client workstation. For example, `$HOME/trafodion`.
+
+. Open a Web browser and navigate to the {project-name} downloads site {download-url}.
+
+.  Orient yourself to the binaries for the release you're installing.
+Right-click on the `{project-name} Clients` link and select *Copy link address*.
+
+.  Go to the download directory on the client workstation and use `wget` to download the client package
+using the URL you copied in step 3 above.
+
+.  Unpack the `apache-trafodion_clients-*.tar.gz` file using `tar`.
++
+*Example*
++
+```
+$ mkdir $HOME/trafodion
+$ cd $HOME/trafodion
+$ wget <link to package>
+$ tar -xzvf apache-trafodion_clients-*-incubating.tar.gz
+$ cd clients
+$ ls
+DISCLAIMER  LICENSE  odb64_linux.tar.gz  TRAF_ODBC_Linux_Driver_64.tar.gz
+JDBCT4.zip  NOTICE   trafci.zip
+$
+```
++
+You use these files to install the different {project-name} clients.
+
+<<<
+[[unpack-client-software]]
+== Unpack Client Software
+
+The client packages are located in the `clients` subdirectory where you unpacked
+the {project-name} distribution file. For example, `c:\trafodion\clients` (Windows)
+or `$HOME/trafodion/clients` (Linux).
+
+Unpack the client software and the dependencies that you intend to use, as follows.
+
+=== Unpack JDBC-Based Client Software
+
+[cols="30%,30%,40%a", options="header"]
+|===
+| File | Description | Recommended Target Directory 
+| `JDBCT4.zip` | JDBC Type 4 Driver | * *Windows:* `c:\trafodion\jdbct4`
++
+* *Linux:* `$HOME/trafodion/jdbct4`
+| `trafci.zip` | Command Interface | * *Windows:* `c:\trafodion\trafci`
++
+* *Linux:* `$HOME/trafodion/trafci`
+|===
+
+*Windows*
+
+Use your favorite compress/uncompress utility to unpack the file to the target directory
+defined in the table above.
+
+*Linux*
+
+Unpack the `.zip` file using the `unzip <file> -d <target-directory>` command:
+
+```
+$ cd $HOME/trafodion/clients
+$ unzip JDBCT4.zip -d $HOME/trafodion/jdbct4
+.
+.
+.
+$ unzip trafci.zip -d $HOME/trafodion/trafci
+.
+.
+.
+$ cd ..
+$ ls
+apache-trafodion_clients-2.0.1-incubating.tar.gz  clients  jdbct4  trafci
+$
+```
+
+Once complete, a fully-installed `c:\trafodion` (Windows) or
+`$HOME/trafodion` directory should contain the following directories:
+
+* `clients`: The compressed client software.
+* `jdbct4`: The {project-name} JDBC Type 4 driver installation directory.
+* `trafci`: The {project-name} Command Interpreter installation directory.
+
+=== Unpack ODBC-Based Client Software
+
+[cols="30%,30%,40%",options="header"]
+|===
+| File | Description | Recommended Target Directory 
+| `TRAF_ODBC_Linux_Driver_64.tar.gz` | Linux ODBC Driver | `$HOME/trafodion/odbc` 
+| `odb64_linux.tar.gz` | Linux odb Utility | `$HOME/trafodion/odb` 
+|===
+
+*Linux*
+
+Unpack the `.tar.gz` file using the `tar -xzvf <file> -C <target-directory>` command.
+
+```
+$ cd $HOME/trafodion/clients
+$ mkdir $HOME/trafodion/odbc
+$ tar -xzvf TRAF_ODBC_Linux_Driver_64.tar.gz -C $HOME/trafodion/odbc
+.
+.
+.
+$ mkdir $HOME/trafodion/odb
+$ tar -xzvf odb64_linux.tar.gz -C $HOME/trafodion/odb
+.
+.
+.
+$ cd ..
+$ ls
+apache-trafodion_clients-2.0.1-incubating.tar.gz  clients  odb  odbc
+```
+
+
+Once complete, a fully-installed `$HOME/trafodion` directory (these packages are
+Linux-only) should contain:
+
+* `clients`: The compressed client software.
+* `odb`: The {project-name} odb utility installation directory.
+* `odbc`: The {project-name} ODBC driver installation directory.
+
+
+
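
Side note: as a programmatic companion to the `java -version` check in the
Java Setup section above, a small probe class can report the running JVM
version. A sketch; the 1.7 floor comes from the chapter text, everything else
is an illustration.

public class JavaVersionCheck
{
   public static void main( String[] args )
   {
      // For example "1.7.0_45"; post-Java 8 JVMs report "9", "10", etc.
      String version = System.getProperty( "java.version" ) ;
      System.out.println( "java.version = " + version ) ;

      // Pre-Java 9 version strings start with "1."; the digit after the
      // dot is the feature release.
      if ( version.startsWith( "1." )
           && Character.getNumericValue( version.charAt( 2 ) ) < 7 )
         System.err.println( "Java 1.7 or higher is required." ) ;
   }
}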

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/asciidoc/_chapters/sample_prog.adoc
----------------------------------------------------------------------
diff --git a/docs/client_install/src/asciidoc/_chapters/sample_prog.adoc b/docs/client_install/src/asciidoc/_chapters/sample_prog.adoc
index 138322c..2562319 100644
--- a/docs/client_install/src/asciidoc/_chapters/sample_prog.adoc
+++ b/docs/client_install/src/asciidoc/_chapters/sample_prog.adoc
@@ -1,75 +1,75 @@
-////
-/**
- *@@@ START COPYRIGHT @@@
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- * @@@ END COPYRIGHT @@@
- */
-////
-
-[[odbc_sample_program]]
-== `basicsql` (Sample ODBC Program)
-This appendix provides the source code for the ODBC sample program, `basicsql,` which is not currently bundled with the ODBC drivers.
-
-This appendix also provides the code for the script files that are needed to build and run the sample program on Windows. See
-<<basicsql_build, Windows Build and Run Files for 'basicsql'>>.
-
-Copy and paste the code from this appendix into the recommended files. To build and run the sample program, see these instructions:
-
-* On Windows: <<win_odbc_run_basicsql, Run Sample Program (`basicsql`)>>.
-* On Linux:  <<linux_odbc_run_basicsql, Run Sample Program (`basicsql`)>>.
-
-=== `basicsql.cpp` Source Code
-You can download the `basicsql.cpp` example from
-http://trafodion.incubator.apache.org/docs/client_install/resources/source/basicsql.cpp.
-
-Alternatively, copy and paste the following code into a file named `basicsql.cpp`:
-
-[source, cplusplus]
-----
-include::{sourcedir}/basicsql.cpp[]
-----
-
-[[basicsql_build]]
-=== Windows Build and Run Files for `basicsql`
-
-The script files that are needed to build and run the sample program on Windows are not currently bundled with the ODBC driver.
-Copy and paste the code from this appendix into the recommended files. To build and run the sample program on Windows,
-see the instructions in <<win_odbc_run_basicsql, Run Sample Program (`basicsql`)>>.
-
-==== `build.bat` (Build Script)
-You can download the `build.bat` example from
-http://trafodion.incubator.apache.org/docs/client_install/resources/source/build.bat.
-
-Alternatively, copy and paste the following code into a file named `build.bat`, which is used to build the sample program on Windows:
-
-----
-include::{sourcedir}/build.bat[]
-----
-
-To build the sample program on Windows, see the instructions in <<win_odbc_run_basicsql, Run Sample Program (`basicsql`)>>.
-
-==== Run `run.bat`
-You can download the `run.bat` example from
-http://trafodion.incubator.apache.org/docs/client_install/resources/source/run.bat.
-
-Alternatively, copy and paste the following code into a file named `run.bat`, which is used to run the sample program on Windows:
-
-----
-include::{sourcedir}/run.bat[]
-----
-
-To run the sample program on Windows, see the instructions in <<win_odbc_run_basicsql, Run Sample Program (`basicsql`)>>.
+////
+/**
+ *@@@ START COPYRIGHT @@@
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * @@@ END COPYRIGHT @@@
+ */
+////
+
+[[odbc_sample_program]]
+== `basicsql` (Sample ODBC Program)
+This appendix provides the source code for the ODBC sample program, `basicsql`, which is not currently bundled with the ODBC drivers.
+
+This appendix also provides the code for the script files that are needed to build and run the sample program on Windows. See
+<<basicsql_build, Windows Build and Run Files for `basicsql`>>.
+
+Copy and paste the code from this appendix into the recommended files. To build and run the sample program, see these instructions:
+
+* On Windows: <<win_odbc_run_basicsql, Run Sample Program (`basicsql`)>>.
+* On Linux:  <<linux_odbc_run_basicsql, Run Sample Program (`basicsql`)>>.
+
+=== `basicsql.cpp` Source Code
+You can download the `basicsql.cpp` example from
+http://trafodion.incubator.apache.org/docs/client_install/resources/source/basicsql.cpp.
+
+Alternatively, copy and paste the following code into a file named `basicsql.cpp`:
+
+[source, cplusplus]
+----
+include::{sourcedir}/basicsql.cpp[]
+----
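+
+On Linux, a minimal compile sketch (an assumption rather than the documented procedure: it presumes the unixODBC driver manager and its development headers are installed; your ODBC driver may require additional libraries):
+
+```
+g++ basicsql.cpp -o basicsql -lodbc
+```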
+
+[[basicsql_build]]
+=== Windows Build and Run Files for `basicsql`
+
+The script files that are needed to build and run the sample program on Windows are not currently bundled with the ODBC driver.
+Copy and paste the code from this appendix into the recommended files. To build and run the sample program on Windows,
+see the instructions in <<win_odbc_run_basicsql, Run Sample Program (`basicsql`)>>.
+
+==== `build.bat` (Build Script)
+You can download the `build.bat` example from
+http://trafodion.incubator.apache.org/docs/client_install/resources/source/build.bat.
+
+Alternatively, copy and paste the following code into a file named `build.bat`, which is used to build the sample program on Windows:
+
+----
+include::{sourcedir}/build.bat[]
+----
+
+To build the sample program on Windows, see the instructions in <<win_odbc_run_basicsql, Run Sample Program (`basicsql`)>>.
+
+==== `run.bat` (Run Script)
+You can download the `run.bat` example from
+http://trafodion.incubator.apache.org/docs/client_install/resources/source/run.bat.
+
+Alternatively, copy and paste the following code into a file named `run.bat`, which is used to run the sample program on Windows:
+
+----
+include::{sourcedir}/run.bat[]
+----
+
+To run the sample program on Windows, see the instructions in <<win_odbc_run_basicsql, Run Sample Program (`basicsql`)>>.

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/asciidoc/_chapters/tableau.adoc
----------------------------------------------------------------------
diff --git a/docs/client_install/src/asciidoc/_chapters/tableau.adoc b/docs/client_install/src/asciidoc/_chapters/tableau.adoc
new file mode 100644
index 0000000..9a9e615
--- /dev/null
+++ b/docs/client_install/src/asciidoc/_chapters/tableau.adoc
@@ -0,0 +1,83 @@
+////
+/**
+ *@@@ START COPYRIGHT @@@
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * @@@ END COPYRIGHT @@@
+ */
+////
+
+= Configure Tableau Client
+
+== Prerequisite Software
+
+Make sure that you have this software installed on your workstation:
+
+* {project-name} Windows ODBC Driver. See <<install-windows-odbc-driver,Install Windows ODBC Driver>>.
+* Tableau Software. See the http://www.tableau.com/trial/download-tableau[_Tableau website_].
+
+== Tableau Datasource Configuration (.tdc) File
+
+=== Create .tdc File
+
+The Tableau Datasource Configuration (`.tdc`) file is used to customize and tune ODBC connections.
+
+NOTE: You can download each sample documented herein by clicking the link provided with the
+sample name. For example, click on link:{tableau}/trafodion.tdc[trafodion.tdc]
+to download a `.tdc` file for Tableau 9.3 using Trafodion ODBC 2.1. +
+ +
+You can access the complete source directory at: {docs-url}/client_install/resources/tableau/
+
+The `.tdc` file contains version-specific settings that you need to modify. 
+
+.Template: link:resources/tableau/trafodion.tdc.template[`trafodion.tdc.template`]
+
+[source, xml]
+----
+include::{tableau}/trafodion.tdc.template[Trafodion `.tdc` template file]
+----
+
+Save this file as `trafodion.tdc` and change the following placeholders:
+
+* `<tableau-version>` - Change to the version of Tableau you're using. For example: `9.3`
+* `<trafodion-driver-name>` - Change to the name of the Trafodion ODBC driver you're using. For example: `TRAF ODBC 2.1`
+
+Once edited, your `trafodion.tdc` file should look similar to the example below.
+
+.Example: link:resources/tableau/trafodion.tdc[`trafodion.tdc`]
+
+[source, xml]
+----
+include::{tableau}/trafodion.tdc[`trafodion.tdc`]
+----
+
+=== Install .tdc File
+
+Copy the `trafodion.tdc` file to the `C:\Users\%USERNAME%\Documents\My Tableau Repository\Datasources` folder.
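+
+For example, from a Windows command prompt (assuming `trafodion.tdc` is in your current directory):
+
+```
+copy trafodion.tdc "C:\Users\%USERNAME%\Documents\My Tableau Repository\Datasources"
+```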
+
+Restart Tableau if it's running so that it picks up the configuration change.
+
+== Connect to {project-name}
+
+. Configure an ODBC data source using the MS ODBC Administrator. See <<win_odbc_setup_data_source, Set Up ODBC Data Source>>.
+. Start Tableau.
+. Create a *New Database Connection* by selecting *Other Databases (ODBC)*.
+. Select your data source in the *DSN* dropdown.
+. Enter *Trafodion* in the *Database:* field.
+. Enter your *Username* and *Password*.
+. Click *OK* to connect to your {project-name} database.
++
+image:{images}/tableau_connect.jpg[width=400,height=400,alt="Tableau Database Connection Screen"]

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/asciidoc/_chapters/trafci.adoc
----------------------------------------------------------------------
diff --git a/docs/client_install/src/asciidoc/_chapters/trafci.adoc b/docs/client_install/src/asciidoc/_chapters/trafci.adoc
index 3ab5f7a..32bdc7b 100644
--- a/docs/client_install/src/asciidoc/_chapters/trafci.adoc
+++ b/docs/client_install/src/asciidoc/_chapters/trafci.adoc
@@ -1,472 +1,512 @@
-////
-/**
- *@@@ START COPYRIGHT @@@
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- * @@@ END COPYRIGHT @@@
- */
-////
-
-[[trafci]]
-= Install trafci
-
-[[trafci-installation-requirements]]
-== Installation Requirements
-
-The Trafodion Command Interface (trafci) runs on Windows and Linux. Before installing trafci on the client workstation, 
-ensure that you've installed the JDBC Type-4 driver. See the <<jdbct4, Install JDBC Type-4 Driver>> chapter above.
-
-[[trafci_perl_python]]
-=== Install Perl or Python
-
-If you plan to use Perl or Python scripts with trafci, verify that you have Perl or Python installed on the client workstation. trafci supports
-these versions of Perl and Python:
-
-* Perl version 5.8.8
-* Python version 2.3.4
-
-If you do not have Perl or Python, download it from any open-source software provider. You can perform this installation procedure anytime
-before or after installing trafci.
-
-If you plan to run the sample scripts provided with trafci, verify that you have the Perl JavaServer and Jython (Java implementation of Python)
-installed on your client workstation. Use the trafci Installation Wizard to automatically download and install both the Perl JavaServer and
-Jython open source extensions. To download them manually, see the `README` in the samples directory.
-
-<<<
-[[trafci-installation-instructions]]
-== Installation Instructions
-
-You download and extract the {project-name} client package using the instructions in <<introduction-download, Download Installation Package>> above.
-
-1.  Change the directory to the `clients` subdirectory.
-
-2.  Extract the contents of the `trafci.zip` file to a location on your client workstation (for example, a folder named `trafci`) by
-using the unzip command (or the extract function of your compression software):
-+
-```
-cd $HOME/trafodion-download/clients
-unzip trafci.zip -d trafci
-cd trafci
-```
-+
-The command extracts these files:
-+
-* `README`
-* `trafciInstaller.jar`
-
-[[trafci-run-installer]]
-=== Run Executable JAR Installer
-
-When using the executable JAR file, `trafciInstaller.jar`, to install trafci, you have a choice of running the installer from the Installer
-Wizard Graphical User Interface (GUI) or from the command line:
-
-* <<trafci-wizard-install, Installer Wizard Steps>>
-* <<trafci-cmd-install, Command-Line Installation Steps>>
-
-[[trafci-wizard-install]]
-==== Installer Wizard Steps
-
-NOTE: On Linux, to run the Installer Wizard, you must have the X Window system installed on the client workstation. If the client workstation
-does not have the X Window system, see <<trafci-cmd-install, Command-Line Installation Steps>>.
-
-<<<
-===== Launching the Installer Wizard
-
-1.  Locate the `trafciInstaller.jar` file in the folder where you extracted the contents of the distribution (`.zip`) file.
-2.  Verify that the `trafciInstaller.jar` file appears as an executable JAR File. If not, skip the next two steps and go to Step 5.
-3.  Double-click the `trafciInstaller.jar` file to launch the Installer Wizard.
-4.  Proceed to <<trafci-using-wizard, Using the Installer Wizard>>.
-5.  At a command prompt, change to the directory where you extracted the installer files:
-+
-```
-cd installer-directory
-```
-+
-`_installer-directory_` is the directory where you extracted the installer file, `trafciInstaller.jar`.
-
-6.  Launch the Installer Wizard by entering:
-+
-```
-java -jar trafciInstaller.jar
-```
-
-7.  Proceed to <<trafci-using-wizard, Using the Installer Wizard>>.
-
-[[trafci-using-wizard]]
-===== Using the Installer Wizard
-
-When you execute `trafciInstaller.jar`, the Installer Wizard appears:
-
-image:{images}/InstallerWizardWelcome.jpg[trafci Installer Wizard welcome screen]
-
-1.  Click one of these buttons for the type of installation that you would like to perform:
-* *Standard Installation* to start the Installer Wizard, which guides you through installing both the core trafci components and the optional open
-source extensions
-* *Core Components* for a quick installation of the core trafci files
-* *Optional Components* if you have already installed the core trafci files but want to install only the optional open source extensions
-
-2.  After you have selected the components you wish to install, browse and select the JDBC JAR file and then specify an installation directory
-where you will install trafci.
-+
-image:{images}/InstallerWizardPaths.jpg[trafci select path dialog]
-
-3.  To locate the JDBC driver JAR file, click *Browse* next to the *JDBC Type 4 Driver JAR File*.
-4.  In the *Select JDBC Type 4 Driver JAR File* dialog box, navigate to and select the lib folder of the {project-name} JDBC driver, and then click *Open*.
-5.  Select `jdbcT4.jar` so that it appears in the *File Name* box, and then click *Select*. 
-+
-The Installer Wizard now displays the path of the JDBC driver JAR file.
-+
-image:{images}/JDBC_JAR_Path.jpg[trafci path to JDBC driver JAR file]
-6.  To install in the default location, proceed to Step 9. To install in your own preferred location, proceed to Step 7.
-
-7.  To find an installation location for trafci, click *Browse* next to the *Trafodion Command Interface* installation directory.
-8.  In the *Select Trafodion Command Interface Installation Directory* dialog box, select the folder where you want to install trafci so that
-the directory path appears in the *File Name* box, and then click *Select*.
-+
-The Installer Wizard displays the directory where trafci is installed.
-
-9.  Click *Next* to review the open-source legal disclaimer.
-10.  If you agree to the terms and conditions, select the check box, and click *Next*.
-+
-The Installer Wizard dialog box shows which components are available for you to download and install.
-+
-image:{images}/OptionalComponents.jpg[trafci component selection]
-
-11.  Select the optional components to be downloaded and installed. Each optional component is installed if the component box is checked.
-+
-If you want to change the download URL for the extensions, click *Edit URL*, and this dialog box appears:
-+
-image:{images}/PerlJavaServerURL.jpg[trafci edit download URL]
-+
-Type a new path, and click *OK*.
-+
-NOTE: Perl and Python must be installed for the respective extensions to work.
-
-12.  If you do not require a proxy server, proceed to Step 15.
-13.  If you require a proxy server, select *Use the following proxy settings* and enter the proxy server and port for downloading the open
-source extensions.
-+
-image:{images}/ProxySettings.jpg[trafci proxy settings]
-14.  Click *Detect Proxy Server(s)* to try to auto-detect your proxy settings. If trafci detects one or more proxy servers, it displays
-them in a drop-down menu next to the *Detect Proxy Server(s)* button.
-15.  Click *Install* to start the installation.
-
-16.  After the core trafci files are installed, the *Installation Status* dialog box appears indicating how many files were extracted to the
-installation directory:
-+
-image:{images}/Extracted_Files.jpg[trafci extracted files]
-+
-Click *OK* to continue the installation.
-17.  If you chose to install the optional components, the installer attempts to download and install them. The progress bar indicates the
-download progress of each file. In addition, an installation log provides details about the status of the download and installation of
-the components.
-+
-image:{images}/InstallComplete.jpg[trafci installation complete]
-18.  After all trafci files are installed, the Installer Wizard completes.
-19.  Click *Exit*.
-
-<<<
-[[trafci-cmd-install]]
-==== Command-Line Installation Steps
-
-1.  At a command prompt, change to the directory where you extracted the contents of the distribution (.zip) file:
-+
-```
-cd installer-directory
-```
-+
-`_installer-directory_` is the directory where you extracted the installer files.
-+
-*Example*
-+
-```
-$ cd $HOME/trafodion-download/clients/trafci
-$ ls
-README  trafciInstaller.jar
-```
-
-2.  Before launching the command-line installer, see the command options below:
-+
-```
-java -jar trafciInstaller.jar -help
-Usage: java -jar <installer jar> [ -help] | <-cm|-silent>
-   [-jdbcFile <jdbc filename>] [-installDir <install dir>] ]
-```
-+
-The `-silent` option installs the client without prompting you for options.
-+
-```
-java -jar trafciInstaller.jar -silent -jdbcFile "C:\JDBC\lib\jdbcT4.jar" -installDir C:\TRAFCI
-```
-+
-_-jdbcFile_ and _-installDir_ are optional parameters. If you do not specify those parameters, you will be prompted to enter them during
-installation.
-
-3.  Launch the command-line installer by entering this command:
-+
-```
-java -jar trafciInstaller.jar cm
-```
-+
-The command-line installer starts and prompts you to enter the type of installation:
-+
-```
-/home/myname/trafcitemp>java -jar trafciInstaller.jar cm
-********************************************************************
-****                                                              **
-**** Welcome to Trafodion Command Interface Installer             **
-****                                                              **
-**** NOTE: The installer requires a the JDBC Type 4               **
-****       Driver to be installed a on your workstation.          **
-********************************************************************
-Type Y for a standard installation, or N for optional components only.
-
-Standard Installation [Y]:
-```
-+
-* For a standard installation, type *Y* and press *Enter*.
-* To install the optional components only, type *N*, press *Enter*, and proceed to Step 7.
-+
-NOTE: All items in square brackets are default values. Press Enter to accept the default value.
-
-4.  Enter the full directory path and file name of the JDBC driver JAR file, `jdbcT4.jar`, which is located in the JDBC driver lib directory:
-+
-```
-JDBC Type 4 Driver JAR File
---------------------------------
-Enter the location and file name:
-```
-5.  Enter an existing directory where you would like to install trafci:
-+
-```
-Trafodion Command Interface
---------------------------------
-Enter the installation directory:
-```
-+
-The installation status appears, indicating how many files are installed in the installation directory:
-+
-```
-Extracted 18 files from the
-/home/myname/trafcitemp/trafciInstaller.jar archive into the
-/usr/local/trafci directory.
-Core TRAFCI files installed.
-Do you want to install the optional components? [Y]:
-```
-
-6.  If you do not wish to download and install the optional components, type *N* at the prompt and press Enter, and your installation
-is complete. Otherwise, type *Y*, press *Enter*, and proceed through the remainder of the installation.
-+
-<<<
-
-7.  Type *Y* and press *Enter* if you agree to the terms. If you are doing an optional install only, you are prompted to enter a valid trafci
-installation directory:
-+
-```
-Do you agree to these terms? (Y or N): Y
-
-Enter your installation directory:
-```
-
-8.  If you do not require a proxy server, type *N*, press *Enter*, and proceed to Step 10. Otherwise, type *Y*, press *Enter*,
-and proceed to Step 9.
-+
-```
-Use a proxy server? [N]:
-```
-
-9.  When prompted to auto-detect proxy servers, type *Y* and press *Enter* to direct trafci to detect your proxy settings.
-If trafci finds proxy servers, it displays them. If you type *N* and press *Enter*, trafci prompts you to enter the proxy server and port:
-+
-```
-Use a proxy server? [Y]: Y
-Attempt to auto-detect proxy server(s)? [Y]: N
-Enter the proxy server (do not include the port): myproxyserver.com
-Enter the proxy port: 8080
-```
-
-10.  You are prompted to select which optional components you wish to download and install. You can also change the download URL.
-+
-```
-Install Perl JavaServer extensions? [Y]: Y
-
-Perl JavaServer requires 3 files: Java.pm, JavaArray.pm, and JavaServer.jar
-http://search.cpan.org/src/METZZO/Java-4.7/[URL of the folder which contains these files [http://search.cpan.org/src/METZZO/Java-4.7/]:]
-
-Install Perl XML SAX Module? [Y]: Y
-
-Perl SAX XML Module URL (PerlSAX.pm)
-
-Install Jython, a Java implementation of Python? [Y]: Y
-
-Jython URL (jython_installer-2.2.jar)
-```
-
-11.  The setup proceeds to download and install the optional open-source components. As each component is retrieved, dots (.) are printed to
-indicate the progress of the download.
-+
-```
-Downloading Perl JavaServer [1 of 3] - Java.pm
-......................... 100%
-Downloading Perl JavaServer [2 of 3] - JavaArray.pm1
-......................... 100%
-Downloading Perl JavaServer [3 of 3] - JavaServer.jar
-......................... 100%
-Successfully added settings.pl
-Downloading Perl XML SAX Module [1 of 1] - PerlSAX.pm
-......................... 100%
-Downloading Jython [1 of 1] - jython_installer-2.2.jar
-......................... 100%
-Successfully Installed Jython. Successfully added settings.py
-Trafodion Command Interface Installation Complete.
-/home/myname/trafcitemp>
-```
-
-<<<
-[[trafci-post-installation-instructions]]
-== Post-Installation Instructions
-
-=== Verify Installed Software Files
-
-After downloading and running the installer file, verify that the trafci software files are installed in the correct locations:
-
-[cols="15%l,20%l,65%",options="header"]
-|===
-| Folder       | Files               | Description
-| bin          | trafci            |
-|              | trafci.cmd        | Windows launch file.
-|              | trafci.pl         | Perl wrapper script. _trafci-perl.pl_ is renamed _trafci.pl_. To run this script, see the
-{docs-url}/command_interface/index.html[_Trafodion Command Interface Guide_].
-|              | trafci.py         | Python wrapper script. trafci-python.py is renamed as trafci.py. To run this script, see the
-{docs-url}/command_interface/index.html[_Trafodion Command Interface Guide_].
-|              | trafci.sh         | Linux launch file.
-|              | trafci-perl.pl    | Perl wrapper script. This script has been modified to invoke trafci.pl. This script is retained for backward compatibility.
-|              | trafci-python.py  | Python wrapper script. This script has been modified to invoke trafci.py. This script is retained for backward compatibility.
-| lib          | trafci.jar        | Product JAR file.
-| lib/perl     | Session.pm        | Product file.
-| lib/python   | Session.py        | Product file.
-| samples      | README            | Readme file that describes how to use the sample scripts.
-|              | arrayDML.pl       | Sample Perl program that executes DML statements and returns results in an array format.
-|              | sample.pl         | Sample Perl program that supports multiple sessions in one script. 
-|              | sample.sql        | Sample SQL script.
-|              | sampleDDL.py      | Sample Python file that uses Jython to execute DDL statements.
-|              | sampleDML.py      | Sample Python file that uses Jython to execute DML statements.
-|              | sampleTables.pl   | Sample Perl file that lists all tables and respective row counts. The file accepts a wild-card argument on the command line.
-|              | sampleTables.py   | Sample Python file that lists all tables and respective row counts. The file accepts a wild-card argument on the command line.
-|===
-
-<<<
-== Test Launching trafci
-
-Before launching trafci, make sure that you have set the Java path to the correct location. For more information, see:
-
-* <<jdbct4-path-windows, Setting the PATH to a Supported Java Version on Windows>>
-* <<jdbct4-path-linux, Setting the PATH to a Supported Java Version on Linux>>
-
-If you did not set the Java path on your client workstation and you try to launch trafci, you might see the following error message appear
-momentarily in the trafci window before the trafci window disappears:
-
-```
-'java' is not recognized as an internal or external command, operable program or batch file.
-```
-
-For information about setting up and using trafci, such as choosing the look and feel of the interface or presetting launch parameters, see the
-{docs-url}/command_interface/index.html[Trafodion Command Interface Guide].
-
-<<<
-=== Windows Example
-
-On Windows, do the following:
-
-1. Go to the directory where you installed trafci. For example, `c:\Trafodion\Trafodion Command Interface`
-2. Go to the `bin` directory
-3. Invoke the `trafci.cmd` file.
-4. Answer prompts.
-
-```
-cd "c:\Trafodion\Trafodion Command Interface"
-cd bin
-trafci.cmd
-<screen is cleared>
-
-Welcome to Apache Trafodion Command Interface
-Copyright (c) 2015 Apache Software Foundation
-
-Host Name/IP Address: trafodion.host.com:23400
-User Name: usr
-Password:
-
-
-Connected to Trafodion
-
-SQL> show schemas ;
-
-Welcome to Apache Trafodion Command Interface
-Copyright (c) 2015 Apache Software Foundation
-
-Host Name/IP Address: 10.1.30.28:23400
-User Name: usr
-Password:
-
-
-Connected to Trafodion
-
-SQL>show schemas;
-
-SCHEMA NAMES
---------------------------------------------------------------------------------
-SEABASE   _MD_      _REPOS_   _LIBMGR_
-
-SQL>
-```
-
-<<<
-=== Linux Example
-
-On Linux, do the following:
-
-1. Go to the directory where you installed trafci. For example, `$HOME/trafci`
-2. Go to the `bin` directory
-3. Invoke the `trafci.sh` file.
-4. Answer prompts.
-
-```
-$ cd $HOME/trafci/bin
-$ . ./trafci.sh -h trafodion.home.com:23400 -u usr -p pwd
-
-Welcome to Apache Trafodion Command Interface
-Copyright (c) 2015 Apache Software Foundation
-
-Connected to Trafodion
-
-SQL>show schemas;
-
-SCHEMA NAMES
---------------------------------------------------------------------------------
-SEABASE   _MD_      _REPOS_   _LIBMGR_
-
-SQL>
-```
-
-<<<
-[[trafci-uninstall]]
-== Uninstall trafci
-
-If you used the executable JAR file, `trafciInstaller.jar`, to install trafci, delete the entire
-folder/directory when you installed trafci.
-
-
-
+////
+/**
+ *@@@ START COPYRIGHT @@@
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * @@@ END COPYRIGHT @@@
+ */
+////
+
+[[trafci]]
+= Install trafci
+
+== Prerequisites
+
+If you have not done so already, please ensure that you have <<java-setup, set up your Java environment>>,
+<<download-software, unpackaged the {project-name} client software>>, and <<jdbct4, installed the JDBC Type-4 Driver>>.
+
+The examples in this chapter assume that you have unpackaged the trafci installation file
+to `c:\trafodion\trafci` (Windows) or `$HOME/trafodion/trafci` (Linux).
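+
+For example, you can confirm that a supported Java version is reachable from your PATH (the reported version depends on your installed JDK):
+
+```
+java -version
+```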
+
+[[trafci_perl_python]]
+== Install Perl or Python
+
+If you plan to use Perl or Python scripts with trafci, verify that you have Perl or Python installed on the client workstation. trafci supports
+these versions of Perl and Python:
+
+* Perl version 5.8.8
+* Python version 2.3.4
+
+If you do not have Perl or Python, download it from any open-source software provider. You can perform this installation procedure anytime
+before or after installing trafci.
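+
+For example, you can check which versions (if any) are already installed:
+
+```
+perl -v
+python -V
+```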
+
+If you plan to run the sample scripts provided with trafci, verify that you have the Perl JavaServer and Jython (Java implementation of Python)
+installed on your client workstation. Use the trafci Installation Wizard to automatically download and install both the Perl JavaServer and
+Jython open source extensions. To download them manually, see the `README` in the samples directory.
+
+<<<
+[[trafci-verify-install]]
+== Verify Installation
+
+Verify that `c:\trafodion\trafci` (Windows) or `$HOME/trafodion/trafci` (Linux) contains the following files:
+
+* `README`
+* `trafciInstaller.jar`
+
+[[trafci-run-installer]]
+== Run trafci Installer
+
+`trafciInstaller.jar` is used to install trafci.
+
+Two modes are supported:
+
+* <<trafci-wizard-install, GUI Wizard Install>>
+* <<trafci-cmd-install, Command-Line Install>>
+
+[[trafci-wizard-install]]
+=== GUI Wizard Install
+
+NOTE: You must have the X Window system installed on your Linux client workstation to run the trafci Installer Wizard.
+If you do not, then use the <<trafci-cmd-install, Command-Line Install>> instructions below.
+
+<<<
+
+==== Launch the Installer Wizard
+
+. Move to the trafci install directory.
+** *Windows:* `c:\trafodion\trafci`
+** *Linux:* `$HOME/trafodion/trafci`
+. Double-click on `trafciInstaller.jar`.
+
+If the trafci Installer Wizard does not start, then do the following from a command prompt:
+
+. Change directory to the trafci install directory.
+** *Windows:* `cd c:\trafodion\trafci`
+** *Linux:* `cd $HOME/trafodion/trafci`
+. Launch the trafci Installer Wizard: `java -jar trafciInstaller.jar`
+
+[[trafci-using-wizard]]
+==== Using the Installer Wizard
+
+When you execute `trafciInstaller.jar`, the Installer Wizard appears:
+
+image:{images}/InstallerWizardWelcome.jpg[trafci Installer Wizard welcome screen]
+
+1.  Click one of the buttons for the type of installation that you would like to perform:
+* *Standard Installation* to start the Installer Wizard, which guides you through installing both the core trafci components and the optional
+open source extensions.
+* *Core Components* for a quick installation of the core trafci files.
+* *Optional Components* if you have already installed the core trafci files but want to install only the optional open source extensions.
+
+2.  After you have selected the components you wish to install, browse and select the JDBC JAR file and then specify an installation directory
+where you will install trafci.
++
+image:{images}/InstallerWizardPaths.jpg[trafci select path dialog]
+
+3.  To locate the *JDBC Type 4 Driver JAR file*, click *Browse* next to the *JDBC Type 4 Driver JAR File*.
++
+Navigate to the lib folder of the {project-name} JDBC driver and select the `jdbcT4.jar` file
+(`c:\trafodion\jdbct4\lib\jdbcT4.jar` on Windows, `$HOME/trafodion/jdbct4/lib/jdbcT4.jar` on Linux), and then click *Select*.
++
+The Installer Wizard now displays the path of the JDBC driver JAR file for *JDBC Type 4 Driver JAR File*.
+
+4. To select the installation directory, click *Browse* next to the *Trafodion Command Interface installation directory* field.
++
+Navigate to `c:\trafodion` (Windows) or `$HOME/trafodion` (Linux), and then click *Select*.
++
+The Installer Wizard now displays the path of the installation directory for *Trafodion Command Interface installation directory*.
++
+image:{images}/trafci_Installation_Choices.jpg[trafci installation choices]
+
+5.  Click *Next* to review the open-source legal disclaimer.
+6.  If you agree to the terms and conditions, select the check box, and click *Next*.
++
+The Installer Wizard dialog box shows which components are available for you to download and install.
++
+image:{images}/OptionalComponents.jpg[trafci component selection]
+
+7.  Select the optional components to be downloaded and installed. Each optional component is installed if the component box is checked.
++
+If you want to change the download URL for the extensions, click *Edit URL*, and this dialog box appears:
++
+image:{images}/PerlJavaServerURL.jpg[trafci edit download URL]
++
+Type a new path, and click *OK*.
++
+NOTE: Perl and Python must be installed for the respective extensions to work.
+
+8.  If you do not require a proxy server, proceed to Step 11.
+9.  If you require a proxy server, select *Use the following proxy settings* and enter the proxy server and port for downloading the open
+source extensions.
++
+image:{images}/ProxySettings.jpg[trafci proxy settings]
+
+10.  Click *Detect Proxy Server(s)* to try to auto-detect your proxy settings. If trafci detects one or more proxy servers, it displays
+them in a drop-down menu next to the *Detect Proxy Server(s)* button.
+11.  Click *Install* to start the installation.
+
+12.  After the core trafci files are installed, the *Installation Status* dialog box appears indicating how many files were extracted to the
+installation directory:
++
+image:{images}/Extracted_Files.jpg[height=600,width=600,alt="trafci extracted files"]
++
+Click *OK* to continue the installation.
+
+13.  If you chose to install the optional components, the installer attempts to download and install them. The progress bar indicates the
+download progress of each file. In addition, an installation log provides details about the status of the download and installation of
+the components.
++
+image:{images}/InstallComplete.jpg[trafci installation complete]
+
+14.  After all trafci files are installed, the Installer Wizard completes.
+15.  Click *Exit*.
+
+<<<
+[[trafci-cmd-install]]
+=== Command-Line Install
+
+1.  At a command prompt, change to the directory where you extracted the contents of the distribution (.zip) file:
++
+*Windows*
++
+```
+c:\> cd c:\trafodion\trafci
+c:\trafodion\trafci> dir
+README  trafciInstaller.jar
+```
++
+*Linux*
++
+```
+$ cd $HOME/trafodion/trafci
+$ ls
+README  trafciInstaller.jar
+```
+
+2.  Before launching the command-line installer, see the command options below:
++
+```
+java -jar trafciInstaller.jar -help
+Usage: java -jar <installer jar> [ -help] | <-cm|-silent>
+   [-jdbcFile <jdbc filename>] [-installDir <install dir>] ]
+```
++
+The `-silent` option installs the client without prompting you for options.
++
+*Windows*
++
+```
+java -jar trafciInstaller.jar -silent -jdbcFile "C:\trafodion\jdbct4\lib\jdbcT4.jar" -installDir C:\trafodion\trafci
+```
++
+*Linux*
++
+```
+java -jar trafciInstaller.jar -silent -jdbcFile "$HOME/trafodion/jdbct4/lib/jdbcT4.jar" -installDir $HOME/trafodion/trafci
+```
++
+_-jdbcFile_ and _-installDir_ are optional parameters. If you do not specify those parameters, you will be prompted to enter them during
+installation.
++
+<<<
+3.  Launch the command-line installer by entering this command:
++
+```
+java -jar trafciInstaller.jar -cm
+```
++
+The command-line installer starts and prompts you to enter the type of installation:
++
+*Windows*
++
+```
+c:\> cd c:\trafodion\trafci 
+c:\trafodion\trafci> java -jar trafciInstaller.jar -cm
+********************************************************************
+****                                                              **
+**** Welcome to Trafodion Command Interface Installer             **
+****                                                              **
+**** NOTE: The installer requires a the JDBC Type 4               **
+****       Driver to be installed a on your workstation.          **
+********************************************************************
+Type Y for a standard installation, or N for optional components only.
+
+Standard Installation [Y]:
+```
++
+*Linux*
++
+```
+$ cd $HOME/trafodion/trafci 
+$ java -jar trafciInstaller.jar -cm
+********************************************************************
+****                                                              **
+**** Welcome to Trafodion Command Interface Installer             **
+****                                                              **
+**** NOTE: The installer requires a the JDBC Type 4               **
+****       Driver to be installed a on your workstation.          **
+********************************************************************
+Type Y for a standard installation, or N for optional components only.
+
+Standard Installation [Y]:
+```
++
+* For a standard installation, type *Y* and press *Enter*.
+* To install the optional components only, type *N*, press *Enter*, and proceed to Step 7.
++
+NOTE: All items in square brackets are default values. Press Enter to accept the default value.
++
+<<<
+4.  Enter the full directory path and file name of the JDBC driver JAR file, `jdbcT4.jar`, which is located in the JDBC driver lib directory:
++
+```
+JDBC Type 4 Driver JAR File
+--------------------------------
+Enter the location and file name:
+```
++
+* *Windows*: `c:\trafodion\jdbct4\lib\jdbcT4.jar`
+* *Linux*: `/opt/user/trafodion/lib/jdbcT4.jar`
++
+NOTE: Don't use environment variables on Linux (such as `$HOME`). Instead, specify the full path to the
+`jdbcT4.jar` file.
+
+5.  Enter an existing directory where you would like to install trafci:
++
+```
+Trafodion Command Interface
+--------------------------------
+Enter the installation directory:
+```
++
+* *Windows*: `c:\trafodion\trafci`
+* *Linux*: `/opt/user/trafodion/trafci`
++
+The installation status appears, indicating how many files are installed in the installation directory:
++
+```
+Extracted 18 files from the
+/opt/user/trafodion/trafci/trafciInstaller.jar archive into the
+/opt/user/trafodion/trafci directory.
+Core TRAFCI files installed.
+Do you want to install the optional components? [Y]:
+```
++
+NOTE: Don't use environment variables on Linux (such as `$HOME`). Instead, specify the full path to the
+installation directory.
+
+6.  If you do not wish to download and install the optional components, type *N* at the prompt and press Enter, and your installation
+is complete. Otherwise, type *Y*, press *Enter*, and proceed through the remainder of the installation.
++
+<<<
+
+7.  Type *Y* and press *Enter* if you agree to the terms. If you are doing an optional install only, you are prompted to enter a valid trafci
+installation directory:
++
+```
+Do you agree to these terms? (Y or N): Y
+
+Enter your installation directory:
+```
+
+8.  If you do not require a proxy server, type *N*, press *Enter*, and proceed to Step 10. Otherwise, type *Y*, press *Enter*,
+and proceed to Step 9.
++
+```
+Use a proxy server? [N]:
+```
+
+9.  When prompted to auto-detect proxy servers, type *Y* and press *Enter* to direct trafci to detect your proxy settings.
+If trafci finds proxy servers, it displays them. If you type *N* and press *Enter*, trafci prompts you to enter the proxy server and port:
++
+```
+Use a proxy server? [Y]: Y
+Attempt to auto-detect proxy server(s)? [Y]: N
+Enter the proxy server (do not include the port): myproxyserver.com
+Enter the proxy port: 8080
+```
+
+10.  You are prompted to select which optional components you wish to download and install. You can also change the download URL.
++
+```
+Install Perl JavaServer extensions? [Y]: Y
+
+Perl JavaServer requires 3 files: Java.pm, JavaArray.pm, and JavaServer.jar
+URL of the folder which contains these files [http://search.cpan.org/src/METZZO/Java-4.7/]:
+
+Install Perl XML SAX Module? [Y]: Y
+
+Perl SAX XML Module URL (PerlSAX.pm)
+
+Install Jython, a Java implementation of Python? [Y]: Y
+
+Jython URL (jython_installer-2.2.jar)
+```
++
+<<<
+11.  The setup proceeds to download and install the optional open-source components. As each component is retrieved, dots (.) are printed to
+indicate the progress of the download.
++
+```
+Downloading Perl JavaServer [1 of 3] - Java.pm
+......................... 100%
+Downloading Perl JavaServer [2 of 3] - JavaArray.pm
+......................... 100%
+Downloading Perl JavaServer [3 of 3] - JavaServer.jar
+......................... 100%
+Successfully added settings.pl
+Downloading Perl XML SAX Module [1 of 1] - PerlSAX.pm
+......................... 100%
+Downloading Jython [1 of 1] - jython_installer-2.2.jar
+......................... 100%
+Successfully Installed Jython. Successfully added settings.py
+Trafodion Command Interface Installation Complete.
+$
+```
+
+<<<
+== Verify Installed Software Files
+
+After downloading and running the installer file, verify that the trafci software files are installed in the correct locations
+under `c:\trafodion\trafci` (Windows) or `$HOME/trafodion/trafci` (Linux).
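+
+For example, on Linux you can list the installed files with:
+
+```
+ls -R $HOME/trafodion/trafci
+```
+
+The expected folders and files are: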
+
+[cols="15%,20%,65%",options="header"]
+|===
+| Folder         | Files               | Description
+| `bin`          | `trafci`            |
+|                | `trafci.cmd`        | Windows launch file.
+|                | `trafci.pl`         | Perl wrapper script. _trafci-perl.pl_ is renamed _trafci.pl_. To run this script, see the
+{docs-url}/command_interface/index.html[_Trafodion Command Interface Guide_].
+|                | `trafci.py`         | Python wrapper script. trafci-python.py is renamed as trafci.py. To run this script, see the
+{docs-url}/command_interface/index.html[_Trafodion Command Interface Guide_].
+|                | `trafci.sh`         | Linux launch file.
+|                | `trafci-perl.pl`    | Perl wrapper script. This script has been modified to invoke trafci.pl. This script is retained for backward compatibility.
+|                | `trafci-python.py`  | Python wrapper script. This script has been modified to invoke trafci.py. This script is retained for backward compatibility.
+| `lib`          | `trafci.jar`        | Product JAR file.
+| `lib/perl`     | `Session.pm`        | Product file.
+| `lib/python`   | `Session.py`        | Product file.
+| `samples`      | `README`            | Readme file that describes how to use the sample scripts.
+|                | `arrayDML.pl`       | Sample Perl program that executes DML statements and returns results in an array format.
+|                | `sample.pl`         | Sample Perl program that supports multiple sessions in one script. 
+|                | `sample.sql`        | Sample SQL script.
+|                | `sampleDDL.py`      | Sample Python file that uses Jython to execute DDL statements.
+|                | `sampleDML.py`      | Sample Python file that uses Jython to execute DML statements.
+|                | `sampleTables.pl`   | Sample Perl file that lists all tables and respective row counts. The file accepts a wild-card argument on the command line.
+|                | `sampleTables.py`   | Sample Python file that lists all tables and respective row counts. The file accepts a wild-card argument on the command line.
+|===
+
+== Modify PATH Variable
+
+Add the trafci `bin` directory to your PATH variable:
+
+* *Windows:* `c:\trafodion\trafci\bin\`
+* *Linux:* `$HOME/trafodion/trafci/bin`
+
+See <<howto-setup-path, Set Up Path Variable>> for further instructions.
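+
+A minimal sketch for the current session only (an assumption; adjust the directory to match your installation, and see the linked chapter for making the change permanent):
+
+*Windows*
+
+```
+set PATH=%PATH%;c:\trafodion\trafci\bin
+```
+
+*Linux*
+
+```
+export PATH=$PATH:$HOME/trafodion/trafci/bin
+```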
+
+<<<
+== Test Launching trafci
+
+NOTE: For information about setting up and using trafci, such as choosing the look and feel of the interface or presetting launch parameters, see the
+{docs-url}/command_interface/index.html[Trafodion Command Interface Guide].
+
+=== Windows Example
+
+On Windows, do the following:
+
+1. Go to the directory where you installed trafci. For example, `c:\trafodion\trafci`.
+2. Go to the `bin` directory.
+3. Invoke the `trafci.cmd` file.
+4. Answer the prompts.
+
+```
+cd "c:\trafodion\trafci\bin"
+trafci.cmd
+<screen is cleared>
+
+Welcome to Apache Trafodion Command Interface
+Copyright (c) 2015 Apache Software Foundation
+
+Host Name/IP Address: trafodion.host.com:23400
+User Name: usr
+Password:
+
+
+Connected to Trafodion
+
+SQL> show schemas ;
+
+SCHEMA NAMES
+--------------------------------------------------------------------------------
+SEABASE   _MD_      _REPOS_   _LIBMGR_
+
+SQL>
+```
+
+<<<
+=== Linux Example
+
+On Linux, do the following:
+
+1. Go to the directory where you installed trafci. For example, `$HOME/trafodion/trafci`.
+2. Go to the `bin` directory.
+3. Invoke the `trafci.sh` file.
+4. Answer the prompts.
+
+```
+$ cd $HOME/trafodion/trafci/bin
+$ . ./trafci.sh -h trafodion.home.com:23400 -u usr -p pwd
+
+Welcome to Apache Trafodion Command Interface
+Copyright (c) 2015 Apache Software Foundation
+
+Connected to Trafodion
+
+SQL>show schemas;
+
+SCHEMA NAMES
+--------------------------------------------------------------------------------
+SEABASE   _MD_      _REPOS_   _LIBMGR_
+
+SQL>
+```
+
+<<<
+[[trafci-uninstall]]
+== Uninstall trafci
+
+If you used the executable JAR file, `trafciInstaller.jar`, to install trafci, delete the entire
+folder/directory where you installed trafci.
+
+* On Windows:
++
+```
+rmdir /s /q <trafci-installation-directory>
+```
++
+*Example*
++
+```
+rmdir /s /q c:\trafodion\trafci
+```
+
+* On Linux:
++
+```
+rm -rf <trafci-installation-directory>
+```
++
+*Example*
++
+```
+rm -rf $HOME/trafodion/trafci
+```
+
+NOTE: Remember to remove the trafci reference in the PATH variable.
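+
+For example, on Linux you can confirm that the entry is gone (this command should print nothing once the PATH is cleaned up):
+
+```
+echo $PATH | tr ':' '\n' | grep trafci
+```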
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/asciidoc/index.adoc
----------------------------------------------------------------------
diff --git a/docs/client_install/src/asciidoc/index.adoc b/docs/client_install/src/asciidoc/index.adoc
index f8ae3b8..c6aa394 100644
--- a/docs/client_install/src/asciidoc/index.adoc
+++ b/docs/client_install/src/asciidoc/index.adoc
@@ -1,68 +1,70 @@
-////
-* @@@ START COPYRIGHT @@@                                                         
-*
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*   http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing,
-* software distributed under the License is distributed on an
-* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-* KIND, either express or implied.  See the License for the
-* specific language governing permissions and limitations
-* under the License.
-*
-* @@@ END COPYRIGHT @@@ 
-////
-
-= Client Installation Guide
-:doctype: book
-:numbered:
-:toc: left
-:toclevels: 3
-:toc-title: Table of Contents
-:icons: font
-:iconsdir: icons
-:experimental:
-:source-language: text
-:revnumber: {project-version}
-:title-logo-image: {project-logo}
-:project-name: {project-name}
-
-:images: ../images
-:sourcedir: ../../resources/source
-
-:leveloffset: 1
-
-// The directory is called _chapters because asciidoctor skips direct
-// processing of files found in directories starting with an _. This
-// prevents each chapter being built as its own book.
-
-include::../../shared/license.txt[]
-
-<<<
-
-include::../../shared/acknowledgements.txt[]
-
-<<<
-include::../../shared/revisions.txt[]
-
-include::asciidoc/_chapters/about.adoc[]
-include::asciidoc/_chapters/introduction.adoc[]
-include::asciidoc/_chapters/jdbct4.adoc[]
-include::asciidoc/_chapters/trafci.adoc[]
-include::asciidoc/_chapters/dbviz.adoc[]
-include::asciidoc/_chapters/SQuirrel.adoc[]
-include::asciidoc/_chapters/odbc_linux.adoc[]
-include::asciidoc/_chapters/odb.adoc[]
-include::asciidoc/_chapters/odbc_windows.adoc[]
-
-= Appendix
-include::asciidoc/_chapters/sample_prog.adoc[]
-
+////
+* @@@ START COPYRIGHT @@@                                                         
+*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*   http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing,
+* software distributed under the License is distributed on an
+* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+* KIND, either express or implied.  See the License for the
+* specific language governing permissions and limitations
+* under the License.
+*
+* @@@ END COPYRIGHT @@@ 
+////
+
+= Client Installation Guide
+:doctype: book
+:numbered:
+:toc: left
+:toclevels: 3
+:toc-title: Table of Contents
+:icons: font
+:iconsdir: icons
+:experimental:
+:source-language: text
+:revnumber: {project-version}
+:title-logo-image: {project-logo}
+:project-name: {project-name}
+
+:images: ../images
+:sourcedir: ../../resources/source
+:tableau: ../../resources/tableau
+:leveloffset: 1
+
+// The directory is called _chapters because asciidoctor skips direct
+// processing of files found in directories starting with an _. This
+// prevents each chapter being built as its own book.
+
+include::../../shared/license.txt[]
+
+<<<
+include::../../shared/acknowledgements.txt[]
+
+<<<
+include::../../shared/revisions.txt[]
+
+include::asciidoc/_chapters/about.adoc[]
+include::asciidoc/_chapters/introduction.adoc[]
+include::asciidoc/_chapters/preparation.adoc[]
+include::asciidoc/_chapters/jdbct4.adoc[]
+include::asciidoc/_chapters/trafci.adoc[]
+include::asciidoc/_chapters/dbviz.adoc[]
+include::asciidoc/_chapters/SQuirrel.adoc[]
+include::asciidoc/_chapters/odbc_linux.adoc[]
+include::asciidoc/_chapters/odb.adoc[]
+include::asciidoc/_chapters/odbc_windows.adoc[]
+include::asciidoc/_chapters/tableau.adoc[]
+include::asciidoc/_chapters/howto.adoc[]
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/Database_Connection_in_DbVisualizer.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/Database_Connection_in_DbVisualizer.jpg b/docs/client_install/src/images/Database_Connection_in_DbVisualizer.jpg
index 5cd345c..4dc5931 100644
Binary files a/docs/client_install/src/images/Database_Connection_in_DbVisualizer.jpg and b/docs/client_install/src/images/Database_Connection_in_DbVisualizer.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/DbVisualizer_Driver_Manager.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/DbVisualizer_Driver_Manager.jpg b/docs/client_install/src/images/DbVisualizer_Driver_Manager.jpg
index 535d96f..617bbad 100644
Binary files a/docs/client_install/src/images/DbVisualizer_Driver_Manager.jpg and b/docs/client_install/src/images/DbVisualizer_Driver_Manager.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/Extracted_Files.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/Extracted_Files.jpg b/docs/client_install/src/images/Extracted_Files.jpg
index b272015..f905b30 100644
Binary files a/docs/client_install/src/images/Extracted_Files.jpg and b/docs/client_install/src/images/Extracted_Files.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/InstallComplete.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/InstallComplete.jpg b/docs/client_install/src/images/InstallComplete.jpg
index a565be3..20d88c6 100644
Binary files a/docs/client_install/src/images/InstallComplete.jpg and b/docs/client_install/src/images/InstallComplete.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/Physical_Connection.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/Physical_Connection.jpg b/docs/client_install/src/images/Physical_Connection.jpg
index b5d3475..257c44b 100644
Binary files a/docs/client_install/src/images/Physical_Connection.jpg and b/docs/client_install/src/images/Physical_Connection.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/SQuirrel_Add_Alias.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/SQuirrel_Add_Alias.jpg b/docs/client_install/src/images/SQuirrel_Add_Alias.jpg
new file mode 100644
index 0000000..35fd141
Binary files /dev/null and b/docs/client_install/src/images/SQuirrel_Add_Alias.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/SQuirrel_Extra_Class_Path.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/SQuirrel_Extra_Class_Path.jpg b/docs/client_install/src/images/SQuirrel_Extra_Class_Path.jpg
new file mode 100644
index 0000000..ccdbf27
Binary files /dev/null and b/docs/client_install/src/images/SQuirrel_Extra_Class_Path.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/SQuirrel_New_Driver.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/SQuirrel_New_Driver.jpg b/docs/client_install/src/images/SQuirrel_New_Driver.jpg
new file mode 100644
index 0000000..a881bd8
Binary files /dev/null and b/docs/client_install/src/images/SQuirrel_New_Driver.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/tableau_connect.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/tableau_connect.jpg b/docs/client_install/src/images/tableau_connect.jpg
new file mode 100644
index 0000000..dcc7062
Binary files /dev/null and b/docs/client_install/src/images/tableau_connect.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/trafci_Installation_Choices.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/trafci_Installation_Choices.jpg b/docs/client_install/src/images/trafci_Installation_Choices.jpg
new file mode 100644
index 0000000..4a98b6b
Binary files /dev/null and b/docs/client_install/src/images/trafci_Installation_Choices.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/winodbc_admin_add.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/winodbc_admin_add.jpg b/docs/client_install/src/images/winodbc_admin_add.jpg
new file mode 100644
index 0000000..da4413f
Binary files /dev/null and b/docs/client_install/src/images/winodbc_admin_add.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/winodbc_admin_add_general.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/winodbc_admin_add_general.jpg b/docs/client_install/src/images/winodbc_admin_add_general.jpg
new file mode 100644
index 0000000..ab90336
Binary files /dev/null and b/docs/client_install/src/images/winodbc_admin_add_general.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/winodbc_admin_add_general_edited.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/winodbc_admin_add_general_edited.jpg b/docs/client_install/src/images/winodbc_admin_add_general_edited.jpg
new file mode 100644
index 0000000..7e1bd93
Binary files /dev/null and b/docs/client_install/src/images/winodbc_admin_add_general_edited.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/winodbc_admin_add_network.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/winodbc_admin_add_network.jpg b/docs/client_install/src/images/winodbc_admin_add_network.jpg
new file mode 100644
index 0000000..a540734
Binary files /dev/null and b/docs/client_install/src/images/winodbc_admin_add_network.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/winodbc_admin_add_schema.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/winodbc_admin_add_schema.jpg b/docs/client_install/src/images/winodbc_admin_add_schema.jpg
new file mode 100644
index 0000000..fbdfa7a
Binary files /dev/null and b/docs/client_install/src/images/winodbc_admin_add_schema.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/winodbc_admin_add_test_connection.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/winodbc_admin_add_test_connection.jpg b/docs/client_install/src/images/winodbc_admin_add_test_connection.jpg
new file mode 100644
index 0000000..93a4f9f
Binary files /dev/null and b/docs/client_install/src/images/winodbc_admin_add_test_connection.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/winodbc_admin_add_tested_connection.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/winodbc_admin_add_tested_connection.jpg b/docs/client_install/src/images/winodbc_admin_add_tested_connection.jpg
new file mode 100644
index 0000000..b9dbf65
Binary files /dev/null and b/docs/client_install/src/images/winodbc_admin_add_tested_connection.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/winodbc_admin_add_translate_dll.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/winodbc_admin_add_translate_dll.jpg b/docs/client_install/src/images/winodbc_admin_add_translate_dll.jpg
new file mode 100644
index 0000000..8af9875
Binary files /dev/null and b/docs/client_install/src/images/winodbc_admin_add_translate_dll.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/winodbc_admin_intro.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/winodbc_admin_intro.jpg b/docs/client_install/src/images/winodbc_admin_intro.jpg
new file mode 100644
index 0000000..e1c206f
Binary files /dev/null and b/docs/client_install/src/images/winodbc_admin_intro.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/winodbc_destination.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/winodbc_destination.jpg b/docs/client_install/src/images/winodbc_destination.jpg
new file mode 100644
index 0000000..e31e275
Binary files /dev/null and b/docs/client_install/src/images/winodbc_destination.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/winodbc_install_finished.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/winodbc_install_finished.jpg b/docs/client_install/src/images/winodbc_install_finished.jpg
new file mode 100644
index 0000000..5260f3d
Binary files /dev/null and b/docs/client_install/src/images/winodbc_install_finished.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/winodbc_license.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/winodbc_license.jpg b/docs/client_install/src/images/winodbc_license.jpg
new file mode 100644
index 0000000..3ca6bb6
Binary files /dev/null and b/docs/client_install/src/images/winodbc_license.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/winodbc_ready_to_install.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/winodbc_ready_to_install.jpg b/docs/client_install/src/images/winodbc_ready_to_install.jpg
new file mode 100644
index 0000000..7f672f1
Binary files /dev/null and b/docs/client_install/src/images/winodbc_ready_to_install.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/images/winodbc_welcome.jpg
----------------------------------------------------------------------
diff --git a/docs/client_install/src/images/winodbc_welcome.jpg b/docs/client_install/src/images/winodbc_welcome.jpg
new file mode 100644
index 0000000..9b6d2fd
Binary files /dev/null and b/docs/client_install/src/images/winodbc_welcome.jpg differ


[13/15] incubator-trafodion git commit: Major reorganization of the Client Installation Guide.

Posted by gt...@apache.org.
Major reorganization of the Client Installation Guide.

Corresponding changes in the odb User's Guide to keep installation in one place.

Formatting changes in some guides.


Project: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/commit/da748b4d
Tree: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/tree/da748b4d
Diff: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/diff/da748b4d

Branch: refs/heads/master
Commit: da748b4d887f20026d0afc1d137fc88a72cf717b
Parents: 9c712a4
Author: Gunnar Tapper <ta...@gmail.com>
Authored: Tue Nov 1 20:37:58 2016 +0000
Committer: Gunnar Tapper <ta...@gmail.com>
Committed: Tue Nov 1 20:37:58 2016 +0000

----------------------------------------------------------------------
 .../jdbcT4/src/main/samples/t4jdbc.properties   |     2 +-
 docs/client_install/pom.xml                     |   600 +-
 .../src/asciidoc/_chapters/SQuirrel.adoc        |   148 +-
 .../src/asciidoc/_chapters/about.adoc           |   331 +-
 .../src/asciidoc/_chapters/dbviz.adoc           |   193 +-
 .../src/asciidoc/_chapters/howto.adoc           |   164 +
 .../src/asciidoc/_chapters/introduction.adoc    |   217 +-
 .../src/asciidoc/_chapters/jdbct4.adoc          |   742 +-
 .../src/asciidoc/_chapters/odb.adoc             |   311 +-
 .../src/asciidoc/_chapters/odbc_linux.adoc      |   701 +-
 .../src/asciidoc/_chapters/odbc_windows.adoc    |   490 +-
 .../src/asciidoc/_chapters/preparation.adoc     |   273 +
 .../src/asciidoc/_chapters/sample_prog.adoc     |   150 +-
 .../src/asciidoc/_chapters/tableau.adoc         |    83 +
 .../src/asciidoc/_chapters/trafci.adoc          |   984 +-
 docs/client_install/src/asciidoc/index.adoc     |   138 +-
 .../Database_Connection_in_DbVisualizer.jpg     |   Bin 58043 -> 63604 bytes
 .../src/images/DbVisualizer_Driver_Manager.jpg  |   Bin 80198 -> 79645 bytes
 .../src/images/Extracted_Files.jpg              |   Bin 28327 -> 26389 bytes
 .../src/images/InstallComplete.jpg              |   Bin 47365 -> 73963 bytes
 .../src/images/Physical_Connection.jpg          |   Bin 185241 -> 71998 bytes
 .../src/images/SQuirrel_Add_Alias.jpg           |   Bin 0 -> 50396 bytes
 .../src/images/SQuirrel_Extra_Class_Path.jpg    |   Bin 0 -> 54897 bytes
 .../src/images/SQuirrel_New_Driver.jpg          |   Bin 0 -> 31639 bytes
 .../src/images/tableau_connect.jpg              |   Bin 0 -> 39547 bytes
 .../src/images/trafci_Installation_Choices.jpg  |   Bin 0 -> 78358 bytes
 .../src/images/winodbc_admin_add.jpg            |   Bin 0 -> 60817 bytes
 .../src/images/winodbc_admin_add_general.jpg    |   Bin 0 -> 48930 bytes
 .../images/winodbc_admin_add_general_edited.jpg |   Bin 0 -> 50337 bytes
 .../src/images/winodbc_admin_add_network.jpg    |   Bin 0 -> 79462 bytes
 .../src/images/winodbc_admin_add_schema.jpg     |   Bin 0 -> 32986 bytes
 .../winodbc_admin_add_test_connection.jpg       |   Bin 0 -> 43949 bytes
 .../winodbc_admin_add_tested_connection.jpg     |   Bin 0 -> 44916 bytes
 .../images/winodbc_admin_add_translate_dll.jpg  |   Bin 0 -> 46249 bytes
 .../src/images/winodbc_admin_intro.jpg          |   Bin 0 -> 59907 bytes
 .../src/images/winodbc_destination.jpg          |   Bin 0 -> 36788 bytes
 .../src/images/winodbc_install_finished.jpg     |   Bin 0 -> 36769 bytes
 .../src/images/winodbc_license.jpg              |   Bin 0 -> 58430 bytes
 .../src/images/winodbc_ready_to_install.jpg     |   Bin 0 -> 34193 bytes
 .../src/images/winodbc_welcome.jpg              |   Bin 0 -> 39496 bytes
 .../src/resources/source/basicsql.cpp           |   850 +-
 .../src/resources/source/build.bat              |    50 +-
 .../client_install/src/resources/source/run.bat |    46 +-
 .../src/resources/tableau/trafodion.tdc         |    16 +
 .../resources/tableau/trafodion.tdc.template    |    16 +
 .../src/asciidoc/_chapters/binder_msgs.adoc     |     6 +-
 .../src/asciidoc/_chapters/install.adoc         |   Bin 13248 -> 2252 bytes
 .../src/resources/source/partLocations.java     |    42 +
 .../src/resources/source/partlocations.java     |    42 -
 .../src/resources/source/supplierInfo.java      |    38 +
 .../src/resources/source/supplierinfo.java      |    38 -
 .../src/resources/source/supplyQuantities.java  |    32 +
 .../src/resources/source/supplyquantities.java  |    32 -
 docs/sql_reference/pom.xml                      |   602 +-
 .../src/asciidoc/_chapters/about.adoc           |   424 +-
 .../src/asciidoc/_chapters/introduction.adoc    |  1036 +-
 .../src/asciidoc/_chapters/limits.adoc          |    74 +-
 .../src/asciidoc/_chapters/olap_functions.adoc  |  2156 +--
 .../src/asciidoc/_chapters/reserved_words.adoc  |   572 +-
 .../src/asciidoc/_chapters/runtime_stats.adoc   |  2706 +--
 .../src/asciidoc/_chapters/sql_clauses.adoc     |  2864 +--
 .../sql_functions_and_expressions.adoc          | 15770 +++++++--------
 .../_chapters/sql_language_elements.adoc        |  8176 ++++----
 .../src/asciidoc/_chapters/sql_statements.adoc  | 17004 +++++++++--------
 .../src/asciidoc/_chapters/sql_utilities.adoc   |  2382 +--
 docs/sql_reference/src/asciidoc/index.adoc      |   137 +-
 66 files changed, 30729 insertions(+), 29909 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/core/conn/jdbcT4/src/main/samples/t4jdbc.properties
----------------------------------------------------------------------
diff --git a/core/conn/jdbcT4/src/main/samples/t4jdbc.properties b/core/conn/jdbcT4/src/main/samples/t4jdbc.properties
index d4c1f98..d8ae73e 100755
--- a/core/conn/jdbcT4/src/main/samples/t4jdbc.properties
+++ b/core/conn/jdbcT4/src/main/samples/t4jdbc.properties
@@ -20,7 +20,7 @@
 # @@@ END COPYRIGHT @@@
 
 catalog = TRAFODION
-schema  = SCH
+schema  = SEABASE
 url = jdbc:t4jdbc://server:port/:
 user = usr
 password = pwd
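
The sample properties above are everything a client program needs to open a connection. As a
minimal sketch (our illustration, not part of this commit), assuming jdbcT4.jar is on the
classpath and that the placeholder values server, port, usr, and pwd have been replaced:

```
import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class T4PropsConnect {
    public static void main(String[] args) throws Exception {
        // Load catalog, schema, url, user, and password from the sample file.
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("t4jdbc.properties")) {
            props.load(in);
        }
        // Register the Trafodion JDBC Type-4 driver.
        Class.forName("org.trafodion.jdbc.t4.T4Driver");
        // DriverManager reads the user and password keys from the properties.
        try (Connection conn = DriverManager.getConnection(props.getProperty("url"), props)) {
            System.out.println("Connected; schema = " + props.getProperty("schema"));
        }
    }
}
```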

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/pom.xml
----------------------------------------------------------------------
diff --git a/docs/client_install/pom.xml b/docs/client_install/pom.xml
index 45491a3..7d6d122 100644
--- a/docs/client_install/pom.xml
+++ b/docs/client_install/pom.xml
@@ -1,300 +1,300 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
- <!-- 
-* @@@ START COPYRIGHT @@@                                                       
-*
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*   http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing,
-* software distributed under the License is distributed on an
-* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-* KIND, either express or implied.  See the License for the
-* specific language governing permissions and limitations
-* under the License.
-*
-* @@@ END COPYRIGHT @@@
--->
-  <modelVersion>4.0.0</modelVersion>
-  <groupId>org.apache.trafodion</groupId>
-  <artifactId>client-install-guide</artifactId>
-  <version>${env.TRAFODION_VER}</version>
-  <packaging>pom</packaging>
-  <name>Trafodion Client Installation Guide</name>
-  <description>This guide describes how to install different Trafodion client applications.</description>
-  <url>http://trafodion.incubator.apache.org</url>
-  <inceptionYear>2015</inceptionYear>
-
-  <parent>
-    <groupId>org.apache.trafodion</groupId>
-    <artifactId>trafodion</artifactId>
-    <version>1.3.0</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-
-
-  <licenses>
-    <license>
-      <name>The Apache Software License, Version 2.0</name>
-      <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
-      <distribution>repo</distribution>
-      <comments>A business-friendly OSS license</comments>
-    </license>
-  </licenses>
-
-  <organization>
-    <name>Apache Software Foundation</name>
-    <url>http://www.apache.org</url>
-  </organization>
-
-  <issueManagement>
-    <system>JIRA</system>
-    <url>http://issues.apache.org/jira/browse/TRAFODION</url>
-  </issueManagement>
-
-  <scm>
-    <connection>scm:git:http://git-wip-us.apache.org/repos/asf/incubator-trafodion.git</connection>
-    <developerConnection>scm:git:https://git-wip-us.apache.org/repos/asf/incubator-trafodion.git</developerConnection>
-    <url>https://git-wip-us.apache.org/repos/asf?p=incubator-trafodion.git</url>
-    <tag>HEAD</tag>
-  </scm>
-
-  <ciManagement>
-    <system>Jenkins</system>
-    <url>https://jenkins.esgyn.com</url>
-  </ciManagement>
-
-  <properties>
-    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
-    <asciidoctor.maven.plugin.version>1.5.2.1</asciidoctor.maven.plugin.version>
-    <asciidoctorj.pdf.version>1.5.0-alpha.11</asciidoctorj.pdf.version>
-    <asciidoctorj.version>1.5.4</asciidoctorj.version>
-    <rubygems.prawn.version>2.0.2</rubygems.prawn.version>
-    <jruby.version>9.0.4.0</jruby.version>
-    <dependency.locations.enabled>false</dependency.locations.enabled>
-  </properties>
-
-  <repositories>
-    <repository>
-      <id>rubygems-proxy-releases</id>
-      <name>RubyGems.org Proxy (Releases)</name>
-      <url>http://rubygems-proxy.torquebox.org/releases</url>
-      <releases>
-        <enabled>true</enabled>
-      </releases>
-      <snapshots>
-        <enabled>false</enabled>
-      </snapshots>
-    </repository>
-  </repositories>
-  
-  <dependencies>
-    <dependency>
-      <groupId>rubygems</groupId>
-      <artifactId>prawn</artifactId>
-      <version>${rubygems.prawn.version}</version>
-      <type>gem</type>
-      <scope>provided</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.jruby</groupId>
-      <artifactId>jruby-complete</artifactId>
-      <version>${jruby.version}</version>
-    </dependency>
-    <dependency>
-      <groupId>org.asciidoctor</groupId>
-      <artifactId>asciidoctorj</artifactId>
-      <version>${asciidoctorj.version}</version>
-    </dependency>
-  </dependencies>
-
-  <build>
-    <plugins>
-      <plugin>
-        <groupId>de.saumya.mojo</groupId>
-        <artifactId>gem-maven-plugin</artifactId>
-        <version>1.0.10</version>
-        <configuration>
-          <!-- align JRuby version with AsciidoctorJ to avoid redundant downloading -->
-          <jrubyVersion>${jruby.version}</jrubyVersion>
-          <gemHome>${project.build.directory}/gems</gemHome>
-          <gemPath>${project.build.directory}/gems</gemPath>
-        </configuration>
-        <executions>
-          <execution>
-            <goals>
-              <goal>initialize</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-resources-plugin</artifactId>
-        <version>2.7</version>
-        <configuration>
-          <encoding>UTF-8</encoding>
-          <attributes>
-            <generateReports>false</generateReports>
-          </attributes>
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>org.asciidoctor</groupId>
-        <artifactId>asciidoctor-maven-plugin</artifactId>
-        <version>${asciidoctor.maven.plugin.version}</version> 
-        <dependencies>
-          <dependency>
-            <groupId>org.asciidoctor</groupId>
-            <artifactId>asciidoctorj-pdf</artifactId>
-            <version>${asciidoctorj.pdf.version}</version>
-          </dependency>
-          <dependency>
-            <groupId>org.asciidoctor</groupId>
-            <artifactId>asciidoctorj</artifactId>
-            <version>${asciidoctorj.version}</version>
-          </dependency>
-        </dependencies>
-        <configuration>
-          <sourceDirectory>${basedir}/src</sourceDirectory>
-        </configuration>
-        <executions>
-          <execution>
-            <id>generate-html-doc</id> 
-            <goals>
-              <goal>process-asciidoc</goal> 
-            </goals>
-            <phase>site</phase>
-            <configuration>
-              <doctype>book</doctype>
-              <backend>html5</backend>
-              <sourceHighlighter>coderay</sourceHighlighter>
-              <outputDirectory>${basedir}/target/site</outputDirectory>
-              <requires>
-                <require>${basedir}/../shared/google-analytics-postprocessor.rb</require>
-              </requires>
-              <attributes>
-                <!-- Location of centralized stylesheet -->
-                <stylesheet>${basedir}/../shared/trafodion-manuals.css</stylesheet>
-                <project-version>${env.TRAFODION_VER}</project-version>
-                <project-name>Trafodion</project-name>
-                <project-logo>${basedir}/../shared/trafodion-logo.jpg</project-logo>
-                <project-support>user@trafodion.incubator.apache.org</project-support>
-                <docs-url>http://trafodion.incubator.apache.org/docs</docs-url>
-                <download-url>http://http://trafodion.incubator.apache.org/download.html</download-url>
-                <build-date>${maven.build.timestamp}</build-date>
-                <google-analytics-account>UA-72491210-1</google-analytics-account>
-              </attributes>
-            </configuration>
-          </execution>
-          <execution>
-            <id>generate-pdf-doc</id>
-            <phase>site</phase>
-            <goals>
-              <goal>process-asciidoc</goal>
-            </goals>
-            <configuration>
-              <doctype>book</doctype>
-              <backend>pdf</backend>
-              <sourceHighlighter>coderay</sourceHighlighter>
-              <outputDirectory>${basedir}/target</outputDirectory>
-              <attributes>
-                <pdf-stylesdir>${basedir}/../shared</pdf-stylesdir>
-                <pdf-style>trafodion</pdf-style>
-                <icons>font</icons>
-                <pagenums/>
-                <toc/>
-                <idprefix/>
-                <idseparator>-</idseparator>
-                <project-version>${env.TRAFODION_VER}</project-version>
-                <project-name>Trafodion</project-name>
-                <project-logo>${basedir}/../shared/trafodion-logo.jpg</project-logo>
-                <project-support>user@trafodion.incubator.apache.org</project-support>
-                <docs-url>http://trafodion.incubator.apache.org/docs</docs-url>
-                <download-url>http://http://trafodion.incubator.apache.org/download.html</download-url>
-                <build-date>${maven.build.timestamp}</build-date>
-              </attributes>
-            </configuration>
-          </execution>
-        </executions>
-      </plugin> 
-      <!-- Rename target/site/index.pdf to client-install-guide.pdf -->
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-antrun-plugin</artifactId>
-        <version>1.8</version>
-        <inherited>false</inherited>
-        <executions>
-          <execution>
-            <id>populate-release-directories</id>
-            <phase>post-site</phase>
-            <configuration>
-              <target name="Populate Release Directories">
-                <!-- The website uses the following organization for the docs/target/docs directory:
-                  - To ensure a known location, the base directory contains the LATEST version of the web book and the PDF files.
-                  - The know location is docs/target/docs/<document>
-                  - target/docs/<version>/<document> contains version-specific renderings of the documents.
-                  - target/docs/<version>/<document> contains the PDF version and the web book. The web book is named index.html
-                --> 
-                <!-- Copy the PDF file to its target directories -->
-                <copy file="${basedir}/target/index.pdf" tofile="${basedir}/../target/docs/client_install/Trafodion_Client_Installation_Guide.pdf" />
-                <copy file="${basedir}/target/index.pdf" tofile="${basedir}/../target/docs/${project.version}/client_install/Trafodion_Client_Installation_Guide.pdf" />
-                <!-- Copy the Web Book files to their target directories -->
-                <copy todir="${basedir}/../target/docs/client_install">
-                  <fileset dir="${basedir}/target/site">
-                    <include name="**/*.*"/>  <!--All sub-directories, too-->
-                  </fileset>
-                </copy>
-                <copy todir="${basedir}/../target/docs/${project.version}/client_install">
-                  <fileset dir="${basedir}/target/site">
-                    <include name="**/*.*"/>  <!--All sub-directories, too-->
-                  </fileset>
-                </copy>
-              </target>
-            </configuration>
-            <goals>
-              <goal>run</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-
-  <!-- Included because this is required. No reports are generated. -->
-  <reporting>
-    <excludeDefaults>true</excludeDefaults>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-project-info-reports-plugin</artifactId>
-        <version>2.8</version>
-        <reportSets>
-          <reportSet>
-            <reports>
-            </reports>
-          </reportSet>
-        </reportSets>
-      </plugin>
-    </plugins>
-  </reporting>
-
-  <distributionManagement>
-    <site>
-      <id>trafodion.incubator.apache.org</id>
-      <name>Trafodion Website at incubator.apache.org</name>
-      <!-- On why this is the tmp dir and not trafodion.incubator.apache.org, see
-      https://issues.apache.org/jira/browse/HBASE-7593?focusedCommentId=13555866&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13555866
-      -->
-      <url>file:///tmp</url>
-    </site>
-  </distributionManagement>
-</project>
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+ <!-- 
+* @@@ START COPYRIGHT @@@                                                       
+*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*   http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing,
+* software distributed under the License is distributed on an
+* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+* KIND, either express or implied.  See the License for the
+* specific language governing permissions and limitations
+* under the License.
+*
+* @@@ END COPYRIGHT @@@
+-->
+  <modelVersion>4.0.0</modelVersion>
+  <groupId>org.apache.trafodion</groupId>
+  <artifactId>client-install-guide</artifactId>
+  <version>${env.TRAFODION_VER}</version>
+  <packaging>pom</packaging>
+  <name>Trafodion Client Installation Guide</name>
+  <description>This guide describes how to install different Trafodion client applications.</description>
+  <url>http://trafodion.incubator.apache.org</url>
+  <inceptionYear>2015</inceptionYear>
+
+  <parent>
+    <groupId>org.apache.trafodion</groupId>
+    <artifactId>trafodion</artifactId>
+    <version>1.3.0</version>
+    <relativePath>../../pom.xml</relativePath>
+  </parent>
+
+
+  <licenses>
+    <license>
+      <name>The Apache Software License, Version 2.0</name>
+      <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
+      <distribution>repo</distribution>
+      <comments>A business-friendly OSS license</comments>
+    </license>
+  </licenses>
+
+  <organization>
+    <name>Apache Software Foundation</name>
+    <url>http://www.apache.org</url>
+  </organization>
+
+  <issueManagement>
+    <system>JIRA</system>
+    <url>http://issues.apache.org/jira/browse/TRAFODION</url>
+  </issueManagement>
+
+  <scm>
+    <connection>scm:git:http://git-wip-us.apache.org/repos/asf/incubator-trafodion.git</connection>
+    <developerConnection>scm:git:https://git-wip-us.apache.org/repos/asf/incubator-trafodion.git</developerConnection>
+    <url>https://git-wip-us.apache.org/repos/asf?p=incubator-trafodion.git</url>
+    <tag>HEAD</tag>
+  </scm>
+
+  <ciManagement>
+    <system>Jenkins</system>
+    <url>https://jenkins.esgyn.com</url>
+  </ciManagement>
+
+  <properties>
+    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+    <asciidoctor.maven.plugin.version>1.5.2.1</asciidoctor.maven.plugin.version>
+    <asciidoctorj.pdf.version>1.5.0-alpha.11</asciidoctorj.pdf.version>
+    <asciidoctorj.version>1.5.4</asciidoctorj.version>
+    <rubygems.prawn.version>2.0.2</rubygems.prawn.version>
+    <jruby.version>9.0.4.0</jruby.version>
+    <dependency.locations.enabled>false</dependency.locations.enabled>
+  </properties>
+
+  <repositories>
+    <repository>
+      <id>rubygems-proxy-releases</id>
+      <name>RubyGems.org Proxy (Releases)</name>
+      <url>http://rubygems-proxy.torquebox.org/releases</url>
+      <releases>
+        <enabled>true</enabled>
+      </releases>
+      <snapshots>
+        <enabled>false</enabled>
+      </snapshots>
+    </repository>
+  </repositories>
+  
+  <dependencies>
+    <dependency>
+      <groupId>rubygems</groupId>
+      <artifactId>prawn</artifactId>
+      <version>${rubygems.prawn.version}</version>
+      <type>gem</type>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.jruby</groupId>
+      <artifactId>jruby-complete</artifactId>
+      <version>${jruby.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.asciidoctor</groupId>
+      <artifactId>asciidoctorj</artifactId>
+      <version>${asciidoctorj.version}</version>
+    </dependency>
+  </dependencies>
+
+  <build>
+    <plugins>
+      <plugin>
+        <groupId>de.saumya.mojo</groupId>
+        <artifactId>gem-maven-plugin</artifactId>
+        <version>1.0.10</version>
+        <configuration>
+          <!-- align JRuby version with AsciidoctorJ to avoid redundant downloading -->
+          <jrubyVersion>${jruby.version}</jrubyVersion>
+          <gemHome>${project.build.directory}/gems</gemHome>
+          <gemPath>${project.build.directory}/gems</gemPath>
+        </configuration>
+        <executions>
+          <execution>
+            <goals>
+              <goal>initialize</goal>
+            </goals>
+          </execution>
+        </executions>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-resources-plugin</artifactId>
+        <version>2.7</version>
+        <configuration>
+          <encoding>UTF-8</encoding>
+          <attributes>
+            <generateReports>false</generateReports>
+          </attributes>
+        </configuration>
+      </plugin>
+      <plugin>
+        <groupId>org.asciidoctor</groupId>
+        <artifactId>asciidoctor-maven-plugin</artifactId>
+        <version>${asciidoctor.maven.plugin.version}</version> 
+        <dependencies>
+          <dependency>
+            <groupId>org.asciidoctor</groupId>
+            <artifactId>asciidoctorj-pdf</artifactId>
+            <version>${asciidoctorj.pdf.version}</version>
+          </dependency>
+          <dependency>
+            <groupId>org.asciidoctor</groupId>
+            <artifactId>asciidoctorj</artifactId>
+            <version>${asciidoctorj.version}</version>
+          </dependency>
+        </dependencies>
+        <configuration>
+          <sourceDirectory>${basedir}/src</sourceDirectory>
+        </configuration>
+        <executions>
+          <execution>
+            <id>generate-html-doc</id> 
+            <goals>
+              <goal>process-asciidoc</goal> 
+            </goals>
+            <phase>site</phase>
+            <configuration>
+              <doctype>book</doctype>
+              <backend>html5</backend>
+              <sourceHighlighter>coderay</sourceHighlighter>
+              <outputDirectory>${basedir}/target/site</outputDirectory>
+              <requires>
+                <require>${basedir}/../shared/google-analytics-postprocessor.rb</require>
+              </requires>
+              <attributes>
+                <!-- Location of centralized stylesheet -->
+                <stylesheet>${basedir}/../shared/trafodion-manuals.css</stylesheet>
+                <project-version>${env.TRAFODION_VER}</project-version>
+                <project-name>Trafodion</project-name>
+                <project-logo>${basedir}/../shared/trafodion-logo.jpg</project-logo>
+                <project-support>user@trafodion.incubator.apache.org</project-support>
+                <docs-url>http://trafodion.incubator.apache.org/docs</docs-url>
+                <download-url>http://trafodion.incubator.apache.org/download.html</download-url>
+                <build-date>${maven.build.timestamp}</build-date>
+                <google-analytics-account>UA-72491210-1</google-analytics-account>
+              </attributes>
+            </configuration>
+          </execution>
+          <execution>
+            <id>generate-pdf-doc</id>
+            <phase>site</phase>
+            <goals>
+              <goal>process-asciidoc</goal>
+            </goals>
+            <configuration>
+              <doctype>book</doctype>
+              <backend>pdf</backend>
+              <sourceHighlighter>coderay</sourceHighlighter>
+              <outputDirectory>${basedir}/target</outputDirectory>
+              <attributes>
+                <pdf-stylesdir>${basedir}/../shared</pdf-stylesdir>
+                <pdf-style>trafodion</pdf-style>
+                <icons>font</icons>
+                <pagenums/>
+                <toc/>
+                <idprefix/>
+                <idseparator>-</idseparator>
+                <project-version>${env.TRAFODION_VER}</project-version>
+                <project-name>Trafodion</project-name>
+                <project-logo>${basedir}/../shared/trafodion-logo.jpg</project-logo>
+                <project-support>user@trafodion.incubator.apache.org</project-support>
+                <docs-url>http://trafodion.incubator.apache.org/docs</docs-url>
+                <download-url>http://trafodion.incubator.apache.org/download.html</download-url>
+                <build-date>${maven.build.timestamp}</build-date>
+              </attributes>
+            </configuration>
+          </execution>
+        </executions>
+      </plugin> 
+      <!-- Rename target/site/index.pdf to client-install-guide.pdf -->
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-antrun-plugin</artifactId>
+        <version>1.8</version>
+        <inherited>false</inherited>
+        <executions>
+          <execution>
+            <id>populate-release-directories</id>
+            <phase>post-site</phase>
+            <configuration>
+              <target name="Populate Release Directories">
+                <!-- The website uses the following organization for the docs/target/docs directory:
+                  - To ensure a known location, the base directory contains the LATEST version of the web book and the PDF files.
+                  - The known location is docs/target/docs/<document>
+                  - target/docs/<version>/<document> contains version-specific renderings of the documents.
+                  - target/docs/<version>/<document> contains the PDF version and the web book. The web book is named index.html
+                --> 
+                <!-- Copy the PDF file to its target directories -->
+                <copy file="${basedir}/target/index.pdf" tofile="${basedir}/../target/docs/client_install/Trafodion_Client_Installation_Guide.pdf" />
+                <copy file="${basedir}/target/index.pdf" tofile="${basedir}/../target/docs/${project.version}/client_install/Trafodion_Client_Installation_Guide.pdf" />
+                <!-- Copy the Web Book files to their target directories -->
+                <copy todir="${basedir}/../target/docs/client_install">
+                  <fileset dir="${basedir}/target/site">
+                    <include name="**/*.*"/>  <!--All sub-directories, too-->
+                  </fileset>
+                </copy>
+                <copy todir="${basedir}/../target/docs/${project.version}/client_install">
+                  <fileset dir="${basedir}/target/site">
+                    <include name="**/*.*"/>  <!--All sub-directories, too-->
+                  </fileset>
+                </copy>
+              </target>
+            </configuration>
+            <goals>
+              <goal>run</goal>
+            </goals>
+          </execution>
+        </executions>
+      </plugin>
+    </plugins>
+  </build>
+
+  <!-- Included because this is required. No reports are generated. -->
+  <reporting>
+    <excludeDefaults>true</excludeDefaults>
+    <plugins>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-project-info-reports-plugin</artifactId>
+        <version>2.8</version>
+        <reportSets>
+          <reportSet>
+            <reports>
+            </reports>
+          </reportSet>
+        </reportSets>
+      </plugin>
+    </plugins>
+  </reporting>
+
+  <distributionManagement>
+    <site>
+      <id>trafodion.incubator.apache.org</id>
+      <name>Trafodion Website at incubator.apache.org</name>
+      <!-- On why this is the tmp dir and not trafodion.incubator.apache.org, see
+      https://issues.apache.org/jira/browse/HBASE-7593?focusedCommentId=13555866&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13555866
+      -->
+      <url>file:///tmp</url>
+    </site>
+  </distributionManagement>
+</project>

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/asciidoc/_chapters/SQuirrel.adoc
----------------------------------------------------------------------
diff --git a/docs/client_install/src/asciidoc/_chapters/SQuirrel.adoc b/docs/client_install/src/asciidoc/_chapters/SQuirrel.adoc
index 563993f..bde3a3f 100644
--- a/docs/client_install/src/asciidoc/_chapters/SQuirrel.adoc
+++ b/docs/client_install/src/asciidoc/_chapters/SQuirrel.adoc
@@ -1,72 +1,76 @@
-////
-/**
- *@@@ START COPYRIGHT @@@
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- * @@@ END COPYRIGHT @@@
- */
-////
-
-= Configure SQuirreL Client
-These configuration instructions apply to {project-name} Release 1.3.*_n_* and subsequent updates of {project-name} until otherwise indicated.
-
-== Prerequisite Software
-
-Make sure that you have this software installed on your workstation:
-
-* Java Runtime Environment (JRE) 1.7 or higher. See <<jdbct4-java-environment, Java Environment>> in the <<jdbct4, Install JDBC Type-4 Driver>> chapter above.
-* {project-name} JDBC Type-4 Driver. See <<jdbct4,Install JDBC Type-4 Driver>> above.
-* SQuirreL SQL Client 3.5.0. See the http://squirrel-sql.sourceforge.net/[_SQuirreL SQL Client website_].
-
-<<<
-== Configuration Instructions
-=== Register JDBC Type-4 Driver
-
-Use the *Add Driver* function and register the {project-name} JDBC Type-4 Driver:
-
-1. Click on the *Add* button and locate the {project-name} JDBC Type-4 Driver.
-2. Click on the *List Drivers* button to find the JDBC Driver Class Name.
-3. Set the properties as shown below:
-+
-image:{images}/Add_Driver_SQuirreL.jpg[alt="SQuirreL Add Driver Dialog Box"]
-+
-* Name: `{project-name}`
-* Example URL: `jdbc:t4jdbc://_host-name or ip-address_:37800/:` (Default port number: *23400*)
-
-<<<
-=== Connect to {project-name}
-
-Use the Add Alias dialog box and create an alias for your {project-name} System:
-
-image:{images}/Add_Alias_SQuirreL.jpg[alt="SQuirreL Add Alias Dialog Box"]
-
-1. Edit the connection *URL* to match your {project-name} system's host name and port number:
-+
-*Example*
-+
-```
-jdbc:t4jdbc://<host-name or ip-address>:37800/:
-```
-
-2. Click on the *Properties* button for the alias.
-+
-<<<
-3. In the *Schemas* tab, select the option *Load all and cache all Schemas*.
-+
-image:{images}/Properties_for_Alias_SQuirreL.jpg[width=400,height=400,alt="SQuirreL Properties Dialog Box"]
-
-Once you have a successful connection, use the SQL tab and run a query to confirm the connection.
-
+////
+/**
+ *@@@ START COPYRIGHT @@@
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * @@@ END COPYRIGHT @@@
+ */
+////
+
+= Configure SQuirreL Client
+
+== Prerequisites
+
+If you have not done so already, please ensure that you have <<java-setup, set up your Java environment>>,
+<<download-software, unpacked the {project-name} client software>>, and <<jdbct4, installed the JDBC Type-4 Driver>>.
+
+You also need SQuirreL SQL Client 3.7.0 or later. See the http://squirrel-sql.sourceforge.net/[_SQuirreL SQL Client website_].
+
+== Configuration Instructions
+=== Register JDBC Type-4 Driver
+
+.  Start the SQuirreL SQL Client.
+.  Click in the rectangle box in the upper left of the window that has "Drivers" printed sideways.
+.  Select the "+ New Driver..." entry under "Drivers" in the top menu.
++
+image:{images}/SQuirrel_New_Driver.jpg[alt="SQuirreL New Driver Dialog Box"]
+
+. In the *Add Driver* dialog box:
+.. Enter `Trafodion` in the *Name* field.
+.. Enter `jdbc:t4jdbc://<host-name or ip-address>:23400/:` in the *Example URL* field.
++
+The default port number is 23400. If the JDBC server on your {project-name} system is configured
+with a different port number, change the 23400 value to match.
+.. Select the *Extra Class Path* tab and then click the *Add* button.  
++
+Use the file browser to navigate to the directory where you installed the
+drivers and select the driver (`jdbct4\lib\jdbcT4.jar`).
+.. Enter `org.trafodion.jdbc.t4.T4Driver` in the *Class Name* field at the bottom of the dialog box and then click on *OK*. (A standalone sketch for verifying this class name and URL follows these steps.)
+.. If configured correctly, you will see a message stating 
+"Driver class org.trafodion.jdbc.t4.T4Driver successfully registered for driver definition: Trafodion" in the text box 
+at the bottom of the SQuirreL SQL window.
++
+image:{images}/SQuirrel_Extra_Class_Path.jpg[scaledwidth="75%",alt="SQuirreL Extra Class Path Dialog Box"]
+
+. Click in the rectangle box in the upper left of the window that has "Drivers" printed sideways.
+
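+If the driver does not register, you can verify the class name and URL outside of SQuirreL.
+The following minimal sketch is our illustration only (it is not part of SQuirreL or of the
+{project-name} client package); it assumes that `jdbcT4.jar` is on the classpath and uses a
+hypothetical host name that you must replace:
+
+```
+import java.sql.Driver;
+import java.sql.DriverManager;
+
+public class CheckT4Driver {
+    public static void main(String[] args) throws Exception {
+        // Load the class name entered in the Add Driver dialog box.
+        Class.forName("org.trafodion.jdbc.t4.T4Driver");
+        // Ask DriverManager for a driver that accepts the example URL;
+        // getDriver() throws SQLException if no registered driver matches.
+        Driver d = DriverManager.getDriver("jdbc:t4jdbc://node01.host.com:23400/:");
+        System.out.println("Registered: " + d.getClass().getName());
+    }
+}
+```
+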
+<<<
+=== Connect to {project-name}
+
+Use the *Add Alias* dialog box to create an alias for your {project-name} system.
+
+.  Select the "+ New Alias. . ." under *Aliases* from the top menu.
+.  In the *Add Alias* dialog box:
+..  Enter `Trafodion Cluster` or however you want to identify the cluster in the Name field.
+..  Select *Trafodion* from the Driver select menu.
+..  Enter <your user name> in the User Name field.
+..  Enter <your password> in the Password field.
+..  Click on the *Properties* button.  In the Schema tab, select the *Load all and cache all Schemas* radio button.
+..  Click on the *OK* button to close the Properties dialog and then click on the *OK* button to close the *Add Aliases* window.
++
+image:{images}/SQuirrel_Add_Alias.jpg[alt="SQuirreL Add Alias Dialog Box"]
+. You are now presented with a connection dialog. Click on the *Connect* button and then issue a query to test the connection.
+

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/asciidoc/_chapters/about.adoc
----------------------------------------------------------------------
diff --git a/docs/client_install/src/asciidoc/_chapters/about.adoc b/docs/client_install/src/asciidoc/_chapters/about.adoc
index 25e220c..e64cc3a 100644
--- a/docs/client_install/src/asciidoc/_chapters/about.adoc
+++ b/docs/client_install/src/asciidoc/_chapters/about.adoc
@@ -1,166 +1,165 @@
-////
-/**
-* @@@ START COPYRIGHT @@@
-*
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*   http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing,
-* software distributed under the License is distributed on an
-* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-* KIND, either express or implied.  See the License for the
-* specific language governing permissions and limitations
-* under the License.
-*
-* @@@ END COPYRIGHT @@@
-*/
-////
-
-= About This Document
-This manual describes how to install and configure client applications that enable you to connect to and use a {project-name} database.
-
-== Intended Audience
-This manual is intended for users who want to connect to and use a {project-name} database.
-
-== New and Changed Information
-This manual shows updated versions for {project-name} Release {project-version}. It also provides instructions on how to download and install <<odb,{project-name} odb>>, a
-new multi-threaded, ODBC-based command-line tool for parallel data loading and extracting.
-
-== Notation Conventions
-This list summarizes the notation conventions for syntax presentation in this manual.
-
-* UPPERCASE LETTERS
-+
-Uppercase letters indicate keywords and reserved words. Type these items exactly as shown. Items not enclosed in brackets are required. 
-+
-```
-SELECT
-```
-
-* lowercase letters
-+
-Lowercase letters, regardless of font, indicate variable items that you supply. Items not enclosed in brackets are required.
-+
-```
-file-name
-```
-
-<<<
-* &#91; &#93; Brackets 
-+
-Brackets enclose optional syntax items.
-+
-```
-DATETIME [start-field TO] end-field
-```
-+
-A group of items enclosed in brackets is a list from which you can choose one item or none.
-+
-The items in the list can be arranged either vertically, with aligned brackets on each side of the list, or horizontally, enclosed in a pair of brackets and separated by vertical lines.
-+
-For example: 
-+
-```
-DROP SCHEMA schema [CASCADE]
-DROP SCHEMA schema [ CASCADE | RESTRICT ]
-```
-
-* { } Braces 
-+
-Braces enclose required syntax items.
-+
-```
-FROM { grantee [, grantee ] ... }
-```
-+ 
-A group of items enclosed in braces is a list from which you are required to choose one item.
-+
-The items in the list can be arranged either vertically, with aligned braces on each side of the list, or horizontally, enclosed in a pair of braces and separated by vertical lines.
-+
-For example:
-+
-```
-INTERVAL { start-field TO end-field }
-{ single-field } 
-INTERVAL { start-field TO end-field | single-field }
-``` 
-* | Vertical Line 
-+
-A vertical line separates alternatives in a horizontal list that is enclosed in brackets or braces.
-```
-{expression | NULL} 
-```
-
-<<<
-* &#8230; Ellipsis
-+
-An ellipsis immediately following a pair of brackets or braces indicates that you can repeat the enclosed sequence of syntax items any number of times.
-+
-```
-ATTRIBUTE[S] attribute [, attribute] ...
-{, sql-expression } ...
-```
-+ 
-An ellipsis immediately following a single syntax item indicates that you can repeat that syntax item any number of times.
-+
-For example:
-+
-```
-expression-n ...
-```
-
-* Punctuation
-+
-Parentheses, commas, semicolons, and other symbols not previously described must be typed as shown.
-+
-```
-DAY (datetime-expression)
-@script-file 
-```
-+
-Quotation marks around a symbol such as a bracket or brace indicate the symbol is a required character that you must type as shown.
-+
-For example:
-+
-```
-"{" module-name [, module-name] ... "}"
-```
-
-<<<
-* Item Spacing
-+
-Spaces shown between items are required unless one of the items is a punctuation symbol such as a parenthesis or a comma.
-+
-```
-DAY (datetime-expression) DAY(datetime-expression)
-```
-+
-If there is no space between two items, spaces are not permitted. In this example, no spaces are permitted between the period and any other items:
-+
-```
-myfile.sh
-```
-
-* Line Spacing
-+
-If the syntax of a command is too long to fit on a single line, each continuation line is indented three spaces and is separated from the preceding line by a blank line.
-+
-This spacing distinguishes items in a continuation line from items in a vertical list of selections. 
-+
-```
-match-value [NOT] LIKE _pattern
-   [ESCAPE esc-char-expression] 
-```
-
-== Comments Encouraged
-We encourage your comments concerning this document. We are committed to providing documentation that meets your
-needs. Send any errors found, suggestions for improvement, or compliments to {project-support}.
-
-Include the document title and any comment, error found, or suggestion for improvement you have concerning this document.
+////
+/**
+* @@@ START COPYRIGHT @@@
+*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*   http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing,
+* software distributed under the License is distributed on an
+* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+* KIND, either express or implied.  See the License for the
+* specific language governing permissions and limitations
+* under the License.
+*
+* @@@ END COPYRIGHT @@@
+*/
+////
+
+= About This Document
+This manual describes how to install and configure client applications that enable you to connect to and use a {project-name} database.
+
+== Intended Audience
+This manual is intended for users who want to connect to and use a {project-name} database.
+
+== New and Changed Information
+This manual shows updated versions for {project-name} Release {project-version}.
+
+== Notation Conventions
+This list summarizes the notation conventions for syntax presentation in this manual.
+
+* UPPERCASE LETTERS
++
+Uppercase letters indicate keywords and reserved words. Type these items exactly as shown. Items not enclosed in brackets are required. 
++
+```
+SELECT
+```
+
+* lowercase letters
++
+Lowercase letters, regardless of font, indicate variable items that you supply. Items not enclosed in brackets are required.
++
+```
+file-name
+```
+
+<<<
+* &#91; &#93; Brackets 
++
+Brackets enclose optional syntax items.
++
+```
+DATETIME [start-field TO] end-field
+```
++
+A group of items enclosed in brackets is a list from which you can choose one item or none.
++
+The items in the list can be arranged either vertically, with aligned brackets on each side of the list, or horizontally, enclosed in a pair of brackets and separated by vertical lines.
++
+For example: 
++
+```
+DROP SCHEMA schema [CASCADE]
+DROP SCHEMA schema [ CASCADE | RESTRICT ]
+```
+
+* { } Braces 
++
+Braces enclose required syntax items.
++
+```
+FROM { grantee [, grantee ] ... }
+```
++ 
+A group of items enclosed in braces is a list from which you are required to choose one item.
++
+The items in the list can be arranged either vertically, with aligned braces on each side of the list, or horizontally, enclosed in a pair of braces and separated by vertical lines.
++
+For example:
++
+```
+INTERVAL { start-field TO end-field }
+{ single-field } 
+INTERVAL { start-field TO end-field | single-field }
+``` 
+* | Vertical Line 
++
+A vertical line separates alternatives in a horizontal list that is enclosed in brackets or braces.
+```
+{expression | NULL} 
+```
+
+<<<
+* &#8230; Ellipsis
++
+An ellipsis immediately following a pair of brackets or braces indicates that you can repeat the enclosed sequence of syntax items any number of times.
++
+```
+ATTRIBUTE[S] attribute [, attribute] ...
+{, sql-expression } ...
+```
++ 
+An ellipsis immediately following a single syntax item indicates that you can repeat that syntax item any number of times.
++
+For example:
++
+```
+expression-n ...
+```
+
+* Punctuation
++
+Parentheses, commas, semicolons, and other symbols not previously described must be typed as shown.
++
+```
+DAY (datetime-expression)
+@script-file 
+```
++
+Quotation marks around a symbol such as a bracket or brace indicate the symbol is a required character that you must type as shown.
++
+For example:
++
+```
+"{" module-name [, module-name] ... "}"
+```
+
+<<<
+* Item Spacing
++
+Spaces shown between items are required unless one of the items is a punctuation symbol such as a parenthesis or a comma.
++
+```
+DAY (datetime-expression) DAY(datetime-expression)
+```
++
+If there is no space between two items, spaces are not permitted. In this example, no spaces are permitted between the period and any other items:
++
+```
+myfile.sh
+```
+
+* Line Spacing
++
+If the syntax of a command is too long to fit on a single line, each continuation line is indented three spaces and is separated from the preceding line by a blank line.
++
+This spacing distinguishes items in a continuation line from items in a vertical list of selections. 
++
+```
+match-value [NOT] LIKE _pattern
+   [ESCAPE esc-char-expression] 
+```
+
+== Comments Encouraged
+We encourage your comments concerning this document. We are committed to providing documentation that meets your
+needs. Send any errors found, suggestions for improvement, or compliments to {project-support}.
+
+Include the document title and any comment, error found, or suggestion for improvement you have concerning this document.

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/asciidoc/_chapters/dbviz.adoc
----------------------------------------------------------------------
diff --git a/docs/client_install/src/asciidoc/_chapters/dbviz.adoc b/docs/client_install/src/asciidoc/_chapters/dbviz.adoc
index d064e2a..a982504 100644
--- a/docs/client_install/src/asciidoc/_chapters/dbviz.adoc
+++ b/docs/client_install/src/asciidoc/_chapters/dbviz.adoc
@@ -1,83 +1,110 @@
-////
-/**
- *@@@ START COPYRIGHT @@@
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- * @@@ END COPYRIGHT @@@
- */
-////
-
-= Configure DbVisualizer
-
-== Prerequisite Software
-
-Make sure that you have this software installed on your workstation:
-
-* Java Runtime Environment (JRE) 1.7 or higher. See <<jdbct4-java-environment, Java Environment>> in the <<jdbct4, Install JDBC Type-4 Driver>> chapter above.
-+
-In addition, see DbVisualizer's FAQ "How to" page:
-http://confluence.dbvis.com/pages/viewpage.action?pageId=3146120[_How do I change the Java version that DbVisualizer uses?_]
-* {project-name} JDBC Type-4 Driver. See <<jdbct4,Install JDBC Type-4 Driver>> above.
-* DbVisualizer 9.x.x. See the http://www.dbvis.com/[_DbVisualizer website_].
-
-== Configuration Instructions
-
-=== Disable Connection Validation Select Option
-
-==== DbVisualizer 9.1 (or an earlier version)
-Edit the `_DbVisualizer-Install-Dir_\resources\dbvis-custom.prefs` file and
-disable the `ConnectionValidationSelect` option as shown below:
-
-```
-dbvis.generic.-ConnectionValidationSelect=disabled
-```
-
-==== DbVisualizer 9.2 (or a later version)
-Set the *Physical Connection* property to keep your connections alive. Follow these steps:
-
-1.  Double-click the database connection, select the *Properties* tab.
-2.  In the left navigation tree, expand the *Generic* connection properties, and select *Physical Connection*.
-3.  Under *Validation SQL*, enter `values(current_timestamp)` and click *Apply*.
-+
-<<<
-image:{images}/Physical_Connection.jpg[width=600,height=600,alt="DbVisualizer Physical Connection"]
-
-<<<
-=== Register JDBC Type-4 Driver
-
-Use the DbVisualizer Driver Manager and register the {project-name} JDBC Type-4 Driver.
-
-image:{images}/DbVisualizer_Driver_Manager.jpg[image]
-
-* Use the Open File icon and locate the {project-name} JDBC Type-4 Driver.
-* Use the JDBC URL format:
-+
-```
-jdbc:t4jdbc://<host-name or ip-address>:23400/:
-```
-+
-*NOTE*: This example uses a modified port number (*37800*) rather than the default port number (*23400*).
-
-<<<
-=== Connect to {project-name}
-
-Create a new connection by selecting the {project-name} JDBC Type-4 Driver and filling in the connection parameters. Edit the database URL to match
-your {project-name} system's host name and port number; for example: `jdbc:t4jdbc://<host-name or ip-address>:37800/:` (default is: *23400*).
-
-image:{images}/Database_Connection_in_DbVisualizer.jpg[image]
-
-Once you have connected successfully, execute a query using SQL Commander to confirm the connection.
-
+////
+/**
+ *@@@ START COPYRIGHT @@@
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * @@@ END COPYRIGHT @@@
+ */
+////
+
+= Configure DbVisualizer
+
+== Prerequisites
+
+If you have not done so already, please ensure that you have <<java-setup, set up your Java environment>>,
+<<download-software, unpacked the {project-name} client software>>, and <<jdbct4, installed the JDBC Type-4 Driver>>.
+
+You also need DbVisualizer 9.x.x. See the http://www.dbvis.com/[_DbVisualizer website_].
+In addition, see DbVisualizer's FAQ "How to" page:
+http://confluence.dbvis.com/pages/viewpage.action?pageId=3146120[_How do I change the Java version that DbVisualizer uses?_]
+
+The examples in this chapter assume that you have unpacked the JDBC Type-4 Driver installation file
+to `c:\trafodion\jdbct4` (Windows) or `$HOME/trafodion/jdbct4` (Linux).
+
+== Configuration Instructions
+
+=== Register JDBC Type-4 Driver
+
+. Select *Tools>Driver Manager*.
+. Click on the green plus sign to add a new driver. 
++
+image:{images}/DbVisualizer_Driver_Manager.jpg[height=500,width=500,alt="DbVisualizer Add New Driver"]
++
+* Use the Open File icon and locate the {project-name} JDBC Type-4 Driver:
+`c:\trafodion\jdbct4\lib\jdbcT4.jar` (Windows) or `$HOME/trafodion/jdbct4/lib/jdbcT4.jar` (Linux).
+* Use the JDBC URL format exactly as written below (do not replace the placeholders yet):
++
+```
+jdbc:t4jdbc://<host-name or ip-address>:23400/:
+```
+
+. Close the dialog box to save the settings.
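+
+If you want to sanity-check the driver JAR and URL outside of DbVisualizer, a minimal
+Java sketch such as the one below can help. It assumes the driver class name
+`org.trafodion.jdbc.t4.T4Driver` as well as placeholder host and credentials; compile
+and run it with `jdbcT4.jar` on the classpath.
+
+```
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.Statement;
+
+public class TrafConnCheck {
+    public static void main(String[] args) throws Exception {
+        // Load the Trafodion JDBC Type-4 driver (class name assumed).
+        Class.forName("org.trafodion.jdbc.t4.T4Driver");
+        // Replace host, port, user, and password with your own values.
+        String url = "jdbc:t4jdbc://node01.host.com:23400/:";
+        try (Connection conn = DriverManager.getConnection(url, "user", "password");
+             Statement stmt = conn.createStatement();
+             ResultSet rs = stmt.executeQuery("values(current_timestamp)")) {
+            while (rs.next()) {
+                System.out.println("Connected; server time: " + rs.getString(1));
+            }
+        }
+    }
+}
+```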
+
+=== Create Database Connection
+
+. Select *Database>Create Database Connection*.
+. Click on the *Driver (JDBC)* field. This presents you with a drop-down list of drivers.
+. Select the driver you registered in the step above (`Trafodion`).
+. Right-click the *Database URL* field. Select the URL format that pops up.
++
+The field should be populated with: `jdbc:t4jdbc://<host-name or ip-address>:23400/:`
+
+. Edit the database URL (double-click the field) to match your target host name and port number.
++
+*Example*
++
+```
+jdbc:t4jdbc://node01.host.com:23400/:
+```
+. Add your *Database Userid*.
+. Add your *Database Password*.
++
+image:{images}/Database_Connection_in_DbVisualizer.jpg[height=500,width=500,alt="DbVisualizer Make Connection"]
+
+<<<
+=== Disable Connection Validation Select Option
+
+==== DbVisualizer 9.2 (or a later version)
+
+Set the *Physical Connection* property to keep your connections alive. Follow these steps:
+
+. Open the database connection you created earlier if it is not already open.
+. Select the *Properties* tab.
+. In the left navigation tree, expand the *Generic* connection properties, and select *Physical Connection*.
+. Under *Validation SQL*, enter `values(current_timestamp)` and click *Apply*.
++
+image:{images}/Physical_Connection.jpg[height=500,width=500,alt="DbVisualizer Physical Connection"]
+
+==== DbVisualizer 9.1 (or an earlier version)
+Edit the `_DbVisualizer-Install-Dir_\resources\dbvis-custom.prefs` file and
+disable the `ConnectionValidationSelect` option as shown below:
+
+```
+dbvis.generic.-ConnectionValidationSelect=disabled
+```
+
+=== Connect to {project-name}
+
+. Right click on the database connection.
+. Select *Connect*.
+
+Once connected:
+
+. Select *SQL Commander>New SQL Commander*. (CTRL+T)
+. Browse the tree structure in the left pane under *Connections*.
+. Try a query.
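+
+For example, a statement that does not depend on any user tables (the same one used
+as Validation SQL earlier) is a quick way to confirm that the connection works:
+
+```
+values(current_timestamp);
+```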
+
+

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/asciidoc/_chapters/howto.adoc
----------------------------------------------------------------------
diff --git a/docs/client_install/src/asciidoc/_chapters/howto.adoc b/docs/client_install/src/asciidoc/_chapters/howto.adoc
new file mode 100644
index 0000000..e721255
--- /dev/null
+++ b/docs/client_install/src/asciidoc/_chapters/howto.adoc
@@ -0,0 +1,164 @@
+////
+/**
+* @@@ START COPYRIGHT @@@
+*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*   http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing,
+* software distributed under the License is distributed on an
+* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+* KIND, either express or implied.  See the License for the
+* specific language governing permissions and limitations
+* under the License.
+*
+* @@@ END COPYRIGHT @@@
+*/
+////
+
+= How To
+
+[[howto-setup-path]]
+== Set Up Path Variable
+
+You need to update your PATH variable for Java and the {project-name} clients.
+The examples below show how to add Java to your PATH variable. The process
+is similar for the different {project-name} clients.
+
+NOTE: You typically point the PATH variable to the `bin` directory, if it exists.
+Otherwise, you point the PATH variable to the directory containing the client executable.
+The examples below point to the Java `bin` directory.
+
+[[howto-setup-path-windows]]
+=== Set PATH Variable on Windows
+
+==== Windows 10
+
+. Right-click the Windows icon on the taskbar. Select *System*.
+. Click on *Advanced System Settings*.
+. In the *System Properties* dialog box, click the *Advanced* tab.
+. Click the *Environment Variables* button.
+. Under *System* variables, select the variable named *Path*, and then click *Edit. . .*:
++
+image:{images}/win10_edit_path.jpg[Windows 10 Edit Path Variable]
++
+<<<
+. Click *Browse. . .*. Find the directory where you installed Java or the {project-name} client. Select the `bin` directory as applicable.
++
+image:{images}/win10_select_java.jpg[image]
+
+.  Click *OK* to close the browse window. Click *OK* to close the edit window.
+.  Verify that the updated *Path* appears under *System* variables, and click *OK*.
+.  In the *System Properties* dialog box, click *OK* to accept the changes.
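+
+To confirm the change, open a new Command Prompt (existing windows do not pick up
+the updated variable) and check that the new directory is found; for example:
+
+```
+echo %PATH%
+where java
+```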
+
+<<<
+==== Windows 8
+
+. Right-click the *Computer* icon on your desktop, and then select *Properties*. The *Control Panel > System and Security > System* window appears.
+
+. In the left navigation bar, click the *Advanced* system settings link.
+
+.  In the *System Properties* dialog box, click the *Environment Variables* button.
+
+.  Under *System* variables, select the variable named *Path*, and then click *Edit*:
++
+image:{images}/path2.jpg[image]
+
+.  Place the cursor at the start of the *Variable* value field and
+enter the path of the Java or {project-name} client `bin` directory, ending with a semicolon (;):
++
+image:{images}/varval2.jpg[image]
++
+*Example*
++
+```
+"c:\Program Files (x86)\Java\jre7\bin";
+```
++
+NOTE: Check that no space exists after the semicolon (;) in the path. If there are spaces in the directory name, delimit the entire directory path in double quotes (") before the semicolon.
+
+.  Click *OK*.
+.  Verify that the updated *Path* appears under *System* variables, and click *OK*.
+.  In the *System Properties* dialog box, click *OK* to accept the changes.
+
+For a full installation, a sample PATH variable may contain:
+
+```
+SET PATH=%PATH%;C:\Program Files\Java\jre1.8.0_101\;C:\trafodion\trafci\bin\;
+```
+
+
+[[howto-setup-path-linux]]
+=== Set PATH Variable on Linux
+
+. Open the user profile (`.profile` or `.bash_profile` for the Bash shell) in the `$HOME` directory.
++
+```
+cd $HOME
+vi .profile
+```
+
+. In the user profile, set the `PATH` environment variable to include the path of the Java 
+or {project-name} client `bin` directory. 
++
+```
+export PATH=$PATH:/usr/lib/jvm/java-1.7.0/bin
+```
++
+NOTE: Place the path of the Java bin directory after `$PATH`, separated by a colon (`:`).
+
+.  To activate the changes, either log out and log in again or source the user profile.
++
+```
+source .profile
+```
+
+For a full installation, a sample PATH variable may contain:
+
+```
+export PATH=$PATH:/usr/lib/jvm/java-1.7.0/bin:/opt/user/trafodion/trafci/bin
+```
+
+[[howto-setup-path-verify]]
+=== Verify PATH Variable
+
+Ensure that you can access the Java or {project-name} client
+from the command line.
+
+*Examples*
+
+Display the Java version.
+
+```
+c:\trafodion> java -version
+
+java version "1.7.0_45" # This is the version you need to check
+Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
+Java HotSpot(TM) Client VM (build 24.45-b08, mixed mode, sharing)
+c:\trafodion>
+```
+
+Display the trafci version.
+
+```
+c:\trafodion> trafci -version
+
+Welcome to EsgynDB Enterprise Command Interface
+Copyright (c) 2015-2016 Esgyn Corporation
+
+JDBC Type 4 Driver Build ID : Traf_JDBC_Type4_Build_439d96b
+Command Interface Build ID  : TrafCI_Build_439d96b
+
+c:\trafodion>
+```
+
+If you cannot display the version information, then you need to 
+check your PATH variable settings again.
+

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/asciidoc/_chapters/introduction.adoc
----------------------------------------------------------------------
diff --git a/docs/client_install/src/asciidoc/_chapters/introduction.adoc b/docs/client_install/src/asciidoc/_chapters/introduction.adoc
index 8e9f06d..4b9172e 100644
--- a/docs/client_install/src/asciidoc/_chapters/introduction.adoc
+++ b/docs/client_install/src/asciidoc/_chapters/introduction.adoc
@@ -1,147 +1,70 @@
-////
-/**
-*@@@ START COPYRIGHT @@@
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements. See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*     http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing, software
-* distributed under the License is distributed on an "AS IS" BASIS,
-* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-* See the License for the specific language governing permissions and
-* limitations under the License.
-* @@@ END COPYRIGHT @@@
-*/
-////
-
-[[introduction]]
-= Introduction
-This manual describes how to install and configure the following client applications, which enable you to connect to and use a {project-name}
-database.
-
-== Client Summary
-
-=== JDBC-Based Clients
-
-The following table lists JDBC-based clients supported by {project-name}.
-[cols="30%,70%",options="header"]
-|===
-| Client Name | Description
-| *{project-name} JDBC Type 4 Driver* | A driver that enables Java applications that run on a client workstation to connect to a {project-name} database. +
- +
-*NOTE:* The {project-name} Command Interface (TrafCI), DbVisualizer, and SQuirreL SQL Client require this driver to be installed on the client
-workstation.
-| *{project-name} Command Interface (TrafCI)* | A command-line interface that allows you to connect to a {project-name} database and run SQL statements and other commands interactively or from
-script files. For more information, see the http://trafodion.incubator.apache.org/docs/command_interface/index.html[{project-name} Command Interface Guide].
-| *DbVisualizer* | A third-party database tool that allows you to connect to a {project-name} database. For more information, see the http://www.dbvis.com/[DbVisualizer website].
-| *SQuirreL SQL Client* | A third-party database tool that allows you to connect to a {project-name} database. For more information, see the 
-http://squirrel-sql.sourceforge.net/[SQuirreL SQL Client website].
-|===
-
-=== ODBC-Based Clients
-
-The following table lists ODBC-based clients supported by {project-name}.
-[cols="30%,70%",options="header"]
-|===
-| Client Name | Description
-| *{project-name} ODBC Driver for Linux* | A driver that enables applications, which were developed for the Microsoft ODBC API and run on a Linux workstation, to connect to a
-{project-name} database.
-| *{project-name} ODBC Driver for Windows* | *[Not included in this release]*^1^ +
- +
-A driver that enables applications, which were developed for the Microsoft Open Database Connectivity (ODBC) application programming
-interface (API) and which run on a Windows workstation, to connect to a {project-name} database.
-| *{project-name} odb tool* | A multi-threaded, ODBC-based command-line tool for parallel data loading and extracting. For more information, see the
-{docs-url}/odb/index.html[{project-name} odb User Guide].
-|===
-
-^1^ License issues prevent us from including the ODBC Driver for Windows in this release. Contact
-{project-support} for help obtaining the driver.
-
-<<<
-[[introduction-download]]
-== Download Installation Package
-The {project-name} client software is available from the {download-url}[{project-name} Download] page. There is one
-`{project-name} Clients` package per release listed under *<version> Binaries*.
-
-The `{project-name} Clients` package consists of a zipped tar file that contains the {project-name} Clients tar file. The {project-name} Client
-binaries are located in the Clients folder, which contains the following files:
-
-[cols="30%l,70%", options="header"]
-|===
-| File                             | Usage
-| odbc64_linux.tar.gz              | {project-name} odb tool.
-| TFODBC64-*.exe                   | *[Not included in this release]*^1^ {project-name} ODBC Driver for Windows.
-| TRAF_ODBC_Linux_Driver_64.tar.gz | {project-name} ODBC driver for Linux.
-| trafci.zip                       | The {project-name} command interpreter `trafci`.
-| JDBCT4.zip                       | {project-name} JDBC Type 4 Driver.
-|===
-
-^1^ License issues prevent us from including the ODBC Driver for Windows in this release. Contact 
-{project-support} for help obtaining the driver.
-
-[[introduction-windows-download]]
-=== Windows Download
-
-Do the following:
-
-1.  Create a download folder on the client workstation. For example, `{project-name} Downloads`.
-
-2.  Open a Web browser and navigate to the {project-name} downloads site {download-url}.
-
-3.  Orient yourself to the binaries for the release you're installing.
-Click on the `{project-name} Clients` link to start downloading the {project-name} clients tar file to your workstation.
-
-4.  Place the `apache-trafodion-clients-*.tar.gz` file into the download folder.
-
-5.  Unpack the `apache-trafodion-clients-\*.tar.gz` file using an unzip program of your choice. This creates
-an `apache-trafodion-clients-*.tar` file.
-
-6. Unpack the `apache-trafodion-clients-*.tar` file using an unzip program of your choice. This creates
-the `clients` folder, which has the following content:
-+
-```
-JDBCT4.zip odb64_linux.tar.gz trafci.zip TRAF_ODBC_Linux_Driver_64.tar.gz
-```
-+
-You use these files to install the different {project-name} clients.
-
-[[introduction-linux-download]]
-=== Linux Download
-
-Do the following:
-
-1. Create a download directory on the client workstation. For example, `$HOME/trafodion-downloads`.
-
-2. Open a Web browser and navigate to the {project-name} downloads site {download-url}.
-
-3.  Orient yourself to the binaries for the release you're installing.
-Right-click on the `{project-name} Clients` link and select *Copy link address*.
-
-4.  Go to the download directory on the client workstation and use `wget` to download the client package
-using the link address you copied in step 3 above.
-
-5.  Unpack the `apache-trafodion-clients-*.tar.gz` using `tar`.
-+
-*Example*
-+
-```
-$ mkdir $HOME/trafodion-downloads
-$ cd $HOME/trafodion-downloads
-$ wget <link to package>
-$ tar -xzf apache-trafodion-clients-1.3.0-incubating-bin.tar.gz
-$ cd clients
-$ ls
-JDBCT4.zip  odb64_linux.tar.gz  trafci.zip  TRAF_ODBC_Linux_Driver_64.tar.gz
-$
-```
-+
-You use these files to install the different {project-name} clients.
-
-
-
+////
+/**
+*@@@ START COPYRIGHT @@@
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements. See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+* @@@ END COPYRIGHT @@@
+*/
+////
+
+[[introduction]]
+= Introduction
+This manual describes how to install and configure the following client applications, which enable you to connect to and use a {project-name}
+database.
+
+== Client Summary
+
+=== JDBC-Based Clients
+
+The following table lists JDBC-based clients supported by {project-name}.
+[cols="30%,70%",options="header"]
+|===
+| Client Name | Description
+| *{project-name} JDBC Type 4 Driver* | A driver that enables Java applications that run on a client workstation to connect to a {project-name} database. +
+ +
+*NOTE:* The {project-name} Command Interface (trafci), DbVisualizer, and SQuirreL SQL Client require this driver to be installed on the client
+workstation.
+| *{project-name} Command Interface (trafci)* | A command-line interface that allows you to connect to a {project-name} database and run SQL statements and other commands interactively or from
+script files. For more information, see the http://trafodion.incubator.apache.org/docs/command_interface/index.html[{project-name} Command Interface Guide].
+| *DbVisualizer* | A third-party database tool that allows you to connect to a {project-name} database. For more information, see the http://www.dbvis.com/[DbVisualizer website].
+| *SQuirreL SQL Client* | A third-party database tool that allows you to connect to a {project-name} database. For more information, see the 
+http://squirrel-sql.sourceforge.net/[SQuirreL SQL Client website].
+|===
+
+=== ODBC-Based Clients
+
+The following table lists ODBC-based clients supported by {project-name}.
+[cols="30%,70%",options="header"]
+|===
+| Client Name | Description
+| *{project-name} ODBC Driver for Linux* | A driver that enables applications, which were developed for the Microsoft ODBC API and run on a Linux workstation, to connect to a
+{project-name} database.
+| *{project-name} ODBC Driver for Windows* | *[Not included in this release]*^1^ +
+ +
+A driver that enables applications, which were developed for the Microsoft Open Database Connectivity (ODBC) application programming
+interface (API) and which run on a Windows workstation, to connect to a {project-name} database.
+| *{project-name} odb tool* | A multi-threaded, ODBC-based command-line tool for parallel data loading and extracting. For more information, see the
+{docs-url}/odb/index.html[{project-name} odb User Guide].
+| *Tableau* | An interactive data visualization product focused on business intelligence.
+For more information, see the http://www.tableau.com/[Tableau Software website].
+|===
+
+^1^ License issues prevent us from including the ODBC Driver for Windows in this release. Contact
+{project-support} for help obtaining the driver.
+
+
+
+


[02/15] incubator-trafodion git commit: Major reorganization of the Client Installation Guide.

Posted by gt...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/sql_reference/src/asciidoc/_chapters/sql_utilities.adoc
----------------------------------------------------------------------
diff --git a/docs/sql_reference/src/asciidoc/_chapters/sql_utilities.adoc b/docs/sql_reference/src/asciidoc/_chapters/sql_utilities.adoc
index 248f994..d856f2c 100644
--- a/docs/sql_reference/src/asciidoc/_chapters/sql_utilities.adoc
+++ b/docs/sql_reference/src/asciidoc/_chapters/sql_utilities.adoc
@@ -1,1191 +1,1191 @@
-////
-/**
-* @@@ START COPYRIGHT @@@
-*
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*   http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing,
-* software distributed under the License is distributed on an
-* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-* KIND, either express or implied.  See the License for the
-* specific language governing permissions and limitations
-* under the License.
-*
-* @@@ END COPYRIGHT @@@
-*/
-////
-
-[[sql_utilities]]
-= SQL Utilities
-
-A utility is a tool that runs within {project-name} SQL and performs tasks.
-This section describes the {project-name} SQL utilities:
-
-[cols=","]
-|===
-| <<load_statement,LOAD Statement>>                           | Uses the {project-name} Bulk Loader to load data from a source table, either
-a {project-name} table or a Hive table, into a target {project-name} table.
-| <<populate_index_utility,POPULATE INDEX Utility>>           | Loads indexes.
-| <<purgedata_utility,PURGEDATA Utility>>                     | Purges data from tables and indexes.
-| <<unload_statement,UNLOAD Statement>>                       | Unloads data from {project-name} tables into an HDFS location that you
-specify.
-| <<update_statistics_statement,UPDATE STATISTICS Statement>> | Updates the histogram statistics for one or more groups of columns
-within a table. These statistics are used to devise optimized access plans.
-|===
-
-NOTE: {project-name} SQL utilities are entered interactively or from script
-files using a client-based tool, such as the {project-name} Command Interface
-(TrafCI). To install and configure a client application that enables you
-to connect to and issue SQL utilities, see the
-{docs-url}/client_installation/index.html[_{project-name} Client Installation Guide_].
-
-<<<
-[[load_statement]]
-== LOAD Statement
-
-The LOAD statement uses the {project-name} Bulk Loader to load data from a
-source table, either a {project-name} table or a Hive table, into a target
-{project-name} table. The {project-name} Bulk Loader prepares and loads HFiles
-directly in the region servers and bypasses the write path and the cost
-associated with it. The write path begins at a client, moves to a region
-server, and ends when data eventually is written to an HBase data file
-called an HFile.
-
-The {project-name} bulk load process takes place in the following phases:
-
-* *Disable Indexes* (if incremental index build not used)
-
-* *Prepare* (takes most time, heart of the bulk load operation)
-** Read source files ({project-name} Table, Hive table, or Hive external table)
-** Data encoded in {project-name} encoding
-** Data repartitioned and sorted to match regions of target table
-** Data written to HFiles
-** Data repartitioned and written to index HFiles (if incremental index build IS used)
-
-* *Complete* (with or without Snapshot recovery)
-** Take a snapshot of the table
-** Merge HFiles into HBase table (very fast; a move, not a copy)
-** Delete snapshot or restore from snapshot if merge fails
-
-* *Populate Indexes* (if incremental index build is NOT used)
-
-* *Cleanup*
-** HFiles temporary space cleanup
-
-LOAD is a {project-name} SQL extension.
-
-```
-LOAD [WITH option[[,] option]...] INTO target-table SELECT ... FROM source-table
-
-option is:
-    TRUNCATE TABLE
-  | NO RECOVERY
-  | NO POPULATE INDEXES
-  | NO DUPLICATE CHECK
-  | NO OUTPUT
-  | INDEX TABLE ONLY
-  | UPSERT USING LOAD
-```
-
-[[load_syntax]]
-=== Syntax Description of LOAD
-
-* `_target-table_`
-+
-is the name of the target {project-name} table where the data will be loaded.
-See <<database_object_names,Database Object Names>>.
-
-* `_source-table_`
-+
-is the name of either a {project-name} table or a Hive table that has the
-source data. Hive tables can be accessed in {project-name} using the
-HIVE.HIVE schema (for example, hive.hive.orders). The Hive table needs
-to already exist in Hive before {project-name} can access it. If you want to
-load data that is already in an HDFS folder, then you need to create an
-external Hive table with the right fields and pointing to the HDFS
-folder containing the data. You can also specify a WHERE clause on the
-source data as a filter.
-
-* `[WITH _option_[[,] _option_]&#8230;]`
-+
-is a set of options that you can specify for the load operation. You can
-specify one or more of these options:
-
-** `TRUNCATE TABLE`
-+
-causes the Bulk Loader to truncate the target table before starting the
-load operation. By default, the Bulk Loader does not truncate the target
-table before loading data.
-
-** `NO RECOVERY`
-+
-specifies that the Bulk Loader not use HBase snapshots for recovery. By
-default, the Bulk Loader handles recovery using the HBase snapshots
-mechanism.
-
-<<<
-** `NO POPULATE INDEXES`
-+
-specifies that the Bulk Loader not handle index maintenance or populate
-the indexes. By default, the Bulk Loader handles index maintenance,
-disabling indexes before starting the load operation and populating them
-after the load operation is complete.
-
-** `NO DUPLICATE CHECK`
-+
-causes the Bulk Loader to ignore duplicates in the source data. By
-default, the Bulk Loader checks if there are duplicates in the source
-data and generates an error when it detects duplicates.
-
-** `NO OUTPUT`
-+
-prevents the LOAD statement from displaying status messages. By default,
-the LOAD statement prints status messages listing the steps that the
-Bulk Loader is executing.
-
-** `INDEX TABLE ONLY`
-+
-specifies that the target table, which is an index, be populated with
-data from the parent table.
-
-** `UPSERT USING LOAD`
-+
-specifies that the data be inserted into the target table using row set
-inserts without a transaction.
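-
-The options above can be combined. As a sketch (the table names are placeholders),
-the following statement truncates the target table and suppresses status messages:
-
-```
-LOAD WITH TRUNCATE TABLE, NO OUTPUT
-INTO mytable SELECT * FROM hive.hive.mysource;
-```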
-
-<<<
-[[load_considerations]]
-=== Considerations for LOAD
-
-[[load_required_privileges]]
-==== Required Privileges
-
-To issue a LOAD statement, one of the following must be true:
-
-* You are DB ROOT.
-* You are the owner of the target table.
-* You have these privileges:
-** SELECT and INSERT privileges on the target table
-** DELETE privilege on the target table if TRUNCATE TABLE is specified
-* You have the MANAGE_LOAD component privilege for the SQL_OPERATIONS component.
-
-[[load_configuration_before_running_load]]
-==== Configuration Before Running LOAD
-
-Before running the LOAD statement, make sure that you have configured
-the staging folder, source table, and HBase according to these
-guidelines.
-
-==== Staging Folder for HFiles
-
-The Bulk Loader uses an HDFS folder as a staging area for the HFiles
-before calling HBase APIs to merge them into the {project-name} table.
-
-By default, {project-name} uses /bulkload as the staging folder. This folder
-must be owned by the same user as the one under which {project-name} runs. {project-name}
-also must have full permissions on this folder. The HBase user (that is,
-the user under which HBase runs) must have read/write access to this
-folder.
-
-Example:
-
-```
-drwxr-xr-x - trafodion trafodion 0 2014-07-07 09:49 /bulkload
-```
-
-<<<
-==== Improving Throughput
-
-The following CQD (Control Query Default) settings help improve the Bulk Loader
-throughput:
-
-* `TRAF_LOAD_MAX_HFILE_SIZE`
-+
-Specifies the HFile size limit beyond which the current file is closed and a
-new file is created for the same partition. Adjust this setting to minimize
-HBase splitting/merging.
-+
-*Default*: 10GB
-
-* `TRAF_LOAD_PREP_TMP_LOCATION`
-+
-Specifies the HDFS directory where HFiles are created during load.
-+
-*Default*: `/bulkload`
-
-Also, consider using `NO DUPLICATE CHECK` to improve throughput if your
-source data is clean.
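-
-Such settings are made per session with CONTROL QUERY DEFAULT; for example (the
-staging directory shown is a placeholder):
-
-```
-control query default TRAF_LOAD_PREP_TMP_LOCATION '/bulkload/tmp';
-```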
-
-==== Hive Source Table
-
-To load data stored in HDFS, you need to create a Hive table with
-the right fields and types pointing to the HDFS folder containing the
-data before you start the load.
-
-==== HBase Snapshots
-
-If you do not specify the NO RECOVERY OPTION in the LOAD statement, the
-Bulk Loader uses HBase snapshots as a mechanism for recovery. Snapshots
-are a lightweight operation where some metadata is copied. (Data is not
-copied.)
-
-A snapshot is taken before the load starts and is removed after
-the load completes successfully. If something goes wrong and it is
-possible to recover, the snapshot is used to restore the table to its
-initial state before the load started. To use this recovery mechanism,
-HBase needs to be configured to allow snapshots.
-
-==== Incremental Loads
-
-The Bulk Loader allows for incremental loads by default. Snapshots are
-taken before second phase starts and deleted once the bulk load completes.
-
-If something goes wrong with the load, then the snapshot is restored to
-go to the previous state.
-
-<<<
-==== Non-Incremental Loads
-
-These following bulk load options can be used to do non-incremental load:
-
-* `NO RECOVERY`: Do not take a snapshot of the table.
-* `TRUNCATE TABLE`: Truncates the table before starting the load.
-
-==== Space Usage
-
-The target table values for SYSKEY, SALT, identity, and divisioning columns
-are created automatically during the transformation step. The size of the
-HBase files is determined based on encoding, compression, HDFS replication
-factor, and row format. The target table can be pre-split into regions using
-salting, or by seeding the table with data using a Java program.
-
-==== Performance
-
-The overall throughput is influenced by row format, row length, number of
-columns, skew in data, and so on. LOAD has upsert semantics (the duplicate
-constraint is not checked against existing data). LOAD has lower CPU and disk
-activity than a similar trickle load (INSERT, UPSERT, or UPSERT USING LOAD).
-LOAD also has lower compaction activity after completion than a trickle load.
-
-==== Hive Scans
-
-Direct access for Hive table data supports:
-
-* Only text input format and sequence files.
-* Only structured data types.
-
-Tables must be created/dropped/altered through Hive itself.
-
-{project-name}:
-
-* Reads Hive metadata to determine information about table.
-* UPDATE STATISTICS can be performed on Hive tables, which improves performance.
-* Can write to Hive tables in both Text and Sequence formats (used by UNLOAD).
-
-<<<
-[[load_examples]]
-=== Examples of LOAD
-    
-* For customer demographics data residing in
-`/hive/tpcds/customer_demographics`, create an external Hive table using
-the following Hive SQL:
-+
-```
-create external table customer_demographics
-(
-    cd_demo_sk int
-  , cd_gender string
-  , cd_marital_status string
-  , cd_education_status string
-  , cd_purchase_estimate int
-  , cd_credit_rating string
-  , cd_dep_count int
-  , cd_dep_employed_count int
-  , cd_dep_college_count int
-)
-
-row format delimited fields terminated by '|' location
-'/hive/tpcds/customer_demographics';
-```
-
-* The {project-name} table where you want to load the data is defined using
-this DDL:
-+
-```
-create table customer_demographics_salt
-(
-    cd_demo_sk int not null
-  , cd_gender char(1)
-  , cd_marital_status char(1)
-  , cd_education_status char(20)
-  , cd_purchase_estimate int
-  , cd_credit_rating char(10)
-  , cd_dep_count int
-  , cd_dep_employed_count int
-  , cd_dep_college_count int
-  , primary key (cd_demo_sk)
-)
-salt using 4 partitions on (cd_demo_sk);
-```
-
-* This example shows how the LOAD statement loads the
-customer_demographics_salt table from the Hive table,
-`hive.hive.customer_demographics`:
-+
-```
->>load into customer_demographics_salt
-+>select * from hive.hive.customer_demographics where cd_demo_sk <= 5000;
-Task: LOAD Status: Started Object: TRAFODION.HBASE.CUSTOMER_DEMOGRAPHICS_SALT
-Task: DISABLE INDEX Status: Started Object: TRAFODION.HBASE.CUSTOMER_DEMOGRAPHICS_SALT
-Task: DISABLE INDEX Status: Ended Object: TRAFODION.HBASE.CUSTOMER_DEMOGRAPHICS_SALT
-Task: PREPARATION Status: Started Object: TRAFODION.HBASE.CUSTOMER_DEMOGRAPHICS_SALT
-       Rows Processed: 5000
-Task: PREPARATION Status: Ended ET: 00:00:03.199
-Task: COMPLETION Status: Started Object: TRAFODION.HBASE.CUSTOMER_DEMOGRAPHICS_SALT
-Task: COMPLETION Status: Ended ET: 00:00:00.331
-Task: POPULATE INDEX Status: Started Object: TRAFODION.HBASE.CUSTOMER_DEMOGRAPHICS_SALT
-Task: POPULATE INDEX Status: Ended ET: 00:00:05.262
-```
-
-<<<
-[[populate_index_utility]]
-== POPULATE INDEX Utility
-
-The POPULATE INDEX utility performs a fast INSERT of data into an index
-from the parent table. You can execute this utility in a client-based
-tool like TrafCI.
-
-```
-POPULATE INDEX index ON table [index-option]
-
-index-option is:
-    ONLINE | OFFLINE
-```
-
-[[populate_index_syntax]]
-=== Syntax Description of POPULATE INDEX
-
-* `_index_`
-+
-is an SQL identifier that specifies the simple name for the index. You
-cannot qualify _index_ with its schema name. Indexes have their own
-name space within a schema, so an index name might be the same as a table
-or constraint name. However, no two indexes in a schema can have the
-same name.
-
-* `_table_`
-+
-is the name of the table for which to populate the index. See
-<<database_object_names,Database Object Names>>.
-
-* `ONLINE`
-+
-specifies that the populate operation should be done on-line. That is,
-ONLINE allows read and write DML access on the base table while the
-populate operation occurs. Additionally, ONLINE reads the audit trail to
-replay updates to the base table during the populate phase. If a lot of
-audit is generated and you perform many CREATE INDEX operations, we
-recommend that you avoid ONLINE operations because they can add more
-contention to the audit trail. The default is ONLINE.
-
-* `OFFLINE`
-+
-specifies that the populate should be done off-line. OFFLINE allows only
-read DML access to the base table. The base table is unavailable for
-write operations at this time. OFFLINE must be specified explicitly.
-SELECT is allowed.
-
-<<<
-[[populate_index_considerations]]
-=== Considerations for POPULATE INDEX
-
-When POPULATE INDEX is executed, the following steps occur:
-
-* The POPULATE INDEX operation runs in many transactions.
-* The actual data load operation is run outside of a transaction.
-
-If a failure occurs, the rollback is faster because it does not have to
-process a lot of audit. Also, if a failure occurs, the index remains
-empty, unaudited, and not attached to the base table (off-line).
-
-* When an off-line POPULATE INDEX is being executed, the base table is
-accessible for read DML operations. When an on-line POPULATE INDEX is
-being executed, the base table is accessible for read and write DML
-operations during that time period, except during the commit phase at
-the very end.
-* If the POPULATE INDEX operation fails unexpectedly, you may need to
-drop the index again and re-create and repopulate.
-* On-line POPULATE INDEX reads the audit trail to replay updates by
-allowing read/write access. If you plan to create many indexes in
-parallel or if you have a high level of activity on the audit trail, you
-should consider using the OFFLINE option.
-
-Errors can occur if the source base table or target index cannot be
-accessed, or if the load fails due to some resource problem or problem
-in the file system.
-
-[[populate_index_required_privileges]]
-==== Required Privileges
-
-To perform a POPULATE INDEX operation, one of the following must be
-true:
-
-* You are DB ROOT.
-* You are the owner of the table.
-* You have the SELECT and INSERT (or ALL) privileges on the associated table.
-
-[[populate_index_examples]]
-=== Examples of POPULATE INDEX
-
-* This example loads the specified index from the specified table:
-+
-```
-POPULATE INDEX myindex ON myschema.mytable;
-```
-
-* This example loads the specified index from the specified table, which
-uses the default schema:
-+
-```
-POPULATE INDEX index2 ON table2;
-```
-
-<<<
-[[purgedata_utility]]
-== PURGEDATA Utility
-
-The PURGEDATA utility performs a fast DELETE of data from a table and
-its related indexes. You can execute this utility in a client-based tool
-like TrafCI.
-
-```
-PURGEDATA object
-```
-
-[[purgedata_syntax]]
-=== Syntax Description of PURGEDATA
-
-_object_
-
-is the name of the table from which to purge the data. See
-<<"database object names","Database Object Names">>.
-
-[[purgedata_considerations]]
-=== Considerations for PURGEDATA
-
-* The _object_ can be a table name.
-* Errors are returned if _table_ cannot be accessed or if a resource or
-file-system problem causes the delete to fail.
-* PURGEDATA is not supported for volatile tables.
-
-[[purgedata_required_privileges]]
-==== Required Privileges
-
-To perform a PURGEDATA operation, one of the following must be true:
-
-* You are DB ROOT.
-* You are the owner of the table.
-* You have the SELECT and DELETE (or ALL) privileges on the associated
-table.
-
-[[purgedata_availability]]
-==== Availability
-
-PURGEDATA marks the table OFFLINE and sets the corrupt bit while
-processing. If PURGEDATA fails before it completes, the table and its
-dependent indexes will be unavailable, and you must run PURGEDATA again
-to complete the operation and remove the data. Error 8551 with an
-accompanying file system error 59 or error 1071 is returned in this
-case.
-
-[[purgedata_examples]]
-=== Examples of PURGEDATA
-
-* This example purges the data in the specified table. If the table has
-indexes, their data is also purged.
-+
-```
-PURGEDATA myschema.mytable;
-```
-
-<<<
-[[unload_statement]]
-== UNLOAD Statement
-
-The UNLOAD statement unloads data from {project-name} tables into an HDFS
-location that you specify. Extracted data can be either compressed or
-uncompressed based on what you choose.
-
-UNLOAD is a {project-name} SQL extension.
-
-```
-UNLOAD [WITH option[ option]...] INTO 'target-location' SELECT ... FROM source-table ...
-
-option is:
-    DELIMITER { 'delimiter-string' | delimiter-ascii-value }
-  | RECORD_SEPARATOR { 'separator-literal' | separator-ascii-value }
-  | NULL_STRING 'string-literal'
-  | PURGEDATA FROM TARGET
-  | COMPRESSION GZIP
-  | MERGE FILE merged_file-path [OVERWRITE]
-  | NO OUTPUT
-  | { NEW | EXISTING } SNAPSHOT HAVING SUFFIX 'string'
-```
-
-[[unload_syntax]]
-=== Syntax Description of UNLOAD
-
-* `'_target-location_'`
-+
-is the full pathname of the target HDFS folder where the extracted data
-will be written. Enclose the name of folder in single quotes. Specify
-the folder name as a full pathname and not as a relative path. You must
-have write permissions on the target HDFS folder. If you run UNLOAD in
-parallel, multiple files will be produced under the _target-location_.
-The number of files created will equal the number of ESPs.
-
-* `SELECT &#8230; FROM _source-table_ &#8230;`
-+
-is either a simple query or a complex one that contains GROUP BY, JOIN,
-or UNION clauses. _source-table_ is the name of a {project-name} table that
-has the source data. See <<database_object_names,Database Object Names>>.
-
-* `[WITH _option_[ _option_]&#8230;]`
-+
-is a set of options that you can specify for the unload operation. If
-you specify an option more than once, {project-name} returns an error with
-SQLCODE -4489. You can specify one or more of these options:
-
-** `DELIMITER { '_delimiter-string_' | _delimiter-ascii-value_ }`
-+
-specifies the delimiter as either a delimiter string or an ASCII value.
-If you do not specify this option, {project-name} uses the character "|" as
-the delimiter.
-
-*** _delimiter-string_ can be any ASCII or Unicode string. You can also
-specify the delimiter as an ASCII value. Valid values range from 1 to 255.
-Specify the value in decimal notation; hexadecimal or octal
-notation are currently not supported. If you are using an ASCII value,
-the delimiter can be only one character wide. Do not use quotes when
-specifying an ASCII value for the delimiter.
-
-** `RECORD_SEPARATOR { '_separator-literal_' | _separator-ascii-value_ }`
-+
-specifies the character that will be used to separate consecutive
-records or rows in the output file. You can specify either a literal
-or an ASCII value for the separator. The default value is a newline character.
-
-*** _separator-literal_ can be any ASCII or Unicode character. You can also
-specify the separator as an ASCII value. Valid values range from 1 to 255.
-Specify the value in decimal notation; hexadecimal or octal
-notation are currently not supported. If you are using an ASCII value,
-the separator can be only one character wide. Do not use quotes when
-specifying an ASCII value for the separator.
-
-** `NULL_STRING '_string-literal_'`
-+
-specifies the string that will be used to indicate a NULL value. The
-default value is the empty string ''.
-
-** `PURGEDATA FROM TARGET`
-+
-causes files in the target HDFS folder to be deleted before the unload
-operation.
-
-** `COMPRESSION GZIP`
-+
-uses gzip compression in the extract node, writing the data to disk in
-this compressed format. GZIP is currently the only supported type of
-compression. If you do not specify this option, the extracted data will
-be uncompressed.
-
-** `MERGE FILE _merged_file-path_ [OVERWRITE]`
-+
-merges the unloaded files into one single file in the specified
-_merged-file-path_. If you specify compression, the unloaded data will
-be in compressed format, and the merged file will also be in compressed
-format. If you specify the optional OVERWRITE keyword, the file is
-overwritten if it already exists; otherwise, {project-name} raises an error
-if the file already exists.
-
-** `NO OUTPUT`
-+
-prevents the UNLOAD statement from displaying status messages. By
-default, the UNLOAD statement prints status messages listing the steps
-that the Bulk Unloader is executing.
-
-<<<
-** `{ NEW | EXISTING } SNAPSHOT HAVING SUFFIX '_string_'`
-+
-initiates an HBase snapshot scan during the unload operation. During a
-snapshot scan, the Bulk Unloader will get a list of the {project-name} tables
-from the query explain plan and will create and verify snapshots for the
-tables. Specify a suffix string, '_string_', which will be appended to
-each table name.
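-
-As a sketch (the table and folder names are placeholders), the following statement
-combines the delimiter-related options, using decimal 10 (newline) as the record
-separator:
-
-```
-UNLOAD WITH DELIMITER ',' RECORD_SEPARATOR 10 NULL_STRING 'NULL'
-INTO '/bulkload/customer_extract'
-SELECT * FROM trafodion.hbase.customer;
-```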
-
-[[unload_considerations]]
-=== Considerations for UNLOAD
-
-* You must have write permissions on the target HDFS folder.
-* If a WITH option is specified more than once, {project-name} returns an
-error with SQLCODE -4489.
-
-[[unload_required_privileges]]
-==== Required Privileges
-
-To issue an UNLOAD statement, one of the following must be true:
-
-* You are DB ROOT.
-* You are the owner of the target table.
-* You have the SELECT privilege on the target table.
-* You have the MANAGE_LOAD component privilege for the SQL_OPERATIONS
-component.
-
-[[unload_examples]]
-=== Examples of UNLOAD
-
-* This example shows how the UNLOAD statement extracts data from a
-{project-name} table, `TRAFODION.HBASE.CUSTOMER_DEMOGRAPHICS`, into an HDFS
-folder, `/bulkload/customer_demographics`:
-+
-```
->>UNLOAD
-+>WITH PURGEDATA FROM TARGET
-+>MERGE FILE 'merged_customer_demogs.gz' OVERWRITE
-+>COMPRESSION GZIP
-+>INTO '/bulkload/customer_demographics'
-+>select * from trafodion.hbase.customer_demographics
-+><<+ cardinality 10e10 ,+ cardinality 10e10 >>;
-Task: UNLOAD Status: Started
-Task: EMPTY TARGET Status: Started
-Task: EMPTY TARGET Status: Ended ET: 00:00:00.014
-Task: EXTRACT Status: Started
-       Rows Processed: 200000
-Task: EXTRACT Status: Ended ET: 00:00:04.743 Task: MERGE FILES Status: Started
-Task: MERGE FILES Status: Ended ET: 00:00:00.063
-
---- 200000 row(s) unloaded.
-```
-
-<<<
-[[update_statistics_statement]]
-== UPDATE STATISTICS Statement
-
-The UPDATE STATISTICS statement updates the histogram statistics for one
-or more groups of columns within a table. These statistics are used to
-devise optimized access plans.
-
-UPDATE STATISTICS is a {project-name} SQL extension.
-
-```
-UPDATE STATISTICS FOR TABLE table [CLEAR | on-clause | sample-table-clause ]
-
-on-clause is:
-    ON column-group-list CLEAR
-  | ON column-group-list [histogram-option]...
-  | ON column-group-list INCREMENTAL WHERE predicate
-
-column-group-list is:
-    column-list [,column-list]...
-  | EVERY COLUMN [,column-list]...
-  | EVERY KEY [,column-list]...
-  | EXISTING COLUMN[S] [,column-list]...
-  | NECESSARY COLUMN[S] [,column-list]...
-
-column-list for a single-column group is:
-    column-name
-  | (column-name)
-  | column-name TO column-name
-  | (column-name) TO (column-name)
-  | column-name TO (column-name)
-  | (column-name) TO column-name
-
-column-list for a multicolumn group is:
-    (column-name, column-name [,column-name]...)
-
-histogram-option is:
-    GENERATE n INTERVALS
-  | SAMPLE [sample-option]
-
-sample-option is:
-    [r ROWS]
-  | RANDOM percent PERCENT [PERSISTENT]
-  | PERIODIC size ROWS EVERY period ROWS
-
-sample-table-clause is:
-    CREATE SAMPLE RANDOM percent PERCENT
-  | REMOVE SAMPLE
-```
-
-[[update_statistics_syntax]]
-=== Syntax Description of UPDATE STATISTICS
-
-* `_table_`
-+
-names the table for which statistics are to be updated. To refer to a
-table, use the ANSI logical name.
-See <<database_object_names,Database Object Names>>.
-
-* `CLEAR`
-+
-deletes some or all histograms for the table _table_. Use this option
-when new applications no longer use certain histogram statistics.
-+
-If you do not specify _column-group-list_, all histograms for _table_
-are deleted. If you specify _column-group-list_, only columns in the
-group list are deleted.
-
-* `ON _column-group-list_`
-+
-specifies one or more groups of columns for which to generate histogram
-statistics with the option of clearing the histogram statistics. You
-must use the ON clause to generate statistics stored in histogram
-tables.
-
-* `_column-list_`
-+
-specifies how _column-group-list_ can be defined. The column list
-represents both a single-column group and a multi-column group.
-
-** Single-column group:
-
-*** `_column-name_ | (_column-name_) | _column-name_ TO _column-name_ |
-(_column-name_) TO (_column-name_)`
-+
-specifies how you can specify individual columns or a group of
-individual columns.
-+
-To generate statistics for individual columns, list each column. You can
-list each single column name with or without parentheses.
-
-** Multicolumn group:
-
-*** `(_column-name_, _column-name_ [,_column-name_]&#8230;)`
-+
-specifies a multi-column group.
-+
-To generate multi-column statistics, group a set of columns within
-parentheses, as shown. You cannot specify the name of a column more than
-once in the same group of columns.
-+
-<<<
-+
-One histogram is generated for each unique column group. Duplicate
-groups, meaning any permutation of the same group of columns, are
-ignored and processing continues. When you run UPDATE STATISTICS again
-for the same user table, the new data for that table replaces the data
-previously generated and stored in the table's histogram tables.
-Histograms of column groups not specified in the ON clause remain
-unchanged in histogram tables.
-+
-For more information about specifying columns, see
-<<generating_and_clearing_statistics_for_columns,Generating and Clearing Statistics for Columns>>.
-
-* `EVERY COLUMN`
-+
-The EVERY COLUMN keyword indicates that histogram statistics are to be
-generated for each individual column of _table_ and any multi-columns
-that make up the primary key and indexes. For example, _table_ has
-columns A, B, C, D defined, where A, B, C compose the primary key. In
-this case, the ON EVERY COLUMN option generates a single column
-histogram for columns A, B, C, D, and two multi-column histograms of (A,
-B, C) and (A, B).
-+
-The EVERY COLUMN option does what EVERY KEY does, with additional
-statistics on the individual columns.
-
-* `EVERY KEY`
-+
-The EVERY KEY keyword indicates that histogram statistics are to be
-generated for columns that make up the primary key and indexes. For
-example, _table_ has columns A, B, C, D defined. If the primary key
-comprises columns A, B, statistics are generated for (A, B), A and B. If
-the primary key comprises columns A, B, C, statistics are generated for
-(A,B,C), (A,B), A, B, C. If the primary key comprises columns A, B, C,
-D, statistics are generated for (A, B, C, D), (A, B, C), (A, B), and A,
-B, C, D.
-
-* `EXISTING COLUMN[S]`
-+
-The EXISTING COLUMN keyword indicates that all existing histograms of
-the table are to be updated. Statistics must be previously captured to
-establish existing columns.
-
-* `NECESSARY COLUMN[S]`
-+
-The NECESSARY COLUMN[S] keyword generates statistics for histograms that
-the optimizer has requested but do not exist. Update statistics
-automation must be enabled for NECESSARY COLUMN[S] to generate
-statistics. To enable automation, see <<update_statistics_automating_update_statistics,
-Automating Update Statistics>>.
-
-<<<
-* `_histogram-option_`
-
-** `GENERATE _n_ INTERVALS`
-+
-The GENERATE _n_ INTERVALS option for UPDATE STATISTICS accepts values
-between 1 and 10,000. Keep in mind that increasing the number of
-intervals per histograms may have a negative impact on compile time.
-+
-Increasing the number of intervals can be used for columns with small
-set of possible values and large variance of the frequency of these
-values. For example, consider a column 'CITY' in table SALES, which
-stores the city code where the item was sold, where the number of cities in
-the sales data is 1538. Setting the number of intervals to a number
-greater or equal to the number of cities (that is, setting the number of
-intervals to 1600) guarantees that the generated histogram captures the
-number of rows for each city. If the specified value n exceeds the
-number of unique values in the column, the system generates only as many
-intervals as the number of unique values.
-
-** `SAMPLE [_sample-option_]`
-+
-is a clause that specifies that sampling is to be used to gather a
-subset of the data from the table. UPDATE STATISTICS stores the sample
-results and generates histograms.
-+
-If you specify the SAMPLE clause without additional options, the result
-depends on the number of rows in the table. If the table contains no
-more than 10,000 rows, the entire table will be read (no sampling). If
-the number of rows is greater than 10,000 but less than 1 million,
-10,000 rows are randomly sampled from the table. If there are more than
-1 million rows in the table, a random row sample is used to read 1
-percent of the rows in the table, with a maximum of 1 million rows
-sampled.
-+
-TIP: As a guideline, the default sample of 1 percent of the rows in the
-table, with a maximum of 1 million rows, provides good statistics for
-the optimizer to generate good plans.
-+
-If you do not specify the SAMPLE clause, if the table has fewer rows
-than specified, or if the sample size is greater than the system limit,
-{project-name} SQL reads all rows from _table_. See <<sample_clause,SAMPLE Clause>>.
-
-*** `_sample-option_`
-
-**** `_r_ ROWS`
-+
-A row sample is used to read _r_ rows from the table. The value _r_ must
-be an integer that is greater than zero.
-
-**** `RANDOM _percent_ PERCENT`
-+
-directs {project-name} SQL to choose rows randomly from the table. The value
-percent must be a value between zero and 100 (0 < percent &#60;= 100). In
-addition, only the first four digits to the right of the decimal point
-are significant. For example, the value 0.00001 is considered to be 0.0000,
-and the value 1.23456 is considered to be 1.2345.
-
-***** `PERSISTENT`
-+
-directs {project-name} SQL to create a persistent sample table and store the
-random sample in it. This table can then be used later for updating statistics
-incrementally.
-
-**** `PERIODIC _size_ ROWS EVERY _period_ ROWS`
-+
-directs {project-name} SQL to choose the first _size_ number of rows from
-each _period_ of rows. The value _size_ must be an integer that is
-greater than zero and less than or equal to the value _period_. (0 <
-_size_ &#60;= _period_). The size of the _period_ is defined by the number
-of rows specified for _period_. The value _period_ must be an integer
-that is greater than zero (_period_ > 0).
-
-* `INCREMENTAL WHERE _predicate_`
-+
-directs {project-name} SQL to update statistics incrementally. That is, instead
-of taking a fresh sample of the entire table, {project-name} SQL will use a previously
-created persistent sample table. {project-name} SQL will update the persistent sample
-by replacing any rows satisfying the _predicate_ with a fresh sample of rows from
-the original table satisfying the _predicate_. The sampling rate used is the
-_percent_ specified when the persistent sample table was created. Statistics
-are then generated from this updated sample. See also
-<<update_statistics_incremental_update_statistics,
-Incremental Update Statistics>>.
-
-* `CREATE SAMPLE RANDOM _percent_ PERCENT`
-+
-Creates a persistent sample table associated with this table. The sample is
-created using a random sample of _percent_ percent of the rows. The table
-can then be used for later incremental statistics update.
-
-* `REMOVE SAMPLE`
-+
-Drops the persistent sample table associated with this table.
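-
-Taken together, a typical incremental flow might look like the following sketch
-(the table, column, and predicate are placeholders):
-
-```
-UPDATE STATISTICS FOR TABLE sales CREATE SAMPLE RANDOM 5 PERCENT;
-UPDATE STATISTICS FOR TABLE sales ON (region)
-INCREMENTAL WHERE sale_date > DATE '2016-01-01';
-UPDATE STATISTICS FOR TABLE sales REMOVE SAMPLE;
-```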
-
-[[update_statistics_considerations]]
-=== Considerations for UPDATE STATISTICS
-
-[[update_statistics_using_statistics]]
-==== Using Statistics
-
-Use UPDATE STATISTICS to collect and save statistics on columns. The SQL
-compiler uses histogram statistics to determine the selectivity of
-predicates, indexes, and tables. Because selectivity directly influences
-the cost of access plans, regular collection of statistics increases the
-likelihood that {project-name} SQL chooses efficient access plans.
-
-While UPDATE STATISTICS is running on a table, the table is active and
-available for query access.
-
-When a user table is changed, either by changing its data significantly
-or its definition, re-execute the UPDATE STATISTICS statement for the
-table.
-
-<<<
-[[update_statistics_histogram_statistics]]
-==== Histogram Statistics
-
-Histogram statistics are used by the compiler to produce the best plan
-for a given SQL query. When histograms are not available, default
-assumptions are made by the compiler and the resultant plan might not
-perform well. Histograms that reflect the latest data in a table are
-optimal.
-
-The compiler does not need histogram statistics for every column of a
-table. For example, if a column is only in the select list, its
-histogram statistics will be irrelevant. A histogram statistic is useful
-when a column appears in:
-
-* A predicate
-* A GROUP BY column
-* An ORDER BY clause
-* A HAVING clause
-* Or similar clause
-
-In addition to single-column histogram statistics, the compiler needs
-multi-column histogram statistics, such as when group by column-5,
-column-3, column-19 appears in a query. Then, histogram statistics for
-the combination (column-5, column-3, column-19) are needed.
-
-[[update_statistics_required-privileges]]
-==== Required Privileges
-
-To perform an UPDATE STATISTICS operation, one of the following must be
-true:
-
-* You are DB ROOT.
-* You are the owner of the target table.
-* You have the MANAGE_STATISTICS component privilege for the
-SQL_OPERATIONS component.
-
-[[update_statistics_locking]]
-==== Locking
-
-UPDATE STATISTICS momentarily locks the definition of the user table
-during the operation but not the user table itself. The UPDATE
-STATISTICS statement uses READ UNCOMMITTED isolation level for the user
-table.
-
-<<<
-[[update_statistics_transactions]]
-==== Transactions
-
-Do not start a transaction before executing UPDATE STATISTICS. UPDATE
-STATISTICS runs multiple transactions of its own, as needed. Starting
-your own transaction in which UPDATE STATISTICS runs could cause the
-transaction auto abort time to be exceeded during processing.
-
-[[update_statistics_generating_and_clearing_statistics_for_columns]]
-==== Generating and Clearing Statistics for Columns
-
-To generate statistics for particular columns, name each column, or name
-the first and last columns of a sequence of columns in the table. For
-example, suppose that a table has consecutive columns CITY, STATE, ZIP.
-This list gives a few examples of possible options you can specify:
-
-[cols="25%,37%,37%",options="header"]
-|===
-| Single-Column Group   | Single-Column Group Within Parentheses | Multicolumn Group
-| ON CITY, STATE, ZIP   | ON (CITY),(STATE),(ZIP)                | ON (CITY, STATE) or ON (CITY,STATE,ZIP)
-| ON CITY TO ZIP        | ON (CITY) TO (ZIP)                     |
-| ON ZIP TO CITY        | ON (ZIP) TO (CITY)                     |
-| ON CITY, STATE TO ZIP | ON (CITY), (STATE) TO (ZIP)            |
-| ON CITY TO STATE, ZIP | ON (CITY) TO (STATE), (ZIP)            |
-|===
-
-The TO specification is useful when a table has many columns, and you
-want histograms on a subset of columns. Do not confuse (CITY) TO (ZIP)
-with (CITY, STATE, ZIP), which refers to a multi-column histogram.
-
-You can clear statistics in any combination of columns you specify, not
-necessarily with the _column-group-list_ you used to create statistics.
-However, those statistics will remain until you clear them.
-
-<<<
-[[update_statistics_column_lists_and_access_plans]]
-==== Column Lists and Access Plans
-
-Generate statistics for columns most often used in data access plans for
-a table, that is, the primary key, indexes defined on the table, and any
-other columns frequently referenced in predicates in WHERE or GROUP BY
-clauses of queries issued on the table. Use the EVERY COLUMN option to
-generate histograms for every individual column and for the multicolumn
-groups that make up the primary key and indexes.
-
-The EVERY KEY option generates histograms only for the columns and
-multicolumn groups that make up the primary key and indexes.
-
-If you often perform a GROUP BY over specific columns in a table, use
-multi-column lists in the UPDATE STATISTICS statement (consisting of the
-columns in the GROUP BY clause) to generate histogram statistics that
-enable the optimizer to choose a better plan. Similarly, when a query
-joins two tables by two or more columns, multi-column lists (consisting
-of the columns being joined) help the optimizer choose a better plan.
-
-[[update_statistics_automating_update_statistics]]
-==== Automating Update Statistics
-
-To enable update statistics automation, set the Control Query Default
-(CQD) attribute, USTAT_AUTOMATION_INTERVAL, in a session where you will
-run update statistics operations. For example:
-
-```
-control query default USTAT_AUTOMATION_INTERVAL '1440';
-```
-
-The value of USTAT_AUTOMATION_INTERVAL is intended to be an automation
-interval (in minutes), but, in {project-name} Release 1.0, this value does
-not act as a timing interval. Instead, any value greater than zero
-enables update statistics automation.
-
-After enabling update statistics automation, prepare each of the queries
-that you want to optimize. For example:
-
-```
-prepare s from select...;
-```
-
-The PREPARE statement causes the {project-name} SQL compiler to compile and
-optimize a query without executing it. When preparing queries with
-update statistic automation enabled, any histograms needed by the
-optimizer that are not present will cause those columns to be marked as
-needing histograms.
-
-Next, run this UPDATE STATISTICS statement against each table, using ON
-NECESSARY COLUMN[S] to generate the needed histograms:
-
-```
-update statistics for table _table-name_ on necessary columns sample;
-```
-
-[[update_statistics_incremental_update_statistics]]
-==== Incremental Update Statistics
-
-UPDATE STATISTICS processing time can be lengthy for very large tables.
-One strategy for reducing the time is to create histograms only for
-columns that actually need them (for example, using the ON NECESSARY COLUMNS 
-column group). Another strategy is to update statistics incrementally. These
-strategies can be used together if desired.
-
-To use the incremental update statistics feature, you must first create
-statistics for the table and create a persistent sample table. One way to
-do this is to perform a normal update statistics command, adding the
-PERSISTENT keyword to the _sample-option_. Another way to do this if you
-already have reasonably up-to-date statistics for the table, is to create
-a persistent sample table separately using the CREATE SAMPLE option.
-
-You can then perform update statistics incrementally by using the INCREMENTAL
-WHERE _predicate_ syntax in the on-clause. The _predicate_ should be chosen
-to describe the set of rows that have changed since the last statistics update
-was performed. For example, if your table contains a column with a timestamp
-giving the date and time of last update, this is a particularly useful column
-to use in the _predicate_.
-
-If you decide later that you wish to change the _percent_ sampling rate used
-for the persistent sample table, you can do so by dropping the persistent
-sample table (using REMOVE SAMPLE) and creating a new one (by using the
-CREATE SAMPLE option). Using a higher _percent_ results in more accurate
-histograms, but at the price of a longer-running operation.
-
-<<<
-[[update_statistics_examples]]
-=== Examples of UPDATE STATISTICS
-
-* This example generates four histograms for the columns jobcode,
-empnum, deptnum, and (empnum, deptnum) for the table EMPLOYEE. Depending
-on the table's size and data distribution, each histogram should contain
-ten intervals.
-+
-```
-UPDATE STATISTICS FOR TABLE employee
-ON (jobcode),(empnum, deptnum) GENERATE 10 INTERVALS;
-
---- SQL operation complete.
-```
-
-* This example generates histogram statistics using the ON EVERY COLUMN
-option for the table DEPT. This statement performs a full scan, and
-{project-name} SQL determines the default number of intervals.
-+
-```
-UPDATE STATISTICS FOR TABLE dept ON EVERY COLUMN;
-
---- SQL operation complete.
-```
-
-* Suppose that a construction company has an ADDRESS table of potential
-sites and a DEMOLITION_SITES table that contains some of the columns of
-the ADDRESS table. The primary key is ZIP. Join these two tables on two
-of the columns in common:
-+
-```
-SELECT COUNT(AD.number), AD.street,
-       AD.city, AD.zip, AD.state
-FROM address AD, demolition_sites DS
-WHERE AD.zip = DS.zip AND AD.type = DS.type
-GROUP BY AD.street, AD.city, AD.zip, AD.state;
-```
-+
-To generate statistics specific to this query, enter these statements:
-+
-```
-UPDATE STATISTICS FOR TABLE address
-ON (street), (city), (state), (zip, type);
-
-UPDATE STATISTICS FOR TABLE demolition_sites ON (zip, type);
-```
-
-* This example removes all histograms for table DEMOLITION_SITES:
-+
-```
-UPDATE STATISTICS FOR TABLE demolition_sites CLEAR;
-```
-
-<<<
-* This example selectively removes the histogram for column STREET in
-table ADDRESS:
-+
-```
-UPDATE STATISTICS FOR TABLE address ON street CLEAR;
-```
-
-* This example generates statistics but also creates a persistent 
-sample table for use when updating statistics incrementally:
-+
-```
-UPDATE STATISTICS FOR TABLE address
-ON (street), (city), (state), (zip, type)
-SAMPLE RANDOM 5 PERCENT PERSISTENT;
-```
-
-* This example updates statistics incrementally. It assumes that
-a persistent sample table has already been created. The predicate
-in the WHERE clause describes the set of rows that have changed
-since statistics were last updated. Here we assume that rows
-with a state of California are the only rows that have changed:
-+
-```
-UPDATE STATISTICS FOR TABLE address
-ON EXISTING COLUMNS
-INCREMENTAL WHERE state = 'CA';
-```
+////
+/**
+* @@@ START COPYRIGHT @@@
+*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*   http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing,
+* software distributed under the License is distributed on an
+* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+* KIND, either express or implied.  See the License for the
+* specific language governing permissions and limitations
+* under the License.
+*
+* @@@ END COPYRIGHT @@@
+*/
+////
+
+[[sql_utilities]]
+= SQL Utilities
+
+A utility is a tool that runs within {project-name} SQL and performs tasks.
+This section describes the {project-name} SQL utilities:
+
+[cols=","]
+|===
+| <<load_statement,LOAD Statement>>                           | Uses the {project-name} Bulk Loader to load data from a source table, either
+a {project-name} table or a Hive table, into a target {project-name} table.
+| <<populate_index_utility,POPULATE INDEX Utility>>           | Loads indexes.
+| <<purgedata_utility,PURGEDATA Utility>>                     | Purges data from tables and indexes.
+| <<unload_statement,UNLOAD Statement>>                       | Unloads data from {project-name} tables into an HDFS location that you
+specify.
+| <<update_statistics_statement,UPDATE STATISTICS Statement>> | Updates the histogram statistics for one or more groups of columns
+within a table. These statistics are used to devise optimized access plans.
+|===
+
+NOTE: {project-name} SQL utilities are entered interactively or from script
+files using a client-based tool, such as the {project-name} Command Interface
+(TrafCI). To install and configure a client application that enables you
+to connect to and issue SQL utilities, see the
+{docs-url}/client_installation/index.html[_{project-name} Client Installation Guide_].
+
+<<<
+[[load_statement]]
+== LOAD Statement
+
+The LOAD statement uses the {project-name} Bulk Loader to load data from a
+source table, either a {project-name} table or a Hive table, into a target
+{project-name} table. The {project-name} Bulk Loader prepares and loads HFiles
+directly in the region servers and bypasses the write path and the cost
+associated with it. The write path begins at a client, moves to a region
+server, and ends when data eventually is written to an HBase data file
+called an HFile.
+
+The {project-name} bulk load process takes place in the following phases:
+
+* *Disable Indexes* (if incremental index build is not used)
+
+* *Prepare* (takes most time, heart of the bulk load operation)
+** Read source files ({project-name} Table, Hive table, or Hive external table)
+** Data encoded in {project-name} encoding
+** Data repartitioned and sorted to match regions of target table
+** Data written to HFiles
+** Data repartitioned and written to index HFiles (if incremental index build IS used)
+
+* *Complete* (with or without Snapshot recovery)
+** Take a snapshot of the table
+** Merge HFiles into HBase table (very fast; a move, not a copy)
+** Delete snapshot or restore from snapshot if merge fails
+
+* *Populate Indexes* (if incremental index build is NOT used)
+
+* *Cleanup*
+** HFiles temporary space cleanup
+
+LOAD is a {project-name} SQL extension.
+
+```
+LOAD [WITH option[[,] option]...] INTO target-table SELECT ... FROM source-table
+
+option is:
+    TRUNCATE TABLE
+  | NO RECOVERY
+  | NO POPULATE INDEXES
+  | NO DUPLICATE CHECK
+  | NO OUTPUT
+  | INDEX TABLE ONLY
+  | UPSERT USING LOAD
+```
+
+[[load_syntax]]
+=== Syntax Description of LOAD
+
+* `_target-table_`
++
+is the name of the target {project-name} table where the data will be loaded.
+See <<database_object_names,Database Object Names>>.
+
+* `_source-table_`
++
+is the name of either a {project-name} table or a Hive table that has the
+source data. Hive tables can be accessed in {project-name} using the
+HIVE.HIVE schema (for example, hive.hive.orders). The Hive table needs
+to already exist in Hive before {project-name} can access it. If you want to
+load data that is already in an HDFS folder, then you need to create an
+external Hive table with the right fields that points to the HDFS
+folder containing the data. You can also specify a WHERE clause on the
+source data as a filter.
+
+* `[WITH _option_[[,] _option_]&#8230;]`
++
+is a set of options that you can specify for the load operation. You can
+specify one or more of these options (a combined example follows this
+list):
+
+** `TRUNCATE TABLE`
++
+causes the Bulk Loader to truncate the target table before starting the
+load operation. By default, the Bulk Loader does not truncate the target
+table before loading data.
+
+** `NO RECOVERY`
++
+specifies that the Bulk Loader not use HBase snapshots for recovery. By
+default, the Bulk Loader handles recovery using the HBase snapshots
+mechanism.
+
+<<<
+** `NO POPULATE INDEXES`
++
+specifies that the Bulk Loader not handle index maintenance or populate
+the indexes. By default, the Bulk Loader handles index maintenance,
+disabling indexes before starting the load operation and populating them
+after the load operation is complete.
+
+** `NO DUPLICATE CHECK`
++
+causes the Bulk Loader to ignore duplicates in the source data. By
+default, the Bulk Loader checks if there are duplicates in the source
+data and generates an error when it detects duplicates.
+
+** `NO OUTPUT`
++
+prevents the LOAD statement from displaying status messages. By default,
+the LOAD statement prints status messages listing the steps that the
+Bulk Loader is executing.
+
+** `INDEX TABLE ONLY`
++
+specifies that the target table, which is an index, be populated with
+data from the parent table.
+
+** `UPSERT USING LOAD`
++
+specifies that the data be inserted into the target table using row set
+inserts without a transaction.
+
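+As a combined example (the table names here are hypothetical), the
+following sketch truncates the target and skips the duplicate check
+while loading from a Hive staging table:
+
+```
+LOAD WITH TRUNCATE TABLE, NO DUPLICATE CHECK
+INTO trafodion.sales.orders
+SELECT * FROM hive.hive.orders_staging;
+```
+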
+<<<
+[[load_considerations]]
+=== Considerations for LOAD
+
+[[load_required_privileges]]
+==== Required Privileges
+
+To issue a LOAD statement, one of the following must be true:
+
+* You are DB ROOT.
+* You are the owner of the target table.
+* You have these privileges:
+** SELECT and INSERT privileges on the target table
+** DELETE privilege on the target table if TRUNCATE TABLE is specified
+* You have the MANAGE_LOAD component privilege for the SQL_OPERATIONS component.
+
+[[load_configuration_before_running_load]]
+==== Configuration Before Running LOAD
+
+Before running the LOAD statement, make sure that you have configured
+the staging folder, source table, and HBase according to these
+guidelines.
+
+==== Staging Folder for HFiles
+
+The Bulk Loader uses an HDFS folder as a staging area for the HFiles
+before calling HBase APIs to merge them into the {project-name} table.
+
+By default, {project-name} uses /bulkload as the staging folder. This folder
+must be owned by the same user as the one under which {project-name} runs. {project-name}
+also must have full permissions on this folder. The HBase user (that is,
+the user under which HBase runs) must have read/write access to this
+folder.
+
+Example:
+
+```
+drwxr-xr-x - trafodion trafodion 0 2014-07-07 09:49 /bulkload
+```
+
+<<<
+==== Improving Throughput
+
+The following CQD (Control Query Default) settings help improve the Bulk Loader
+throughput:
+
+* `TRAF_LOAD_MAX_HFILE_SIZE`
++
+Specifies the HFile size limit beyond which the current file is closed and a
+new file is created for the same partition. Adjust this setting to minimize
+HBase splitting/merging.
++
+*Default*: 10GB
+
+* `TRAF_LOAD_PREP_TMP_LOCATION`
++
+Specifies the HDFS directory where HFiles are created during load.
++
+*Default*: `/bulkload`
+
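+For example, to point HFile creation at a different staging directory
+(the path shown is hypothetical), set the CQD as follows:
+
+```
+control query default TRAF_LOAD_PREP_TMP_LOCATION '/bulkload/tmp';
+```
+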
+Also, consider using `NO DUPLICATE CHECK` to improve throughput if your
+source data is clean.
+
+==== Hive Source Table
+
+To load data stored in HDFS, you need to create a Hive table with
+the right fields and types that points to the HDFS folder containing the
+data before you start the load.
+
+==== HBase Snapshots
+
+If you do not specify the NO RECOVERY option in the LOAD statement, the
+Bulk Loader uses HBase snapshots as a mechanism for recovery. Snapshots
+are a lightweight operation where some metadata is copied. (Data is not
+copied.)
+
+A snapshot is taken before the load starts and is removed after
+the load completes successfully. If something goes wrong and it is
+possible to recover, the snapshot is used to restore the table to its
+initial state before the load started. To use this recovery mechanism,
+HBase needs to be configured to allow snapshots.
+
+==== Incremental Loads
+
+The Bulk Loader allows for incremental loads by default. A snapshot is
+taken before the second phase starts and deleted once the bulk load
+completes.
+
+If something goes wrong with the load, the table is restored from the
+snapshot to its previous state.
+
+<<<
+==== Non-Incremental Loads
+
+The following bulk load options can be used to perform a non-incremental load:
+
+* `NO RECOVERY`: Do not take a snapshot of the table.
+* `TRUNCATE TABLE`: Truncates the table before starting the load.
+
+==== Space Usage
+
+The target table values for SYSKEY, SALT, identity, and divisioning columns
+are created automatically during the transformation step. The size of the
+HBase files is determined by encoding, compression, HDFS replication
+factor, and row format. The target table can be pre-split into regions by
+using salting, by using a Java program, or by seeding the table with data.
+
+==== Performance
+
+The overall throughput is influenced by row format, row length, number of
+columns, skew in the data, and so on. LOAD has upsert semantics (duplicate
+constraints are not checked against existing data). LOAD has lower CPU and
+disk activity than a similar trickle load (INSERT, UPSERT, or UPSERT USING
+LOAD). LOAD also has lower compaction activity after completion than a
+trickle load.
+
+==== Hive Scans
+
+Direct access for Hive table data supports:
+
+* Only text input format and sequence files.
+* Only structured data types.
+
+Tables must be created, dropped, and altered through Hive itself.
+
+{project-name}:
+
+* Reads Hive metadata to determine information about the table.
+* Supports UPDATE STATISTICS on Hive tables, which improves performance
+(see the sketch below).
+* Can write to Hive tables in both Text and Sequence formats (used by UNLOAD).
+
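+As a sketch, assuming a Hive table named hive.hive.orders is visible to
+{project-name}, statistics could be collected on it as follows:
+
+```
+UPDATE STATISTICS FOR TABLE hive.hive.orders ON EVERY COLUMN;
+```
+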
+<<<
+[[load_examples]]
+=== Examples of LOAD
+    
+* For customer demographics data residing in
+`/hive/tpcds/customer_demographics`, create an external Hive table using
+the following Hive SQL:
++
+```
+create external table customer_demographics
+(
+    cd_demo_sk int
+  , cd_gender string
+  , cd_marital_status string
+  , cd_education_status string
+  , cd_purchase_estimate int
+  , cd_credit_rating string
+  , cd_dep_count int
+  , cd_dep_employed_count int
+  , cd_dep_college_count int
+)
+
+row format delimited fields terminated by '|' location
+'/hive/tpcds/customer_demographics';
+```
+
+* The {project-name} table where you want to load the data is defined using
+this DDL:
++
+```
+create table customer_demographics_salt
+(
+    cd_demo_sk int not null
+  , cd_gender char(1)
+  , cd_marital_status char(1)
+  , cd_education_status char(20)
+  , cd_purchase_estimate int
+  , cd_credit_rating char(10)
+  , cd_dep_count int
+  , cd_dep_employed_count int
+  , cd_dep_college_count int
+  , primary key (cd_demo_sk)
+)
+salt using 4 partitions on (cd_demo_sk);
+```
+
+* This example shows how the LOAD statement loads the
+customer_demographics_salt table from the Hive table,
+`hive.hive.customer_demographics`:
++
+```
+>>load into customer_demographics_salt
++>select * from hive.hive.customer_demographics where cd_demo_sk <= 5000;
+Task: LOAD Status: Started Object: TRAFODION.HBASE.CUSTOMER_DEMOGRAPHICS_SALT
+Task: DISABLE INDEX Status: Started Object: TRAFODION.HBASE.CUSTOMER_DEMOGRAPHICS_SALT
+Task: DISABLE INDEX Status: Ended Object: TRAFODION.HBASE.CUSTOMER_DEMOGRAPHICS_SALT
+Task: PREPARATION Status: Started Object: TRAFODION.HBASE.CUSTOMER_DEMOGRAPHICS_SALT
+       Rows Processed: 5000
+Task: PREPARATION Status: Ended ET: 00:00:03.199
+Task: COMPLETION Status: Started Object: TRAFODION.HBASE.CUSTOMER_DEMOGRAPHICS_SALT
+Task: COMPLETION Status: Ended ET: 00:00:00.331
+Task: POPULATE INDEX Status: Started Object: TRAFODION.HBASE.CUSTOMER_DEMOGRAPHICS_SALT
+Task: POPULATE INDEX Status: Ended ET: 00:00:05.262
+```
+
+<<<
+[[populate_index_utility]]
+== POPULATE INDEX Utility
+
+The POPULATE INDEX utility performs a fast INSERT of data into an index
+from the parent table. You can execute this utility in a client-based
+tool like TrafCI.
+
+```
+POPULATE INDEX index ON table [index-option]
+
+index-option is:
+    ONLINE | OFFLINE
+```
+
+[[populate_index_syntax]]
+=== Syntax Description of POPULATE INDEX
+
+* `_index_`
++
+is an SQL identifier that specifies the simple name for the index. You
+cannot qualify _index_ with its schema name. Indexes have their own
+name space within a schema, so an index name might be the same as a table
+or constraint name. However, no two indexes in a schema can have the
+same name.
+
+* `_table_`
++
+is the name of the table for which to populate the index. See
+<<database_object_names,Database Object Names>>.
+
+* `ONLINE`
++
+specifies that the populate operation should be done on-line. That is,
+ONLINE allows read and write DML access on the base table while the
+populate operation occurs. Additionally, ONLINE reads the audit trail to
+replay updates to the base table during the populate phase. If a lot of
+audit is generated and you perform many CREATE INDEX operations, we
+recommend that you avoid ONLINE operations because they can add more
+contention to the audit trail. The default is ONLINE.
+
+* `OFFLINE`
++
+specifies that the populate should be done off-line. OFFLINE allows only
+read DML access to the base table. The base table is unavailable for
+write operations at this time. OFFLINE must be specified explicitly.
+SELECT is allowed.
+
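+For example, to populate an index while allowing only read access to the
+base table (the index and table names are hypothetical):
+
+```
+POPULATE INDEX myindex ON myschema.mytable OFFLINE;
+```
+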
+<<<
+[[populate_index_considerations]]
+=== Considerations for POPULATE INDEX
+
+When POPULATE INDEX is executed, the following steps occur:
+
+* The POPULATE INDEX operation runs in many transactions.
+* The actual data load operation is run outside of a transaction.
+
+If a failure occurs, the rollback is faster because it does not have to
+process a lot of audit. Also, if a failure occurs, the index remains
+empty, unaudited, and not attached to the base table (off-line).
+
+* When an off-line POPULATE INDEX is being executed, the base table is
+accessible for read DML operations. When an on-line POPULATE INDEX is
+being executed, the base table is accessible for read and write DML
+operations during that time period, except during the commit phase at
+the very end.
+* If the POPULATE INDEX operation fails unexpectedly, you may need to
+drop the index, re-create it, and repopulate it.
+* On-line POPULATE INDEX reads the audit trail to replay updates by
+allowing read/write access. If you plan to create many indexes in
+parallel or if you have a high level of activity on the audit trail, you
+should consider using the OFFLINE option.
+
+Errors can occur if the source base table or target index cannot be
+accessed, or if the load fails due to some resource problem or problem
+in the file system.
+
+[[populate_index_required_privileges]]
+==== Required Privileges
+
+To perform a POPULATE INDEX operation, one of the following must be
+true:
+
+* You are DB ROOT.
+* You are the owner of the table.
+* You have the SELECT and INSERT (or ALL) privileges on the associated table.
+
+[[populate_index_examples]]
+=== Examples of POPULATE INDEX
+
+* This example loads the specified index from the specified table:
++
+```
+POPULATE INDEX myindex ON myschema.mytable;
+```
+
+* This example loads the specified index from the specified table, which
+uses the default schema:
++
+```
+POPULATE INDEX index2 ON table2;
+```
+
+<<<
+[[purgedata_utility]]
+== PURGEDATA Utility
+
+The PURGEDATA utility performs a fast DELETE of data from a table and
+its related indexes. You can execute this utility in a client-based tool
+like TrafCI.
+
+```
+PURGEDATA object
+```
+
+[[purgedata_syntax]]
+=== Syntax Description of PURGEDATA
+
+* `_object_`
++
+is the name of the table from which to purge the data. See
+<<database_object_names,Database Object Names>>.
+
+[[purgedata_considerations]]
+=== Considerations for PURGEDATA
+
+* The _object_ can be a table name.
+* Errors are returned if the _object_ cannot be accessed or if a resource or
+file-system problem causes the delete to fail.
+* PURGEDATA is not supported for volatile tables.
+
+[[purgedata_required_privileges]]
+==== Required Privileges
+
+To perform a PURGEDATA operation, one of the following must be true:
+
+* You are DB ROOT.
+* You are the owner of the table.
+* You have the SELECT and DELETE (or ALL) privileges on the associated
+table.
+
+[[purgedata_availability]]
+==== Availability
+
+PURGEDATA marks the table OFFLINE and sets the corrupt bit while
+processing. If PURGEDATA fails before it completes, the table and its
+dependent indexes will be unavailable, and you must run PURGEDATA again
+to complete the operation and remove the data. Error 8551 with an
+accompanying file system error 59 or error 1071 is returned in this
+case.
+
+[[purgedata_examples]]
+=== Examples of PURGEDATA
+
+* This example purges the data in the specified table. If the table has
+indexes, their data is also purged.
++
+```
+PURGEDATA myschema.mytable;
+```
+
+<<<
+[[unload_statement]]
+== UNLOAD Statement
+
+The UNLOAD statement unloads data from {project-name} tables into an HDFS
+location that you specify. Extracted data can be either compressed or
+uncompressed based on what you choose.
+
+UNLOAD is a {project-name} SQL extension.
+
+```
+UNLOAD [WITH option[ option]...] INTO 'target-location' SELECT ... FROM source-table ...
+
+option is:
+    DELIMITER { 'delimiter-string' | delimiter-ascii-value }
+  | RECORD_SEPARATOR { 'separator-literal' | separator-ascii-value }
+  | NULL_STRING 'string-literal'
+  | PURGEDATA FROM TARGET
+  | COMPRESSION GZIP
+  | MERGE FILE merged-file-path [OVERWRITE]
+  | NO OUTPUT
+  | { NEW | EXISTING } SNAPSHOT HAVING SUFFIX 'string'
+```
+
+[[unload_syntax]]
+=== Syntax Description of UNLOAD
+
+* `'_target-location_'`
++
+is the full pathname of the target HDFS folder where the extracted data
+will be written. Enclose the name of folder in single quotes. Specify
+the folder name as a full pathname and not as a relative path. You must
+have write permissions on the target HDFS folder. If you run UNLOAD in
+parallel, multiple files will be produced under the _target-location_.
+The number of files created will equal the number of ESPs.
+
+* `SELECT &#8230; FROM _source-table_ &#8230;`
++
+is either a simple query or a complex one that contains GROUP BY, JOIN,
+or UNION clauses. _source-table_ is the name of a {project-name} table that
+has the source data. See <<database_object_names,Database Object Names>>.
+
+* `[WITH _option_[ _option_]&#8230;]`
++
+is a set of options that you can specify for the unload operation. If
+you specify an option more than once, {project-name} returns an error with
+SQLCODE -4489. You can specify one or more of these options (a combined
+example follows this list):
+
+** `DELIMITER { '_delimiter-string_' | _delimiter-ascii-value_ }`
++
+specifies the delimiter as either a delimiter string or an ASCII value.
+If you do not specify this option, {project-name} uses the character "|" as
+the delimiter.
+
+*** _delimiter-string_ can be any ASCII or Unicode string. You can also
+specify the delimiter as an ASCII value. Valid values range from 1 to 255.
+Specify the value in decimal notation; hexadecimal or octal
+notation are currently not supported. If you are using an ASCII value,
+the delimiter can be only one character wide. Do not use quotes when
+specifying an ASCII value for the delimiter.
+
+** `RECORD_SEPARATOR { '_separator-literal_' | _separator-ascii-value_ }`
++
+specifies the character that will be used to separate consecutive
+records or rows in the output file. You can specify either a literal
+or an ASCII value for the separator. The default value is a newline character.
+
+*** _separator-literal_ can be any ASCII or Unicode character. You can also
+specify the separator as an ASCII value. Valid values range from 1 to 255.
+Specify the value in decimal notation; hexadecimal or octal
+notation are currently not supported. If you are using an ASCII value,
+the separator can be only one character wide. Do not use quotes when
+specifying an ASCII value for the separator.
+
+** `NULL_STRING '_string-literal_'`
++
+specifies the string that will be used to indicate a NULL value. The
+default value is the empty string ''.
+
+** `PURGEDATA FROM TARGET`
++
+causes files in the target HDFS folder to be deleted before the unload
+operation.
+
+** `COMPRESSION GZIP`
++
+uses gzip compression in the extract node, writing the data to disk in
+this compressed format. GZIP is currently the only supported type of
+compression. If you do not specify this option, the extracted data will
+be uncompressed.
+
+** `MERGE FILE _merged-file-path_ [OVERWRITE]`
++
+merges the unloaded files into one single file in the specified
+_merged-file-path_. If you specify compression, the unloaded data will
+be in compressed format, and the merged file will also be in compressed
+format. If you specify the optional OVERWRITE keyword, the file is
+overwritten if it already exists; otherwise, {project-name} raises an error
+if the file already exists.
+
+** `NO OUTPUT`
++
+prevents the UNLOAD statement from displaying status messages. By
+default, the UNLOAD statement prints status messages listing the steps
+that the Bulk Unloader is executing.
+
+<<<
+** `{ NEW | EXISTING } SNAPSHOT HAVING SUFFIX '_string_'`
++
+initiates an HBase snapshot scan during the unload operation. During a
+snapshot scan, the Bulk Unloader will get a list of the {project-name} tables
+from the query explain plan and will create and verify snapshots for the
+tables. Specify a suffix string, '_string_', which will be appended to
+each table name.
+
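+As a combined example (the folder and table names are hypothetical), the
+following sketch writes a comma-delimited extract that represents NULL
+values as the string 'NULL':
+
+```
+UNLOAD WITH DELIMITER ',' NULL_STRING 'NULL'
+INTO '/bulkload/orders_extract'
+SELECT * FROM trafodion.sales.orders;
+```
+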
+[[unload_considerations]]
+=== Considerations for UNLOAD
+
+* You must have write permissions on the target HDFS folder.
+* If a WITH option is specified more than once, {project-name} returns an
+error with SQLCODE -4489.
+
+[[unload_required_privileges]]
+==== Required Privileges
+
+To issue an UNLOAD statement, one of the following must be true:
+
+* You are DB ROOT.
+* You are the owner of the target table.
+* You have the SELECT privilege on the target table.
+* You have the MANAGE_LOAD component privilege for the SQL_OPERATIONS
+component.
+
+[[unload_examples]]
+=== Examples of UNLOAD
+
+* This example shows how the UNLOAD statement extracts data from a
+{project-name} table, `TRAFODION.HBASE.CUSTOMER_DEMOGRAPHICS`, into an HDFS
+folder, `/bulkload/customer_demographics`:
++
+```
+>>UNLOAD
++>WITH PURGEDATA FROM TARGET
++>MERGE FILE 'merged_customer_demogs.gz' OVERWRITE
++>COMPRESSION GZIP
++>INTO '/bulkload/customer_demographics'
++>select * from trafodion.hbase.customer_demographics
++><<+ cardinality 10e10 ,+ cardinality 10e10 >>;
+Task: UNLOAD Status: Started
+Task: EMPTY TARGET Status: Started
+Task: EMPTY TARGET Status: Ended ET: 00:00:00.014
+Task: EXTRACT Status: Started
+       Rows Processed: 200000
+Task: EXTRACT Status: Ended ET: 00:00:04.743 Task: MERGE FILES Status: Started
+Task: MERGE FILES Status: Ended ET: 00:00:00.063
+
+--- 200000 row(s) unloaded.
+```
+
+<<<
+[[update_statistics_statement]]
+== UPDATE STATISTICS Statement
+
+The UPDATE STATISTICS statement updates the histogram statistics for one
+or more groups of columns within a table. These statistics are used to
+devise optimized access plans.
+
+UPDATE STATISTICS is a {project-name} SQL extension.
+
+```
+UPDATE STATISTICS FOR TABLE table [CLEAR | on-clause | sample-table-clause ]
+
+on-clause is:
+    ON column-group-list CLEAR
+  | ON column-group-list [histogram-option]...
+  | ON column-group-list INCREMENTAL WHERE predicate
+
+column-group-list is:
+    column-list [,column-list]...
+  | EVERY COLUMN [,column-list]...
+  | EVERY KEY [,column-list]...
+  | EXISTING COLUMN[S] [,column-list]...
+  | NECESSARY COLUMN[S] [,column-list]...
+
+column-list for a single-column group is:
+    column-name
+  | (column-name)
+  | column-name TO column-name
+  | (column-name) TO (column-name)
+  | column-name TO (column-name)
+  | (column-name) TO column-name
+
+column-list for a multicolumn group is:
+    (column-name, column-name [,column-name]...)
+
+histogram-option is:
+    GENERATE n INTERVALS
+  | SAMPLE [sample-option]
+
+sample-option is:
+    [r ROWS]
+  | RANDOM percent PERCENT [PERSISTENT]
+  | PERIODIC size ROWS EVERY period ROWS
+
+sample-table-clause is:
+    CREATE SAMPLE RANDOM percent PERCENT
+  | REMOVE SAMPLE
+```
+
+[[update_statistics_syntax]]
+=== Syntax Description of UPDATE STATISTICS
+
+* `_table_`
++
+names the table for which statistics are to be updated. To refer to a
+table, use the ANSI logical name.
+See <<database_object_names,Database Object Names>>.
+
+* `CLEAR`
++
+deletes some or all histograms for the table _table_. Use this option
+when new applications no longer use certain histogram statistics.
++
+If you do not specify _column-group-list_, all histograms for _table_
+are deleted. If you specify _column-group-list_, only histograms for
+the columns in the group list are deleted.
+
+* `ON _column-group-list_`
++
+specifies one or more groups of columns for which to generate histogram
+statistics with the option of clearing the histogram statistics. You
+must use the ON clause to generate statistics stored in histogram
+tables.
+
+* `_column-list_`
++
+specifies how _column-group-list_ can be defined. The column list
+represents both a single-column group and a multi-column group.
+
+** Single-column group:
+
+*** `_column-name_ | (_column-name_) | _column-name_ TO _column-name_ |
+(_column-name_) TO (_column-name_)`
++
+specifies how you can specify individual columns or a group of
+individual columns.
++
+To generate statistics for individual columns, list each column. You can
+list each single column name with or without parentheses.
+
+** Multicolumn group:
+
+*** `(_column-name_, _column-name_ [,_column-name_]&#8230;)`
++
+specifies a multi-column group.
++
+To generate multi-column statistics, group a set of columns within
+parentheses, as shown. You cannot specify the name of a column more than
+once in the same group of columns.
++
+<<<
++
+One histogram is generated for each unique column group. Duplicate
+groups, meaning any permutation of the same group of columns, are
+ignored and processing continues. When you run UPDATE STATISTICS again
+for the same user table, the new data for that table replaces the data
+previously generated and stored in the table's histogram tables.
+Histograms of column groups not specified in the ON clause remain
+unchanged in histogram tables.
++
+For more information about specifying columns, see
+<<update_statistics_generating_and_clearing_statistics_for_columns,Generating and Clearing Statistics for Columns>>.
+
+* `EVERY COLUMN`
++
+The EVERY COLUMN keyword indicates that histogram statistics are to be
+generated for each individual column of _table_ and any multi-columns
+that make up the primary key and indexes. For example, _table_ has
+columns A, B, C, D defined, where A, B, C compose the primary key. In
+this case, the ON EVERY COLUMN option generates single-column
+histograms for columns A, B, C, and D, and two multi-column histograms,
+(A, B, C) and (A, B).
++
+The EVERY COLUMN option does what EVERY KEY does, with additional
+statistics on the individual columns.
+
+* `EVERY KEY`
++
+The EVERY KEY keyword indicates that histogram statistics are to be
+generated for columns that make up the primary key and indexes. For
+example, _table_ has columns A, B, C, D defined. If the primary key
+comprises columns A, B, statistics are generated for (A, B), A and B. If
+the primary key comprises columns A, B, C, statistics are generated for
+(A,B,C), (A,B), A, B, C. If the primary key comprises columns A, B, C,
+D, statistics are generated for (A, B, C, D), (A, B, C), (A, B), and A,
+B, C, D.
+
+* `EXISTING COLUMN[S]`
++
+The EXISTING COLUMN keyword indicates that all existing histograms of
+the table are to be updated. Statistics must be previously captured to
+establish existing columns.
+
+* `NECESSARY COLUMN[S]`
++
+The NECESSARY COLUMN[S] keyword generates statistics for histograms that
+the optimizer has requested but do not exist. Update statistics
+automation must be enabled for NECESSARY COLUMN[S] to generate
+statistics. To enable automation, see <<update_statistics_automating_update_statistics,
+Automating Update Statistics>>.
+
+<<<
+* `_histogram-option_`
+
+** `GENERATE _n_ INTERVALS`
++
+The GENERATE _n_ INTERVALS option for UPDATE STATISTICS accepts values
+between 1 and 10,000. Keep in mind that increasing the number of
+intervals per histogram may have a negative impact on compile time.
++
+Increasing the number of intervals is useful for columns with a small
+set of possible values and a large variance in the frequency of those
+values. For example, consider a column 'CITY' in table SALES, which
+stores the city code where the item was sold, where the number of cities
+in the sales data is 1538. Setting the number of intervals to a number
+greater than or equal to the number of cities (for example, setting the
+number of intervals to 1600) guarantees that the generated histogram
+captures the number of rows for each city. If the specified value _n_
+exceeds the number of unique values in the column, the system generates
+only as many intervals as the number of unique values.
+
+** `SAMPLE [_sample-option_]`
++
+is a clause that specifies that sampling is to be used to gather a
+subset of the data from the table. UPDATE STATISTICS stores the sample
+results and generates histograms.
++
+If you specify the SAMPLE clause without additional options, the result
+depends on the number of rows in the table. If the table contains no
+more than 10,000 rows, the entire table will be read (no sampling). If
+the number of rows is greater than 10,000 but less than 1 million,
+10,000 rows are randomly sampled from the table. If there are more than
+1 million rows in the table, a random row sample is used to read 1
+percent of the rows in the table, with a maximum of 1 million rows
+sampled.
++
+TIP: As a guideline, the default sample of 1 percent of the rows in the
+table, with a maximum of 1 million rows, provides good statistics for
+the optimizer to generate good plans.
++
+If you do not specify the SAMPLE clause, if the table has fewer rows
+than specified, or if the sample size is greater than the system limit,
+{project-name} SQL reads all rows from _table_. See <<sample_clause,SAMPLE Clause>>.
+
+*** `_sample-option_`
+
+**** `_r_ ROWS`
++
+A row sample is used to read _r_ rows from the table. The value _r_ must
+be an integer that is greater than zero (_r_ > 0).
+
+**** `RANDOM _percent_ PERCENT`
++
+directs {project-name} SQL to choose rows randomly from the table. The value
+_percent_ must be a value between zero and 100 (0 < _percent_ &#60;= 100). In
+addition, only the first four digits to the right of the decimal point
+are significant. For example, the value 0.00001 is treated as 0.0000, and
+the value 1.23456 is treated as 1.2345.
+
+***** `PERSISTENT`
++
+directs {project-name} SQL to create a persistent sample table and store the
+random sample in it. This table can then be used later for updating statistics
+incrementally.
+
+**** `PERIODIC _size_ ROWS EVERY _period_ ROWS`
++
+directs {project-name} SQL to choose the first _size_ number of rows from
+each _period_ of rows. The value _size_ must be an integer that is
+greater than zero and less than or equal to the value _period_. (0 <
+_size_ &#60;= _period_). The size of the _period_ is defined by the number
+of rows specified for _period_. The value _period_ must be an integer
+that is greater than zero (_period_ > 0).
+
+* `INCREMENTAL WHERE _predicate_`
++
+directs {project-name} SQL to update statistics incrementally. That is, instead
+of taking a fresh sample of the entire table, {project-name} SQL will use a previously
+created persistent sample table. {project-name} SQL will update the persistent sample
+by replacing any rows satisfying the _predicate_ with a fresh sample of rows from
+the original table satisfying the _predicate_. The sampling rate used is the
+_percent_ specified when the persistent sample table was created. Statistics
+are then generated from this updated sample. See also
+<<update_statistics_incremental_update_statistics,
+Incremental Update Statistics>>.
+
+* `CREATE SAMPLE RANDOM _percent_ PERCENT`
++
+Creates a persistent sample table associated with this table. The sample is
+created using a random sample of _percent_ percent of the rows. The table
+can then be used for later incremental statistics update.
+
+* `REMOVE SAMPLE`
++
+Drops the persistent sample table associated with this table.
+
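+For example, a persistent sample of one percent of the (hypothetical)
+ADDRESS table could be created, and later dropped, as follows:
+
+```
+UPDATE STATISTICS FOR TABLE address CREATE SAMPLE RANDOM 1 PERCENT;
+
+UPDATE STATISTICS FOR TABLE address REMOVE SAMPLE;
+```
+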
+[[update_statistics_considerations]]
+=== Considerations for UPDATE STATISTICS
+
+[[update_statistics_using_statistics]]
+==== Using Statistics
+
+Use UPDATE STATISTICS to collect and save statistics on columns. The SQL
+compiler uses histogram statistics to determine the selectivity of
+predicates, indexes, and tables. Because selectivity directly influences
+the cost of access plans, regular collection of statistics increases the
+likelihood that {project-name} SQL chooses efficient access plans.
+
+While UPDATE STATISTICS is running on a table, the table is active and
+available for query access.
+
+When a user table is changed, either by changing its data significantly
+or its definition, re-execute the UPDATE STATISTICS statement for the
+table.
+
+<<<
+[[update_statistics_histogram_statistics]]
+==== Histogram Statistics
+
+Histogram statistics are used by the compiler to produce the best plan
+for a given SQL query. When histograms are not available, default
+assumptions are made by the compiler and the resultant plan might not
+perform well. Histograms that reflect the latest data in a table are
+optimal.
+
+The compiler does not need histogram statistics for every column of a
+table. For example, if a column is only in the select list, its
+histogram statistics will be irrelevant. A histogram statistic is useful
+when a column appears in:
+
+* A predicate
+* A GROUP BY column
+* An ORDER BY clause
+* A HAVING clause
+* A similar clause
+
+In addition to single-column histogram statistics, the compiler needs
+multi-column histogram statistics, such as when group by column-5,
+column-3, column-19 appears in a query. Then, histogram statistics for
+the combination (column-5, column-3, column-19) are needed.
+
+[[update_statistics_required-privileges]]
+==== Required Privileges
+
+To perform an UPDATE STATISTICS operation, one of the following must be
+true:
+
+* You are DB ROOT.
+* You are the owner of the target table.
+* You have the MANAGE_STATISTICS component privilege for the
+SQL_OPERATIONS component.
+
+[[update_statistics_locking]]
+==== Locking
+
+UPDATE STATISTICS momentarily locks the definition of the user table
+during the operation but not the user table itself. The UPDATE
+STATISTICS statement uses READ UNCOMMITTED isolation level for the user
+table.
+
+<<<
+[[update_statistics_transactions]]
+==== Transactions
+
+Do not start a transaction before executing UPDATE STATISTICS. UPDATE
+STATISTICS runs multiple transactions of its own, as needed. Starting
+your own transaction in which UPDATE STATISTICS runs could cause the
+transaction auto abort time to be exceeded during processing.
+
+[[update_statistics_generating_and_clearing_statistics_for_columns]]
+==== Generating and Clearing Statistics for Columns
+
+To generate statistics for particular columns, name each column, or name
+the first and last columns of a sequence of columns in the table. For
+example, suppose that a table has consecutive columns CITY, STATE, ZIP.
+This list gives a few examples of possible options you can specify:
+
+[cols="25%,37%,37%",options="header"]
+|===
+| Single-Column Group   | Single-Column Group Within Parentheses | Multicolumn Group
+| ON CITY, STATE, ZIP   | ON (CITY),(STATE),(ZIP)                | ON (CITY, STATE) or ON (CITY,STATE,ZIP)
+| ON CITY TO ZIP        | ON (CITY) TO (ZIP)                     |
+| ON ZIP TO CITY        | ON (ZIP) TO (CITY)                     |
+| ON CITY, STATE TO ZIP | ON (CITY), (STATE) TO (ZIP)            |
+| ON CITY TO STATE, ZIP | ON (CITY) TO (STATE), (ZIP)            |
+|===
+
+The TO specification is useful when a table has many columns, and you
+want histograms on a subset of columns. Do not confuse (CITY) TO (ZIP)
+with (CITY, STATE, ZIP), which refers to a multi-column histogram.
+
+You can clear statistics in any combination of columns you specify, not
+necessarily with the _column-group-list_ you used to create statistics.
+However, those statistics will remain until you clear them.
+
+<<<
+[[update_statistics_column_lists_and_access_plans]]
+==== Column Lists and Access Plans
+
+Generate statistics for columns most often used in data access plans for
+a table, that is, the primary key, indexes defined on the table, and any
+other columns frequently referenced in predicates in WHERE or GROUP BY
+clauses of queries issued on the table. Use the EVERY COLUMN option to
+generate histograms for every individual column and for the multicolumn
+groups that make up the primary key and indexes.
+
+The EVERY KEY option generates histograms only for the columns and
+multicolumn groups that make up the primary key and indexes.
+
+If you often perform a GROUP BY over specific columns in a table, use
+multi-column lists in the UPDATE STATISTICS statement (consisting of the
+columns in the GROUP BY clause) to generate histogram statistics that
+enable the optimizer to choose a better plan. Similarly, when a query
+joins two tables by two or more columns, multi-column lists (consisting
+of the columns being joined) help the optimizer choose a better plan.
+
+[[update_statistics_automating_update_statistics]]
+==== Automating Update Statistics
+
+To enable update statistics automation, set the Control Query Default
+(CQD) attribute, USTAT_AUTOMATION_INTERVAL, in a session where you will
+run update statistics operations. For example:
+
+```
+control query default USTAT_AUTOMATION_INTERVAL '1440';
+```
+
+The value of USTAT_AUTOMATION_INTERVAL is intended to be an automation
+interval (in minutes), but, in {project-name} Release 1.0, this value does
+not act as a timing interval. Instead, any value greater than zero
+enables update statistics automation.
+
+After enabling update statistics automation, prepare each of the queries
+that you want to optimize. For example:
+
+```
+prepare s from select...;
+```
+
+The PREPARE statement causes the {project-name} SQL compiler to compile and
+optimize a query without executing it. When preparing queries with
+update statistic automation enabled, any histograms needed by the
+optimizer that are not present will cause those columns to be marked as
+needing histograms.
+
+Next, run this UPDATE STATISTICS statement against each table, using ON
+NECESSARY COLUMN[S] to generate the needed histograms:
+
+```
+update statistics for table _table-name_ on necessary columns sample;
+```
+
+[[update_statistics_incremental_update_statistics]]
+==== Incremental Update Statistics
+
+UPDATE STATISTICS processing time can be lengthy for very large tables.
+One strategy for reducing the time is to create histograms only for
+columns that actually need them (for example, using the ON NECESSARY COLUMNS 
+column group). Another strategy is to update statistics incrementally. These
+strategies can be used together if desired.
+
+To use the incremental update statistics feature, you must first create
+statistics for the table and create a persistent sample table. One way to
+do this is to perform a normal update statistics command, adding the
+PERSISTENT keyword to the _sample-option_. Another way to do this if you
+already have reasonably up-to-date statistics for the table, is to create
+a persistent sample table separately using the CREATE SAMPLE option.
+
+You can then perform update statistics incrementally by using the INCREMENTAL
+WHERE _predicate_ syntax in the on-clause. The _predicate_ should be chosen
+to describe the set of rows that have changed since the last statistics update
+was performed. For example, if your table contains a column with a timestamp
+giving the date and time of last update, this is a particularly useful column
+to use in the _predicate_.
+
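+For example, assuming a hypothetical ORDERS table with a LAST_UPDATE_TS
+column that records when each row was last modified, a sketch of an
+incremental update might be:
+
+```
+UPDATE STATISTICS FOR TABLE orders
+ON EXISTING COLUMNS
+INCREMENTAL WHERE last_update_ts > TIMESTAMP '2016-01-01 00:00:00';
+```
+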
+If you decide later that you wish to change the _percent_ sampling rate used
+for the persistent sample table, you can do so by dropping the persistent
+sample table (using REMOVE SAMPLE) and creating a new one (by using the
+CREATE SAMPLE option). Using a higher _percent_ results in more accurate
+histograms, but at the price of a longer-running operation.
+
+<<<
+[[update_statistics_examples]]
+=== Examples of UPDATE STATISTICS
+
+* This example generates four histograms for the columns jobcode,
+empnum, deptnum, and (empnum, deptnum) for the table EMPLOYEE. Depending
+on the table's size and data distribution, each histogram should contain
+ten intervals.
++
+```
+UPDATE STATISTICS FOR TABLE employee
+ON (jobcode),(empnum, deptnum) GENERATE 10 INTERVALS;
+
+--- SQL operation complete.
+```
+
+* This example generates histogram statistics using the ON EVERY COLUMN
+option for the table DEPT. This statement performs a full scan, and
+{project-name} SQL determines the default number of intervals.
++
+```
+UPDATE STATISTICS FOR TABLE dept ON EVERY COLUMN;
+
+--- SQL operation complete.
+```
+
+* Suppose that a construction company has an ADDRESS table of potential
+sites and a DEMOLITION_SITES table that contains some of the columns of
+the ADDRESS table. The primary key is ZIP. Join these two tables on two
+of the columns in common:
++
+```
+SELECT COUNT(AD.number), AD.street,
+       AD.city, AD.zip, AD.state
+FROM address AD, demolition_sites DS
+WHERE AD.zip = DS.zip AND AD.type = DS.type
+GROUP BY AD.street, AD.city, AD.zip, AD.state;
+```
++
+To generate statistics specific to this query, enter these statements:
++
+```
+UPDATE STATISTICS FOR TABLE address
+ON (street), (city), (state), (zip, type);
+
+UPDATE STATISTICS FOR TABLE demolition_sites ON (zip, type);
+```
+
+* This example removes all histograms for table DEMOLITION_SITES:
++
+```
+UPDATE STATISTICS FOR TABLE demolition_sites CLEAR;
+```
+
+<<<
+* This example selectively removes the histogram for column STREET in
+table ADDRESS:
++
+```
+UPDATE STATISTICS FOR TABLE address ON street CLEAR;
+```
+
+* This example generates statistics but also creates a persistent 
+sample table for use when updating statistics incrementally:
++
+```
+UPDATE STATISTICS FOR TABLE address
+ON (street), (city), (state), (zip, type)
+SAMPLE RANDOM 5 PERCENT PERSISTENT;
+```
+
+* This example updates statistics incrementally. It assumes that
+a persistent sample table has already been created. The predicate
+in the WHERE clause describes the set of rows that have changed
+since statistics were last updated. Here we assume that rows
+with a state of California are the only rows that have changed:
++
+```
+UPDATE STATISTICS FOR TABLE address
+ON EXISTING COLUMNS
+INCREMENTAL WHERE state = 'CA';
+```



[12/15] incubator-trafodion git commit: Major reorganization of the Client Installation Guide.

Posted by gt...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/asciidoc/_chapters/jdbct4.adoc
----------------------------------------------------------------------
diff --git a/docs/client_install/src/asciidoc/_chapters/jdbct4.adoc b/docs/client_install/src/asciidoc/_chapters/jdbct4.adoc
index 3714472..43dbf73 100644
--- a/docs/client_install/src/asciidoc/_chapters/jdbct4.adoc
+++ b/docs/client_install/src/asciidoc/_chapters/jdbct4.adoc
@@ -1,413 +1,329 @@
-////
-/**
- *@@@ START COPYRIGHT @@@
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- * @@@ END COPYRIGHT @@@
- * 
-////
-
-[[jdbct4]]
-= Install JDBC Type-4 Driver
-
-[[jdbct4-installation-requirements]]
-== Installation Requirements
-
-The {project-name} JDBC Type 4 Driver requires a Java-enabled platform that supports the Java Development Kit (JDK) 1.7 or higher.
-
-[[jdbct4-java-environment]]
-=== Java Environment
-
-The {project-name} JDBC Type 4 Driver requires that a compatible Java version be installed on the client workstation and that the Java path be set to
-the correct location. The supported Java version is 1.7 or higher.
-
-NOTE: If you plan to do Java-based development, install the Java Development Kit (JDK) rather than the Java Runtime Environment (JRE).
-These examples use JRE.
-
-[[jdbct4-verify-java-version]]
-==== Verify Java Version
-
-To display the Java version of the client workstation on the screen, enter:
-
-```
-java -version
-```
-
-```
-C:\> java -version
-
-java version "1.7.0_45" # This is the version you need to check
-Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
-Java HotSpot(TM) Client VM (build 24.45-b08, mixed mode, sharing)
-C:\>
-```
-
-The Java version should be *1.7* or higher. If the returned version is not supported, please refer to
-<<jdbct4-install-java, Installing a Supported Java Version>> below.
-
-<<<
-If you see this message:
-
-```
-'java' is not recognized as an internal or external command, operable program or batch file.
-```
-
-This message indicates that the Java PATH is not set. Follow one of these sets of instructions, depending on the operating system of your client
-workstation:
-
-* <<jdbct4-path-windows, Setting the PATH to a Supported Java Version on Windows>>
-* <<jdbct4-path-linux, Setting the PATH to a Supported Java Version on Linux>>
-
-[[jdbct4-install-java]]
-==== Install Supported Java Version
-
-The supported Java version is 1.7 or higher. To install one of the supported Java versions on the client workstation,
-go to this link: http://www.java.com/en/download
-
-After installing the Java version, proceed with setting the Java path. Follow one of these sets of instructions, depending on the operating
-system of your client workstation:
-
-* <<jdbct4-path-windows, Setting the PATH to a Supported Java Version on Windows>>
-* <<jdbct4-path-linux, Setting the PATH to a Supported Java Version on Linux>>
-
-[[jdbct4-path-windows]]
-==== Set Windows PATH Variable
-
-===== Windows 10
-
-1. Right-click on the Windows icon on the menu bar. Select *System*.
-
-2. Click on *Advanced System Settings*.
-
-3.  In the *System Properties* dialog box, click the *Advanced* tab.
-4.  Click the *Environment Variables* button.
-5.  Under *System* variables, select the variable named *Path*, and then click *Edit...*:
-+
-image:{images}/win10_edit_path.jpg[Windows 10 Edit Path Variable]
-
-6.  Click *Browse...*. Find the directory where you installed Java and select it.
-+
-image:{images}/win10_select_java.jpg[image]
-
-7.  Click *OK* to close the browse window. Click *OK* to close the edit window.
-8.  Verify that the updated *Path* appears under *System* variables, and click *OK*.
-9.  In the *System Properties* dialog box, click *OK* to accept the changes.
-
-
-===== Windows 8
-
-1.  Right-click the *Computer* icon on your desktop, and then select *Properties*. The *Control Panel > System and Security > System* window
-appears.
-
-2. In the left navigation bar, click the *Advanced* system settings link.
-
-3.  In the *System Properties* dialog box, click the *Environment Variables* button.
-
-4.  Under *System* variables, select the variable named *Path*, and then click *Edit*:
-+
-image:{images}/path2.jpg[image]
-
-5.  Place the cursor at the start of the *Variable value* field and enter the path of the Java bin directory, ending with a semicolon (;):
-+
-image:{images}/varval2.jpg[image]
-+
-*Example*
-+
-```
-"C:\Program Files (x86)\Java\jre7\bin";
-```
-+
-NOTE: Check that no space exists after the semicolon (;) in the path. If there are spaces in the directory name, delimit the entire directory
-path in double quotes (") before the semicolon.
-
-6.  Click *OK*.
-7.  Verify that the updated *Path* appears under *System* variables, and click *OK*.
-8.  In the *System Properties* dialog box, click *OK* to accept the changes.
-
-[[jdbct4-path-linux]]
-==== Set Linux PATH Variable
-
-1.  Open the user profile (`.profile` or `.bash_profile` for the Bash shell) in the `$HOME` directory.
-+
-```
-cd $HOME
-vi .profile
-```
-
-2.  In the user profile, set the `PATH` environment variable to include the path of the Java bin 
-directory. 
-+
-```
-export PATH=/opt/java1.7/jre/bin:$PATH
-```
-+
-NOTE: Place the path of the Java bin directory before `$PATH`, and check that no space exists after the colon (:) in the path. In the C shell,
-use the setenv command instead of export.
-
-3.  To activate the changes, either log out and log in again or execute the user profile.
-+
-```
-. .profile
-```
-
-[[jdbct4-install-instructions]]
-== Installation Instructions
-
-You download and extract the {project-name} client package using the instructions in <<introduction-download, Download Installation Package>> above.
-
-[[jdbct4-install-driver]]
-=== Install JDBC Type-4 Driver
-
-1.  Change the directory to the `clients` subdirectory.
-2.  Extract the contents of the `JDBCT4.zip` file by using the unzip command (or the extract function of your compression software):
-+
-*Example*
-+
-```
-unzip JDBCT4.zip -d $HOME/jdbc
-```
-
-The content of the target directory is as follows:
-
-[cols="33%l,30%l,37%",options="header"]
-|===
-| Installation Folder                | Files                        | Description
-| /lib                               | jdbcT4.jar                   | Product JAR file.
-| /samples                           | t4jdbc.properties            | Properties file that you can configure for your application environment.
-|                                    | README                       | Readme file that explains how to use the common sample set.
-| /samples/common                    | sampleUtils.java             | Sample source code for creating, populating, and dropping sample tables.
-| /samples/DBMetaSample              | DBMetaSample.java            | Sample source code for getting metadata about the sample tables.
-|                                    | README                       | Readme file that explains how to use this sample set.
-| /samples/PreparedStatementSample   | PreparedStatementSample.java | Sample code for simple or parameterized SELECT statements that are prepared.
-|                                    | README                       | Readme file that explains how to use this sample set.
-| /samples/ResultSetSample           | README                       | Readme file that explains how to use this sample set.
-|                                    | ResultSetSample.java         | Sample source code for fetching rows from a result set.
-| /samples/StatementSample           | README                       | Readme file that explains how to use this sample set.
-|                                    | StatementSample.java         | Sample source code for fetching rows from a simple SELECT statement.
-|===
-
-[[jdbct4-setup-env]]
-== Set Up Client Environment
-
-=== Java Development
-
-If you plan to write and run Java applications that use the {project-name} JDBC Type 4 Driver to connect to a {project-name} database, then set these
-environment variables on the client workstation, replacing `_jdk-directory_` with the location of your Java Development Kit and
-replacing `_jdbc-installation-directory_` with the name of the directory where you downloaded the JDBC Type 4 driver:
-
-[cols="20%l,40%l,40%l",options="header"]
-|===
-| Environment Variable | On Windows                                                              | On Linux
-| JAVA_HOME            | set JAVA_HOME="_jdk-directory_"^1^                                      | export JAVA_HOME=_jdk-directory_
-| PATH                 | set PATH=%PATH%;%JAVA_HOME%\bin                                         | export PATH=$PATH:$JAVA_HOME/bin
-| CLASSPATH            | set CLASSPATH=%CLASSPATH%;_jdbc-installation-directory_\lib\jdbcT4.jar; | export CLASSPATH=$CLASSPATH:_jdbc-installation-directory_/lib/jdbcT4.jar:
-|===
-
-^1^ Enclose the _jdk-directory_ in quotes to ensure that Windows can find the directory correctly. You can use the `set <variable>` command to verify the setting.
-
-<<<
-=== Configure Applications
-
-Edit the `t4jdbc.properties` file in the `samples` folder. Refer to the `README` file in the `samples` folder for instructions.
-
-Set these values for your environment:
-
-* _catalog_: Specify a catalog that exists in the database.
-* _schema_: Specify a schema that exists in the database.
-* _user_: Specify the name of a user who will be accessing the database.
-* _password_: Specify the password of a user who will be accessing the database.
-* _url_: Specify this string: _jdbc:t4jdbc://_host-name_:_port-number_/:_
-
-_host-name_ is the IP address or host name of the database platform, and _port-number_ is the location where the 
-{project-name} Database Connectivity Service (DCS) is running, which is *23400* by default. See the
-http://trafodion.incubator.apache.org/docs/dcs_reference/index.html[{project-name} Database Connectivity Services Reference Guide]
-for information about how to configure the DCS port.
-
-*Example*
-
-In this example, {project-name} authentication has not been enabled. Therefore, you can use a dummy
-user and password. If authentication is enabled, then use your user and password information.
-
-```
-catalog = TRAFODION
-schema = SEABASE
-user = usr
-password = pwd
-
-url = jdbc:t4jdbc://trafodion.host.com:23400/:
-```
-
-NOTE: The driver's class name is `org.trafodion.jdbc.t4.T4Driver`.
-
-<<<
-[[jdbct4-test-programs]]
-== Test Programs
-
-The `README` file in the `samples` folder provides information on how to build and run the sample Java programs.
-You can use these programs to verify the setup of the {project-name} JDBC Type-4 driver.
-See the <<jdbct4-install-driver, Install JDBC Type-4 Driver>> section above for information on the different
-sample programs that are included with the {project-name} JDBC Type-4 driver.
-
-*Example*
-
-Build and run the StatementSample test program to verify the JDBC Type-4 driver installation.
-
-```
-C:\Development Tools\Trafodion JDBCT4\samples>cd StatementSample
-
-C:\Development Tools\Trafodion JDBCT4\samples\StatementSample>%JAVA_HOME%\bin\javac -classpath ..\..\lib\jdbcT4.jar *.java ..\common\*.java
-Note: ..\common\sampleUtils.java uses or overrides a deprecated API.
-Note: Recompile with -Xlint:deprecation for details.
-C:\Development Tools\Trafodion JDBCT4\samples\StatementSample>%JAVA_HOME%\bin\java -classpath ..\..\lib\jdbcT4.jar -Dt4jdbc.properties=..\t4jdbc.properties StatementSample
-Mar 16, 2016 9:36:54 PM common.sampleUtils getPropertiesConnection
-INFO: DriverManager.getConnection(url, props) passed
-
-Inserting TimeStamp
-
-Simple Select
-
-Printing ResultSetMetaData ...
-No. of Columns 12
-Column 1 Data Type: CHAR Name: C1
-Column 2 Data Type: SMALLINT Name: C2
-Column 3 Data Type: INTEGER Name: C3
-Column 4 Data Type: BIGINT Name: C4
-Column 5 Data Type: VARCHAR Name: C5
-Column 6 Data Type: NUMERIC Name: C6
-Column 7 Data Type: DECIMAL Name: C7
-Column 8 Data Type: DATE Name: C8
-Column 9 Data Type: TIME Name: C9
-Column 10 Data Type: TIMESTAMP Name: C10
-Column 11 Data Type: REAL Name: C11
-Column 12 Data Type: DOUBLE PRECISION Name: C12
-
-Fetching rows...
-
-Printing Row 1 using getString(), getObject()
-Column 1 - Row1                ,Row1
-Column 2 - 100,100
-Column 3 - 12345678,12345678
-Column 4 - 123456789012,123456789012
-Column 5 - Selva,Selva
-Column 6 - 100.12,100.12
-Column 7 - 100.12,100.12
-Column 8 - 2000-05-06,2000-05-06
-Column 9 - 10:11:12,10:11:12
-Column 10 - 2000-05-06 10:11:12.000000,2000-05-06 10:11:12.0
-Column 11 - 100.12,100.12
-Column 12 - 100.12,100.12
-
-Printing Row 2 using getString(), getObject()
-Column 1 - Row2                ,Row2
-Column 2 - -100,-100
-Column 3 - -12345678,-12345678
-Column 4 - -123456789012,-123456789012
-Column 5 - Selva,Selva
-Column 6 - -100.12,-100.12
-Column 7 - -100.12,-100.12
-Column 8 - 2000-05-16,2000-05-16
-Column 9 - 10:11:12,10:11:12
-Column 10 - 2000-05-06 10:11:12.000000,2000-05-06 10:11:12.0
-Column 11 - -100.12,-100.12
-Column 12 - -100.12,-100.12
-
-Printing Row 3 using getString(), getObject()
-Column 1 - TimeStamp           ,TimeStamp
-Column 2 - -100,-100
-Column 3 - -12345678,-12345678
-Column 4 - -123456789012,-123456789012
-Column 5 - Selva,Selva
-Column 6 - -100.12,-100.12
-Column 7 - -100.12,-100.12
-Column 8 - 2016-03-16,2016-03-16
-Column 9 - 21:37:03,21:37:03
-Column 10 - 2016-03-16 21:37:03.053,2016-03-16 21:37:03.053
-Column 11 - -100.12,-100.12
-Column 12 - -100.12,-100.12
-
-End of Data
-
-C:\Development Tools\Trafodion JDBCT4\samples\StatementSample>
-```
-
-<<<
-== Uninstall JDBC Type-4 Driver
-Run one of these sets of commands to remove the {project-name} JDBC Type 4 Driver:
-
-* On Linux:
-+
-```
-rm -rf <jdbc-installation-directory>
-```
-+
-*Example*
-+
-```
-rm -rf ~/jdbc
-```
-
-* On Windows:
-+
-```
-del <jdbc-installation-directory>
-rmdir <jdbc-installation-directory>
-```
-+
-<<<
-+
-*Example*
-+
-Windows uninstall
-+
-```
-C:\>del /s JDBC
-C:\JDBC\, Are you sure (Y/N)? Y
-C:\JDBC\install\*, Are you sure (Y/N)? Y
-Deleted file - C:\JDBC\install\t4jdbcSanityCheck.class
-Deleted file - C:\JDBC\install\t4jdbcUninstall.class
-Deleted file - C:\JDBC\install\product.contents
-C:\JDBC\lib\*, Are you sure (Y/N)? Y
-Deleted file - C:\JDBC\lib\jdbcT4.jar
-C:\JDBC\samples\*, Are you sure (Y/N)? Y
-Deleted file - C:\JDBC\samples\t4jdbc.properties
-Deleted file - C:\JDBC\samples\README
-C:\JDBC\samples\common\*, Are you sure (Y/N)? Y
-Deleted file - C:\JDBC\samples\common\sampleUtils.java
-C:\JDBC\samples\DBMetaSample\*, Are you sure (Y/N)? Y
-Deleted file - C:\JDBC\samples\DBMetaSample\DBMetaSample.java
-Deleted file - C:\JDBC\samples\DBMetaSample\README
-C:\JDBC\samples\PreparedStatementSample\*, Are you sure (Y/N)? Y 
-Deleted file - C:\JDBC\samples\PreparedStatementSample\PreparedStatementSample.java
-Deleted file - C:\JDBC\samples\PreparedStatementSample\README
-C:\JDBC\samples\ResultSetSample\*, Are you sure (Y/N)? Y
-Deleted file - C:\JDBC\samples\ResultSetSample\README
-Deleted file - C:\JDBC\samples\ResultSetSample\ResultSetSample.java
-C:\JDBC\samples\StatementSample\*, Are you sure (Y/N)? Y
-Deleted file - C:\JDBC\samples\StatementSample\README
-Deleted file - C:\JDBC\samples\StatementSample\StatementSample.java
-C:\>rmdir /s JDBC
-JDBC, Are you sure (Y/N)? Y
-C:\>
-```
-
-<<<
-== Reinstall JDBC Type-4 Driver
-
-1. Close all applications running on the workstation, except the Web browser.
-2. Download and extract the {project-name} client package using the instructions in <<introduction-download, Download Installation Package>> above.
-3. Install the new {project-name} JDBC Type-4 driver. See <<jdbct4-install-driver, Install JDBC Type-4 Driver>>.
-4. Set up the client environment. Please refer to: <<jdbct4-setup-env, Set Up Client Environment>>.
-
+////
+/**
+ *@@@ START COPYRIGHT @@@
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * @@@ END COPYRIGHT @@@
+ */
+////
+
+[[jdbct4]]
+= Install JDBC Type-4 Driver
+
+== Prerequisites
+
+If you have not done so already, please ensure that you have <<java-setup, set up your Java environment>>
+and <<download-software, unpackaged the {project-name} client software>>.
+
+The examples in this chapter assume that you have unpackaged the JDBC Type 4 driver installation files
+to `c:\trafodion\jdbct4` (Windows) or `$HOME/trafodion/jdbct4` (Linux).
+
+[[jdbct4-validate-install]]
+== Validate Install Directory
+
+The content of the `jdbct4` installation directory is as follows:
+
+[cols="33%,30%,37%",options="header"]
+|===
+| Installation Folder                  | Files                          | Description
+| `/lib`                               | `jdbcT4.jar`                   | Product JAR file.
+| `/samples`                           | `t4jdbc.properties`            | Properties file that you can configure for your application environment.
+|                                      | `README`                       | Readme file that explains how to use the common sample set.
+| `/samples/common`                    | `sampleUtils.java`             | Sample source code for creating, populating, and dropping sample tables.
+| `/samples/DBMetaSample`              | `DBMetaSample.java`            | Sample source code for getting metadata about the sample tables.
+|                                      | `README`                       | Readme file that explains how to use this sample set.
+| `/samples/PreparedStatementSample`   | `PreparedStatementSample.java` | Sample code for simple or parameterized SELECT statements that are prepared.
+|                                      | `README`                       | Readme file that explains how to use this sample set.
+| `/samples/ResultSetSample`           | `README`                       | Readme file that explains how to use this sample set.
+|                                      | `ResultSetSample.java`         | Sample source code for fetching rows from a result set.
+| `/samples/StatementSample`           | `README`                       | Readme file that explains how to use this sample set.
+|                                      | `StatementSample.java`         | Sample source code for fetching rows from a simple SELECT statement.
+|===
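+
+A quick sanity check is to list the directory (output abbreviated; exact contents may vary by release):
+
+```
+$ ls $HOME/trafodion/jdbct4
+lib  samples
+
+$ ls $HOME/trafodion/jdbct4/lib
+jdbcT4.jar
+```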
+
+<<<
+[[jdbct4-setup-env]]
+== Set Up Client Environment
+
+[[jdbct4-java-development]]
+=== Java Development
+
+If you plan to write and run Java applications that use the {project-name} JDBC Type 4 Driver to connect to a {project-name} database, then set these
+environment variables on the client workstation, replacing `_jdk-directory_` with the location of your Java Development Kit and
+replacing `_jdbc-installation-directory_` with the name of the directory where you downloaded the JDBC Type 4 driver:
+
+[cols="20%,40%,40%",options="header"]
+|===
+| Environment Variable   | On Windows                                                                | On Linux
+| `JAVA_HOME`            | `set JAVA_HOME="_jdk-directory_"`^1^                                      | `export JAVA_HOME=_jdk-directory_`
+| `PATH`                 | `set PATH=%PATH%;%JAVA_HOME%\bin`                                         | `export PATH=$PATH:$JAVA_HOME/bin`
+| `CLASSPATH`            | `set CLASSPATH=%CLASSPATH%;_jdbc-installation-directory_\lib\jdbcT4.jar;` | `export CLASSPATH=$CLASSPATH:_jdbc-installation-directory_/lib/jdbcT4.jar:`
+|===
+
+^1^ Enclose the _jdk-directory_ in quotes to ensure that Windows can find the directory correctly. You can use the `set <variable>` command to verify the setting.
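+
+*Example (Linux)*
+
+The directories below are illustrative; substitute your own JDK location and the directory where you
+unpackaged the driver:
+
+```
+export JAVA_HOME=/usr/lib/jvm/java
+export PATH=$PATH:$JAVA_HOME/bin
+export CLASSPATH=$CLASSPATH:$HOME/trafodion/jdbct4/lib/jdbcT4.jar:
+```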
+
+<<<
+=== Configure Applications
+
+Edit the `t4jdbc.properties` file in the `samples` folder. Refer to the `README` file in the `samples` folder for instructions.
+
+Set these values for your environment:
+
+* `catalog`: Specify a catalog that exists in the database.
+* `schema`: Specify a schema that exists in the database.
+* `user`: Specify the name of a user who will be accessing the database.
+* `password`: Specify the password of a user who will be accessing the database.
+* `url`: Specify this string: `jdbc:t4jdbc://<host-name>:<port-number>/:`
+
+`<host-name>` is the IP address or host name of the database platform.
+
+`<port-number>` is the location where the 
+{project-name} Database Connectivity Service (DCS) is running. (Default: *23400*).
+
+See the http://trafodion.incubator.apache.org/docs/dcs_reference/index.html[{project-name} Database Connectivity Services Reference Guide]
+for information about how to configure the DCS port.
+
+*Example*
+
+In this example, {project-name} authentication has not been enabled. Therefore, you can use a dummy
+user and password. If authentication is enabled, then use your user and password information.
+
+```
+catalog = TRAFODION
+schema = SEABASE
+user = usr
+password = pwd
+
+url = jdbc:t4jdbc://trafodion.host.com:23400/:
+```
+
+NOTE: The driver's class name is `org.trafodion.jdbc.t4.T4Driver`.
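+
+For reference, this minimal Java sketch uses the driver class and the dummy URL and credentials from the
+example above to verify that a connection can be established. The class name `ConnectSketch` and the
+metadata call are illustrative, not part of the bundled samples:
+
+```
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.util.Properties;
+
+public class ConnectSketch {
+    public static void main(String[] args) throws Exception {
+        // Register the JDBC Type 4 driver class named in the NOTE above.
+        Class.forName("org.trafodion.jdbc.t4.T4Driver");
+
+        // Dummy values from the example above; substitute your own.
+        Properties props = new Properties();
+        props.setProperty("user", "usr");
+        props.setProperty("password", "pwd");
+        String url = "jdbc:t4jdbc://trafodion.host.com:23400/:";
+
+        try (Connection conn = DriverManager.getConnection(url, props)) {
+            // Standard JDBC metadata call; prints the product name on success.
+            System.out.println("Connected: " + conn.getMetaData().getDatabaseProductName());
+        }
+    }
+}
+```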
+
+<<<
+[[jdbct4-test-programs]]
+== Test Programs
+
+NOTE: You must use the JDK and set up the environment variables as documented in
+<<jdbct4-java-development, Java Development>> to build the test programs.
+
+The `README` file in the `samples` folder provides information on how to build and run the sample Java programs.
+You can use these programs to verify the setup of the {project-name} JDBC Type-4 driver.
+
+See the <<jdbct4-validate-install, Validate Install Directory>> section above for information on the different
+sample programs that are included with the {project-name} JDBC Type-4 driver.
+
+*Windows Example*
+
+Build and run the StatementSample test program to verify the JDBC Type-4 driver installation.
+
+```
+C:\trafodion\jdbct4\samples> cd StatementSample
+
+C:\trafodion\jdbct4\samples\StatementSample> %JAVA_HOME%\bin\javac -classpath ..\..\lib\jdbcT4.jar *.java ..\common\*.java
+
+Note: ..\common\sampleUtils.java uses or overrides a deprecated API.
+Note: Recompile with -Xlint:deprecation for details.
+
+C:\trafodion\jdbct4\samples\StatementSample> %JAVA_HOME%\bin\java -classpath ..\..\lib\jdbcT4.jar;..;. -Dt4jdbc.properties=..\t4jdbc.properties StatementSample
+
+Mar 16, 2016 9:36:54 PM common.sampleUtils getPropertiesConnection
+INFO: DriverManager.getConnection(url, props) passed
+
+Inserting TimeStamp
+
+Simple Select
+
+Printing ResultSetMetaData ...
+No. of Columns 12
+Column 1 Data Type: CHAR Name: C1
+Column 2 Data Type: SMALLINT Name: C2
+Column 3 Data Type: INTEGER Name: C3
+Column 4 Data Type: BIGINT Name: C4
+Column 5 Data Type: VARCHAR Name: C5
+Column 6 Data Type: NUMERIC Name: C6
+Column 7 Data Type: DECIMAL Name: C7
+Column 8 Data Type: DATE Name: C8
+Column 9 Data Type: TIME Name: C9
+Column 10 Data Type: TIMESTAMP Name: C10
+Column 11 Data Type: REAL Name: C11
+Column 12 Data Type: DOUBLE PRECISION Name: C12
+
+Fetching rows...
+
+Printing Row 1 using getString(), getObject()
+Column 1 - Row1                ,Row1
+Column 2 - 100,100
+Column 3 - 12345678,12345678
+Column 4 - 123456789012,123456789012
+Column 5 - Selva,Selva
+Column 6 - 100.12,100.12
+Column 7 - 100.12,100.12
+Column 8 - 2000-05-06,2000-05-06
+Column 9 - 10:11:12,10:11:12
+Column 10 - 2000-05-06 10:11:12.000000,2000-05-06 10:11:12.0
+Column 11 - 100.12,100.12
+Column 12 - 100.12,100.12
+
+Printing Row 2 using getString(), getObject()
+Column 1 - Row2                ,Row2
+Column 2 - -100,-100
+Column 3 - -12345678,-12345678
+Column 4 - -123456789012,-123456789012
+Column 5 - Selva,Selva
+Column 6 - -100.12,-100.12
+Column 7 - -100.12,-100.12
+Column 8 - 2000-05-16,2000-05-16
+Column 9 - 10:11:12,10:11:12
+Column 10 - 2000-05-06 10:11:12.000000,2000-05-06 10:11:12.0
+Column 11 - -100.12,-100.12
+Column 12 - -100.12,-100.12
+
+Printing Row 3 using getString(), getObject()
+Column 1 - TimeStamp           ,TimeStamp
+Column 2 - -100,-100
+Column 3 - -12345678,-12345678
+Column 4 - -123456789012,-123456789012
+Column 5 - Selva,Selva
+Column 6 - -100.12,-100.12
+Column 7 - -100.12,-100.12
+Column 8 - 2016-03-16,2016-03-16
+Column 9 - 21:37:03,21:37:03
+Column 10 - 2016-03-16 21:37:03.053,2016-03-16 21:37:03.053
+Column 11 - -100.12,-100.12
+Column 12 - -100.12,-100.12
+
+End of Data
+
+C:\trafodion\jdbct4\samples\StatementSample>
+```
+
+<<<
+*Linux Example*
+
+Build and run the StatementSample test program to verify the JDBC Type-4 driver installation.
+
+```
+$ cd $HOME/trafodion/jdbct4/samples/StatementSample
+
+$ $JAVA_HOME/bin/javac -classpath ../../lib/jdbcT4.jar *.java ../common/*.java
+
+Note: ../common/sampleUtils.java uses or overrides a deprecated API.
+Note: Recompile with -Xlint:deprecation for details.
+
+$ $JAVA_HOME/bin/java -classpath ../../lib/jdbcT4.jar:..:. -Dt4jdbc.properties=../t4jdbc.properties StatementSample
+
+Mar 16, 2016 9:36:54 PM common.sampleUtils getPropertiesConnection
+INFO: DriverManager.getConnection(url, props) passed
+
+Inserting TimeStamp
+
+Simple Select
+
+Printing ResultSetMetaData ...
+No. of Columns 12
+Column 1 Data Type: CHAR Name: C1
+Column 2 Data Type: SMALLINT Name: C2
+Column 3 Data Type: INTEGER Name: C3
+Column 4 Data Type: BIGINT Name: C4
+Column 5 Data Type: VARCHAR Name: C5
+Column 6 Data Type: NUMERIC Name: C6
+Column 7 Data Type: DECIMAL Name: C7
+Column 8 Data Type: DATE Name: C8
+Column 9 Data Type: TIME Name: C9
+Column 10 Data Type: TIMESTAMP Name: C10
+Column 11 Data Type: REAL Name: C11
+Column 12 Data Type: DOUBLE PRECISION Name: C12
+
+Fetching rows...
+
+Printing Row 1 using getString(), getObject()
+Column 1 - Row1                ,Row1
+Column 2 - 100,100
+Column 3 - 12345678,12345678
+Column 4 - 123456789012,123456789012
+Column 5 - Selva,Selva
+Column 6 - 100.12,100.12
+Column 7 - 100.12,100.12
+Column 8 - 2000-05-06,2000-05-06
+Column 9 - 10:11:12,10:11:12
+Column 10 - 2000-05-06 10:11:12.000000,2000-05-06 10:11:12.0
+Column 11 - 100.12,100.12
+Column 12 - 100.12,100.12
+
+Printing Row 2 using getString(), getObject()
+Column 1 - Row2                ,Row2
+Column 2 - -100,-100
+Column 3 - -12345678,-12345678
+Column 4 - -123456789012,-123456789012
+Column 5 - Selva,Selva
+Column 6 - -100.12,-100.12
+Column 7 - -100.12,-100.12
+Column 8 - 2000-05-16,2000-05-16
+Column 9 - 10:11:12,10:11:12
+Column 10 - 2000-05-06 10:11:12.000000,2000-05-06 10:11:12.0
+Column 11 - -100.12,-100.12
+Column 12 - -100.12,-100.12
+
+Printing Row 3 using getString(), getObject()
+Column 1 - TimeStamp           ,TimeStamp
+Column 2 - -100,-100
+Column 3 - -12345678,-12345678
+Column 4 - -123456789012,-123456789012
+Column 5 - Selva,Selva
+Column 6 - -100.12,-100.12
+Column 7 - -100.12,-100.12
+Column 8 - 2016-03-16,2016-03-16
+Column 9 - 21:37:03,21:37:03
+Column 10 - 2016-03-16 21:37:03.053,2016-03-16 21:37:03.053
+Column 11 - -100.12,-100.12
+Column 12 - -100.12,-100.12
+
+End of Data
+
+$
+```
+
+<<<
+== Uninstall JDBC Type-4 Driver
+Run one of these sets of commands to remove the {project-name} JDBC Type 4 Driver:
+
+* On Windows:
++
+```
+rmdir /s /q <jdbc-installation-directory>
+```
++
+*Example*
++
+```
+rmdir /s /q c:\trafodion\jdbct4
+```
+
+* On Linux:
++
+```
+rm -rf <jdbc-installation-directory>
+```
++
+*Example*
++
+```
+rm -rf $HOME/trafodion/jdbct4
+```
+
+NOTE: Remember to update or remove the environment variables if you created them in
+<<jdbct4-java-development, Java Development>> above.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/asciidoc/_chapters/odb.adoc
----------------------------------------------------------------------
diff --git a/docs/client_install/src/asciidoc/_chapters/odb.adoc b/docs/client_install/src/asciidoc/_chapters/odb.adoc
index 512a0ea..c433513 100644
--- a/docs/client_install/src/asciidoc/_chapters/odb.adoc
+++ b/docs/client_install/src/asciidoc/_chapters/odb.adoc
@@ -1,66 +1,245 @@
-////
-/**
- *@@@ START COPYRIGHT @@@
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- * @@@ END COPYRIGHT @@@
- */
-////
-
-[[install-odb]]
-= Install odb
-
-[[installation-requirements]]
-== Installation Requirements
-
-The odb tool runs on 64-bit Linux. On a Linux workstation, odb requires `pthread` libraries, which are usually installed by default. It also
-requires the unixODBC Driver Manager to be installed and configured on the client workstation. For more information, see the
-{docs-url}/odb/index.html[_{project-name} odb User Guide_].
-
-[[installation-instructions]]
-== Installation Instructions
-
-NOTE: Before following these installation instructions, please make sure to install and configure unixODBC on the client workstation. For more
-information, see the {docs-url}/odb/index.html[_{project-name} odb User Guide_].
-
-You download and extract the {project-name} client package using the instructions in <<introduction-download, Download Installation Package>> above.
-
-[[odb-install]]
-=== Install odb
-
-1.  Change the directory to the `clients` subdirectory.
-2.  Unpack the contents of the `odb64_linux.tar.gz` file to a location on your client workstation:
-+
-```
-mkdir $HOME/odb
-tar -xzf odb64_linux.tar.gz -C $HOME/odb
-```
-+
-The command extracts these files:
-+
-* `README`
-* `/bin/odb64luo` (the odb executable)
-
-3.  You are now ready to run the odb executable. For more information, see the {docs-url}/odb/index.html[_{project-name} odb User Guide_].
-
-[[odb-uninstall]]
-== Uninstall odb
-
-To uninstall odb, delete the `README` and `/bin/odb64luo` files from their installed location.
-
-```
-rm -rf odb-installation-directory
-```
+////
+/**
+ *@@@ START COPYRIGHT @@@
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * @@@ END COPYRIGHT @@@
+ */
+////
+
+[[odb-install]]
+= Install odb
+
+If you have not done so already, please ensure that you have
+<<download-software, unpackaged the {project-name} client software>> and
+<<odbc-linux-install, set up the {project-name} Linux ODBC Driver>>.
+
+The examples in this chapter assume that you have unpackaged the odb installation file
+to `$HOME/trafodion/odb`.
+
+== odb Requirements
+
+odb requires:
+
+* pthread libraries (Generally installed by default)
+* unixODBC. See installation instructions <<odb-install-unixodbc, below>>.
+
+[[odb-install-unixodbc]]
+== Install and Configure unixODBC
+
+This section explains how to install and configure `unixODBC`.
+Refer to the http://www.unixodbc.org/doc/[unixODBC documentation] for additional
+configuration information.
+
+1.  Obtain the source code tar ball from http://www.unixodbc.org/. Use version 2.3.*_x_* or later.
+2.  Unpack the tar ball:
++
+```
+$ tar -xzvf unixODBC-2.3.1.tar.gz
+```
+
+3.  Configure the unixODBC installation (root access is required for the default location):
++
+```
+$ cd unixODBC-2.3.1
+$ sudo ./configure --disable-gui --enable-threads --disable-drivers
+```
++
+By default, unixODBC is installed under `/usr/local`.
++
+If you don't have root privileges, or you want to install unixODBC somewhere else,
+then add `--prefix=<installation_path>` to the `configure` command above.
++
+*Example - Install unixODBC in Alternate Location*
++
+```
+$ ./configure --prefix=$HOME/trafodion/unixodbc --disable-gui --enable-threads --disable-drivers
+```
++
+<<<
+4.  Compile unixODBC sources:
++
+```
+$ make
+```
+
+5.  Install unixODBC:
++
+```
+$ make install
+```
+
+=== Configure unixODBC
+
+Configuring unixODBC involves two tasks: defining environment variables and defining data sources.
+
+Start with the environment variables (which you can add to your profile script):
+
+1. Set the `ODBCHOME` variable to the unixODBC installation directory (the one configured via `--prefix` above).
++
+*Example*
++
+```
+$ export ODBCHOME=$HOME/trafodion/unixodbc
+```
+
+2. Configure the system data sources directory (the one containing `odbc.ini` and `odbcinst.ini`).
+Normally, this is the `etc/` directory under `$ODBCHOME`:
++
+```
+$ export ODBCSYSINI=$ODBCHOME/etc
+```
+
+3. Configure the `ODBCINI` variable to the full path of the `odbc.ini` file:
++
+```
+$ export ODBCINI=$ODBCSYSINI/odbc.ini
+```
+
+4. Add the unixODBC lib directory to your `LD_LIBRARY_PATH` (Linux) or `LIBPATH` (IBM AIX) or `SHLIB_PATH` (HP/UX):
++
+```
+$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ODBCHOME/lib
+```
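+
+Putting the four settings together, a profile snippet might look like this (assuming unixODBC was installed
+under `$HOME/trafodion/unixodbc` as in the earlier example):
+
+```
+# unixODBC environment -- append to .profile or .bash_profile
+export ODBCHOME=$HOME/trafodion/unixodbc
+export ODBCSYSINI=$ODBCHOME/etc
+export ODBCINI=$ODBCSYSINI/odbc.ini
+export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ODBCHOME/lib
+```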
+
+<<<
+=== Configure Data Sources
+
+==== `odbc.ini`
+
+```
+[ODBC]
+AppUnicodeType=utf16
+
+[<DATA_SOURCE_NAME>]
+Description = DSN Description
+Driver = <odbcinst.ini corresponding section>
+...
+Other (Driver specific) parameters
+...
+```
+
+==== `odbcinst.ini`
+
+```
+[<Driver name in odbc.ini>]
+Description = Driver description
+Driver = <ODBC driver>
+FileUsage = 1
+UsageCount = 1
+```
+
+<<<
+*{project-name} Example*
+
+```
+$ cat odbc.ini
+
+[ODBC]
+AppUnicodeType=utf16
+
+[traf]
+Description = traf DSN 
+Driver = Trafodion 
+Catalog = TRAFODION 
+Schema = QA 
+DataLang = 0 
+FetchBufferSize = SYSTEM_DEFAULT 
+Server = TCP:<server-name>:<port-no> 
+SQL_ATTR_CONNECTION_TIMEOUT = SYSTEM_DEFAULT 
+SQL_LOGIN_TIMEOUT = SYSTEM_DEFAULT 
+SQL_QUERY_TIMEOUT = NO_TIMEOUT 
+ServiceName = TRAFODION_DEFAULT_SERVICE
+
+$ cat odbcinst.ini
+
+[Trafodion]
+Description = {project-name} ODBC Stand Alone Driver
+Driver = /<dir-name>/conn/clients/odbc/libtrafodbc_drvr64.so
+FileUsage = 1 
+UsageCount = 1 
+
+[ODBC]
+Threading = 1 
+Trace = Off 
+Tracefile = uodbc.trc
+$
+```
+
+<<<
+The `Threading` setting is defined as follows
+(extracted from unixODBC sources `DriverManager/handles.c`):
+
+[source,cplusplus]
+----
+/*
+* ...
+* If compiled with thread support the DM allows four different
+* thread strategies
+*
+
+* Level 0 - Only the DM internal structures are protected
+* the driver is assumed to take care of it's self
+*
+
+* Level 1 - The driver is protected down to the statement level
+* each statement will be protected, and the same for the connect
+* level for connect functions, note that descriptors are considered
+* equal to statements when it comes to thread protection.
+*
+
+* Level 2 - The driver is protected at the connection level. only
+* one thread can be in a particular driver at one time
+*
+
+* Level 3 - The driver is protected at the env level, only one thing
+* at a time.
+*
+
+* By default the driver open connections with a lock level of 0,
+* drivers should be expected to be thread safe now.
+* this can be changed by adding the line
+*
+
+* Threading = N
+*
+* to the driver entry in odbcinst.ini, where N is the locking level
+*
+*/
+----
+
+<<<
+[[odb-verify-install]]
+== Verify odb Installation
+
+`$HOME/trafodion/odb` should contain the following files:
+
+* `README`
+* `/bin/odb64luo` (the odb executable)
+
+See the {docs-url}/odb/index.html[{project-name} odb User Guide] for information on how to use odb.
+
+[[odb-uninstall]]
+== Uninstall odb
+
+To uninstall odb, delete the `README` and `/bin/odb64luo` files from their installed location.
+
+*Example*
+
+```
+rm -rf $HOME/trafodion/odb
+```

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/asciidoc/_chapters/odbc_linux.adoc
----------------------------------------------------------------------
diff --git a/docs/client_install/src/asciidoc/_chapters/odbc_linux.adoc b/docs/client_install/src/asciidoc/_chapters/odbc_linux.adoc
index 6838df8..0321ace 100644
--- a/docs/client_install/src/asciidoc/_chapters/odbc_linux.adoc
+++ b/docs/client_install/src/asciidoc/_chapters/odbc_linux.adoc
@@ -1,302 +1,399 @@
-////
-/**
- *@@@ START COPYRIGHT @@@
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- * @@@ END COPYRIGHT @@@
- */
-////
-
-= Install Linux ODBC Driver
-
-== Installation Requirements
-
-The driver for Linux requires `libgcc 3.4.3` and `libstd++ 6.0`.
-
-If you are building ODBC applications, please use the preferred build platform, RedHat 6.x or CentOS 6.x.
-
-== Installation Instructions
-
-You download and extract the {project-name} client package using the instructions in <<introduction-download, Download Installation Package>> above.
-
-The package file contains the {project-name} ODBC distribution file, `TRAF_ODBC_Linux_Driver_64.tar.gz`, which is extracted to the `clients` subdirectory.
-It contains the following files:
-
-```
-connect_test.cpp 
-install.sh 
-libicudataNv44.so.44 
-libicuucNv44.so.44 
-libtrafodbc_l64.so 
-libtrafodbc_l64_drvr.so 
-LICENSE 
-license.txt 
-MD5SUM 
-TRAFDSN 
-```
-
-By default, a new version of the {project-name} ODBC driver is installed in the following directories unless you specify a different directory
-during installation:
-
-* `/usr/lib64`
-* `/etc/odbc`
-
-NOTE: The following header files are not packaged with the {project-name} ODBC driver: +
- +
-- `sql.h` +
-- `sqlext.h` +
-- `sqltypes.h` +
-- `sqlucode.h` +
- +
-To install those header files, see <<win_odbc_client_env, Setting Up the Client Environment>>.
-
-=== Install/Reinstall Linux ODBC Driver
-
-NOTE: You must have root access to install the {project-name} ODBC Driver for Linux at the default system location.
-
-1.  Change the directory to the clients subdirectory, and decompress the `.tar.gz` distribution file:
-+
-```
-gunzip TRAF_ODBC_Linux_Driver_64.tar.gz
-```
-2.  Extract the contents of the `.tar` file. A directory called `PkgTmp` is created.
-+
-```
-tar -xvf TRAF_ODBC_Linux_Driver_64.tar
-```
-
-3.  Install the product by entering these commands:
-+
-```
-cd PkgTmp 
-sudo ./install.sh
-```
-+
-Except for the sample file, the `install.sh` script saves a copy (`.SAV`) of your previous installation files if they exist.
-4.  Accept the terms of the license agreement by entering *yes*.
-5.  Enter a directory for the library files, or press Enter to use the default directory (`/usr/lib64`).
-6.  Enter a directory for the data-source template file, or press *Enter* to use the default directory (`/etc/odbc`).
-7.  Enter a directory for the sample program, or press *Enter* to use the default directory (`/etc/odbc`).
-
-<<<
-=== Set Up Client Environment
-
-If you selected default options during installation, ensure that:
-
-* The libraries are located in the `/usr/lib64` directory.
-* A `TRAFDSN` file is in the `/etc/odbc` directory.
-
-If you select non-default locations during installation, ensure that the files are installed in the directories that you specified during
-installation.
-
-The driver expects the `TRAFDSN` file to be present in either the default location (`/etc/odbc`) or the current working directory (`CWD`) of the
-application.
-
-If you are building ODBC applications, you need to install these header files in your build environment:
-
-* `sql.h`
-* `sqlext.h`
-* `sqltypes.h`
-* `sqlucode.h`
-
-To install those header files from the latest packages, run this `yum` command:
-
-```
-sudo yum -y install libiodbc libiodbc-devel
-```
-
-The `yum` command automatically installs the header files in the `/usr/include` and `/usr/include/libiodbc` directories.
-
-<<<
-=== Enable Compression
-
-When compression is enabled in the ODBC driver, the ODBC driver can send and receive large volumes of data quickly and efficiently to and from
-the {project-name} Database Connectivity Services (DCS) server over a TCP/IP network. By default, compression is disabled.
-
-To enable compression in the ODBC driver or to change the compression setting, follow these steps:
-
-* If you are using the {project-name} ODBC driver manager, add
-+
-```
-Compression = compression-level
-```
-+
-to the `DSN` section of `TRAFDSN` file.
-
-* If you are using a third-party driver manager, such as unixODBC, add
-+
-```
-Compression = compression-level
-```
-+
-to the `DSN` section of the `odbc.ini` file.
-
-The `_compression-level_` is one of these values:
-
-* `SYSTEM_DEFAULT`, which is the same as no compression
-* `no compression`
-* `best speed`
-* `best compression`
-* `balance`
-* An integer from `0` to `9`, with `0` being `no compression` and `9` being the `maximum available compression`
-
-<<<
-=== Use Third-Party Driver Manager
-
-NOTE: For better performance, we recommend that you use at least version `2.3._x_` of unixODBC.
-
-* If you are using an external driver manager, then you must point to `libtrafodbc_drvr64.so` and not to `libtrafodbc64.so`.
-* The driver, `libtrafodbc_l64_drvr.so`, has been verified with iODBC and unixODBC driver managers.
-* These driver managers, as well as documentation, can be found at these Web sites:
-* http://www.iodbc.org/
-* http://www.unixodbc.org/
-* For information on the necessary data-source configuration options, you will need to add to the respective configuration files (for example,
-to `odbc.ini`).
-
-<<<
-=== Run Sample Program (`connect_test`)
-
-NOTE: The examples after each step assume that you have default installation directories.
-
-If you have a previous version of the {project-name} ODBC driver installed, you need to re-link your existing application to ensure that you pick up
-the correct version of the driver. If you are unsure of the version, check the version of your application with this command:
-
-```
-ldd object-file
-```
-
-1.  Move to the directory where you installed the sample program:
-+
-```
-cd /etc/odbc
-```
-
-2.  Set the environment variable `LD_LIBRARY_PATH`:
-+
-```
-export LD_LIBRARY_PATH=<path-to-odbc-library-files or /usr/lib64>
-```
-+
-*Example*
-+
-```
-export LD_LIBRARY_PATH=/usr/lib64
-```
-
-3.  In the `/etc/odbc/TRAFDSN` file, add the correct IP address to the `Server` parameter for the `Default_DataSource`.
-+
-*Example*
-+
-```
-[Default_DataSource]
-Description = Default Data Source
-Catalog = TRAFODION
-Schema = SEABASE
-DataLang = 0
-FetchBufferSize = SYSTEM_DEFAULT
-Server = TCP:1.2.3.4:23400 <- _Set IP Address_
-SQL_ATTR_CONNECTION_TIMEOUT = SYSTEM_DEFAULT
-SQL_LOGIN_TIMEOUT = SYSTEM_DEFAULT
-SQL_QUERY_TIMEOUT = NO_TIMEOUT
-```
-+
-<<<
-
-4.  Compile the sample program.
-+
-```
-sudo g++ -g connect_test.cpp -L/usr/lib64 -I/usr/include/odbc -ltrafodbc64 -o connect_test
-```
-
-5.  Run the sample program:
-+
-```
-./connect_test -d Default_DataSource -u username -p password
-```
-
-If the sample program runs successfully, you should see output similar to the following:
-
-```
-Using Connect String: DSN=Default_DataSource;UID=username;PWD=****;
-Connect Test Passed...
-```
-
-<<<
-[[linux_odbc_run_basicsql]]
-=== Run Sample Program (`basicsql`)
-
-NOTE: The Basic SQL sample program is not currently bundled with the ODBC Linux driver. To obtain the source code for this program, see
-<<odbc_sample_program, `basicsql` (Sample ODBC Program)>>.
-
-If you have a previous version of the {project-name} ODBC driver installed, you need to re-link your existing application to ensure that you pick up
-the correct version of the driver. If you are unsure of the version, check the version of your application with this command:
-
-```
-ldd object-file
-```
-
-1.  Move to the directory where you put the `basicsql.cpp` file.
-
-2.  Set the environment variable `LD_LIBRARY_PATH`:
-+
-```
-export LD_LIBRARY_PATH=<path-to-odbc-driver-dlls>
-```
-
-3.  In the `/etc/odbc/TRAFDSN` file, add the correct IP address to the `Server` parameter for the `Default_DataSource`. For example:
-+
-*Example*
-+
-```
-[Default_DataSource]
-Description = Default Data Source
-Catalog = TRAFODION
-Schema = SEABASE
-DataLang = 0
-FetchBufferSize = SYSTEM_DEFAULT
-Server = TCP:1.2.3.4:23400 
-SQL_ATTR_CONNECTION_TIMEOUT = SYSTEM_DEFAULT
-SQL_LOGIN_TIMEOUT = SYSTEM_DEFAULT
-SQL_QUERY_TIMEOUT = NO_TIMEOUT
-```
-+
-<<<
-
-4.  Compile the sample program.
-+
-```
-g++ -g basicsql.cpp -L. -I. -ltrafodbc64 -o basicsql
-```
-
-5.  Run the sample program:
-+
-```
-basicsql Default_DataSource <username> <password>
-```
-
-If the sample program runs successfully, you should see output similar to the following:
-
-```
-Using Connect String: DSN=Default_DataSource;UID=user1;PWD=pwd1;
-Successfully connected using SQLDriverConnect.
-Drop sample table if it exists... Creating sample table TASKS...
-Table TASKS created using SQLExecDirect.
-Inserting data using SQLBindParameter, SQLPrepare, SQLExecute Data
-Data inserted.
-Fetching data using SQLExecDirect, SQLFetch, SQLGetData
-Data selected: 1000 CREATE REPORTS 2014-3-22
-Basic SQL ODBC Test Passed!
-```
+////
+/**
+ *@@@ START COPYRIGHT @@@
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * @@@ END COPYRIGHT @@@
+ */
+////
+
+[[odbc-linux-install]]
+= Install Linux ODBC Driver
+
+== Installation Requirements
+
+If you have not done so already, please ensure that you have <<download-software, unpackaged the {project-name}
+client software>>.
+
+In addition, the ODBC driver for Linux requires `libgcc 3.4.3` and `libstdc++ 6.0`.
+
+If you are building ODBC applications, please use the preferred build platform, RedHat 6.x or CentOS 6.x.
+
+The examples in this chapter assume that you have unpackaged the {project-name} ODBC driver installation files
+to `$HOME/trafodion/odbc`.
+
+<<<
+== Validate Install Directory
+
+`$HOME/trafodion/odbc/PkgTmp` should contain:
+
+```
+connect_test.cpp 
+install.sh 
+libicudataNv44.so.44 
+libicuucNv44.so.44 
+libtrafodbc_l64.so 
+libtrafodbc_l64_drvr.so 
+LICENSE 
+license.txt 
+MD5SUM 
+TRAFDSN 
+```
+
+By default, a new version of the {project-name} ODBC driver is installed in the following directories
+unless you specify a different directory during installation:
+
+* `/usr/lib64`
+* `/etc/odbc`
+
+NOTE: The following header files are not packaged with the {project-name} ODBC driver: +
+ +
+- `sql.h` +
+- `sqlext.h` +
+- `sqltypes.h` +
+- `sqlucode.h` +
+ +
+To install those header files, see <<linux_odbc_client_env, Set Up Client Environment>> below.
+
+<<<
+== Install/Reinstall Linux ODBC Driver
+
+NOTE: You must have root (`sudo`) access to install the {project-name} ODBC Driver for Linux at the default system location.
+If you don't have such access, then install the ODBC driver to an alternate location; for example, `$HOME/trafodion/odbc`.
+
+. Install the product by entering these commands:
++
+*With `sudo` Access*
++
+```
+cd $HOME/trafodion/odbc/PkgTmp 
+sudo ./install.sh
+```
++
+*Without `sudo` Access*
++
+```
+cd $HOME/trafodion/odbc/PkgTmp 
+./install.sh
+```
++
+Except for the sample file, the `install.sh` script saves a copy (`.SAV`) of your previous installation files if they exist.
+.  Accept the terms of the license agreement by entering *yes*.
++
+NOTE: Don't use environment variables when specifying an alternate location. Instead, use
+the full path. For example, specify `/opt/user/trafodion/odbc` instead of `$HOME/trafodion/odbc`.
+
+. Enter a directory for the library files, or press Enter to use the default directory (`/usr/lib64`).
+. Enter a directory for the data-source template file, or press *Enter* to use the default directory (`/etc/odbc`).
+. Enter a directory for the sample program, or press *Enter* to use the default directory (`/etc/odbc`).
+. If you installed the library files, data-source template file, and the sample program in an
+  alternate location, then verify the directory contents as shown in <<linux_odbc_client_env, Set Up Client Environment>> below.
+
+<<<
+
+[[linux_odbc_client_env]]
+=== Set Up Client Environment
+
+If you selected default options during installation, ensure that:
+
+* The libraries are located in the `/usr/lib64` directory.
+* A `TRAFDSN` file is in the `/etc/odbc` directory.
+
+If you select non-default locations during installation, ensure that the files are installed
+in the directories that you specified during installation:
+
+```
+$ cd $HOME/trafodion/odbc
+$ ls
+connect_test.cpp  libicuuc.so       libtrafodbc_drvr64.so      libtrafodbc_l64.so    PkgTmp
+libicudata.so     libicuuc.so.44    libtrafodbc_l64_drvr.so    libtrafodbc_l64.so.1  TRAFDSN
+libicudata.so.44  libtrafodbc64.so  libtrafodbc_l64_drvr.so.1  MD5SUM
+
+$
+```
+
+The driver expects the `TRAFDSN` file to be present in either the default location (`/etc/odbc`)
+or the current working directory (`CWD`) of the application. As a best practice, copy
+the `TRAFDSN` file to the application directory.
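+
+*Example*
+
+```
+cp /etc/odbc/TRAFDSN <application-directory>
+```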
+
+Edit the `TRAFDSN` file. Make changes to the `Default_DataSource` section. At a minimum,
+change the value for `Server` to the address of the host you are connecting to.
+
+*Before*
+
+```
+[Default_DataSource]
+Description                 = Default Data Source
+Catalog                     = TRAFODION
+Schema                      = SEABASE
+DataLang                    = 0
+FetchBufferSize             = SYSTEM_DEFAULT
+Server                      = TCP:1.2.3.4:23400
+SQL_ATTR_CONNECTION_TIMEOUT = SYSTEM_DEFAULT
+SQL_LOGIN_TIMEOUT           = SYSTEM_DEFAULT
+SQL_QUERY_TIMEOUT           = NO_TIMEOUT
+```
+
+<<<
+*After*
+
+```
+[Default_DataSource]
+Description                 = Default Data Source
+Catalog                     = TRAFODION
+Schema                      = SEABASE
+DataLang                    = 0
+FetchBufferSize             = SYSTEM_DEFAULT
+Server                      = TCP:node01.host.com:23400
+SQL_ATTR_CONNECTION_TIMEOUT = SYSTEM_DEFAULT
+SQL_LOGIN_TIMEOUT           = SYSTEM_DEFAULT
+SQL_QUERY_TIMEOUT           = NO_TIMEOUT
+```
+
+If you are building ODBC applications, you need to install these header files in your build environment:
+
+* `sql.h`
+* `sqlext.h`
+* `sqltypes.h`
+* `sqlucode.h`
+
+To install those header files from the latest packages, run this `yum` command:
+
+```
+sudo yum -y install libiodbc libiodbc-devel
+```
+
+The `yum` command automatically installs the header files in the `/usr/include` and `/usr/include/libiodbc` directories.
+
+<<<
+=== Enable Compression
+
+When compression is enabled in the ODBC driver, the ODBC driver can send and receive large volumes of data quickly and efficiently to and from
+the {project-name} Database Connectivity Services (DCS) server over a TCP/IP network. By default, compression is disabled.
+
+To enable compression in the ODBC driver or to change the compression setting, follow these steps:
+
+* If you are using the {project-name} ODBC driver manager, add
++
+```
+Compression = compression-level
+```
++
+to the `DSN` section of `TRAFDSN` file.
+
+* If you are using a third-party driver manager, such as unixODBC, add
++
+```
+Compression = compression-level
+```
++
+to the `DSN` section of the `odbc.ini` file.
+
+The `_compression-level_` is one of these values:
+
+* `SYSTEM_DEFAULT`, which is the same as no compression
+* `no compression`
+* `best speed`
+* `best compression`
+* `balance`
+* An integer from `0` to `9`, with `0` being `no compression` and `9` being the `maximum available compression`
+
+*Example*
+
+```
+[Default_DataSource]
+Description                 = Default Data Source
+Catalog                     = TRAFODION
+Schema                      = SEABASE
+DataLang                    = 0
+FetchBufferSize             = SYSTEM_DEFAULT
+Server                      = TCP:node01.host.com:23400
+SQL_ATTR_CONNECTION_TIMEOUT = SYSTEM_DEFAULT
+SQL_LOGIN_TIMEOUT           = SYSTEM_DEFAULT
+SQL_QUERY_TIMEOUT           = NO_TIMEOUT
+Compression                 = Best Compression
+```
+
+<<<
+== Use Third-Party Driver Manager
+
+NOTE: For better performance, we recommend that you use at least version `2.3._x_` of unixODBC.
+
+* If you are using an external driver manager, then you must point to `libtrafodbc_drvr64.so` and not to `libtrafodbc64.so`.
+* The driver, `libtrafodbc_l64_drvr.so`, has been verified with iODBC and unixODBC driver managers.
+* These driver managers, as well as documentation, can be found at these Web sites:
+** http://www.iodbc.org/
+** http://www.unixodbc.org/
+* For information on the necessary data-source configuration options, you will need to add to the respective configuration files (for example,
+to `odbc.ini`).
+
+<<<
+== Run Sample Program (`connect_test`)
+
+NOTE: The examples after each step assume that you have default installation directories.
+
+If you have a previous version of the {project-name} ODBC driver installed,
+you need to re-link your existing application to ensure that you pick up
+the correct version of the driver. If you are unsure of the version,
+check the version of your application with this command:
+
+```
+ldd object-file
+```
+
+.  Move to the directory where you installed the sample program:
++
+```
+cd /etc/odbc
+```
+
+.  Set the environment variable `LD_LIBRARY_PATH`:
++
+```
+export LD_LIBRARY_PATH=<path-to-odbc-library-files or /usr/lib64>
+```
++
+*Example*
++
+```
+export LD_LIBRARY_PATH=/usr/lib64
+```
+
+.  In the `/etc/odbc/TRAFDSN` file, add the correct IP address to the `Server` parameter for the `Default_DataSource`.
++
+*Example (connecting to `node01.host.com:23400`)*
++
+```
+[Default_DataSource]
+Description                 = Default Data Source
+Catalog                     = TRAFODION
+Schema                      = SEABASE
+DataLang                    = 0
+FetchBufferSize             = SYSTEM_DEFAULT
+Server                      = TCP:node01.host.com:23400 
+SQL_ATTR_CONNECTION_TIMEOUT = SYSTEM_DEFAULT
+SQL_LOGIN_TIMEOUT           = SYSTEM_DEFAULT
+SQL_QUERY_TIMEOUT           = NO_TIMEOUT
+Compression                 = Best Compression
+```
++
+<<<
+
+.  Compile the sample program.
++
+*Default Installation*
++
+```
+g++ -g connect_test.cpp -L/usr/lib64 -I/usr/include/odbc -ltrafodbc64 -o connect_test
+```
++
+*Alternative Installation*
++
+```
+g++ -g connect_test.cpp -L$HOME/trafodion/odbc -I/usr/include/odbc -ltrafodbc64 -o connect_test
+```
+
+.  Run the sample program:
++
+```
+./connect_test -d Default_DataSource -u username -p password
+```
+
+If the sample program runs successfully, you should see output similar to the following:
+
+```
+Using Connect String: DSN=Default_DataSource;UID=username;PWD=****;
+Connect Test Passed...
+```
+
+<<<
+[[linux_odbc_run_basicsql]]
+== Run Sample Program (`basicsql`)
+
+NOTE: The Basic SQL sample program is not currently bundled with the ODBC Linux driver. To obtain the source code for this program, see
+<<odbc_sample_program, `basicsql` (Sample ODBC Program)>>.
+
+If you have a previous version of the {project-name} ODBC driver installed,
+you need to re-link your existing application to ensure that you pick up
+the correct version of the driver.
+
+If you are unsure of the version, check the version of your application with this command:
+
+```
+ldd object-file
+```
+
+.  Move to the directory where you put the `basicsql.cpp` file.
+
+.  Set the environment variable `LD_LIBRARY_PATH`:
++
+```
+export LD_LIBRARY_PATH=<path-to-odbc-driver-dlls>
+```
+
+.  In the `/etc/odbc/TRAFDSN` file, add the correct IP address to the `Server` parameter for the `Default_DataSource`. For example:
++
+*Example (connecting to `node01.host.com:23400`)*
++
+```
+[Default_DataSource]
+Description                 = Default Data Source
+Catalog                     = TRAFODION
+Schema                      = SEABASE
+DataLang                    = 0
+FetchBufferSize             = SYSTEM_DEFAULT
+Server                      = TCP:node01.host.com:23400 
+SQL_ATTR_CONNECTION_TIMEOUT = SYSTEM_DEFAULT
+SQL_LOGIN_TIMEOUT           = SYSTEM_DEFAULT
+SQL_QUERY_TIMEOUT           = NO_TIMEOUT
+Compression                 = Best Compression
+```
++
+<<<
+
+.  Compile the sample program.
++
+*Default Installation*
++
+```
+g++ -g basicsql.cpp -L/usr/lib64 -I/usr/include/odbc -ltrafodbc64 -o basicsql
+```
++
+*Alternative Installation*
++
+```
+g++ -g basicsql.cpp -L$HOME/trafodion/odbc -I/usr/include/odbc -ltrafodbc64 -o basicsql
+```
+
+.  Run the sample program:
++
+```
+./basicsql Default_DataSource <username> <password>
+```
+
+If the sample program runs successfully, you should see output similar to the following:
+
+```
+Using Connect String: DSN=Default_DataSource;UID=user1;PWD=pwd1;
+Successfully connected using SQLDriverConnect.
+Drop sample table if it exists... Creating sample table TASKS...
+Table TASKS created using SQLExecDirect.
+Inserting data using SQLBindParameter, SQLPrepare, SQLExecute Data
+Data inserted.
+Fetching data using SQLExecDirect, SQLFetch, SQLGetData
+Data selected: 1000 CREATE REPORTS 2014-3-22
+Basic SQL ODBC Test Passed!
+```

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/client_install/src/asciidoc/_chapters/odbc_windows.adoc
----------------------------------------------------------------------
diff --git a/docs/client_install/src/asciidoc/_chapters/odbc_windows.adoc b/docs/client_install/src/asciidoc/_chapters/odbc_windows.adoc
index 48435ff..67e2d6d 100644
--- a/docs/client_install/src/asciidoc/_chapters/odbc_windows.adoc
+++ b/docs/client_install/src/asciidoc/_chapters/odbc_windows.adoc
@@ -1,244 +1,246 @@
-////
-/**
- *@@@ START COPYRIGHT @@@
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- * @@@ END COPYRIGHT @@@
- */
-////
-
-[[install-windows-odbc-driver]]
-= Install Windows ODBC Driver
-
-WARNING: License issues prevent us from including the ODBC Driver for Windows in this release. Contact the
-{project-name} user e-mail list ({project-support}) for help obtaining the driver.
-
-== Installation Requirements
-
-[cols="40%s,60%",options="header"]
-|===
-| Item             | Requirement
-| Computer         | Windows compatible PC workstation
-| Memory           | Recommended minimum 32 MB
-| Disk Space       | Minimum 30 MB additional free space
-| Operating System | x64 Edition of Microsoft Windows 7, Windows 8, Windows 10, or Windows Server 2008
-| Network Software | TCP/IP
-|===
-
-== Installation Instructions
-[[win_odbc_install]]
-
-=== Install Windows ODBC Driver
-
-NOTE: To install the driver on your PC, you must be logged on with a user ID that has administrator privileges.
-
-You download and extract the {project-name} client package using the instructions in <<introduction-download, Download Installation Package>> above.
-
-The ODBC client installation file, `TFODBC64-*.exe`, which installs or links to multiple client components:
-
-[cols="40%s,60%",options="header"]
-|===
-| This client component&#8230; | Does this&#8230;
-| Microsoft ODBC Driver Manager | Manages access to ODBC drivers for applications. The driver manager loads and unloads drivers and passes calls for ODBC functions to the
-correct driver.
-| Trafodion ODBC driver | Implements ODBC function calls to enable an ODBC client application to access the {project-name} database.
-| Microsoft ODBC Administrator | Adds, configures, and removes ODBC data sources on client workstations.
-|===
-
-By default, a new version of the ODBC driver is installed in this directory and its folders unless you specify a different directory
-during installation:
-
-[cols="40%l,60%",options="header"]
-|===
-| Default Installation Directory    | Client Operating System
-| C:\Program Files\Trafodion\TRAF   | ODBC _version_ Windows 64-bit
-|===
-
-=== Start the InstallShield wizard
-The InstallShield wizard walks you through the steps to install the client components ({project-name} ODBC 1.0) on your workstation. You can
-perform the installation in _interactive mode_, in which you provide input or accept defaults when prompted as ODBC is installed. 
-Please refer to <<win_odbc_interactive_mode,Interactive Mode Installation>> below.
-
-[[win_odbc_interactive_mode]]
-==== Interactive Mode Installation
-
-1.  Double-click the `TFODBC64-*.exe` distribution file to start the InstallShield wizard.
-2.  On the *Welcome* page, click *Next*.
-3.  Read and select the *I accept the agreement* radio button. Click *Next*. 
-4.  On the *Destination Folder* page, click *Install* to select the default location: `C:\Program Files\Trafodion\TRAF ODBC _version_\` 
-+
-This location is the installation directory for ODBC header and help files. All other ODBC files are installed in `%SYSTEMROOT%\system32`.
-
-5. Read and accept the Microsoft C++ license agreement by checking the *I agree to the license terms and conditions*. Click *Install*.
-6. Click *Close*.
-7. Click *Finish* to exit the installation wizard.
-
-=== Add a client data source
-1.  Start the Microsoft ODBC Administrator:
-* On Windows 7: *Start>All Programs>{project-name} ODBC _version_>MS ODBC Administrator*
-* On Windows 8: Right-click the *{project-name} ODBC _version_* icon on the desktop and select MS ODBC Administrator.
-* On Windows 10: Right-click the Windows icon in the menu bar. Select *Settings*. Search for *Set up ODBC data sources (64-bit)*. Click on the found item. 
-
-2.  In the *ODBC Data Source Administrator* dialog box, click *Add*.
-3.  Select *TRAF ODBC _version_*, and then click *Finish* to start the *Create a New {project-name} ODBC Data Source* wizard.
-4.  Enter the data source name (for example, `Default_DataSource_Schema1`) and an optional description, and click *Next*.
-5.  Enter the `IP address` or `host name` for the database platform. Enter the default port number as *23400*^1^. Leave the defaults as is, and click *Next*.
-6.  Enter the schema name. The default schema name is `SEABASE`. Click *Next*.
-7.  Enter the translate DLL name and its option, if you have one. If not, leave it blank. Leave the localization defaults as is.
-+
-The Replacement Character replaces any character that is incompatible for translation when retrieving data. It is one character (one or two
-bytes long). The Replacement Character is assumed to be in the character set specified in the Client/Server Character Set Interaction. If it is not specified, `?` is used as the default.
-+
-Click *Finish*.
-
-8.  The wizard gives you an opportunity to test the connection. Click *Test Connection* and click *OK*.
-9.  The server ID and schema are filled in for you. Enter a valid user name and password, and click *OK*.
-+
-The wizard attempts to connect to the data source and displays a message stating whether it was successful or not.
-10.  Click *OK* to save the data source, or click *Cancel* _twice_ to quit the *Create Data Source* wizard.
-
-^1^ Your specific installation may use a different port number. Check with your {project-name} administrator.
-
-<<<
-[[win_odbc_client_env]]
-=== Set Up Client Environment
-All client data sources connect to the pre-configured server data source on the database platform, which is `Default_DataSource`. 
-
-You can configure one data source only, `Default_DataSource`, on the database platform, but you can create other data source 
-definitions on the workstation. 
-
-For example, if you have more than one schema on the database platform and you want to connect 
-to each of those schemas on the database platform, you can create a client data source for each of those schemas. 
-
-Instead of changing the schema definition in the data source definition on the workstation, you can create multiple data source 
-definitions with different schemas on the workstation. The client data source will use the specified schema but will connect to 
-`Default_DataSource` on the database platform.
-
-To create a data source on the client workstation, follow these steps:
-
-1.  Launch the *MS ODBC Administrator*. 
-* On Windows 7: *Start>All Programs>{project-name} ODBC _version_>MS ODBC Administrator*
-* On Windows 8: Right-click the *{project-name} ODBC _version_* icon on the desktop and select MS ODBC Administrator.
-* On Windows 10: Right-click the Windows icon in the menu bar. Select *Settings*. Search for *Set up ODBC data sources (64-bit)*. Click on the found item. 
-
-2.  In the *ODBC Data Source Administrator* dialog box, select the *User DSN* tab, and click *Add*.
-3.  Select the *TRAF ODBC _version_* driver, and then click *Finish*.
-+
-A new dialog box appears, prompting you to create a new data source.
-4.  Enter the name of the data source, `Default_DataSource`, and click *Next* to continue.
-5.  Enter the IP address and port number of the {project-name} system to which will be connecting. By default, the port number is *23400*^1^. 
-Click *Next* to continue.
-6.  Select the default schema. If you do not select a schema, the default is `SEABASE`. Click *Next* to continue.
-+
-<<<
-7.  If desired, configure the *translate dll*, which translates data from one character set to another, and configure the localization. By
-default, the client error message language is English, and the client's local character set is used. Click *Finish* to continue.
-+
-The *Test {project-name} ODBC Connection* dialog box appears, allowing you to test the connection using the data source that you created.
-
-8.  Click *Test Connection*.
-9.  When prompted, enter your user name and password, and, optionally, schema. Click *OK*.
-+
-If the connection is successful, you will see `Connected Successfully` in the *Test {project-name} ODBC Connection* dialog box.
-10.  Click *OK* to save the data source, or click *Cancel* _twice_ to quit the *Create Data Source* wizard.
-
-^1^ Your specific installation may use a different port number. Check with your {project-name} administrator.
-
-=== Enable Compression
-When compression is enabled in the ODBC driver, the ODBC driver can send and receive large volumes of data quickly and efficiently to and from
-the {project-name} Database Connectivity Services (DCS) server over a TCP/IP network. By default, compression is disabled.
-
-To enable compression in the ODBC driver or to change the compression setting, follow these steps:
-
-1.  Launch the MS ODBC Administrator. 
-* On Windows 7: *Start>All Programs>{project-name} ODBC _version_>MS ODBC Administrator*
-* On Windows 8: Right-click the *{project-name} ODBC _version_* icon on the desktop and select MS ODBC Administrator.
-* On Windows 10: Right-click the Windows icon in the menu bar. Select *Settings*. Search for *Set up ODBC data sources (64-bit)*. Click on the found item. 
-
-2.  In the *ODBC Data Source Administrator* dialog box, select the *User DSN* tab, select the name of your data source under 
-*User Data Sources*, and click *Configure*. If you did not create a data source, please refer to 
-<<win_odbc_client_env, Setting Up the Client Environment>>.
-+
-A new dialog box appears, showing the configuration of your data source.
-+
-<<<
-3.  Select the *Network* tab, and then select one of these values for *Compression*:
-* `SYSTEM_DEFAULT`, which is the same as no compression
-* `no compression`
-* `best speed`
-* `best compression`
-* `balance`
-* An integer from 0 to 9, with 0 being no compression and 9 being the
-maximum available compression
-4.  Click *OK* to accept the change.
-5.  Click *OK* to exit the *ODBC Data Source Administrator* dialog box.
-
-<<<
-[[win_odbc_run_basicsql]]
-=== Run Sample Program (`basicsql`)
-NOTE: The Basic SQL sample program is not currently bundled with the ODBC Windows driver. To obtain the source code and the build and run
-files for this program, please refer to  <<odbc_sample_program, ODBC Sample Program>>.
-
-To build and run the executable file, follow these steps:
-
-1.  Open a Visual Studio x64 Win64 Command Prompt. Make sure to select the x64 version of the command prompt. For example, on Windows 7, select
-*Start>All Programs>Microsoft Visual Studio 2010>Visual Studio Tools>Visual Studio x64 Win64 Command Prompt*.
-2.  At the command prompt, move to the directory where you put the `basicsql.cpp` and build files.
-3.  Run build at the command prompt. You will see `basicsql.exe` created in the same directory as the source file.
-4.  Before running the sample program, create a {project-name} data source named `Default_DataSource` on the client workstation using MS ODBC
-Administrator. For instructions, please refer to <<win_odbc_client_env,Set Up Client Environment>>.
-5.  From the command prompt, run the sample program by entering either run or this command:
-+
-```
-basicsql DefaultDataSource <username> <password>
-```
-+
-If the sample program executes successfully, you should see this output:
-+
-*Example*
-+
-```
-Using Connect String: DSN=Default_DataSource;UID=user1;PWD=pwd1;
-Successfully connected using SQLDriverConnect.
-Drop sample table if it exists...
-Creating sample table TASKS...
-Table TASKS created using SQLExecDirect.
-Inserting data using SQLBindParameter, SQLPrepare, SQLExecute
-Data inserted.
-Fetching data using SQLExecDirect, SQLFetch, SQLGetData
-Data selected: 1000 CREATE REPORTS 2014-3-22
-Basic SQL ODBC Test Passed!
-```
-
-<<<
-== Reinstall Windows ODBC Driver
-To reinstall the driver, we recommend that you fully remove your ODBC driver and then install the new version. Please refer to
-<<win_odbc_uninstall,Uninstalling the {project-name} ODBC Driver for Windows>> and then <<win_odbc_install, Installing the {project-name} ODBC Driver for Windows>>.
-
-[[win_odbc_uninstall]]
-== Uninstalling Windows ODBC Driver
-1.  Start to remove the ODBC driver:
-* On Windows 7: *Start>All Programs>{project-name} ODBC _version_>Remove TRAF ODBC _version_*
-* On Windows 8: Right-click the *{project-name} ODBC _version_* icon on the desktop and select *Remove TRAF ODBC _version_*.
-* On Windows 10: Right-click the Windows icon in the menu bar. Select *Control Panel*. Click on *Uninstall a program*. Locate *{project-name} ODBC64 _version_* and select it. Click on *Uninstall*.
-
-2.  When the *Windows Installer* dialog box asks you if you want to uninstall this product, click *Yes*.
-3.  The *{project-name} ODBC _version_* dialog box displays the status and asks you to wait while `Windows configures {project-name} ODBC _version_` (that is, removes
-the {project-name} ODBC Driver from your Windows workstation).
-+
-After this dialog box disappears, {project-name} ODBC _version_ is no longer on your workstation.
-
-NOTE: Uninstalling the ODBC driver does not remove pre-existing data source definitions from the Windows registry.
+////
+/**
+ *@@@ START COPYRIGHT @@@
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * @@@ END COPYRIGHT @@@
+ */
+////
+
+[[install-windows-odbc-driver]]
+= Install Windows ODBC Driver
+
+WARNING: License issues prevent us from including the ODBC Driver for Windows in this release. Contact the
+{project-name} user e-mail list ({project-support}) for help obtaining the driver.
+
+If you have not done so already, please ensure that you have
+<<download-software, unpackaged the {project-name} client software>>.
+
+The examples in this chapter assume that you have unpackaged the installation files
+to `c:\trafodion\odbc`.
+
+== Installation Requirements
+
+[cols="40%s,60%",options="header"]
+|===
+| Item             | Requirement
+| Computer         | Windows compatible PC workstation
+| Memory           | Recommended minimum 32 MB
+| Disk Space       | Minimum 30 MB additional free space
+| Operating System | x64 Edition of Microsoft Windows 7, Windows 8, Windows 10, or Windows Server 2008
+| Network Software | TCP/IP
+|===
+
+<<<
+[[win_odbc_install]]
+== Installation Instructions
+
+NOTE: To install the driver on your PC, you must be logged on with a user ID that has administrator privileges.
+
+The ODBC client installation file (`c:\trafodion\odbc\TFODBC64-*.exe`) installs or links to
+multiple client components:
+
+[cols="40%s,60%",options="header"]
+|===
+| This client component&#8230; | Does this&#8230;
+| Microsoft ODBC Driver Manager | Manages access to ODBC drivers for applications. The driver manager loads and unloads drivers and passes calls for ODBC functions to the
+correct driver.
+| Trafodion ODBC driver | Implements ODBC function calls to enable an ODBC client application to access the {project-name} database.
+| Microsoft ODBC Administrator | Adds, configures, and removes ODBC data sources on client workstations.
+|===
+
+By default, a new version of the ODBC driver is installed in this directory and its folders unless you specify a different directory
+during installation:
+
+[cols="40%l,60%",options="header"]
+|===
+| Default Installation Directory    | Client Operating System
+| C:\Program Files\Trafodion\TRAF ODBC _version_\ | Windows 64-bit
+|===
+
+<<<
+To install the {project-name} ODBC driver, do the following:
+
+1.  Double-click the `TFODBC64-*.exe` distribution file to start the InstallShield wizard.
+2.  On the *Welcome* page, click *Next*.
++
+image:{images}/winodbc_welcome.jpg[Windows ODBC Installer Welcome Screen]
++
+<<<
+3.  Read and select the *I accept the agreement* radio button. Click *Next*. 
++
+image:{images}/winodbc_license.jpg[Windows ODBC Installer License Screen]
++
+<<<
+4.  On the *Destination Folder* page, click *Install* to select the default location: `C:\Program Files\Trafodion\TRAF ODBC _version_\` 
++
+image:{images}/winodbc_destination.jpg[Windows ODBC Installer Destination Screen]
++
+This location is the installation directory for ODBC header and help files. All other ODBC files are installed in `%SYSTEMROOT%\system32`.
++
+<<<
+5. Validate the Destination and click *Install*.
++
+image:{images}/winodbc_ready_to_install.jpg[Windows ODBC Ready to Install Screen]
++
+<<<
+6. The Windows ODBC driver is installed. Click *Finish* to exit the installation wizard.
++
+image:{images}/winodbc_install_finished.jpg[Windows ODBC Install Finished Screen]
+
+<<<
+[[win_odbc_setup_data_source]]
+== Set Up ODBC Data Source
+
+1.  Start the Microsoft ODBC Administrator:
+* On Windows 7: *Start>All Programs>{project-name} ODBC _version_>MS ODBC Administrator*
+* On Windows 8: Right-click the *{project-name} ODBC _version_* icon on the desktop and select MS ODBC Administrator.
+* On Windows 10: Click the Windows icon in the menu bar. Type *Set up ODBC data sources (64-bit)*. Click on the found item. 
++
+image:{images}/winodbc_admin_intro.jpg[Windows ODBC Admin Intro Screen]
++
+<<<
+2.  In the *ODBC Data Source Administrator* dialog box, click *Add*.
++
+image:{images}/winodbc_admin_add.jpg[Windows ODBC Admin Create Data Source Screen]
++
+<<<
+3.  Select *TRAF ODBC _version_*, and then click *Finish* to start the *Create a New {project-name} ODBC Data Source* wizard.
++
+image:{images}/winodbc_admin_add_general.jpg[Windows ODBC Admin Create Data Source General Screen]
++
+<<<
+4.  Enter the data source name (for example, `Default_DataSource_Schema1`) and an optional description, and click *Next*.
++
+image:{images}/winodbc_admin_add_general_edited.jpg[Windows ODBC Admin Create Data Source Edited General Screen]
++
+<<<
+5.  Enter the `IP address` or `host name` for the database platform. Enter the port number (the default is *23400*^1^). Leave the other defaults as is, and click *Next*.
++
+image:{images}/winodbc_admin_add_network.jpg[Windows ODBC Admin Create Data Source Network Screen]
++
+<<<
+6.  Enter the schema name. The default schema name is `SEABASE`. Click *Next*.
++
+image:{images}/winodbc_admin_add_schema.jpg[Windows ODBC Admin Create Data Source Schema Screen]
++
+<<<
+7.  Enter the translate DLL name and its option, if you have one. If not, leave it blank. Leave the localization defaults as is.
++
+image:{images}/winodbc_admin_add_translate_dll.jpg[Windows ODBC Admin Create Data Source Translate DLL Screen]
++
+The Replacement Character replaces any character that cannot be translated when data is retrieved. It is a single character (one or two
+bytes long) and is assumed to be in the character set specified in the Client/Server Character Set Interaction. If no Replacement Character is specified, `?` is used as the default.
++
+Click *Finish*.
+
+8.  The wizard gives you an opportunity to test the connection. Click *Test Connection* and click *OK*.
++
+<<<
+9.  The server ID and schema are filled in for you. Enter a valid user name and password, and click *OK*.
++
+image:{images}/winodbc_admin_add_test_connection.jpg[Windows ODBC Admin Create Data Source Test Connection Screen]
++
+The wizard attempts to connect to the data source and displays a message stating whether it was successful or not.
+10.  Click *OK* to save the data source, or click *Cancel* _twice_ to quit the *Create Data Source* wizard.
+
+^1^ Your specific installation may use a different port number. Check with your {project-name} administrator.
+
+<<<
+=== Enable Compression
+When compression is enabled in the ODBC driver, the ODBC driver can send and receive large volumes of data quickly and efficiently to and from
+the {project-name} Database Connectivity Services (DCS) server over a TCP/IP network. By default, compression is disabled.
+
+To enable compression in the ODBC driver or to change the compression setting, follow these steps:
+
+1.  Launch the MS ODBC Administrator. 
+* On Windows 7: *Start>All Programs>{project-name} ODBC _version_>MS ODBC Administrator*
+* On Windows 8: Right-click the *{project-name} ODBC _version_* icon on the desktop and select MS ODBC Administrator.
+* On Windows 10: Right-click the Windows icon in the menu bar. Select *Settings*. Search for *Set up ODBC data sources (64-bit)*. Click on the found item. 
+
+2.  In the *ODBC Data Source Administrator* dialog box, select the *User DSN* tab, select the name of your data source under 
+*User Data Sources*, and click *Configure*. If you did not create a data source, please refer to 
+<<win_odbc_setup_data_source, Set Up ODBC Data Source>>.
++
+A new dialog box appears, showing the configuration of your data source.
+
+3.  Select the *Network* tab, and then select one of these values for *Compression*:
+* `SYSTEM_DEFAULT`, which is the same as no compression
+* `no compression`
+* `best speed`
+* `best compression`
+* `balance`
+* An integer from 0 to 9, with 0 being no compression and 9 being the
+maximum available compression
+4.  Click *OK* to accept the change.
+5.  Click *OK* to exit the *ODBC Data Source Administrator* dialog box.
+
+<<<
+[[win_odbc_run_basicsql]]
+== Run Sample Program (`basicsql`)
+NOTE: The Basic SQL sample program is not currently bundled with the ODBC Windows driver. To obtain the source code and the build and run
+files for this program, please refer to  <<odbc_sample_program, ODBC Sample Program>>.
+
+To build and run the executable file, follow these steps:
+
+1.  Open a Visual Studio x64 Win64 Command Prompt. Make sure to select the x64 version of the command prompt. For example, on Windows 7, select
+*Start>All Programs>Microsoft Visual Studio 2010>Visual Studio Tools>Visual Studio x64 Win64 Command Prompt*.
+2.  At the command prompt, move to the directory where you put the `basicsql.cpp` and build files.
+3.  Run build at the command prompt. You will see `basicsql.exe` created in the same directory as the source file.
+4.  Before running the sample program, create a {project-name} data source named `Default_DataSource` on the client workstation using MS ODBC
+Administrator. For instructions, please refer to <<win_odbc_setup_data_source, Set Up ODBC Data Source>>.
+5.  From the command prompt, run the sample program by entering either the provided `run` file or this command:
++
+```
+basicsql Default_DataSource <username> <password>
+```
++
+If the sample program executes successfully, you should see this output:
++
+*Example*
++
+```
+Using Connect String: DSN=Default_DataSource;UID=user1;PWD=pwd1;
+Successfully connected using SQLDriverConnect.
+Drop sample table if it exists...
+Creating sample table TASKS...
+Table TASKS created using SQLExecDirect.
+Inserting data using SQLBindParameter, SQLPrepare, SQLExecute
+Data inserted.
+Fetching data using SQLExecDirect, SQLFetch, SQLGetData
+Data selected: 1000 CREATE REPORTS 2014-3-22
+Basic SQL ODBC Test Passed!
+```
+
+<<<
+== Reinstall Windows ODBC Driver
+To reinstall the driver, we recommend that you fully remove your ODBC driver and then install the new version. Please refer to
+<<win_odbc_uninstall,Uninstalling the {project-name} ODBC Driver for Windows>> and then <<win_odbc_install, Installing the {project-name} ODBC Driver for Windows>>.
+
+[[win_odbc_uninstall]]
+== Uninstalling Windows ODBC Driver
+1.  Start to remove the ODBC driver:
+* On Windows 7: *Start>All Programs>{project-name} ODBC _version_>Remove TRAF ODBC _version_*
+* On Windows 8: Right-click the *{project-name} ODBC _version_* icon on the desktop and select *Remove TRAF ODBC _version_*.
+* On Windows 10: Right-click the Windows icon in the menu bar. Select *Control Panel*. Click on *Uninstall a program*. Locate *{project-name} ODBC64 _version_* and select it. Click on *Uninstall*.
+
+2.  When the *Windows Installer* dialog box asks you if you want to uninstall this product, click *Yes*.
+3.  The *{project-name} ODBC _version_* dialog box displays the status and asks you to wait while `Windows configures {project-name} ODBC _version_` (that is, removes
+the {project-name} ODBC Driver from your Windows workstation).
++
+After this dialog box disappears, {project-name} ODBC _version_ is no longer on your workstation.
+
+NOTE: Uninstalling the ODBC driver does not remove pre-existing data source definitions from the Windows registry.


[07/15] incubator-trafodion git commit: Major reorganization of the Client Installation Guide.

Posted by gt...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/sql_reference/src/asciidoc/_chapters/runtime_stats.adoc
----------------------------------------------------------------------
diff --git a/docs/sql_reference/src/asciidoc/_chapters/runtime_stats.adoc b/docs/sql_reference/src/asciidoc/_chapters/runtime_stats.adoc
index 6f1e17d..bbde7cd 100644
--- a/docs/sql_reference/src/asciidoc/_chapters/runtime_stats.adoc
+++ b/docs/sql_reference/src/asciidoc/_chapters/runtime_stats.adoc
@@ -1,1353 +1,1353 @@
-////
-/**
-* @@@ START COPYRIGHT @@@
-*
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*   http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing,
-* software distributed under the License is distributed on an
-* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-* KIND, either express or implied.  See the License for the
-* specific language governing permissions and limitations
-* under the License.
-*
-* @@@ END COPYRIGHT @@@
-*/
-////
-
-[[sql_runtime_statistics]]
-= SQL Runtime Statistics
-
-The Runtime Management System (RMS) shows the status of queries while
-they are running. RMS can service on-demand requests from the {project-name}
-Command Interface (TrafCI) to get statistics for a given query ID or for
-active queries in a given process. RMS also provides information about
-itself to determine the health of the RMS infrastructure.
-
-RMS provides the summary statistics for each fragment instance and
-detailed statistics for each operator (TDB_ID) of a given active query.
-A query is considered active if either the compilation or execution is
-in progress. The variable_input column output is returned as multiple
-value pairs of the form _token=value_. For more information, see
-<<considerations_obtaining_stats_fragment,
-Considerations For Obtaining Statistics For Each Fragment-Instance of an Active Query>>.
-
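-For example, a variable_input cell for a scan operator might contain pairs
-such as `AnsiName=TRAFODION.SCH.T1` and `AccessedRows=1000` (the values here
-are illustrative; the tokens appear in the counter table later in this section).
-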
-RMS is enabled and available all the time.
-
-== PERTABLE and OPERATOR Statistics
-
-The SQL database engine determines which type of statistics collection
-is appropriate for the query. The RMS infrastructure provides the
-runtime metrics about a query while a query is executing. You can
-identify queries that are using excessive resources, suspend a query to
-determine its impact on resources, and cancel a query, when necessary.
-PERTABLE statistics count actual rows and report estimated rows for the
-operators in the disk processes, and they report the time spent in the ESP
-processes. Although PERTABLE statistics can show when all the rows have
-been read from the disks, they cannot correctly convey the current state
-of the query.
-
-In complex queries, operators such as joins, sorts, and group-bys often
-produce result sets too large to fit into memory, so intermediate results
-must overflow to scratch files. These operators are called Big Memory Operators (BMOs).
-Because of the BMOs, RMS provides OPERATOR statistics, which provide a
-richer set of statistics so that the current state of a query can be
-determined at any time.
-
-With OPERATOR statistics, all SQL operators are instrumented and the
-following statistics are collected:
-
-* Node time spent in the operator
-* Actual number of rows flowing to the parent operator
-* Estimated number of rows flowing to the parent operator (estimated by the optimizer)
-* Virtual memory used in the BMO
-* Amount of data overflowed to scratch files and read back to the query
-
-For more information,
-see <<displaying_sql_runtime_statistics,Displaying SQL Runtime Statistics>>.
-
-[[adaptive_statistics_collection]]
-== Adaptive Statistics Collection
-
-The SQL database engine chooses the appropriate statistics collection
-type based on the type of query. By default, the SQL database engine
-statistics collection is OPERATOR statistics. You can view the
-statistics in different formats: PERTABLE, ACCUMULATED, PROGRESS, and
-DEFAULT. Statistics Collection is adaptive to ensure that sufficient
-statistics information is available without causing any performance
-impact to the query's execution. For some
-queries, either no statistics or PERTABLE statistics are collected.
-
-[cols="50%,50%l",options="header"]
-|===
-| Query Type                      | Statistics Collection Type
-| OLT optimized queries           | PERTABLE
-| Unique queries                  | PERTABLE
-| CQD                             | No statistics
-| SET commands                    | No statistics
-| EXPLAIN                         | No statistics
-| GET STATISTICS                  | No statistics
-| All other queries               | DEFAULT
-|===
-
-<<<
-[[retrieving_sql_runtime_statistics]]
-== Retrieving SQL Runtime Statistics
-
-[[using_the_get_statistics_command]]
-=== Using the GET STATISTICS Command
-
-The GET STATISTICS command shows statistical information for:
-
-* A single query ID (QID)
-* Active queries for a process ID (PID)
-* RMS itself
-
-A query is considered active if either compilation or execution is in
-progress. In the case of a SELECT statement, a query is in execution
-until the statement or result set is closed. Logically, a query is
-considered to be active when the compile end time is -1 and the compile
-start time is not -1, or when the execute end time is -1 and the execute
-start time is not -1.
-
-[[syntax_of_get_statistics]]
-=== Syntax of GET STATISTICS
-
-```
-GET STATISTICS FOR { QID { query-id | CURRENT } [ stats-view-type ]
-                   | PID { process-name | [ nodeid, pid ] } [ ACTIVE n ] [ stats-view-type ]
-                   | RMS { node-num | ALL } [ RESET ] }
-
-stats-view-type is:
-  ACCUMULATED | PERTABLE | PROGRESS | DEFAULT
-
-```
-
-* `QID`
-+
-Required keyword if requesting statistics for a specific query.
-
-* `_query-id_`
-+
-is the query ID. You must put the _query-id_ in double quotes if the
-user name in the query ID contains lower case letters or if the user
-name contains a period.
-+
-NOTE: The _query-id_ is a unique identifier for the SQL statement
-generated when the query is compiled (prepared). The _query-id_ is
-visible for queries executed through certain TrafCI commands.
-
-* `CURRENT`
-+
-provides statistics for the most recently prepared or executed statement
-in the same session where you run the GET STATISTICS FOR QID CURRENT
-command. You must issue the GET STATISTICS FOR QID CURRENT command
-immediately after the PREPARE or EXECUTE statement.
-
-* `PID`
-+
-Required keyword if requesting statistics for an active query in a given
-process.
-
-* `_process-name_`
-+
-is the name of the process ID (PID) in the format: $Z_nnn_. The
-process name can be for the master (MXOSRVR) or executor server process
-(ESP). If the process name corresponds to the ESP, the ACTIVE _n_ query
-is just the _n_th query in that ESP and might not be the currently
-active query in the ESP.
-
-* `ACTIVE _n_`
-+
-specifies which active query RMS returns statistics for. ACTIVE 1, the
-default, returns statistics for the first active query; ACTIVE 2 returns
-statistics for the second active query.
-
-* `_stats-view-type_`
-+
-sets the statistics view type to a different format. Statistics are
-collected at the operator level by default. For exceptions, see
-<<adaptive_statistics_collection,Adaptive Statistics Collection>>.
-
-* `ACCUMULATED`
-+
-causes the statistics to be displayed in an aggregated summary across
-all tables in the query.
-
-* `PERTABLE`
-+
-displays statistics for each table in the query. This is the default
-_stats-view-type_ although statistics are collected at the operator
-level. If the collection occurs at a lower level due to Adaptive
-Statistics, the default is the lowered collection level. For more
-information, 
-see <<adaptive_statistics_collection,Adaptive Statistics Collection>>.
-
-* `PROGRESS`
-+
-displays rows of information corresponding to each of the big memory
-operators (BMOs) involved in the query, in addition to the pertable
-_stats-view-type_. For more information about BMOs, 
-see <<pertable_and_operator_statistics,Pertable and Operator Statistics>>.
-
-* `DEFAULT`
-+
-displays statistics in the same way as they are collected.
-
-* `RMS`
-+
-required keyword if requesting statistics about RMS itself.
-
-* `_node-num_`
-+
-returns the statistics about the RMS infrastructure for a given node.
-
-* `ALL`
-+
-returns the statistics about the RMS infrastructure for every node in the cluster.
-
-* `RESET`
-+
-resets the cumulative RMS statistics counters.
-
-[[examples_of_get_statistics]]
-=== Examples of GET STATISTICS
-
-These examples show the runtime statistics that various GET STATISTICS
-commands return. For more information about the runtime statistics and
-RMS counters,
-see <<displaying_sql_runtime_statistics,Displaying SQL Runtime Statistics>>.
-
-* This GET STATISTICS command returns PERTABLE statistics for the most
-recently executed statement in the same session:
-+
-```
-SQL> GET STATISTICS FOR QID CURRENT;
-
-Qid                      MXID1100801837021216821167247667200000000030000_59_SQL_CUR_6
-Compile Start Time       2011/03/30 07:29:15.332216
-Compile End Time         2011/03/30 07:29:15.339467
-Compile Elapsed Time                 0:00:00.007251
-Execute Start Time       2011/03/30 07:29:15.383077
-Execute End Time         2011/03/30 07:29:15.470222
-Execute Elapsed Time                 0:00:00.087145
-State                    CLOSE
-Rows Affected            0
-SQL Error Code           100
-Stats Error Code         0
-Query Type               SQL_SELECT_NON_UNIQUE
-Estimated Accessed Rows  0
-Estimated Used Rows      0
-Parent Qid               NONE
-Child Qid                NONE
-Number of SQL Processes  1
-Number of Cpus           1
-Execution Priority       -1
-Transaction Id           -1
-Source String            SELECT
-CUR_SERVICE,PLAN,TEXT,CUR_SCHEMA,RULE_NAME,APPL_NAME,SESSION_NAME,DSN_NAME,ROLE_NAME,DEFAULT_SCHEMA_ACCESS_ONLY
- FROM(VALUES(CAST('HP_DEFAULT_SERVICE' as VARCHAR(50)),CAST(0 AS INT),CAST(0 AS INT),CAST('NEO.USR' as
-VARCHAR(260)),CAST('' as VARCHAR(
-SQL Source Length        548
-Rows Returned            1
-First Row Returned Time  2011/03/30 07:29:15.469778
-Last Error before AQR    0
-Number of AQR retries    0
-Delay before AQR         0
-No. of times reclaimed   0
-Stats Collection Type    OPERATOR_STATS
-SQL Process Busy Time    0
-UDR Process Busy Time    0
-SQL Space Allocated      32 KB
-SQL Space Used           3 KB
-SQL Heap Allocated       7 KB
-SQL Heap Used            1 KB
-EID Space Allocated      0 KB
-EID Space Used           0 KB
-EID Heap Allocated       0 KB
-EID Heap Used            0 KB
-Processes Created        0
-Process Create Time      0
-Request Message Count    0
-Request Message Bytes    0
-Reply Message Count      0
-Reply Message Bytes      0
-Scr. Overflow Mode       DISK
-Scr File Count           0
-Scr. Buffer Blk Size     0
-Scr. Buffer Blks Read    0
-Scr. Buffer Blks Written 0
-Scr. Read Count          0
-Scr. Write Count         0
-
---- SQL operation complete.
-```
-
-<<<
-* This GET STATISTICS command returns PERTABLE statistics for the
-specified query ID (note that this command should be issued in the same
-session):
-+
-```
-SQL> GET STATISTICS FOR QID
-+> "MXID1100800517921216818752807267200000000030000_48_SQL_CUR_2"
-+> ;
-
-Qid                      MXID1100800517921216818752807267200000000030000_48_SQL_CUR_2
-Compile Start Time       2011/03/30 00:53:21.382211
-Compile End Time         2011/03/30 00:53:22.980201
-Compile Elapsed Time                 0:00:01.597990
-Execute Start Time       2011/03/30 00:53:23.079979
-Execute End Time         -1
-Execute Elapsed Time                 7:16:13.494563
-State                    OPEN
-Rows Affected            -1
-SQL Error Code           0
-Stats Error Code         0
-Query Type               SQL_SELECT_NON_UNIQUE
-Estimated Accessed Rows  2,487,984
-Estimated Used Rows      2,487,984
-Parent Qid               NONE
-Child Qid                NONE
-Number of SQL Processes  129
-Number of Cpus           9
-Execution Priority       -1
-Transaction Id           34359956800
-Source String            select count(*) from
-MANAGEABILITY.INSTANCE_REPOSITORY.EVENTS_TEXT K,
-MANAGEABILITY.INSTANCE_REPOSITORY.EVENTS_TEXT J,
-MANAGEABILITY.INSTANCE_REPOSITORY.EVENTS_TEXT H,
-MANAGEABILITY.INSTANCE_REPOSITORY.EVENTS_TEXT G
-SQL Source Length        220
-Rows Returned            0
-First Row Returned Time  -1
-Last Error before AQR    0
-Number of AQR retries    0
-Delay before AQR         0
-No. of times reclaimed   0
-Stats Collection Type    OPERATOR_STATS
-SQL Process Busy Time    830,910,830,000
-UDR Process Busy Time    0
-SQL Space Allocated      179,049                  KB
-SQL Space Used           171,746                  KB
-SQL Heap Allocated       1,140,503                KB
-SQL Heap Used            1,138,033                KB
-EID Space Allocated      46,080                   KB
-EID Space Used           42,816                   KB
-EID Heap Allocated       18,624                   KB
-EID Heap Used            192                      KB
-Processes Created        32
-Process Create Time      799,702
-Request Message Count    202,214
-Request Message Bytes    27,091,104
-Reply Message Count      197,563
-Reply Message Bytes      1,008,451,688
-Scr. Overflow Mode       DISK
-Scr File Count           0
-Scr. Buffer Blk Size     0
-Scr. Buffer Blks Read    0
-Scr. Buffer Blks Written 0
-Scr. Read Count          0
-Scr. Write Count         0 
-
-Table Name
-   Records Accessed       Records Used   Disk   Message     Message   Lock   Lock   Disk Process   Open   Open
-   Estimated/Actual   Estimated/Actual   I/Os     Count     Bytes     Escl   wait   Busy Time      Count  Time
-MANAGEABILITY.INSTANCE_REPOSITORY.EVENTS_TEXT(H)
-            621,996            621,996
-            621,998            621,998      0       441  10,666,384      0       0       303,955      32  15,967
-MANAGEABILITY.INSTANCE_REPOSITORY.EVENTS_TEXT(J)
-            621,996            621,996
-            621,998            621,998      0       439  10,666,384      0        0      289,949      32  19,680
-MANAGEABILITY.INSTANCE_REPOSITORY.EVENTS_TEXT(K)
-            621,996            621,996
-            621,998            621,998      0       439  10,666,384      0        0      301,956      32  14,419
-MANAGEABILITY.INSTANCE_REPOSITORY.EVENTS_TEXT(G)
-                  0            621,996
-                  0                  0      0       192   4,548,048      0         0           0      32  40,019
-
---- SQL operation complete.
-```
-
-<<<
-* This GET STATISTICS command returns ACCUMULATED statistics for the
-most recently executed statement in the same session:
-+
-```
-SQL> GET STATISTICS FOR QID CURRENT ACCUMULATED;
-
-Qid                      MXID1100802517321216821277534304000000000340000_957_SQL_CUR_6
-Compile Start Time       2011/03/30 08:05:07.646667
-Compile End Time         2011/03/30 08:05:07.647622
-Compile Elapsed Time                0:00:00.000955
-Execute Start Time       2011/03/30 08:05:07.652710
-Execute End Time         2011/03/30 08:05:07.740461
-Execute Elapsed Time                0:00:00.087751
-State                    CLOSE
-Rows Affected            0
-SQL Error Code           100
-Stats Error Code         0
-Query Type               SQL_SELECT_NON_UNIQUE
-Estimated Accessed Rows  0
-Estimated Used Rows      0
-Parent Qid               NONE
-Child Qid                NONE
-Number of SQL Processes  0
-Number of Cpus           0
-Execution Priority       -1
-Transaction Id           -1
-Source String            SELECT
-CUR_SERVICE,PLAN,TEXT,CUR_SCHEMA,RULE_NAME,APPL_NAME,SESSION_NAME,DSN_NAME,ROLE_NAME,DEFAULT_SCHEMA_ACCESS_ONLY
-FROM(VALUES(CAST('HP_DEFAULT_SERVICE' as VARCHAR(50)),CAST(0 AS INT),CAST(0 AS INT),CAST('NEO.SCH' as
-VARCHAR(260)),CAST('' as VARCHAR(
-SQL Source Length        548
-Rows Returned            1
-First Row Returned Time  2011/03/30 08:05:07.739827
-Last Error before AQR    0
-Number of AQR retries    0
-Delay before AQR         0
-No. of times reclaimed   0
-Stats Collection Type    OPERATOR_STATS
-Accessed Rows            0
-Used Rows                0
-Message Count            0
-Message Bytes            0
-Stats Bytes              0
-Disk IOs                 0
-Lock Waits               0
-Lock Escalations         0
-Disk Process Busy Time   0
-SQL Process Busy Time    0
-UDR Process Busy Time    0
-SQL Space Allocated      32                       KB
-SQL Space Used           3                        KB
-SQL Heap Allocated       7                        KB
-SQL Heap Used            1                        KB
-EID Space Allocated      0                        KB
-EID Space Used           0                        KB
-EID Heap Allocated       0                        KB
-EID Heap Used            0                        KB
-Opens                    0
-Open Time                0
-Processes Created        0
-Process Create Time      0
-Request Message Count    0
-Request Message Bytes    0
-Reply Message Count      0
-Reply Message Bytes      0
-Scr. Overflow Mode       UNKNOWN
-Scr. File Count          0
-Scr. Buffer Blk Size     0
-Scr. Buffer Blks Read    0
-Scr. Buffer Blks Written 0
-Scr. Read Count          0
-Scr. Write Count         0
-
---- SQL operation complete.
-```
-
-<<<
-* These GET STATISTICS commands return PERTABLE statistics for the first
-active query in the specified process ID:
-+
-```
-SQL> GET STATISTICS FOR PID 0,27195;
-SQL> GET STATISTICS FOR PID $Z000F3R;
-```
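-
-* These commands request statistics about the RMS infrastructure itself,
-per the syntax above (illustrative invocations; output not shown):
-+
-```
-SQL> GET STATISTICS FOR RMS 0;
-SQL> GET STATISTICS FOR RMS ALL RESET;
-```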
-
-[[displaying_sql_runtime_statistics]]
-== Displaying SQL Runtime Statistics
-
-By default, GET STATISTICS displays table-wise statistics (PERTABLE). If
-you want to view the statistics in a different format, use the
-appropriate view option of the GET STATISTICS command.
-
-RMS provides abbreviated statistics information for prepared statements
-and full runtime statistics for executed statements.
-
-The following table shows the RMS counters that are returned by GET
-STATISTICS, tokens from the STATISTICS table-valued function that relate
-to the RMS counters, and descriptions of the counters and tokens.
-
-[cols="25%l,25%l,50%",options="header"]
-|===
-| Counter Name         | Tokens in STATISTICS Table-Valued Function | Description
-| Qid                  | Qid                                        | A unique ID generated for each query. Each time a SQL statement is prepared, a new query ID is generated.
-| Compile Start Time   | CompStartTime                              | Time when the query compilation started or time when PREPARE for this query started.
-| Compile End Time     | CompEndTime                                | Time when the query compilation ended or time when PREPARE for this query ended.
-| Compile Elapsed Time | CompElapsedTime                            | Amount of actual time to prepare the query.
-| Execute Start Time   | ExeStartTime                               | Time when query execution started. 
-| Execute End Time     | ExeEndTime                                 | Time when query execution ended. When a query is executing, Execute End Time is -1.
-| Execute Elapsed Time | ExeElapsedTime                             | Amount of actual time used by the SQL executor to execute the query.
-| State                | State                                      | Internally used.
-| Rows Affected        | RowsAffected                               | Represents the number of rows affected by the INSERT, UPDATE, or DELETE (IUD) SQL statements.
-Value of -1 for SELECT statements or non-IUD SQL statements.
-| SQL Error Code       | SQLErrorCode                               | Top-level error code returned by the query, indicating whether the query completed with warnings, errors,
-or successfully. A positive number indicates a warning. A negative number indicates an error. The value returned may not be accurate up to the point GET STATISTICS was executed.
-| Stats Error Code     | StatsErrorCode                             | Error code returned to the statistics collector while obtaining statistics from RMS. If an error code,
-counter values may be incorrect. Reissue the GET STATISTICS command.
-| Query Type           | QueryType                                  | Type of DML statement and enum value: +
- +
-- SQL_SELECT_UNIQUE=1 +
-- SQL_SELECT_NON_UNIQUE=2 +
-- SQL_INSERT_UNIQUE=3 +
-- SQL_INSERT_NON_UNIQUE=4 +
-- SQL_UPDATE_UNIQUE=5 +
-- SQL_UPDATE_NON_UNIQUE=6 +
-- SQL_DELETE_UNIQUE=7 +
-- SQL_DELETE_NON_UNIQUE=8 +
-- SQL_CONTROL=9 +
-- SQL_SET_TRANSACTION=10 +
-- SQL_SET_CATALOG=11 +
-- SQL_SET_SCHEMA=12 +
-- SQL_CALL_NO_RESULT_SETS=13 +
-- SQL_CALL_WITH_RESULT_SETS=14 +
-- SQL_SP_RESULT_SET=15 +
-- SQL_INSERT_ROWSET_SIDETREE=16 +
-- SQL_CAT_UTIL=17 +
-- SQL_EXE_UTIL=18 +
-- SQL_OTHER=-1 +
-- SQL_UNKNOWN=0
-| Estimated Accessed Rows | EstRowsAccessed                         | Compiler's estimated number of rows accessed by the executor in TSE.
-| Estimated Used Rows  | EstRowsUsed                                | Compiler's estimated number of rows returned by the executor in TSE after applying the predicates.
-| Parent Qid           | parentQid                                  | A unique ID for the parent query. If there is no parent query ID associated with the query, RMS returns NONE.
-For more information, see <<using_the_parent_query_id,Using the Parent Query ID>>.
-| Child Qid            | childQid                                   | A unique ID for the child query. If there is no child query, then there will be no child query ID and
-RMS returns NONE. For more information, see <<child_query_id,Child Query ID>>.
-| Number of SQL Processes | numSqlProcs                             | Represents the number of SQL processes (excluding TSE processes) involved in executing the query.
-| Number of CPUs       | numCpus                                    | Represents the number of nodes that SQL is processing the query.
-| Transaction ID       | transId                                    | Represents the transaction ID of the transaction involved in executing the query. When no transaction exists,
-the Transaction ID is -1.
-| Source String        | sqlSrc                                     | Contains the first 254 bytes of source string.
-| SQL Source Length    | sqlSrcLen                                  | The actual length of the SQL source string.
-| Rows Returned        | rowsReturned                               | Represents the number of rows returned from the root operator at the master executor process.
-| First Row Returned Time | firstRowReturnTime                      | Represents the actual time that the first row is returned by the master root operator.
-| Last Error Before AQR | LastErrorBeforeAQR                        | The error code that triggered Automatic Query Retry (AQR) for the most recent retry. If the value is not 0,
-this is the error code that triggered the most recent AQR.
-| Number of AQR retries | AQRNumRetries                             | The number of retries for the current query until now.
-| Delay before AQR     | DelayBeforeAQR                             | Delay in seconds that SQL waited before initiating AQR.
-| No. of times reclaimed | reclaimSpaceCnt                          | When a process is under virtual memory pressure, the execution space occupied by the queries executed much
-earlier will be reclaimed to free up space for the upcoming queries. This counter represents how many times this particular query is reclaimed.
-|                      | statsRowType                               | statsRowType can be one of the following: +
- +
-- SQLSTATS_DESC_OPER_STATS=0 +
-- SQLSTATS_DESC_ROOT_OPER_STATS=1 +
-- SQLSTATS_DESC_PERTABLE_STATS=11 +
-- SQLSTATS_DESC_UDR_STATS=13 +
-- SQLSTATS_DESC_MASTER_STATS=15 +
-- SQLSTATS_DESC_RMS_STATS=16 +
-- SQLSTATS_DESC_BMO_STATS=17 
-| Stats Collection Type | StatsType                                 | Collection type, which is OPERATOR_STATS by default. StatsType can be one of the following: +
- +
-- SQLCLI_NO_STATS=0 +
-- SQLCLI_ACCUMULATED_STATS=2 +
-- SQLCLI_PERTABLE_STATS=3 +
-- SQLCLI_OPERATOR_STATS=5
-| Accessed Rows (Rows Accessed) | AccessedRows                      | Actual number of rows accessed by the executor in TSE.
-| Used Rows (Rows Used) | UsedRows                                  | Number of rows returned by TSE after applying the predicates. In a push down plan, TSE may not return all the used rows.
-| Message Count        | NumMessages                                | Count of the number of messages sent to TSE.
-| Message Bytes        | MessageBytes                               | Count of the message bytes exchanged with TSE.
-| Stats Bytes          | StatsBytes                                 | Number of bytes returned for statistics counters from TSE.
-| Disk IOs             | DiskIOs                                    | Number of physical disk reads for accessing the tables.
-| Lock Waits           | LockWaits                                  | Number of times this statement had to wait on a conflicting lock.
-| Lock Escalations     | Escalations                                | Number of times row locks escalated to a file lock during the execution of this statement.
-| Disk Process Busy Time | ProcessBusyTime                          | An approximation of the total node time in microseconds spent by TSE for executing the query.
-| SQL Process Busy Time | CpuTime                                   | An approximation of the total node time in microseconds spent in the master and ESPs involved in the query.
-| UDR Process Busy Time (same as UDR CPU Time) | udrCpuTime         | An approximation of the total node time in microseconds spent in the UDR server process.
-| UDR Server ID        | UDRServerId                                | MXUDR process ID.
-| Recent Request Timestamp |                                        | Actual timestamp of the recent request sent to MXUDR.
-| Recent Reply Timestamp |                                          | Actual timestamp of the recent request received by MXUDR.
-| SQL Space Allocated^1^ | SpaceTotal^1^                            | The amount of "space" type of memory in KB allocated in the master and ESPs involved in the query.
-| SQL Space Used^1^      | SpaceUsed^1^                             | Amount of "space" type of memory in KB used in master and ESPs involved in the query.
-| SQL Heap Allocated^2^  | HeapTotal^2^                             | Amount of "heap" type of memory in KB allocated in master and ESPs involved in the query.
-| SQL Heap Used^2^       | HeapUsed^2^                              | Amount of "heap" type of memory in KB used in master and ESPs involved in the query.
-| EID Space Allocated^1^ | Dp2SpaceTotal                            | Amount of "space" type of memory in KB allocated in the executor in TSEs involved in the query.
-| EID Space Used^1^      | Dp2SpaceUsed                             | Amount of "space" type of memory in KB used in the executor in TSEs involved in the query.
-| EID Heap Allocated^2^  | Dp2HeapTotal                             | Amount of "heap" memory in KB allocated in the executor in TSEs involved in the query.
-| EID Heap Used^2^       | Dp2HeapUsed                              | Amount of "heap" memory in KB used in the executor in TSEs involved in the query.
-| Opens                  | Opens                                    | Number of OPEN calls performed by the SQL executor on behalf of this statement.
-| Open Time              | OpenTime                                 | Time (in microseconds) this process spent doing opens on behalf of this statement.
-| Processes Created      | Newprocess                               | The number of processes (ESPs and MXCMPs) created by the master executor for this statement.
-| Process Create Time    | NewprocessTime                           | The elapsed time taken to create these processes.
-| Table Name             | AnsiName                                 | Name of a table in the query.
-| Request Message Count  | reqMsgCnt                                | Number of messages initiated from the master to ESPs or from the ESP to ESPs.
-| Request Message Bytes  | reqMsgBytes                              | Number of message bytes that are sent from the master to ESPs or from the ESP to ESPs as part of the request messages.
-| Reply Message Count    | replyMsgCnt                              | Number of reply messages from the ESPs for the message requests.
-| Reply Message Bytes    | replyMsgBytes                            | Number of bytes sent as part of the reply messages.
-| Scr. Overflow Mode     | scrOverFlowMode                          | Represents the scratch overflow mode. Modes are DISK_TYPE or SSD_TYPE.
-| Scr. File Count        | scrFileCount                             | Number of scratch files created to execute the query. Default file size is 2 GB.
-| Scr. Buffer Blk Size   | scrBufferBlockSize                       | Size of buffer block that is used to read from/write to the scratch file.
-| Scr. Buffer Blks Read  | scrBufferRead                            | Number of scratch buffer blocks read from the scratch file.
-| Scr. Buffer Blks Written | scrBufferWritten                       | Number of scratch buffer blocks written to the scratch file. Exact size of scratch file can be obtained
-by multiplying Scr. Buffer Blk Size by this counter.
-| Scr. Read Count        | scrReadCount                             | Number of file-system calls involved in reading buffer blocks from scratch files. One call reads multiple
-buffer blocks at once.
-| Scr. Write Count       | scrWriteCount                            | Number of file-system calls involved in writing buffer blocks to scratch files. One call writes multiple
-buffer blocks at once.
-| BMO Heap Used          | bmoHeapUsed                              | Amount of "heap" type of memory in KB used in the BMO operator(s). The BMO operators are HASH_JOIN (and
-all varieties of HASH_JOIN), HASH_GROUPBY (and all varieties of HASH_GROUPBY), and SORT (and all varieties of SORT).
-| BMO Heap Total         | bmoHeapTotal                             | Amount of "heap" type of memory in KB allocated in the BMO operator(s).
-| BMO Heap High Watermark | bmoHeapWM                               | Maximum amount of memory used in the BMO operator.
-| BMO Space Buffer Size  | bmoSpaceBufferSize                       | Size in KB for space buffers allocated for the type of memory.
-| BMO Space Buffer Count | bmoSpaceBufferCount                      | Count of space buffers allocated for the type of memory.
-| Records Accessed (Estimated / Actual) |                           | Actual number of rows accessed by the executor in TSE. 
-| Records Used (Estimated / Actual) |                               | Number of rows returned by TSE after applying the predicates. In a push-down plan, TSE may not return all the used rows.
-| ID                     |                                          | TDB ID of the operator at the time of execution of the query.
-| LCID                   |                                          | Left child operator ID.
-| RCID                   |                                          | Right child operator ID.
-| PaID                   |                                          | Parent operator ID (TDB-ID).
-| ExID                   |                                          | Explain plan operator ID.
-| Frag                   |                                          | Fragment ID to which this operator belongs.
-| Dispatches             |                                          | Number of times the operator is scheduled in SQL executor.
-| Oper CPU Time          | OperCpuTime                              | Approximation of the node time spent by the operator to execute the query.
-| Est. Records Used      |                                          | Approximation of the number of tuples that would flow up to the parent operator.
-| Act. Records Used      |                                          | Actual number of tuples that flowed up to the parent operator.
-|                        | ProcessId                                | Name of the process ID (PID) in the format: $Znnn. The process name can be for the master (MXOSRVR) or executor
-server process (ESP).
-|===
-
-1. Space is memory allocated from a pool owned by the executor. The executor
-operators requesting the memory are not expected to return the memory until
-the statement is deallocated.
-
-2. Heap memory is used for temporary allocations. Operators may return heap memory before the statement is deallocated.
-This allows the memory to be reused as needed.
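-
-For example (illustrative values): if Scr. Buffer Blk Size is 56 KB and
-Scr. Buffer Blks Written is 1,024, the scratch file holds approximately
-56 KB x 1,024 = 57,344 KB (about 56 MB) of overflowed data.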
-
-<<<
-[[examples_of_displaying_sql_runtime_statistics]]
-=== Examples of Displaying SQL Runtime Statistics
-
-NOTE: Some of the output has been reformatted for better document readability.
-
-[[statistics_of_a_prepared_statement]]
-==== Statistics of a Prepared Statement
-
-* This example shows the output of the currently prepared statement:
-+
-```
-SQL> GET STATISTICS FOR QID CURRENT;
-
-Qid                      MXID1100000649721215837305997952000000001930000_4200_Q1
-Compile Start Time       2010/12/06 10:55:40.931000
-Compile End Time         2010/12/06 10:55:42.131845
-Compile Elapsed Time                 0:00:01.200845
-Execute Start Time       -1
-Execute End Time         -1
-Execute Elapsed Time                 0:00:00.000000
-State                    CLOSE
-Rows Affected            -1
-SQL Error Code           0
-Stats Error Code         0
-Query Type               SQL_SELECT_NON_UNIQUE
-Estimated Accessed Rows  100,010
-Estimated Used Rows      100,010
-Parent Qid               NONE
-Child Qid                NONE
-Number of SQL Processes  0
-Number of Cpus           0
-Execution Priority       -1
-Transaction Id           -1
-Source String            select * from t100k where b in (select b from t10)
-SQL Source Length        50
-Rows Returned            0
-First Row Returned Time  -1
-Last Error before AQR    0
-Number of AQR retries    0
-Delay before AQR         0
-No. of times reclaimed   0
-Stats Collection Type    OPERATOR_STATS
---- SQL operation complete.
-```
-
-<<<
-[[pertable_statistics_of_an_executing_statement]]
-==== PERTABLE Statistics of an Executing Statement
-
-* This example shows the PERTABLE statistics of an executing statement:
-+
-```
-SQL> GET STATISTICS FOR QID CURRENT;
-
-Qid                      MXID1100000649721215837305997952000000001930000_4200_Q1
-Compile Start Time       2010/12/06 10:55:40.931000
-Compile End Time         2010/12/06 10:55:42.131845
-Compile Elapsed Time                 0:00:01.200845
-Execute Start Time       2010/12/06 10:56:16.254686
-Execute End Time         2010/12/06 10:56:18.434873
-Execute Elapsed Time                 0:00:02.180187
-State                    CLOSE
-Rows Affected            0
-SQL Error Code           100
-Stats Error Code         0
-Query Type               SQL_SELECT_NON_UNIQUE
-Estimated Accessed Rows  100,010
-Estimated Used Rows      100,010
-Parent Qid               NONE
-Child Qid                NONE
-Number of SQL Processes  7
-Number of Cpus           1
-Execution Priority       -1
-Transaction Id           18121
-Source String            select * from t100k where b in (select b from t10)
-SQL Source Length        50
-Rows Returned            100
-First Row Returned Time  2010/12/06 10:56:18.150977
-Last Error before AQR    0
-Number of AQR retries    0
-Delay before AQR         0
-No. of times reclaimed   0
-Stats Collection Type    OPERATOR_STATS
-SQL Process Busy Time    600,000
-UDR Process Busy Time    0
-SQL Space Allocated      1,576                    KB
-SQL Space Used           1,450                    KB
-SQL Heap Allocated       199                      KB
-SQL Heap Used            30                       KB
-EID Space Allocated      704                      KB
-EID Space Used           549                      KB
-EID Heap Allocated       582                      KB
-EID Heap Used            6                        KB
-Processes Created        4
-Process Create Time      750,762
-Request Message Count    701
-Request Message Bytes    135,088
-Reply Message Count      667
-Reply Message Bytes      3,427,664
-Scr. Overflow Mode       DISK
-Scr File Count           0
-Scr. Buffer Blk Size     0
-Scr. Buffer Blks Read    0
-Scr. Buffer Blks Written 0
-
-Table Name
-   Records Accessed       Records Used   Disk   Message     Message   Lock   Lock   Disk Process   Open   Open
-   Estimated/Actual   Estimated/Actual   I/Os     Count     Bytes     Escl   wait   Busy Time      Count  Time
-NEO.SCTEST.T10
-                 10                 10
-                 10                 10      0         2        5,280     0      0          2,000      32  15,967
-NEO.SCTEST.T100K
-           100,000            100,000
-           100,000            100,000       0       110   3,235,720      0      0        351,941       4  48,747
-
---- SQL operation complete.
-```
-
-<<<
-[[accumulated_statistics_of_an_executing_statement]]
-==== ACCUMULATED Statistics of an Executing Statement
-
-* This example shows the ACCUMULATED statistics of an executing statement:
-+
-```
-SQL> GET STATISTICS FOR QID CURRENT ACCUMULATED;
-
-Qid                      MXID1100000649721215837305997952000000001930000_4200_Q1
-Compile Start Time       2010/12/06 10:55:40.931000
-Compile End Time         2010/12/06 10:55:42.131845
-Compile Elapsed Time                 0:00:01.200845
-Execute Start Time       2010/12/06 10:56:16.254686
-Execute End Time         2010/12/06 10:56:18.434873
-Execute Elapsed Time                 0:00:02.180187
-State                    CLOSE
-Rows Affected            0
-SQL Error Code           100
-Stats Error Code         0
-Query Type               SQL_SELECT_NON_UNIQUE
-Estimated Accessed Rows  100,010
-Estimated Used Rows      100,010
-Parent Qid               NONE
-Child Qid                NONE
-Number of SQL Processes  7
-Number of Cpus           1
-Execution Priority       -1
-Transaction Id           18121
-Source String            select * from t100k where b in (select b from t10)
-SQL Source Length        50
-Rows Returned            100
-First Row Returned Time  2010/12/06 10:56:18.150977
-Last Error before AQR    0
-Number of AQR retries    0
-Delay before AQR         0
-No. of times reclaimed   0
-Stats Collection Type    OPERATOR_STATS
-Accessed Rows            100,010
-Used Rows                100,010
-Message Count            112
-Message Bytes            3,241,000
-Stats Bytes              2,904
-Disk IOs                 0
-Lock Waits               0
-Lock Escalations         0
-Disk Process Busy Time   353,941
-SQL Process Busy Time    600,000
-UDR Process Busy Time    0
-SQL Space Allocated      1,576                    KB
-SQL Space Used           1,450                    KB
-SQL Heap Allocated       199                      KB
-SQL Heap Used            30                       KB
-EID Space Allocated      704                      KB
-EID Space Used           549                      KB
-EID Heap Allocated       582                      KB
-EID Heap Used            6                        KB
-Opens                    4
-Open Time                48,747
-Processes Created        4
-Process Create Time      750,762
-Request Message Count    701
-Request Message Bytes    135,088
-Reply Message Count      667
-Reply Message Bytes      3,427,664
-Scr. Overflow Mode       DISK
-Scr. File Count          0
-Scr. Buffer Blk Size     0
-Scr. Buffer Blks Read    0
-Scr. Buffer Blks Written 0
---- SQL operation complete.
-```
-
-<<<
-[[progress-statistics-of-an-executing-statement]]
-==== PROGRESS Statistics of an Executing Statement
-
-* This example shows the PROGRESS statistics of an executing statement:
-+
-```
-SQL> GET STATISTICS FOR QID CURRENT PROGRESS;
-
-Qid                      MXID1100000649721215837305997952000000001930000_4200_Q1
-Compile Start Time       2010/12/06 10:55:40.931000
-Compile End Time         2010/12/06 10:55:42.131845
-Compile Elapsed Time                 0:00:01.200845
-Execute Start Time       2010/12/06 10:56:16.254686
-Execute End Time         2010/12/06 10:56:18.434873
-Execute Elapsed Time                 0:00:02.180187
-State                    CLOSE
-Rows Affected            0
-SQL Error Code           100
-Stats Error Code         0
-Query Type               SQL_SELECT_NON_UNIQUE
-Estimated Accessed Rows  100,010
-Estimated Used Rows      100,010
-Parent Qid               NONE
-Child Qid                NONE
-Number of SQL Processes  7
-Number of Cpus           1
-Execution Priority       -1
-Transaction Id           18121
-Source String            select * from t100k where b in (select b from t10)
-SQL Source Length        50
-Rows Returned            100
-First Row Returned Time  2010/12/06 10:56:18.150977
-Last Error before AQR    0
-Number of AQR retries    0
-Delay before AQR         0
-No. of times reclaimed   0
-Stats Collection Type    OPERATOR_STATS
-SQL Process Busy Time    600,000
-SQL Space Allocated      1,576                    KB
-SQL Space Used           1,450                    KB
-SQL Heap Allocated       199                      KB
-SQL Heap Used            30                       KB
-EID Space Allocated      704                      KB
-EID Space Used           549                      KB
-EID Heap Allocated       582                      KB
-EID Heap Used            6                        KB
-Processes Created        4
-Process Create Time      750,762
-Request Message Count    701
-Request Message Bytes    135,088
-Reply Message Count      667
-Reply Message Bytes      3,427,664
-Table Name
-   Records Accessed       Records Used   Disk   Message     Message   Lock   Lock   Disk Process   Open   Open
-   Estimated/Actual   Estimated/Actual   I/Os     Count     Bytes     Escl   wait   Busy Time      Count  Time
-NEO.SCTEST.T10
-                 10                 10
-                 10                 10       0        2       5,280      0      0          2,000        0 0
-NEO.SCTEST.T100K
-            100,000            100,000
-            100,000            100,000       0      110   3,235,720      0      0        351,941        4 48,747
-
-Id TDB       Mode Phase  Phase  BMO   BMO    BMO   BMO    BMO     File   Scratch Buffer     Cpu 
-   Name      Phase       Start  Heap  Heap   Heap  Space  Space   Count  Size/Read/Written  Time
-                         Time   Used  Total  WM    BufSz  BufCnt
-16 EX_HASHJ  DISK        0      0     56     0     0      -1      0      0                  60,000
-```
-
-<<<
-[[default_statistics_of_an_executing_statement]]
-==== DEFAULT Statistics of an Executing Statement
-
-* This example shows the DEFAULT statistics of an executing statement:
-+
-```
-SQL> GET STATISTICS FOR QID CURRENT DEFAULT;
-
-Qid                      MXID1100000649721215837305997952000000001930000_4200_Q1
-Compile Start Time       2010/12/06 10:55:40.931000
-Compile End Time         2010/12/06 10:55:42.131845
-Compile Elapsed Time                 0:00:01.200845
-Execute Start Time       2010/12/06 10:56:16.254686
-Execute End Time         2010/12/06 10:56:18.434873
-Execute Elapsed Time                 0:00:02.180187
-State                    CLOSE
-Rows Affected            0
-SQL Error Code           100
-Stats Error Code         0
-Query Type               SQL_SELECT_NON_UNIQUE
-Estimated Accessed Rows  100,010
-Estimated Used Rows      100,010
-Parent Qid               NONE
-Child Qid                NONE
-Number of SQL Processes  7
-Number of Cpus           1
-Execution Priority       -1
-Transaction Id           18121
-Source String            select * from t100k where b in (select b from t10)
-SQL Source Length        50
-Rows Returned            100
-First Row Returned Time  2010/12/06 10:56:18.150977
-Last Error before AQR    0
-Number of AQR retries    0
-Delay before AQR         0
-No. of times reclaimed   0
-Stats Collection Type    OPERATOR_STATS
-
-Id  LCId  RCId PaId ExId Frag TDB Name         Dispatches  Oper CPU   Records    Records 
-                                                           Time Est.  Used Act.  Used Details
-21  20    .    .    10   0    EX_ROOT                  15          0          0           100
-20  19    .    21   9    0    EX_SPLIT_TOP             13          0        100           100
-19  18    .    20   9    0    EX_SEND_TOP              20          0        100           100
-18  17    .    19   9    2    EX_SEND_BOTTOM           72          0        100           100
-17  16    .    18   9    2    EX_SPLIT_BOTTOM          88          0        100           100
-16  15    .    17   8    2    EX_HASHJ              1,314     60,000        100           100
-15  14    .    16   7    2    EX_SPLIT_TOP          1,343     20,000    100,000       100,000
-14  13    .    15   7    2    EX_SEND_TOP           1,342    120,000    100,000       100,000
-13  12    .    14   7    5    EX_SEND_BOTTOM        1,534    200,000    100,000       100,000
-12  11    .    13   7    5    EX_SPLIT_BOTTOM         493     70,000    100,000       100,000
-11  10    .    12   6    5    EX_SPLIT_TOP            486     70,000    100,000       100,000
-10  9     .    11   5    5    EX_PARTN_ACCESS       1,634     60,000    100,000             0  
-9   8     .    10   5    6    EX_EID_ROOT              12          0    100,000       100,000
-8   7     .    9    4    6    EX_DP2_SUBS_OPER        160    170,000    100,000            10 
-7   6     .    8    3    2    EX_SPLIT_TOP             16          0         10            10
-6   5     .    7    3    2    EX_SEND_TOP              17          0         10            10
-5   4     .    6    3    3    EX_SEND_BOTTOM           17          0         10            10
-4   3     .    5    3    3    EX_SPLIT_BOTTOM           9          0         10            10
-3   2     .    4    2    3    EX_PARTN_ACCESS           6          0         10            10
-2   1     .    3    2    4    EX_EID_ROOT               3          0         10             0
-1   .     .    1    1    4    EX_DP2_SUBS_OPER          3    100,000         10            10
-
---- SQL operation complete.
-```
-
-<<<
-[[using_the_parent_query_id]]
-=== Using the Parent Query ID
-
-When executed, some SQL statements execute additional SQL statements,
-resulting in a parent-child relationship. For example, when executed,
-the UPDATE STATISTICS, MAINTAIN, and CALL statements execute other SQL
-statements called child queries. The child queries might execute even
-more child queries, thus introducing a hierarchy of SQL statements with
-parent-child relationships. The parent query ID maps the child query to
-the immediate parent SQL statement, helping you to trace the child SQL
-statement back to the user-issued SQL statement.
-
-The parent query ID is available as a counter, Parent Qid, in the
-runtime statistics output. See
-<<displaying_sql_runtime_statistics,Displaying SQL Runtime Statistics>>.
-A query directly issued by a user will not have a parent query ID, and
-the counter will indicate NONE.
-
-[[child_query_id]]
-=== Child Query ID
-
-In many cases, a child query will execute in the same node as its
-parent. In such cases, the GET STATISTICS report on the parent query ID
-will contain a query ID value for the child query which executed most
-recently. Conversely, if no child query exists, or the child query is
-executing in a different node, no child query ID will be reported.
-
-The following examples show GET STATISTICS output for the parent query
-and one child query executed when the user issues a CREATE TABLE AS
-command:
-
-<<<
-```
-SQL> -- get statistics for the parent query
-
-SQL> GET STATISTICS FOR QID
-+> MXID11001091200212164828759544076000000000217DEFAULT_MXCI_USER00_34SQLCI_DML_LAST
-+> ;
-
-Qid                      MXID11001091200212164828759544076000000000217DEFAULT_MXCI_USER00_34SQLCI_DML_LAST
-Compile Start Time       2011/02/18 14:49:04.606513
-Compile End Time         2011/02/18 14:49:04.631802
-Compile Elapsed Time                 0:00:00.025289
-Execute Start Time       2011/02/18 14:49:04.632142
-Execute End Time         -1
-Execute Elapsed Time                 0:03:29.473604
-State                    CLOSE
-Rows Affected            -1
-SQL Error Code           0
-Stats Error Code         0
-Query Type               SQL_INSERT_NON_UNIQUE
-Estimated Accessed Rows  0
-Estimated Used Rows      0
-Parent Qid               NONE
-Child Qid                MXID11001091200212164828759544076000000000217DEFAULT_MXCI_USER00_37_86
-Number of SQL Processes  1
-Number of Cpus           1
-Execution Priority       148
-Transaction Id           -1
-Source String            create table odetail hash partition by (ordernum, partnum)
-as select * from SALES.ODETAIL;
-SQL Source Length        91
-Rows Returned            0
-First Row Returned Time  -1
-Last Error before AQR    0
-Number of AQR retries    0
-Delay before AQR         0
-No. of times reclaimed   0 
-Stats Collection Type    OPERATOR_STATS
-
-Id  LCId  RCId PaId ExId Frag TDB Name         Dispatches  Oper CPU   Records    Records 
-                                                           Time Est.  Used Act.  Used Details
- 2  1     .     .   2     0    EX_ROOT         0           0    0     0          
- 1  .     .     2   1     0    CREATE_TABLE_AS 0           0    0     0
-
---- SQL operation complete.
-```
-<<<
-```
-SQL> --  get statistics for the child query
-SQL> GET STATISTICS FOR QID
-+> MXID11001091200212164828759544076000000000217DEFAULT_MXCI_USER00_37_86
-+> ;
-
-Qid                      MXID11001091200212164828759544076000000000217DEFAULT_MXCI_USER00_37_86
-Compile Start Time       2011/02/18 14:49:07.632898
-Compile End Time         2011/02/18 14:49:07.987334 
-Compile Elapsed Time                 0:00:00.354436
-Execute Start Time       2011/02/18 14:49:07.987539
-Execute End Time         -1
-Execute Elapsed Time                 0:02:33.173486
-State                    OPEN
-Rows Affected            -1
-SQL Error Code           0
-Stats Error Code         0
-Query Type               SQL_INSERT_NON_UNIQUE
-Estimated Accessed Rows  101
-Estimated Used Rows      101
-Parent Qid               MXID11001091200212164828759544076000000000217DEFAULT_MXCI_USER00_34SQLCI_DML_LAST
-Child Qid                NONE
-Number of SQL Processes  1
-Number of Cpus           1
-Execution Priority       148
-Transaction Id           \ARC0101(2).9.9114503
-Source String            insert using sideinserts into CAT.SCH.ODETAIL select * from SALES.ODETAIL;
-SQL Source Length        75
-Rows Returned            0
-First Row Returned Time  -1
-Last Error before AQR    0
-Number of AQR retries    0
-Delay before AQR         0
-No. of times reclaimed   0
-Stats Collection Type    OPERATOR_STATS
-
-Id  LCId  RCId PaId ExId Frag TDB Name         Dispatches  Oper CPU   Records    Records 
-                                                           Time Est.  Used Act.  Used Details
- 4  3     .    9    3     0   EX_SPLIT_TOP     1           10,062     100        0
- 3  2     .    4    2     0   EX_PARTN_ACCESS  66          9,649      100        0
-
---- SQL operation complete.
-```
-
-<<<
-[[gathering_statistics_about_rms]]
-== Gathering Statistics About RMS
-
-Use the GET STATISTICS FOR RMS command to get information about RMS
-itself. The GET STATISTICS FOR RMS statement can be used to retrieve
-information about one node or all nodes. An individual report is
-provided for each node.
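-
-For example, to report on a single node, or to reset the cumulative
-counters while reporting on all nodes (node number 0 is illustrative):
-
-```
-SQL> GET STATISTICS FOR RMS 0;
-SQL> GET STATISTICS FOR RMS ALL RESET;
-```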
-
-[cols="30%l,70%",options="header"]
-|===
-| Counter                      | Description
-| CPU                          | The node number of the {project-name} cluster.
-| RMS Version                  | Internal version of RMS.
-| SSCP PID                     | SQL Statistics control process ID.
-| SSCP Creation Timestamp      | Timestamp when the SQL statistics control process was created.
-| SSMP PID                     | SQL statistics merge process ID.
-| SSMP Creation Timestamp      | Timestamp when the SQL statistics merge process was created.
-| Source String Store Len      | Storage length of source string.
-| Stats Heap Allocated         | Amount of memory allocated by all the queries executing in the given node in the RMS shared segment at this instant of time.
-| Stats Heap Used              | Amount of memory used by all the queries executing in the given node in the RMS shared segment at this instant of time.
-| Stats Heap High WM           | Highest amount of memory (high watermark) used by all the queries executing in the given node in the RMS shared segment so far.
-| No. of Process Regd.         | Number of processes registered in the shared segment.
-| No. of Query Fragments Regd. | Number of query fragments registered in the shared segment.
-| RMS Semaphore Owner          | Process ID of the process holding the RMS semaphore at this instant of time.
-| No. of SSCPs Opened          | Number of Statistics Control Processes opened. Normally, this should be equal to the number of nodes in the {project-name} cluster.
-| No. of SSCPs Open Deleted    | Number of Statistics Control Processes with broken communication. Usually, this should be 0.
-| Last GC Time                 | The most recent timestamp at which the shared segment was garbage collected.
-| Queries GCed in Last Run     | Number of queries that were garbage collected in the most recent GC run.
-| Total Queries GCed           | Total number of queries that were garbage collected since the statistics reset timestamp.
-| SSMP Request Message Count   | Number of messages sent from the SSMP process since the statistics reset timestamp.
-| SSMP Request Message Bytes   | Number of message bytes sent as part of the requests from the SSMP process since the statistics reset timestamp.
-| SSMP Reply Message Count     | Number of reply messages received by the SSMP process since the statistics reset timestamp.
-| SSMP Reply Message Bytes     | Number of message bytes received as part of the reply messages by the SSMP process since the statistics reset timestamp.
-| SSCP Request Message Count   | Number of messages sent from the SSCP process since the statistics reset timestamp.
-| SSCP Request Message Bytes   | Number of message bytes sent as part of the requests from the SSCP process since the statistics reset timestamp.
-| SSCP Reply Message Count     | Number of reply messages received by the SSCP process since the statistics reset timestamp.
-| SSCP Reply Message Bytes     | Number of message bytes received as part of the reply messages by the SSCP process since the statistics reset timestamp.
-| RMS Stats Reset Timestamp    | Timestamp at which the RMS statistics counters were last reset.
-|===
-
-```
-SQL> GET STATISTICS FOR RMS ALL;
-
-Node name
-CPU                         0
-RMS Version                 2511
-SSCP PID                    19521
-SSCP Priority               0
-SSCP Creation Timestamp     2010/12/05 02:32:33.642752
-SSMP PID                    19527
-SSMP Priority               0
-SSMP Creation Timestamp     2010/12/05 02:32:33.893440
-Source String Store Len     254
-Stats Heap Allocated        0
-Stats Heap Used             3,002,416
-Stats Heap High WM          3,298,976
-No.of Process Regd.         157
-No.of Query Fragments Regd. 296
-RMS Semaphore Owner         -1
-No.of SSCPs Opened          1
-No.of SSCPs Open Deleted    0
-Last GC Time                2010/12/06 10:53:46.777432
-Queries GCed in Last Run    55
-Total Queries GCed          167
-SSMP Request Message Count  58,071
-SSMP Request Message Bytes  14,161,144
-SSMP Reply Message Count    33,466
-SSMP Reply Message Bytes    15,400,424
-SSCP Request Message Count  3,737
-SSCP Request Message Bytes  837,744
-SSCP Reply Message Count    3,736
-SSCP Reply Message Bytes    5,015,176
-RMS Stats Reset Timestamp   2010/12/05 14:32:33.891083
-
---- SQL operation complete.
-```
-
-<<<
-[[using_the_queryid_extract_function]]
-== Using the QUERYID_EXTRACT Function
-
-Use the QUERYID_EXTRACT function within an SQL statement to extract
-components of a query ID for use in a SQL query. The query ID, or QID,
-is a unique, cluster-wide identifier for a query and
-is generated for dynamic SQL statements whenever a SQL string is
-prepared.
-
-=== Syntax of QUERYID_EXTRACT
-
-```
-QUERYID_EXTRACT ('query-id', 'attribute')
-```
-
-where:
-
-* `_query-id_`
-+
-is the query ID in string format.
-
-* `_attribute_`
-+
-is the attribute to be extracted. The value of _attribute_ can be one of
-these parts of the query ID:
-+
-[cols="30%l,70%",options="header"]
-|===
-| Attribute Value         | Description
-| SEGMENTNUM              | Logical node ID in {project-name} cluster
-| CPUNUM or CPU           | Logical node ID in {project-name} cluster
-| PIN                     | Linux process ID number
-| EXESTARTTIME            | Executor start time
-| SESSIONNUM              | Session number
-| USERNAME                | User name
-| SESSIONNAME             | Session name
-| SESSIONID               | Session ID
-| QUERYNUM                | Query number
-| STMTNAME                | Statement ID or handle
-|===
-+
-NOTE: The SEGMENTNUM and CPUNUM attributes are the same.
-
-The result data type of the QUERYID_EXTRACT function is a VARCHAR with a
-length sufficient to hold the result. All values are returned in string
-format. Here is the QUERYID_EXTRACT function in a SELECT statement:
-
-```
-SELECT QUERYID_EXTRACT('_query-id_', '_attribute-value_') FROM (VALUES(1)) AS t1;
-```
-
-<<<
-[[examples_of_queryid_extract]]
-=== Examples of QUERYID_EXTRACT
-
-* This command returns the node number of the query ID:
-+
-```
-SQL> SELECT 
-+> SUBSTR(
-+>   QUERYID_EXTRACT(
-+>     'MXID11000022675212170554548762240000000000206U6553500_21_S1','CPU'
-+>   ), 1, 20
-+>  ) FROM (VALUES(1))
-+> AS t1;
-
-(EXPR)
----------------------------------------------------------------------------
-0
-
---- 1 row(s) selected.
-```
-
-* This command returns the PIN of the query ID:
-+
-```
-SQL> SELECT
-+> SUBSTR(
-+>   QUERYID_EXTRACT(
-+>     'MXID11000022675212170554548762240000000000206U6553500_21_S1','PIN'
-+>   ), 1, 20
-+> ) FROM (VALUES(1)) AS t1;
-
-(EXPR)
----------------------------------------------------------------------------
-22675
-
---- 1 row(s) selected.
-```
-
-<<<
-[[stats_each_fragment_instance_active_query]]
-== Statistics for Each Fragment-Instance of an Active Query
-
-You can retrieve statistics for a query while it executes by using the
-STATISTICS table-valued function. Depending on the syntax used, you can
-obtain statistics summarizing each parallel fragment-instance of the
-query, or for any operator in each fragment-instance.
-
-[[syntax_of_statistics_table-valued_function]]
-=== Syntax of STATISTICS Table-Valued Function
-
-```
-TABLE(STATISTICS (NULL, 'qid-str'))
-
-qid-str is:
-   QID=query-id [ ,{ TDBID_DETAIL=tdb-id | DETAIL=1 } ]
-```
-
-* `_query-id_`
-+
-is the system-generated query ID. For example:
-+
-```
-QID=MXID11000022675212170554548762240000000000206U6553500_21_S1
-```
-
-* `_tdb-id_`
-+
-is the TDB ID of a given operator. TDB values can be obtained from the
-report returned from the GET STATISTICS command.
-
-[[considerations_obtaining_stats_fragment]]
-=== Considerations For Obtaining Statistics For Each Fragment-Instance of an Active Query
-
-If the DETAIL=1 or TDBID_DETAIL=_tdb_id_ options are used when the
-query is not executing, the STATISTICS table-valued function will not
-return any results.
-
-The STATISTICS table-valued function can be used with a SELECT statement
-to return several columns. Many different counters exist in the
-_variable_info_ column. The counters in this column are formatted as
-token-value pairs and the counters reported will depend on which option
-is used: DETAIL=1 or TDBID_DETAIL=_tdb_id_. If the TDBID_DETAIL option
-is used, the counters reported will also depend on the type of operator
-specified by the _tdb_id_. The reported counters can also be
-determined by the statsRowType counter.
-
-The tokens for these counters are listed in
-<<displaying_sql_runtime_statistics,Displaying SQL Runtime Statistics>>.
-
-* This query lists process names of all ESPs of an executing query
-identified by the given QID:
-+
-```
-SQL> SELECT
-+> SUBSTR(VARIABLE_INFO,
-+> POSITION('ProcessId:' IN variable_info), 20) AS processes
-+> FROM
-+> TABLE(statistics(NULL,
-+> 'QID=MXID11000032684212170811581160672000000000206U6553500_19_S1,DETAIL=1'))
-+> GROUP BY 1;
-
-PROCESSES
---------------------
-ProcessId: $Z0000GS
-ProcessId: $Z0000GT
-ProcessId: $Z0000GU
-ProcessId: $Z0000GV
-ProcessId: $Z0102IQ
-ProcessId: $Z000RNU
-ProcessId: $Z0102IR
-ProcessId: $Z0102IS
-ProcessId: $Z0102IT
-
---- 9 row(s) selected.
-```
-
-<<<
-* This query gives BMO heap used for the hash join identified as TDB #15
-in an executing query identified by the given QID:
-+
-```
-SQL> SELECT CAST (
-+> SUBSTR(variable_info,
-+> POSITION('bmoHeapUsed:' IN variable_info),
-+> POSITION('bmoHeapUsed:' in variable_info) +
-+> 13 + (POSITION(' ' IN
-+> SUBSTR(variable_info,
-+> 13 + POSITION('bmoHeapUsed:' IN variable_info))) -
-+> POSITION('bmoHeapUsed:' IN variable_info)))
-+> AS CHAR(25))
-+> FROM TABLE(statistics(NULL,
-+>'QID=MXID11000021706212170733911504160000000000206U6553500_25_S1,TDBID_DETAIL=15'));
-
-(EXPR)
--------------------------
-bmoHeapUsed: 3147
-bmoHeapUsed: 3147
-bmoHeapUsed: 3147
-bmoHeapUsed: 3147
-bmoHeapUsed: 3147
-bmoHeapUsed: 3147
-bmoHeapUsed: 3147
-bmoHeapUsed: 3147
---- 8 row(s) selected.
-```
+////
+/**
+* @@@ START COPYRIGHT @@@
+*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*   http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing,
+* software distributed under the License is distributed on an
+* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+* KIND, either express or implied.  See the License for the
+* specific language governing permissions and limitations
+* under the License.
+*
+* @@@ END COPYRIGHT @@@
+*/
+////
+
+[[sql_runtime_statistics]]
+= SQL Runtime Statistics
+
+The Runtime Management System (RMS) shows the status of queries while
+they are running. RMS can service on-demand requests from the {project-name}
+Command Interface (TrafCI) to get statistics for a given query ID or for
+active queries in a given process. RMS also provides information about
+itself to determine the health of the RMS infrastructure.
+
+RMS provides the summary statistics for each fragment instance and
+detailed statistics for each operator (TDB_ID) of a given active query.
+A query is considered active if either the compilation or execution is
+in progress. The variable_input column output is returned as a multiple
+value pair of the form _token=value_. For more information, see
+<<considerations_obtaining_stats_fragment,
+Considerations For Obtaining Statistics For Each Fragment-Instance of an Active Query>>.
+
+RMS is enabled and available all the time.
+
+[[pertable_and_operator_statistics]]
+== PERTABLE and OPERATOR Statistics
+
+The SQL database engine determines which type of statistics collection
+is appropriate for the query. The RMS infrastructure provides the
+runtime metrics about a query while a query is executing. You can
+identify queries that are using excessive resources, suspend a query to
+determine its impact on resources, and cancel a query, when necessary.
+PERTABLE statistics report actual and estimated row counts for the
+operators in the disk processes, as well as the time spent in the ESP
+processes. Although PERTABLE statistics can indicate when all the rows
+have been read from the disks, they cannot be used to correctly assess
+the current state of the query.
+
+Complex queries such as joins, sorts, and group result sets are often
+too large to fit into memory, so intermediate results must overflow to
+scratch files. These operators are called Big Memory Operators (BMOs).
+Because of the BMOs, RMS provides OPERATOR statistics, which provide a
+richer set of statistics so that the current state of a query can be
+determined at any time.
+
+With OPERATOR statistics, all SQL operators are instrumented and the
+following statistics are collected:
+
+* Node time spent in the operator
+* Actual number of rows flowing to the parent operator
+* Estimated number of rows flowing to the parent operator (estimated by the optimizer)
+* Virtual memory used in the BMO
+* Amount of data overflowed to scratch files and read back to the query
+
+For more information,
+see <<displaying_sql_runtime_statistics,Displaying SQL Runtime Statistics>>.
+
+[[adaptive_statistics_collection]]
+== Adaptive Statistics Collection
+
+The SQL database engine chooses the appropriate statistics collection
+type based on the type of query. By default, the SQL database engine
+statistics collection is OPERATOR statistics. You can view the
+statistics in different formats: PERTABLE, ACCUMULATED, PROGRESS, and
+DEFAULT. Statistics Collection is adaptive to ensure that sufficient
+statistics information is available without causing any performance
+impact to the query's execution. For some
+queries, either no statistics or PERTABLE statistics are collected.
+
+[cols="50%,50%l",options="header"]
+|===
+| Query Type                      | Statistics Collection Type
+| OLT optimized queries           | PERTABLE
+| Unique queries                  | PERTABLE
+| CQD                             | No statistics
+| SET commands                    | No statistics
+| EXPLAIN                         | No statistics
+| GET STATISTICS                  | No statistics
+| All other queries               | DEFAULT
+|===
+
+<<<
+[[retrieving_sql_runtime_statistics]]
+== Retrieving SQL Runtime Statistics
+
+[[using_the_get_statistics_command]]
+=== Using the GET STATISTICS Command
+
+The GET STATISTICS command shows statistical information for:
+
+* A single query ID (QID)
+* Active queries for a process ID (PID)
+* RMS itself
+
+A query is considered active if either compilation or execution is in
+progress. In the case of a SELECT statement, a query is in execution
+until the statement or result set is closed. Logically, a query is
+considered to be active when the compile end time is -1 and the compile
+start time is not -1, or when the execute end time is -1 and the execute
+start time is not -1.
+
+[[syntax_of_get_statistics]]
+=== Syntax of GET STATISTICS
+
+```
+GET STATISTICS FOR { QID { query-id | CURRENT } [ stats-view-type ]
+                   | PID { process-name | nodeid, pid } [ ACTIVE n ] [ stats-view-type ]
+                   | RMS { node-num | ALL } [ RESET ] }
+
+stats-view-type is:
+  ACCUMULATED | PERTABLE | PROGRESS | DEFAULT
+
+```
+
+* `QID`
++
+Required keyword if requesting statistics for a specific query.
+
+* `_query-id_`
++
+is the query ID. You must put the _query-id_ in double quotes if the
+user name in the query ID contains lower case letters or if the user
+name contains a period.
++
+NOTE: The _query-id_ is a unique identifier for the SQL statement
+generated when the query is compiled (prepared). The _query-id_ is
+visible for queries executed through certain TrafCI commands.
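++
+For example, quoting is required for this hypothetical query ID whose
+embedded user name contains lower-case letters (quoting is harmless
+when it is not required):
++
+```
+SQL> GET STATISTICS FOR QID
++> "MXID11000022675212170554548762240000000000206u6553500_21_S1";
+```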
+
+* `CURRENT`
++
+provides statistics for the most recently prepared or executed statement
+in the same session where you run the GET STATISTICS FOR QID CURRENT
+command. You must issue the GET STATISTICS FOR QID CURRENT command
+immediately after the PREPARE or EXECUTE statement.
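++
+For example, a minimal sketch (the table name T1 is hypothetical):
++
+```
+SQL> PREPARE S1 FROM SELECT COUNT(*) FROM T1;
+SQL> GET STATISTICS FOR QID CURRENT;
+```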
+
+* `PID`
++
+Required keyword if requesting statistics for an active query in a given
+process.
+
+* `_process-name_`
++
+is the name of the process ID (PID) in the format: $Z_nnn_. The
+process name can be for the master (MXOSRVR) or executor server process
+(ESP). If the process name corresponds to the ESP, the ACTIVE _n_ query
+is just the _n_th query in that ESP and might not be the currently
+active query in the ESP.
+
+* `ACTIVE _n_`
++
+specifies the active query for which RMS returns statistics.
+ACTIVE 1, the default, returns statistics for the first
+active query; ACTIVE 2 returns statistics for the second active query.
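++
+For example, the following command (process name borrowed from the
+examples below; a second active query is assumed) returns statistics
+for the second active query in that process:
++
+```
+SQL> GET STATISTICS FOR PID $Z000F3R ACTIVE 2;
+```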
+
+* `_stats-view-type_`
++
+sets the statistics view type to a different format. Statistics are
+collected at the operator level by default. For exceptions, see
+<<adaptive_statistics_collection,Adaptive Statistics Collection>>.
+
+* `ACCUMULATED`
++
+causes the statistics to be displayed in an aggregated summary across
+all tables in the query.
+
+* `PERTABLE`
++
+displays statistics for each table in the query. This is the default
+_stats-view-type_ although statistics are collected at the operator
+level. If the collection occurs at a lower level due to Adaptive
+Statistics, the default is the lowered collection level. For more
+information, 
+see <<adaptive_statistics_collection,Adaptive Statistics Collection>>.
+
+* `PROGRESS`
++
+displays rows of information corresponding to each of the big memory
+operators (BMOs) involved in the query, in addition to the PERTABLE
+_stats-view-type_. For more information about BMOs,
+see <<pertable_and_operator_statistics,PERTABLE and OPERATOR Statistics>>.
+
+* `DEFAULT`
++
+displays statistics in the same way as they are collected.
+
+* `RMS`
++
+required keyword if requesting statistics about RMS itself.
+
+* `_node-num_`
++
+returns the statistics about the RMS infrastructure for a given node.
+
+* `ALL`
++
+returns the statistics about the RMS infrastructure for every node in the cluster.
+
+* `RESET`
++
+resets the cumulative RMS statistics counters.
+
+[[examples_of_get_statistics]]
+=== Examples of GET STATISTICS
+
+These examples show the runtime statistics that various GET STATISTICS
+commands return. For more information about the runtime statistics and
+RMS counters,
+see <<displaying_sql_runtime_statistics,Displaying SQL Runtime Statistics>>.
+
+* This GET STATISTICS command returns PERTABLE statistics for the most
+recently executed statement in the same session:
++
+```
+SQL> GET STATISTICS FOR QID CURRENT;
+
+Qid                      MXID1100801837021216821167247667200000000030000_59_SQL_CUR_6
+Compile Start Time       2011/03/30 07:29:15.332216
+Compile End Time         2011/03/30 07:29:15.339467
+Compile Elapsed Time                 0:00:00.007251
+Execute Start Time       2011/03/30 07:29:15.383077
+Execute End Time         2011/03/30 07:29:15.470222
+Execute Elapsed Time                 0:00:00.087145
+State                    CLOSE
+Rows Affected            0
+SQL Error Code           100
+Stats Error Code         0
+Query Type               SQL_SELECT_NON_UNIQUE
+Estimated Accessed Rows  0
+Estimated Used Rows      0
+Parent Qid               NONE
+Child Qid                NONE
+Number of SQL Processes  1
+Number of Cpus           1
+Execution Priority       -1
+Transaction Id           -1
+Source String            SELECT
+CUR_SERVICE,PLAN,TEXT,CUR_SCHEMA,RULE_NAME,APPL_NAME,SESSION_NAME,DSN_NAME,ROLE_NAME,DEFAULT_SCHEMA_ACCESS_ONLY
+ FROM(VALUES(CAST('HP_DEFAULT_SERVICE' as VARCHAR(50)),CAST(0 AS INT),CAST(0 AS INT),CAST('NEO.USR' as
+VARCHAR(260)),CAST('' as VARCHAR(
+SQL Source Length        548
+Rows Returned            1
+First Row Returned Time  2011/03/30 07:29:15.469778
+Last Error before AQR    0
+Number of AQR retries    0
+Delay before AQR         0
+No. of times reclaimed   0
+Stats Collection Type    OPERATOR_STATS
+SQL Process Busy Time    0
+UDR Process Busy Time    0
+SQL Space Allocated      32 KB
+SQL Space Used           3 KB
+SQL Heap Allocated       7 KB
+SQL Heap Used            1 KB
+EID Space Allocated      0 KB
+EID Space Used           0 KB
+EID Heap Allocated       0 KB
+EID Heap Used            0 KB
+Processes Created        0
+Process Create Time      0
+Request Message Count    0
+Request Message Bytes    0
+Reply Message Count      0
+Reply Message Bytes      0
+Scr. Overflow Mode       DISK
+Scr File Count           0
+Scr. Buffer Blk Size     0
+Scr. Buffer Blks Read    0
+Scr. Buffer Blks Written 0
+Scr. Read Count          0
+Scr. Write Count         0
+
+--- SQL operation complete.
+```
+
+<<<
+* This GET STATISTICS command returns PERTABLE statistics for the
+specified query ID (note that this command should be issued in the same
+session):
++
+```
+SQL> GET STATISTICS FOR QID
++> "MXID1100800517921216818752807267200000000030000_48_SQL_CUR_2"
++> ;
+
+Qid                      MXID1100800517921216818752807267200000000030000_48_SQL_CUR_2
+Compile Start Time       2011/03/30 00:53:21.382211
+Compile End Time         2011/03/30 00:53:22.980201
+Compile Elapsed Time                 0:00:01.597990
+Execute Start Time       2011/03/30 00:53:23.079979
+Execute End Time         -1
+Execute Elapsed Time                 7:16:13.494563
+State                    OPEN
+Rows Affected            -1
+SQL Error Code           0
+Stats Error Code         0
+Query Type               SQL_SELECT_NON_UNIQUE
+Estimated Accessed Rows  2,487,984
+Estimated Used Rows      2,487,984
+Parent Qid               NONE
+Child Qid                NONE
+Number of SQL Processes  129
+Number of Cpus           9
+Execution Priority       -1
+Transaction Id           34359956800
+Source String            select count(*) from
+MANAGEABILITY.INSTANCE_REPOSITORY.EVENTS_TEXT K,
+MANAGEABILITY.INSTANCE_REPOSITORY.EVENTS_TEXT J,
+MANAGEABILITY.INSTANCE_REPOSITORY.EVENTS_TEXT H,
+MANAGEABILITY.INSTANCE_REPOSITORY.EVENTS_TEXT G
+SQL Source Length        220
+Rows Returned            0
+First Row Returned Time  -1
+Last Error before AQR    0
+Number of AQR retries    0
+Delay before AQR         0
+No. of times reclaimed   0
+Stats Collection Type    OPERATOR_STATS
+SQL Process Busy Time    830,910,830,000
+UDR Process Busy Time    0
+SQL Space Allocated      179,049                  KB
+SQL Space Used           171,746                  KB
+SQL Heap Allocated       1,140,503                KB
+SQL Heap Used            1,138,033                KB
+EID Space Allocated      46,080                   KB
+EID Space Used           42,816                   KB
+EID Heap Allocated       18,624                   KB
+EID Heap Used            192                      KB
+Processes Created        32
+Process Create Time      799,702
+Request Message Count    202,214
+Request Message Bytes    27,091,104
+Reply Message Count      197,563
+Reply Message Bytes      1,008,451,688
+Scr. Overflow Mode       DISK
+Scr File Count           0
+Scr. Buffer Blk Size     0
+Scr. Buffer Blks Read    0
+Scr. Buffer Blks Written 0
+Scr. Read Count          0
+Scr. Write Count         0 
+
+Table Name
+   Records Accessed       Records Used   Disk   Message     Message   Lock   Lock   Disk Process   Open   Open
+   Estimated/Actual   Estimated/Actual   I/Os     Count     Bytes     Escl   wait   Busy Time      Count  Time
+MANAGEABILITY.INSTANCE_REPOSITORY.EVENTS_TEXT(H)
+            621,996            621,996
+            621,998            621,998      0       441  10,666,384      0       0       303,955      32  15,967
+MANAGEABILITY.INSTANCE_REPOSITORY.EVENTS_TEXT(J)
+            621,996            621,996
+            621,998            621,998      0       439  10,666,384      0        0      289,949      32  19,680
+MANAGEABILITY.INSTANCE_REPOSITORY.EVENTS_TEXT(K)
+            621,996            621,996
+            621,998            621,998      0       439  10,666,384      0        0      301,956      32  14,419
+MANAGEABILITY.INSTANCE_REPOSITORY.EVENTS_TEXT(G)
+                  0            621,996
+                  0                  0      0       192   4,548,048      0         0           0      32  40,019
+
+--- SQL operation complete.
+```
+
+<<<
+* This GET STATISTICS command returns ACCUMULATED statistics for the
+most recently executed statement in the same session:
++
+```
+SQL> GET STATISTICS FOR QID CURRENT ACCUMULATED;
+
+Qid                      MXID1100802517321216821277534304000000000340000_957_SQL_CUR_6
+Compile Start Time       2011/03/30 08:05:07.646667
+Compile End Time         2011/03/30 08:05:07.647622
+Compile Elapsed Time                0:00:00.000955
+Execute Start Time       2011/03/30 08:05:07.652710
+Execute End Time         2011/03/30 08:05:07.740461
+Execute Elapsed Time                0:00:00.087751
+State                    CLOSE
+Rows Affected            0
+SQL Error Code           100
+Stats Error Code         0
+Query Type               SQL_SELECT_NON_UNIQUE
+Estimated Accessed Rows  0
+Estimated Used Rows      0
+Parent Qid               NONE
+Child Qid                NONE
+Number of SQL Processes  0
+Number of Cpus           0
+Execution Priority       -1
+Transaction Id           -1
+Source String            SELECT
+CUR_SERVICE,PLAN,TEXT,CUR_SCHEMA,RULE_NAME,APPL_NAME,SESSION_NAME,DSN_NAME,ROLE_NAME,DEFAULT_SCHEMA_ACCESS_ONLY
+FROM(VALUES(CAST('HP_DEFAULT_SERVICE' as VARCHAR(50)),CAST(0 AS INT),CAST(0 AS INT),CAST('NEO.SCH' as
+VARCHAR(260)),CAST('' as VARCHAR(
+SQL Source Length        548
+Rows Returned            1
+First Row Returned Time  2011/03/30 08:05:07.739827
+Last Error before AQR    0
+Number of AQR retries    0
+Delay before AQR         0
+No. of times reclaimed   0
+Stats Collection Type    OPERATOR_STATS
+Accessed Rows            0
+Used Rows                0
+Message Count            0
+Message Bytes            0
+Stats Bytes              0
+Disk IOs                 0
+Lock Waits               0
+Lock Escalations         0
+Disk Process Busy Time   0
+SQL Process Busy Time    0
+UDR Process Busy Time    0
+SQL Space Allocated      32                       KB
+SQL Space Used           3                        KB
+SQL Heap Allocated       7                        KB
+SQL Heap Used            1                        KB
+EID Space Allocated      0                        KB
+EID Space Used           0                        KB
+EID Heap Allocated       0                        KB
+EID Heap Used            0                        KB
+Opens                    0
+Open Time                0
+Processes Created        0
+Process Create Time      0
+Request Message Count    0
+Request Message Bytes    0
+Reply Message Count      0
+Reply Message Bytes      0
+Scr. Overflow Mode       UNKNOWN
+Scr. File Count          0
+Scr. Buffer Blk Size     0
+Scr. Buffer Blks Read    0
+Scr. Buffer Blks Written 0
+Scr. Read Count          0
+Scr. Write Count         0
+
+--- SQL operation complete.
+```
+
+<<<
+* These GET STATISTICS commands return PERTABLE statistics for the first
+active query in the specified process ID:
++
+```
+SQL> GET STATISTICS FOR PID 0,27195;
+SQL> GET STATISTICS FOR PID $Z000F3R;
+```
+
+[[displaying_sql_runtime_statistics]]
+== Displaying SQL Runtime Statistics
+
+By default, GET STATISTICS displays table-wise statistics (PERTABLE). If
+you want to view the statistics in a different format, use the
+appropriate view option of the GET STATISTICS command.
+
+RMS provides abbreviated statistics information for prepared statements
+and full runtime statistics for executed statements.
+
+The following table shows the RMS counters that are returned by GET
+STATISTICS, tokens from the STATISTICS table-valued function that relate
+to the RMS counters, and descriptions of the counters and tokens.
+
+[cols="25%l,25%l,50%",options="header"]
+|===
+| Counter Name         | Tokens in STATISTICS Table-Valued Function | Description
+| Qid                  | Qid                                        | A unique ID generated for each query. Each time a SQL statement is prepared, a new query ID is generated.
+| Compile Start Time   | CompStartTime                              | Time when the query compilation started or time when PREPARE for this query started.
+| Compile End Time     | CompEndTime                                | Time when the query compilation ended or time when PREPARE for this query ended.
+| Compile Elapsed Time | CompElapsedTime                            | Amount of actual time to prepare the query.
+| Execute Start Time   | ExeStartTime                               | Time when query execution started. 
+| Execute End Time     | ExeEndTime                                 | Time when query execution ended. When a query is executing, Execute End Time is -1.
+| Execute Elapsed Time | ExeElapsedTime                             | Amount of actual time used by the SQL executor to execute the query.
+| State                | State                                      | Internally used.
+| Rows Affected        | RowsAffected                               | Represents the number of rows affected by the INSERT, UPDATE, or DELETE (IUD) SQL statements.
+The value is -1 for SELECT statements and other non-IUD SQL statements.
+| SQL Error Code       | SQLErrorCode                               | Top-level error code returned by the query, indicating whether the query completed with warnings, errors,
+or successfully. A positive number indicates a warning. A negative number indicates an error. The value reflects the query only up to the point at which GET STATISTICS was executed and may not be final.
+| Stats Error Code     | StatsErrorCode                             | Error code returned to the statistics collector while obtaining statistics from RMS. If an error code is
+returned, counter values may be incorrect. Reissue the GET STATISTICS command.
+| Query Type           | QueryType                                  |  Type of DML statement and enum value: +
+ +
+- SQL_SELECT_UNIQUE=1 +
+- SQL_SELECT_NON_UNIQUE=2 +
+- SQL_INSERT_UNIQUE=3 +
+- SQL_INSERT_NON_UNIQUE=4 +
+- SQL_UPDATE_UNIQUE=5 +
+- SQL_UPDATE_NON_UNIQUE=6 +
+- SQL_DELETE_UNIQUE=7 +
+- SQL_DELETE_NON_UNIQUE=8 +
+- SQL_CONTROL=9 +
+- SQL_SET_TRANSACTION=10 +
+- SQL_SET_CATALOG=11 +
+- SQL_SET_SCHEMA=12 +
+- SQL_CALL_NO_RESULT_SETS=13 +
+- SQL_CALL_WITH_RESULT_SETS=14 +
+- SQL_SP_RESULT_SET=15 +
+- SQL_INSERT_ROWSET_SIDETREE=16 +
+- SQL_CAT_UTIL=17 +
+- SQL_EXE_UTIL=18 +
+- SQL_OTHER=-1 +
+- SQL_UNKNOWN=0
+| Estimated Accessed Rows | EstRowsAccessed                         | Compiler's estimated number of rows accessed by the executor in TSE.
+| Estimated Used Rows  | EstRowsUsed                                | Compiler's estimated number of rows returned by the executor in TSE after applying the predicates.
+| Parent Qid           | parentQid                                  | A unique ID for the parent query. If there is no parent query ID associated with the query, RMS returns NONE.
+For more information, see <<using_the_parent_query_id,Using the Parent Query ID>>.
+| Child Qid            | childQid                                   | A unique ID for the child query. If there is no child query, then there will be no child query ID and
+RMS returns NONE. For more information, see <<child_query_id,Child Query ID>>.
+| Number of SQL Processes | numSqlProcs                             | Represents the number of SQL processes (excluding TSE processes) involved in executing the query.
+| Number of CPUs       | numCpus                                    | Represents the number of nodes on which SQL is processing the query.
+| Transaction ID       | transId                                    | Represents the transaction ID of the transaction involved in executing the query. When no transaction exists,
+the Transaction ID is -1.
+| Source String        | sqlSrc                                     | Contains the first 254 bytes of source string.
+| SQL Source Length    | sqlSrcLen                                  | The actual length of the SQL source string.
+| Rows Returned        | rowsReturned                               | Represents the number of rows returned from the root operator at the master executor process.
+| First Row Returned Time | firstRowReturnTime                      | Represents the actual time that the first row is returned by the master root operator.
+| Last Error Before AQR | LastErrorBeforeAQR                        | The error code that triggered Automatic Query Retry (AQR) for the most recent retry. If the value is not 0,
+this is the error code that triggered the most recent AQR.
+| Number of AQR retries | AQRNumRetries                             | The number of retries for the current query until now.
+| Delay before AQR     | DelayBeforeAQR                             | Delay in seconds that SQL waited before initiating AQR.
+| No. of times reclaimed | reclaimSpaceCnt                          | When a process is under virtual memory pressure, the execution space occupied by the queries executed much
+earlier will be reclaimed to free up space for the upcoming queries. This counter represents how many times this particular query has been reclaimed.
+|                      | statsRowType                               | statsRowType can be one of the following: +
+ +
+- SQLSTATS_DESC_OPER_STATS=0 +
+- SQLSTATS_DESC_ROOT_OPER_STATS=1 +
+- SQLSTATS_DESC_PERTABLE_STATS=11 +
+- SQLSTATS_DESC_UDR_STATS=13 +
+- SQLSTATS_DESC_MASTER_STATS=15 +
+- SQLSTATS_DESC_RMS_STATS=16 +
+- SQLSTATS_DESC_BMO_STATS=17 
+| Stats Collection Type | StatsType                                 | Collection type, which is OPERATOR_STATS by default. StatsType can be one of the following: +
+ +
+- SQLCLI_NO_STATS=0 +
+- SQLCLI_ACCUMULATED_STATS=2 +
+- SQLCLI_PERTABLE_STATS=3 +
+- SQLCLI_OPERATOR_STATS=5
+| Accessed Rows (Rows Accessed) | AccessedRows                      | Actual number of rows accessed by the executor in TSE.
+| Used Rows (Rows Used) | UsedRows                                  | Number of rows returned by TSE after applying the predicates. In a push-down plan, TSE may not return all the used rows.
+| Message Count        | NumMessages                                | Count of the number of messages sent to TSE.
+| Message Bytes        | MessageBytes                               | Count of the message bytes exchanged with TSE.
+| Stats Bytes          | StatsBytes                                 | Number of bytes returned for statistics counters from TSE.
+| Disk IOs             | DiskIOs                                    | Number of physical disk reads for accessing the tables.
+| Lock Waits           | LockWaits                                  | Number of times this statement had to wait on a conflicting lock.
+| Lock Escalations     | Escalations                                | Number of times row locks escalated to a file lock during the execution of this statement.
+| Disk Process Busy Time | ProcessBusyTime                          | An approximation of the total node time in microseconds spent by TSE for executing the query.
+| SQL Process Busy Time | CpuTime                                   | An approximation of the total node time in microseconds spent in the master and ESPs involved in the query.
+| UDR Process Busy Time (same as UDR CPU Time) | udrCpuTime         | An approximation of the total node time in microseconds spent in the UDR server process.
+| UDR Server ID        | UDRServerId                                | MXUDR process ID.
+| Recent Request Timestamp |                                        | Actual timestamp of the most recent request sent to MXUDR.
+| Recent Reply Timestamp |                                          | Actual timestamp of the most recent reply received from MXUDR.
+| SQL Space Allocated^1^ | SpaceTotal^1^                            | The amount of "space" type of memory in KB allocated in the master and ESPs involved in the query.
+| SQL Space U

<TRUNCATED>


[05/15] incubator-trafodion git commit: Major reorganization of the Client Installation Guide.

Posted by gt...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/sql_reference/src/asciidoc/_chapters/sql_functions_and_expressions.adoc
----------------------------------------------------------------------
diff --git a/docs/sql_reference/src/asciidoc/_chapters/sql_functions_and_expressions.adoc b/docs/sql_reference/src/asciidoc/_chapters/sql_functions_and_expressions.adoc
index 1053033..221668a 100644
--- a/docs/sql_reference/src/asciidoc/_chapters/sql_functions_and_expressions.adoc
+++ b/docs/sql_reference/src/asciidoc/_chapters/sql_functions_and_expressions.adoc
@@ -1,7885 +1,7885 @@
-////
-/**
-* @@@ START COPYRIGHT @@@
-*
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*   http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing,
-* software distributed under the License is distributed on an
-* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-* KIND, either express or implied.  See the License for the
-* specific language governing permissions and limitations
-* under the License.
-*
-* @@@ END COPYRIGHT @@@
-*/
-////
-
-[[sql_functions_and_expressions]]
-= SQL Functions and Expressions
-
-This section describes the syntax and semantics of specific functions
-and expressions that you can use in {project-name} SQL statements. The
-functions and expressions are categorized according to their
-functionality.
-
-[[standard_normalization]]
-== Standard Normalization
-
-For datetime functions, the definition of standard normalization is: If
-the ending day of the resulting date is invalid, the day will be rounded
-DOWN to the last day of the result month.
-
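-For example, applying this rule, adding one month to DATE '2007-01-31'
-yields DATE '2007-02-28', because 2007-02-31 is not a valid date:
-
-```
-ADD_MONTHS(DATE '2007-01-31', 1)
-```
-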
-== Aggregate (Set) Functions
-
-An aggregate (or set) function operates on a group or groups of rows
-retrieved by the SELECT statement or the subquery in which the aggregate
-function appears.
-
-
-[cols="25%,75%"]
-|===
-| <<avg_function,AVG Function>>                 | Computes the average of a group of numbers derived from the evaluation
-of the expression argument of the function.
-| <<count_function,COUNT Function>>             | Counts the number of rows that result from a query (by using
-*) or the number of rows that contain a distinct value in the one-column
-table derived from the expression argument of the function (optionally
-distinct values).
-| <<max_function,MAX/MAXIMUM Function>> | Determines a maximum value from the group of values derived from the
-evaluation of the expression argument.
-| <<min_function,MIN Function>>                 | Determines a minimum value from the group of values derived from the
-evaluation of the expression argument.
-| <<stddev_function,STDDEV Function>>           | Computes the statistical standard deviation of a group of numbers
-derived from the evaluation of the expression argument of the function.
-The numbers can be weighted.
-| <<sum_function,SUM Function>>                 | Computes the sum of a group of numbers derived from the evaluation of
-the expression argument of the function.
-"VARIANCE Function" 
-Computes the statistical variance of a group of numbers derived from the
-evaluation of the expression argument of the function. The numbers can
-be weighted.
-|===
-
-
-Columns and expressions can be arguments of an aggregate function. The
-expressions cannot contain aggregate functions or subqueries.
-
-An aggregate function can accept an argument specified as DISTINCT,
-which eliminates duplicate values before the aggregate function is
-applied. See <<distinct_aggregate_functions,DISTINCT Aggregate Functions>>.
-
-If you include a GROUP BY clause in the SELECT statement, the columns
-you refer to in the select list must be either grouping columns or
-arguments of an aggregate function. If you do not include
-a GROUP BY clause but you specify an aggregate function in the select
-list, all rows of the SELECT result table form the one and only group.
-
-See the individual entry for the function.
-
-[[character_string_functions]]
-== Character String Functions
-
-These functions manipulate character strings and use a character value
-expression as an argument or return a result of a character data type.
-Character string functions treat each single-byte or multi-byte character
-in an input string as one character, regardless of the byte length of
-the character.
-
-
-[cols="25%,75%"]
-|===
-| <<ascii_function,ASCII Function>>                       | Returns the ASCII code value of the first character of a character value
-expression.
-| <<char_function,CHAR Function>>                         | Returns the specified code value in a character set.
-| <<char_length_function,CHAR_LENGTH Function>>           | Returns the number of characters in a string. You can also use
-CHARACTER_LENGTH.
-| <<code_value_function,CODE_VALUE Function>>             | Returns an unsigned integer that is the code point of the first
-character in a character value expression that can be associated with
-one of the supported character sets.
-| <<concat_function,CONCAT Function>>                     | Returns the concatenation of two character value expressions as a string
-value. You can also use the concatenation operator (\|\|).
-| <<insert_function,INSERT Function>>                     | Returns a character string where a specified number of characters within
-the character string have been deleted and then a second character
-string has been inserted at a specified start position.
-| <<lcase_function,LCASE Function>>                       | Down-shifts alphanumeric characters. You can also use LOWER.
-| <<left_function,LEFT Function>>                         | Returns the leftmost specified number of characters from a character expression.
-| <<locate_function,LOCATE Function>>                     | Returns the position of a specified substring within a character string.
-You can also use POSITION.
-| <<lower_function,LOWER Function>>                       | Down-shifts alphanumeric characters. You can also use LCASE.
-| <<lpad_function,LPAD Function>>                         | Replaces the leftmost specified number of characters in a character
-expression with a padding character.
-| <<ltrim_function,LTRIM Function>>                       | Removes leading spaces from a character string.
-| <<octet_length_function,OCTET_LENGTH Function>>         | Returns the length of a character string in bytes.
-| <<position_function,POSITION Function>>                 | Returns the position of a specified substring within a character string.
-You can also use LOCATE.
-| <<repeat_function,REPEAT Function>>                     | Returns a character string composed of the evaluation of a character
-expression repeated a specified number of times.
-| <<replace_function,REPLACE Function>>                   | Returns a character string where all occurrences of a specified
-character string in the original string are replaced with another
-character string.
-| <<right_function,RIGHT Function>>                       | Returns the rightmost specified number of characters from a character
-expression.
-| <<rpad_function,RPAD Function>>                         | Replaces the rightmost specified number of characters in a character
-expression with a padding character.
-| <<rtrim_function,RTRIM Function>>                       | Removes trailing spaces from a character string.
-| <<space_function,SPACE Function>>                       | Returns a character string consisting of a specified number of spaces.
-| <<substring_function,SUBSTRING/SUBSTR Function>>        | Extracts a substring from a character string.
-| <<translate_function,TRANSLATE Function>>               | Translates a character string from a source character set to a target
-character set.
-| <<trim_function,TRIM Function>>                         | Removes leading or trailing characters from a character string.
-| <<ucase_function,UCASE Function>>                       | Up-shifts alphanumeric characters. You can also use UPSHIFT or UPPER.
-| <<upper_function,UPPER Function>>                       | Up-shifts alphanumeric characters. You can also use UPSHIFT or UCASE.
-| <<upshift_function,UPSHIFT Function>>                   | Up-shifts alphanumeric characters. You can also use UPPER or UCASE.
-|===
-
-See the individual entry for the function.
-
-[[datetime_functions]]
-== Datetime Functions
-
-These functions use either a datetime value expression as an argument or
-return a result of datetime data type:
-
-[cols="25%,75%"]
-|===
-| <<add_months_function,ADD_MONTHS Function>>                               | Adds the integer number of months specified by _int_expr_ 
-to _datetime_expr_ and normalizes the result.
-| <<converttimestamp_function,CONVERTTIMESTAMP Function>>                   | Converts a Julian timestamp to a TIMESTAMP value.
-| <<current_function,CURRENT Function>> | Returns the current timestamp. You can also use the
-<<current_timestamp_function,CURRENT_TIMESTAMP Function>>. 
-| <<current_date_function,CURRENT_DATE Function>>                           | Returns the current date.
-| <<current_time_function,CURRENT_TIME Function>>                           | Returns the current time.
-| <<current_timestamp_function,CURRENT_TIMESTAMP Function>> | Returns the current timestamp. You can also use the <<current_function,CURRENT Function>>.
-| <<date_add_function,DATE_ADD Function>>                                   | Adds the interval specified by _interval_expression_
-to _datetime_expr_.
-| <<date_part_function_of_an_interval,DATE_PART Function (of an Interval)>> | Extracts the datetime field specified by _text_ from the interval value
-specified by interval and returns the result as an exact numeric value.
-| <<date_part_function_of_a_timestamp,DATE_PART Function (of a Timestamp)>> | Extracts the datetime field specified by _text_ from the datetime value
-specified by timestamp and returns the result as an exact numeric value.
-| <<date_sub_function,DATE_SUB Function>>                                   | Subtracts the specified _interval_expression_ from
-_datetime_expr._
-| <<date_trunc_function,DATE_TRUNC Function>>                               | Returns the date with the time portion of the day truncated.
-| <<dateadd_function,DATEADD Function>>                                     | Adds the interval specified by _datepart_ and _num_expr_
-to _datetime_expr_.
-| <<datediff_function,DATEDIFF Function>>                                   | Returns the integer value for the number of _datepart_ units of time
-between _startdate_ and _enddate_.
-| <<dateformat_function,DATEFORMAT Function>>                               | Formats a datetime value for display purposes.
-| <<day_function,DAY Function>>                                             | Returns an integer value in the range 1 through 31 that represents the
-corresponding day of the month. You can also use DAYOFMONTH.
-| <<dayname_function,DAYNAME Function>>                                     | Returns the name of the day of the week from a date or timestamp
-expression.
-| <<dayofmonth_function,DAYOFMONTH Function>>                               | Returns an integer value in the range 1 through 31 that represents the
-corresponding day of the month. You can also use DAY.
-| <<dayofweek_function,DAYOFWEEK Function>>                                 | Returns an integer value in the range 1 through 7 that represents the
-corresponding day of the week.
-| <<dayofyear_function,DAYOFYEAR Function>>                                 | Returns an integer value in the range 1 through 366 that represents the
-corresponding day of the year.
-| <<extract_function,EXTRACT Function>>                                     | Returns a specified datetime field from a datetime value expression or
-an interval value expression.
-| <<hour_function,HOUR Function>>                                           | Returns an integer value in the range 0 through 23 that represents the
-corresponding hour of the day.
-| <<juliantimestamp_function,JULIANTIMESTAMP Function>>                     | Converts a datetime value to a Julian timestamp.
-| <<minute_function,MINUTE Function>>                                       | Returns an integer value in the range 0 through 59 that represents the
-corresponding minute of the hour.
-| <<month_function,MONTH Function>>                                         | Returns an integer value in the range 1 through 12 that represents the
-corresponding month of the year.
-| <<monthname_function,MONTHNAME Function>>                                 | Returns a character literal that is the name of the month of the year
-(January, February, and so on).
-| <<quarter_function,QUARTER Function>>                                     | Returns an integer value in the range 1 through 4 that represents the
-corresponding quarter of the year.
-| <<second_function,SECOND Function>>                                       | Returns an integer value in the range 0 through 59 that represents the
-corresponding second of the minute.
-| <<timestampadd_function,TIMESTAMPADD Function>>                           | Adds the interval of time specified by _interval-ind_ and
-_num_expr_ to _datetime_expr_.
-| <<timestampdiff_function,TIMESTAMPDIFF Function>>                         | Returns the integer value for the number of _interval-ind_
-units of time between _startdate_ and _enddate_.
-| <<week_function,WEEK Function>>                                           | Returns an integer value in the range 1 through 54 that represents the
-corresponding week of the year.
-| <<year_function,YEAR Function>>                                           | Returns an integer value that represents the year.
-|===
-
-See the individual entry for the function.
-
-[[mathematical_functions]]
-== Mathematical Functions
-
-Use these mathematical functions within an SQL numeric value expression:
-
-[cols="25%,75%"]
-|===
-| <<abs_function,ABS Function>>         | Returns the absolute value of a numeric value expression. 
-| <<acos_function,ACOS Function>>       | Returns the arccosine of a numeric value expression as an angle expressed in radians.
-| <<asin_function,ASIN Function>>       | Returns the arcsine of a numeric value expression as an angle expressed in radians.
-| <<atan_function,ATAN Function>>       | Returns the arctangent of a numeric value expression as an angle expressed in radians.
-| <<atan2_function,ATAN2 Function>>     | Returns the arctangent of the x and y coordinates, specified by two numeric value expressions, as an angle expressed in radians.
-| <<ceiling_function,CEILING Function>> | Returns the smallest integer greater than or equal to a numeric value expression.
-| <<cos_function,COS Function>>         | Returns the cosine of a numeric value expression, where the expression is an angle expressed in radians.
-| <<cosh_function,COSH Function>>       | Returns the hyperbolic cosine of a numeric value expression, where the expression is an angle expressed in radians.
-| <<degrees_function,DEGREES Function>> | Converts a numeric value expression expressed in radians to the number of degrees.
-| <<exp_function,EXP Function>>         | Returns the exponential value (to the base e) of a numeric value expression.
-| <<floor_function,FLOOR Function>>     | Returns the largest integer less than or equal to a numeric value  expression.
-| <<log_function,LOG Function>>         | Returns the natural logarithm of a numeric value expression.
-| <<log10_function,LOG10 Function>>     | Returns the base 10 logarithm of a numeric value expression.
-| <<mod_function,MOD Function>>         | Returns the remainder (modulus) of an integer value expression divided by an integer value expression.
-| <<nullifzero_function,NULLIFZERO Function>> | Returns the value of the operand unless it is zero, in which case it returns NULL.
-| <<pi_function,PI Function>>           | Returns the constant value of pi as a floating-point value.
-| <<power_function,POWER Function>>     | Returns the value of a numeric value expression raised to the power of an integer value expression. You can also use the exponential operator \*\*.
-| <<radians_function,RADIANS Function>> | Converts a numeric value expression expressed in degrees to the number of radians.
-| <<round_function,ROUND Function>>     | Returns the value of _numeric_expr_ rounded to _num_ places to the right of the decimal point.
-| <<sign_function,SIGN Function>>       | Returns an indicator of the sign of a numeric value expression. If value is less than zero, returns -1 as the indicator. If value is zero,
-returns 0. If value is greater than zero, returns 1.
-| <<sin_function,SIN Function>>         | Returns the sine of a numeric value expression, where the expression is an angle expressed in radians.
-| <<sinh_function,SINH Function>>       | Returns the hyperbolic sine of a numeric value expression, where the expression is an angle expressed in radians.
-| <<sqrt_function,SQRT Function>>       | Returns the square root of a numeric value expression.
-| <<tan_function,TAN Function>>         | Returns the tangent of a numeric value expression, where the expression is an angle expressed in radians.
-| <<tanh_function,TANH Function>>       | Returns the hyperbolic tangent of a numeric value expression, where the expression is an angle expressed in radians.
-| <<zeroifnull_function,ZEROIFNULL Function>> | Returns the value of the operand unless it is NULL, in which case it returns zero.
-|===
-
-See the individual entry for the function.
-
-[[sequence_functions]]
-== Sequence Functions
-
-Sequence functions operate on ordered rows of the intermediate result
-table of a SELECT statement that includes a SEQUENCE BY clause. Sequence
-functions are categorized generally as difference, moving, offset, or
-running.
-
-Some sequence functions, such as ROWS SINCE, require sequentially
-examining every row in the history buffer until the result is computed.
-Examining a large history buffer in this manner for a condition that has
-not been true for many rows could be an expensive operation. In
-addition, such operations may not be parallelized because the entire
-sorted result set must be available to compute the result of the
-sequence function.
-
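-For illustration, this sketch (which assumes the sample PERSNL.EMPLOYEE
-table used elsewhere in this manual, with an EMPNUM employee-number
-column) computes a running average of salaries over rows ordered by
-employee number:
-
-```
-SELECT empnum, salary, RUNNINGAVG(salary)
-FROM persnl.employee
-SEQUENCE BY empnum;
-```
-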
-[[difference_sequence_functions]]
-=== Difference sequence functions
-
-[cols="25%,75%"]
-|===
-| <<diff1_function,DIFF1 Function>> | Calculates differences between values of a column expression in the current row and previous rows.
-| <<diff2_function,DIFF2 Function>> | Calculates differences between values of the result of DIFF1 of the current row and DIFF1 of previous rows.
-|===
-
-[[moving_sequence_functions]]
-=== Moving sequence functions
-
-[cols="25%,75%"]
-|===
-| <<movingcount_function,MOVINGCOUNT Function>>       | Returns the number of non-null values of a column expression in the current window.
-| <<movingmax_function,MOVINGMAX Function>>           | Returns the maximum of non-null values of a column expression in the current window.
-| <<movingmin_function,MOVINGMIN Function>>           | Returns the minimum of non-null values of a column expression in the current window.
-| <<movingstddev_function,MOVINGSTDDEV Function>>     | Returns the standard deviation of non-null values of a column expression in the current window.
-| <<movingsum_function,MOVINGSUM Function>>           | Returns the sum of non-null values of a column expression in the current window.
-| <<movingvariance_function,MOVINGVARIANCE Function>> | Returns the variance of non-null values of a column expression in the current window.
-|===
-
-[[offset_sequence_function]]
-=== Offset sequence function
-
-[cols="25%,75%"]
-|===
-| <<offset_function,OFFSET Function>> | Retrieves columns from previous rows.
-|===
-
-<<<
-[[running_sequence_functions]]
-=== Running sequence functions
-
-[cols="25%,75%"]
-|===
-| <<runningavg_function,RUNNINGAVG Function>>             | Returns the average of non-null values of a column expression up to and including the current row.
-| <<runningcount_function,RUNNINGCOUNT Function>>         | Returns the number of rows up to and including the current row.
-| <<runningmax_function,RUNNINGMAX Function>>             | Returns the maximum of values of a column expression up to and including the current row.
-| <<runningmin_function,RUNNINGMIN Function>>             | Returns the minimum of values of a column expression up to and including the current row.
-| <<runningrank_function,RUNNINGRANK Function>>           | Returns the rank of the given value of an intermediate result table ordered by a SEQUENCE BY clause in a SELECT statement.
-| <<runningstddev_function,RUNNINGSTDDEV Function>>       | Returns the standard deviation of non-null values of a column expression up to and including the current row.
-| <<runningsum_function,RUNNINGSUM Function>>             | Returns the sum of non-null values of a column expression up to and including the current row.
-| <<runningvariance_function,RUNNINGVARIANCE Function>>   | Returns the variance of non-null values of a column expression up to and including the current row.
-|===
-
-[[other_sequence_functions]]
-=== Other sequence functions
-
-[cols="25%,75%"]
-|===
-| <<lastnotnull_function,LASTNOTNULL Function>>               | Returns the last non-null value for the specified column expression. If only null values have been returned, returns null.
-| <<rows_since_function,ROWS SINCE Function>>                 | Returns the number of rows counted since the specified condition was last true.
-| <<rows_since_changed_function,ROWS SINCE CHANGED Function>> | Returns the number of rows counted since the specified set of values last changed.
-| <<this_function,THIS Function>>                             | Used in ROWS SINCE to distinguish between the value of the column in the current row and the value of the column in previous rows.
-|===
-
-See <<sequence_by_clause,SEQUENCE BY Clause>> and the individual entry for each function.
-
-<<<
-[[other_functions_and_expressions]]
-== Other Functions and Expressions
-
-Use these other functions and expressions in an SQL value expression:
-
-
-[cols="25%,75%"]
-|===
-| <<authname_function,AUTHNAME Function>>                         | Returns the authorization name associated with the specified authorization ID number.
-| <<bitand_function,BITAND Function>>                             | Performs an AND operation on corresponding bits of the two operands.
-| <<case_expression,CASE (Conditional) Expression>>               | A conditional expression. The two forms of the CASE expression are simple and searched.
-| <<cast_expression,CAST Expression>>                             | Converts a value from one data type to another data type that you specify.
-| <<coalesce_function,COALESCE Function>>                         | Returns the value of the first expression in the list that is not NULL;
-if all the expressions are NULL, the function returns NULL.
-| <<converttohex_function,CONVERTTOHEX Function>>                 | Converts the specified value expression to hexadecimal for display purposes.
-| <<current_user_function,CURRENT_USER Function>>                 | Returns the database user name of the current user who invoked the function.
-| <<decode_function,DECODE Function>>                             | Compares _expr_ to each _test_expr_ value one by one in the order provided.
-| <<explain_function,EXPLAIN Function>>                           | Generates a result table describing an access plan for a SELECT, INSERT, DELETE, or UPDATE statement.
-| <<isnull_function,ISNULL Function>>                             | Returns the first argument if it is not null, otherwise it returns the second argument.
-| <<nullif_function,NULLIF Function>>                             | Returns the value of the first operand if the two operands are not equal, otherwise it returns NULL.
-| <<nvl_function,NVL Function>>                                   | Returns the value of the first operand unless it is NULL, in which case it returns the value of the second operand.
-| <<user_function,USER Function>>                                 | Returns either the database user name of the current user who invoked the function or the database user name 
-associated with the specified user ID number.
-|===
-
-See the individual entry for the function.
-
-<<<
-[[abs_function]]
-== ABS Function
-
-The ABS function returns the absolute value of a numeric value
-expression. ABS is a {project-name} SQL extension.
-
-```
-ABS (numeric-expression)
-```
-
-* `_numeric-expression_`
-+
-is an SQL numeric value expression that specifies the value for the
-argument of the ABS function. The result is returned as an unsigned
-numeric value if the precision of the argument is less than 10 or as a
-LARGEINT if the precision of the argument is greater than or equal to
-10. See <<numeric_value_expressions,Numeric Value Expressions>>.
-
-[[examples_of_abs]]
-=== Examples of ABS
-
-* This function returns the value 8:
-+
-```
-ABS (-20 + 12)
-```
-
-<<<
-[[acos_function]]
-== ACOS Function
-
-The ACOS function returns the arccosine of a numeric value expression as
-an angle expressed in radians.
-
-ACOS is a {project-name} SQL extension. 
-
-```
-ACOS (numeric-expression)
-```
-
-* `_numeric-expression_`
-+
-is an SQL numeric value expression that specifies the value for the
-argument of the ACOS  function. The range for the value of the argument is 
-from -1 to +1. See <<numeric_value_expressions,Numeric Value Expressions>>.
-
-[[examples_of_acos]]
-=== Examples of ACOS
-
-* The ACOS function returns the value 3.49044274380724416E-001 or
-approximately 0.3491 in radians (which is 20 degrees).
-+
-```
-ACOS (0.9397)
-```
-
-* This function returns the value 0.3491. The function ACOS is the
-inverse of the function COS.
-+
-```
-ACOS (COS (0.3491))
-```
-
-<<<
-[[add_months_function]]
-== ADD_MONTHS Function
-
-The ADD_MONTHS function adds the integer number of months specified by
-_int_expr_ to _datetime_expr_ and normalizes the result. ADD_MONTHS is a {project-name} SQL
-extension.
-
-```
-ADD_MONTHS (datetime_expr, int_expr [, int2])
-```
-
-* `_datetime_expr_`
-+
-is an expression that evaluates to a datetime value of type DATE or
-TIMESTAMP. The return value is the same type as the _datetime_expr._ See
-<<datetime_value_expressions,Datetime Value Expressions>>.
-
-* `_int_expr_`
-+
-is an SQL numeric value expression of data type SMALLINT or INTEGER that
-specifies the number of months. See <<numeric_value_expressions,
-Numeric Value Expressions>>.
-
-* `_int2_`
-+
-is an unsigned integer constant. If _int2_ is omitted or is the literal
-0, the normalization is the standard normalization. If _int2_ is the
-literal 1, the normalization includes the standard normalization and if
-the starting day (the day part of _datetime_expr_) is the last day of
-the starting month, then the ending day (the day part of the result
-value) is set to the last valid day of the result month. See
-<<standard_normalization,Standard Normalization>>. See
-<<numeric_value_expressions,Numeric Value Expressions>>.
-
-<<<
-[[examples_of_add_months]]
-=== Examples of ADD_MONTHS
-
-* This function returns the value DATE '2007-03-31':
-+
-```
-ADD_MONTHS(DATE '2007-02-28', 1, 1)
-```
-
-* This function returns the value DATE '2007-03-28':
-+
-```
-ADD_MONTHS(DATE '2007-02-28', 1, 0)
-```
-
-* This function returns the value DATE '2008-03-28':
-+
-```
-ADD_MONTHS(DATE '2008-02-28', 1, 1)
-```
-
-* This function returns the timestamp '2009-02-28 00:00:00':
-+
-```
-ADD_MONTHS(timestamp'2008-02-29 00:00:00',12,1)
-```
-
-<<<
-[[ascii_function]]
-== ASCII Function
-
-The ASCII function returns the integer that is the ASCII code of the
-first character in a character string expression associated with either
-the ISO88591 character set or the UTF8 character set.
-
-ASCII is a {project-name} SQL extension.
-
-```
-ASCII (character-expression) 
-```
-
-* `_character-expression_`
-+
-is an SQL character value expression that specifies a string of
-characters. See <<character_value_expressions,Character Value Expressions>>.
-
-[[considerations_for_ascii]]
-=== Considerations For ASCII
-
-For a string expression in the UTF8 character set, if the value of the
-first byte in the string is greater than 127, {project-name} SQL returns this
-error message:
-
-```
-ERROR[8428] The argument to function ASCII is not valid.
-```
-
-[[examples_of_ascii]]
-=== Examples of ASCII
-
-* Select the column JOBDESC and return the ASCII code of the first
-character of the job description:
-+
-```
-SELECT jobdesc, ASCII (jobdesc) FROM persnl.job;
-
-JOBDESC           (EXPR)
------------------ --------
-MANAGER                 77
-PRODUCTION SUPV         80
-ASSEMBLER               65
-SALESREP                83
-...                    ...
-
---- 10 row(s) selected.
-```
-
-<<<
-[[asin_function]]
-== ASIN Function
-
-The ASIN function returns the arcsine of a numeric value expression as
-an angle expressed in radians.
-
-ASIN is a {project-name} SQL extension.
-
-```
-ASIN (numeric-expression)
-```
-
-* `_numeric-expression_`
-+
-is an SQL numeric value expression that specifies the value for the
-argument of the ASIN function. The range for the value of the argument is
-from -1 to +1. See <<numeric_value_expressions,Numeric Value Expressions>>.
-
-[[examples_of_asin]]
-=== Examples of ASIN
-
-* This function returns the value 3.49044414403046400e-001 or
-approximately 0.3491 in radians (which is 20 degrees):
-+
-```
-ASIN(0.3420)
-```
-
-* This function returns the value 0.3491. The function ASIN is the
-inverse of the function SIN.
-+
-```
-ASIN(SIN(0.3491))
-```
-
-<<<
-[[atan_function]]
-== ATAN Function
-
-The ATAN function returns the arctangent of a numeric value expression
-as an angle expressed in radians.
-
-ATAN is a {project-name} SQL extension.
-
-```
-ATAN ( numeric-expression )
-```
-
-* `_numeric-expression_`
-+
-is an SQL numeric value expression that specifies the value for the
-argument of the ATAN function. See <<numeric_value_expressions,Numeric Value Expressions>>.
-
-[[examples_of_atan]]
-=== Examples of ATAN
-
-* This function returns the value 8.72766423249958272E-001 or
-approximately 0.8727 in radians (which is 50 degrees):
-+
-```
-ATAN (1.192)
-```
-
-* This function returns the value 0.8727. The function ATAN is the
-inverse of the function TAN.
-+
-```
-ATAN (TAN (0.8727))
-```
-
-<<<
-[[atan2_function]]
-== ATAN2 Function
-
-The ATAN2 function returns the arctangent of the x and y coordinates,
-specified by two numeric value expressions, as an angle expressed in
-radians.
-
-ATAN2 is a {project-name} SQL extension.
-
-```
-ATAN2 (numeric-expression-x,numeric-expression-y)
-```
-
-* `_numeric-expression-x_, _numeric-expression-y_`
-+
-are SQL numeric value expressions that specify the value for the x and y
-coordinate arguments of the ATAN2 function. See
-<<numeric_value_expressions,Numeric Value Expressions>>.
-
-[[examples_of_atan2]]
-=== Examples of ATAN2
-
-* This function returns the value 2.66344329881899520E+000, or
-approximately 2.6634:
-+
-```
-ATAN2 (1.192,-2.3)
-```
-
-<<<
-[[authname_function]]
-== AUTHNAME Function
-
-The AUTHNAME function returns the name of the authorization ID that is
-associated with the specified authorization ID number.
-
-```
-AUTHNAME (auth-id)
-```
-
-* `_auth-id_`
-+
-is the 32-bit number associated with an authorization ID. See
-<<authorization_ids,Authorization IDs>>.
-
-The AUTHNAME function is similar to the <<user_function,USER Function>>.
-
-[[considerations_for_authname]]
-=== Considerations for AUTHNAME
-
-* This function can be specified only in the top level of a SELECT statement.
-* The value returned is string data type VARCHAR(128) and is in ISO8859-1 encoding.
-
-[[examples_of_authname]]
-=== Examples of AUTHNAME
-
-* This example shows the authorization name associated with the
-authorization ID number, 33333:
-+
-```
->>SELECT AUTHNAME (33333) FROM (values(1)) x(a);
-
-(EXPR)
--------------------------
-DB ROOT
-
---- 1 row(s) selected.
-```
-
-<<<
-[[avg_function]]
-== AVG Function
-
-AVG is an aggregate function that returns the average of a set of
-numbers.
-
-```
-AVG ([ALL | DISTINCT] expression)
-```
-
-* `ALL | DISTINCT`
-+
-specifies whether duplicate values are included in the computation of
-the AVG of the _expression_. The default option is ALL, which causes
-duplicate values to be included. If you specify DISTINCT, duplicate
-values are eliminated before the AVG function is applied.
-
-* `_expression_`
-+
-specifies a numeric or interval value _expression_ that determines the
-values to average. The _expression_ cannot contain an aggregate function
-or a subquery. The DISTINCT clause specifies that the AVG function
-operates on distinct values from the one-column table derived from the
-evaluation of _expression_.
-
-See <<numeric_value_expressions,Numeric Value Expressions>> and
-<<interval_value_expressions,Interval Value Expressions>>.
-
-[[considerations_for_avg]]
-=== Considerations for AVG
-
-[[data-type-of-the-result]]
-==== Data Type of the Result
-
-The data type of the result depends on the data type of the argument. If
-the argument is an exact numeric type, the result is LARGEINT. If the
-argument is an approximate numeric type, the result
-is DOUBLE PRECISION. If the argument is INTERVAL data type, the result
-is INTERVAL with the same precision as the argument.
-
-The scale of the result is the same as the scale of the argument. If the
-argument has no scale, the result is truncated.
-
-
-[[operands-of-the-expression]]
-==== Operands of the Expression
-
-The expression includes columns from the rows of the SELECT result table but
-cannot include an aggregate function. These expressions are valid:
-
-```
-AVG (SALARY)
-AVG (SALARY * 1.1)
-AVG (PARTCOST * QTY_ORDERED)
-```
-
-[[avg_nulls]]
-==== Nulls
-
-All nulls are eliminated before the function is applied to the set of
-values. If the result table is empty, AVG returns NULL.
-
-[[examples_of_avg]]
-==== Examples of AVG
-
-* Return the average value of the SALARY column:
-+
-```
-SELECT AVG (salary) FROM persnl.employee;
-
-(EXPR)
----------------------
-             49441.52
-
---- 1 row(s) selected.
-```
-
-* Return the average value of the set of unique SALARY values:
-+
-```
-SELECT AVG(DISTINCT salary) AS Avg_Distinct_Salary FROM persnl.employee;
-
-AVG_DISTINCT_SALARY
----------------------
-             53609.89
-
---- 1 row(s) selected.
-```
-
-* Return the average salary by department:
-+
-```
-SELECT deptnum, AVG (salary) AS "AVERAGE SALARY"
-FROM persnl.employee
-WHERE deptnum < 3000 GROUP BY deptnum;
-
-Dept/Num "AVERAGE SALARY"
--------- ---------------------
-    1000              52000.17
-    2000              50000.10
-    1500              41250.00
-    2500              37000.00
-
---- 4 row(s) selected.
-```
-
-<<<
-[[bitand_function]]
-== BITAND Function
-
-The BITAND function performs an AND operation on corresponding bits of
-the two operands. If both bits are 1, the result bit is 1. Otherwise the
-result bit is 0.
-
-```
-BITAND (expression, expression)
-```
-
-* `_expression_`
-+
-The result data type is a binary number. Depending on the precision of
-the operands, the data type of the result can either be an INT (32-bit
-integer) or a LARGEINT (64-bit integer).
-+
-If the max precision of either operand is greater than 9, LARGEINT is
-chosen (numbers with precision greater than 9 are represented by
-LARGEINT). Otherwise, INT is chosen.
-+
-If both operands are unsigned, the result is unsigned. Otherwise, the
-result is signed. Both operands are converted to the result data type
-before performing the bit operation.
-
-[[considerations_for_bitand]]
-=== Considerations for BITAND
-
-BITAND can be used anywhere in an SQL query where an expression could be
-used. This includes SELECT lists, WHERE predicates, VALUES clauses, SET
-statement, and so on.
-
-This function returns a numeric data type and can be used in arithmetic
-expressions.
-
-Numeric operands can be positive or negative numbers. All numeric data
-types are allowed with the exceptions listed in the
-<<restrictions_for_bitand,Restrictions for BITAND>> section.
-
-[[restrictions_for_bitand]]
-==== Restrictions for BITAND
-
-The following are BITAND restrictions:
-
-* Must have two operands
-* Operands must be binary or decimal exact numerics
-* Operands must have scale of zero
-* Operands cannot be floating point numbers
-* Operands cannot be an extended precision numeric (the maximum precision of an extended numeric data type is 128)
-
-
-[[examples_of_bitand]]
-=== Examples of BITAND
-
-```
->>select bitand(1,3) from (values(1)) x(a);
-
-(EXPR)
---------------
-             1
-
---- 1 row(s) selected
-
->>select 1 & 3 from (values(1)) x(a);
-
-(EXPR)
---------------
-             1
-
---- 1 row(s) selected
-
->>select bitand(1,3) + 0 from (values(1)) x(a);
-
-(EXPR)
---------------
-             1
-
---- 1 row(s) selected
-```
-
-<<<
-[[case_expression]]
-== CASE (Conditional) Expression
-
-The CASE expression is a conditional expression with two forms: simple
-and searched.
-
-In a simple CASE expression, {project-name} SQL compares a value to a
-sequence of values and sets the CASE expression to the value associated
-with the first match &#8212; if a match exists. If no match exists, {project-name}
-SQL returns the value specified in the ELSE clause (which can be null).
-
-In a searched CASE expression, {project-name} SQL evaluates a sequence of
-conditions and sets the CASE expression to the value associated with the
-first condition that is true &#8212; if a true condition exists. If no true
-condition exists, {project-name} SQL returns the value specified in the ELSE
-clause (which can be null).
-
-*Simple CASE is*:
-
-```
-CASE case-expression
-   WHEN expression-1 THEN {result-expression-1 | NULL}
-   WHEN expression-2 THEN {result-expression-2 | NULL}
-   ...
-   WHEN expression-n THEN {result-expression-n | NULL}
-                      [ELSE {result-expression | NULL}]
-END
-```
-
-*Searched CASE is*:
-
-```
-CASE
-   WHEN condition-1 THEN {result-expression-1 | NULL}
-   WHEN condition-2 THEN {result-expression-2 | NULL}
-   ...
-   WHEN condition-n THEN {result-expression-n | NULL}
-                      [ELSE {result-expression | NULL}]
-END
-```
-
-* `_case-expression_`
-+
-specifies a value expression that is compared to the value expressions
-in each WHEN clause of a simple CASE. The data type of each _expression_
-in the WHEN clause must be comparable to the data type of
-_case-expression_.
-
-* `_expression-1_ &#8230; _expression-n_`
-+
-specifies a value associated with each _result-expression_. If the
-value of an _expression_ in a WHEN clause matches the value of
-_case-expression_, simple CASE returns the associated
-_result-expression_ value. If no match exists, the CASE expression
-returns the value expression specified in the ELSE clause, or NULL if
-the ELSE value is not specified.
-
-* `_result-expression-1_ &#8230; _result-expression-n_`
-+
-specifies the result value expression associated with each _expression_
-in a WHEN clause of a simple CASE, or with each _condition_ in a WHEN
-clause of a searched CASE. All of the _result-expressions_ must have
-comparable data types, and at least one of the
-_result-expressions_ must return non-null.
-
-* `_result-expression_`
-+
-follows the ELSE keyword and specifies the value returned if none of the
-expressions in the WHEN clause of a simple CASE are equal to the case
-expression, or if none of the conditions in the WHEN clause of a
-searched CASE are true. If the ELSE _result-expression_ clause is not
-specified, CASE returns NULL. The data type of _result-expression_ must
-be comparable to the other results.
-
-* `_condition-1_ &#8230; _condition-n_`
-
-specifies conditions to test for in a searched CASE. If a _condition_ is
-true, the CASE expression returns the associated _result-expression_
-value. If no _condition_ is true, the CASE expression returns the value
-expression specified in the ELSE clause, or NULL if the ELSE value is
-not specified.
-
-[[considerations_for_case]]
-=== Considerations for CASE
-
-[[data_type_of_the_case_expression]]
-==== Data Type of the CASE Expression
-
-The data type of the result of the CASE expression depends on the data
-types of the result expressions. If the results all have the same data
-type, the CASE expression adopts that data type. If the results have
-comparable but not identical data types, the CASE expression adopts the
-data type of the union of the result expressions. This result data type
-is determined in these ways.
-
-[[character_data_type]]
-==== Character Data Type
-
-If any data type of the result expressions is variable-length character
-string, the result data type is variable-length character string with
-maximum length equal to the maximum length of the result expressions.
-
-Otherwise, if none of the data types is variable-length character
-string, the result data type is fixed-length character string with length
-equal to the maximum of the lengths of the result expressions.
-
-[[numeric_data_type]]
-==== Numeric Data Type
-
-If all of the data types of the result expressions are exact numeric,
-the result data type is exact numeric with precision and scale equal to
-the maximum of the precisions and scales of the result expressions.
-
-For example, if _result-expression-1_ and _result-expression-2_ have
-data type NUMERIC(5) and _result-expression-3_ has data type
-NUMERIC(8,5), the result data type is NUMERIC(10,5).
-
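-As a minimal sketch of this rule, assume a hypothetical table T with
-columns A of type NUMERIC(5) and B of type NUMERIC(8,5). The result
-column of this query then has the union data type NUMERIC(10,5):
-
-```
--- hypothetical table: A is NUMERIC(5), B is NUMERIC(8,5)
-SELECT CASE WHEN a > 0 THEN a ELSE b END FROM t;
-```
-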
-If any data type of the result expressions is approximate numeric, the
-result data type is approximate numeric with precision equal to the
-maximum of the precisions of the result expressions.
-
-[[datetime_data_type]]
-==== Datetime Data Type
-
-If the data type of the result expressions is datetime, the result data
-type is the same datetime data type.
-
-[[interval_data_type]]
-==== Interval Data Type
-
-If the data type of the result expressions is interval, the result data
-type is the same interval data type (either year-month or day-time) with
-the start field being the most significant of the start fields of the
-result expressions and the end field being the least significant of the
-end fields of the result expressions.
-
-[[examples_of_case]]
-=== Examples of CASE
-
-* Use a simple CASE to decode JOBCODE and return NULL if JOBCODE does
-not match any of the listed values:
-+
-```
-SELECT
-  last_name
-, first_name
-, CASE jobcode
-    WHEN 100 THEN 'MANAGER'
-    WHEN 200 THEN 'PRODUCTION SUPV'
-    WHEN 250 THEN 'ASSEMBLER'
-    WHEN 300 THEN 'SALESREP'
-    WHEN 400 THEN 'SYSTEM ANALYST'
-    WHEN 420 THEN 'ENGINEER'
-    WHEN 450 THEN 'PROGRAMMER'
-    WHEN 500 THEN 'ACCOUNTANT'
-    WHEN 600 THEN 'ADMINISTRATOR ANALYST'
-    WHEN 900 THEN 'SECRETARY'
-    ELSE NULL
-  END
-FROM persnl.employee;
-
-LAST_NAME            FIRST_NAME      (EXPR)
--------------------- --------------- -----------------
-GREEN                ROGER           MANAGER
-HOWARD               JERRY           MANAGER
-RAYMOND              JANE            MANAGER
-...
-CHOU                 JOHN            SECRETARY
-CONRAD               MANFRED         PROGRAMMER
-HERMAN               JIM             SALESREP
-CLARK                LARRY           ACCOUNTANT
-HALL                 KATHRYN         SYSTEM ANALYST
-...
-
---- 62 row(s) selected.
-```
-
-* Use a searched CASE to return LAST_NAME, FIRST_NAME and a value based
-on SALARY that depends on the value of DEPTNUM:
-+
-```
-SELECT
-  last_name
-, first_name
-, deptnum
-, CASE
-    WHEN deptnum = 9000 THEN salary * 1.10
-    WHEN deptnum = 1000 THEN salary * 1.12 ELSE salary
-  END
-FROM persnl.employee;
-
-LAST_NAME        FIRST_NAME   DEPTNUM (EXPR)
----------------- ------------ ------- -------------------
-GREEN            ROGER           9000         193050.0000
-HOWARD           JERRY           1000         153440.1120
-RAYMOND          JANE            3000         136000.0000
-...
-
---- 62 row(s) selected.
-```
-
-<<<
-[[cast_expression]]
-== CAST Expression
-
-The CAST expression converts data to the data type you specify.
-
-```
-CAST ({expression | NULL} AS data-type) 
-```
-
-* `_expression_ | NULL`
-+
-specifies the operand to convert to the data type _data-type_.
-+
-If the operand is an _expression_, then _data-type_ depends on the
-data type of _expression_ and follows the rules outlined in
-<<valid_conversions_for_cast,Valid Conversions for CAST >>.
-+
-If the operand is NULL, or if the value of the _expression_ is null, the
-result of CAST is NULL, regardless of the data type you specify.
-
-* `_data-type_`
-+
-specifies a data type to associate with the operand of CAST. See
-<<data_types,Data Types>>.
-+
-When casting data to a CHAR or VARCHAR data type, the resulting data
-value is left justified. Otherwise, the resulting data value is right
-justified. Further, when you are casting to a CHAR or VARCHAR data type,
-you must specify the length of the target value.
-
-[[considerations_for_cast]]
-=== Considerations for CAST
-
-* Fractional portions are discarded when you use CAST of a numeric value to an INTERVAL type.
-* Depending on how your file is set up, using CAST might cause poor
-query performance by preventing the optimizer from choosing the most
-efficient plan and requiring the executor to perform a complete table or
-index scan.
-
-[[valid_conversions_for_cast]]
-==== Valid Conversions for CAST
-
-* An exact or approximate numeric value to any other numeric data type.
-* An exact or approximate numeric value to any character string data type.
-* An exact numeric value to either a single-field year-month or day-time interval such as INTERVAL DAY(2).
-* A character string to any other data type, with one restriction:
-
-The contents of the character string to be converted must be consistent
-in meaning with the data type of the result. For example, if you are
-converting to DATE, the contents of the character string must be 10
-characters consisting of the year, a hyphen, the month, another hyphen,
-and the day.
-
-* A date value to a character string or to a TIMESTAMP ({project-name} SQL fills in the time part with 00:00:00.00).
-* A time value to a character string or to a TIMESTAMP ({project-name} SQL fills in the date part with the current date).
-* A timestamp value to a character string, a DATE, a TIME, or another TIMESTAMP with different fractional seconds precision.
-* A year-month interval value to a character string, an exact numeric,
-or to another year-month INTERVAL with a different start field precision.
-* A day-time interval value to a character string, an exact numeric, or
-to another day-time INTERVAL with a different start field precision.
-
-[[examples_of_cast]]
-=== Examples of CAST
-
-* In this example, the fractional portion is discarded:
-+
-```
-CAST (123.956 as INTERVAL DAY(18))
-```
-
-* This example returns the difference of two timestamps in minutes:
-+
-```
-CAST((d.step_end - d.step_start) AS INTERVAL MINUTE)
-```
-
-* Suppose that your database includes a log file of user information.
-This example converts the current timestamp to a character string and
-concatenates the result to a character literal. Note the length must be
-specified.
-+
-```
-INSERT INTO stats.logfile (user_key, user_info)
-VALUES (001, 'User JBrook, executed at ' || CAST (CURRENT_TIMESTAMP AS CHAR(26)));
-```
-
-<<<
-[[ceiling_function]]
-== CEILING Function
-
-The CEILING function returns the smallest integer, represented as a
-FLOAT data type, greater than or equal to a numeric value expression.
-
-CEILING is a {project-name} SQL extension.
-
-```
-CEILING (numeric-expression)
-```
-
-* `_numeric-expression_`
-+
-is an SQL numeric value expression that specifies the value for the
-argument of the CEILING function.
-See <<numeric_value_expressions,Numeric Value Expressions>>.
-
-[[examples_of_ceiling]]
-=== Examples of CEILING
-
-* This function returns the integer value 3.00000000000000000E+000,
-represented as a FLOAT data type:
-+
-```
-CEILING (2.25)
-```
-
-<<<
-[[char_function]]
-== CHAR Function
-
-The CHAR function returns the character that has the specified code
-value, which must be of exact numeric with scale 0.
-
-CHAR is a {project-name} SQL extension.
-
-```
-CHAR(code-value [, char-set-name])
-```
-
-* `_code-value_`
-+
-is a valid code value in the character set in use.
-
-* `_char-set-name_`
-+
-can be ISO88591 or UTF8. The returned character will be associated with
-the character set specified by _char-set-name_.
-+
-The default for _char-set-name_ is ISO88591.
-
-[[considerations_for_char]]
-=== Considerations for CHAR
-
-* For the ISO88591 character set, the return type is VARCHAR(1).
-* For the UTF8 character set, the return type is VARCHAR(1).
-
-[[examples_of_char]]
-=== Examples of CHAR
-
-* Select the column CUSTNAME and return the ASCII code of the first
-character of the customer name and its CHAR value:
-+
-```
-SELECT custname, ASCII (custname), CHAR (ASCII (custname))
-FROM sales.customer;
-
-CUSTNAME           (EXPR)  (EXPR)
------------------- ------- -------
-CENTRAL UNIVERSITY      67 C
-BROWN MEDICAL CO        66 B
-STEVENS SUPPLY          83 S
-PREMIER INSURANCE       80 P
-...                    ... ...
-
---- 15 row(s) selected.
-```
-
-<<<
-[[char_length_function]]
-== CHAR_LENGTH Function
-
-The CHAR_LENGTH function returns the number of characters in a string.
-You can also use CHARACTER_LENGTH. Every character, including multi-byte
-characters, counts as one character.
-
-```
-CHAR[ACTER]_LENGTH (string-value-expression)
-```
-
-* `_string-value-expression_`
-+
-specifies the string value expression for which to return the length in
-characters. {project-name} SQL returns the result as a two-byte signed
-integer with a scale of zero. If _string-value-expression_ is null,
-{project-name} SQL returns null.
-See <<character_value_expressions,Character Value Expressions>>.
-
-[[considerations_for_char_length]]
-=== Considerations for CHAR_LENGTH
-
-[[char_and_varchar_operands]]
-==== CHAR and VARCHAR Operands
-
-For a column declared as fixed CHAR, {project-name} SQL returns the maximum
-length of that column. For a VARCHAR column, {project-name} SQL returns the
-actual length of the string stored in that column.
-
-[[examples_of_char_length]]
-=== Examples of CHAR_LENGTH
-
-
-* This function returns 12 as the result. The concatenation operator is
-denoted by two vertical bars (\|\|).
-+
-```
-CHAR_LENGTH ('ROBERT' || ' ' || 'SMITH')
-```
-
-* The string '' is the null (or empty) string. This function returns 0
-(zero):
-+
-```
-CHAR_LENGTH ('')
-```
-
-* The DEPTNAME column has data type CHAR(12). Therefore, this function
-always returns 12:
-+
-```
-CHAR_LENGTH (deptname)
-```
-
-* The PROJDESC column in the PROJECT table has data type VARCHAR(18).
-This function returns the actual length of the column value &#8212; not 18 for
-shorter strings &#8212; because it is a VARCHAR value:
-+
-```
-SELECT CHAR_LENGTH (projdesc) FROM persnl.project;
-
-(EXPR)
-----------
-        14
-        13
-        13
-        17
-         9
-         9
-
---- 6 row(s) selected.
-```
-
-<<<
-[[coalesce_function]]
-== COALESCE Function
-
-The COALESCE function returns the value of the first expression in the
-list that is not NULL. If all the expressions are NULL, the function
-returns NULL.
-
-```
-COALESCE (expr1, expr2, ...)
-```
-
-* `_expr1_`
-+
-an expression to be compared.
-
-* `_expr2_`
-+
-an expression to be compared.
-
-[[examples_of_coalesce]]
-=== Examples of COALESCE
-
-* COALESCE returns the value of the first operand that is not NULL:
-+
-```
-SELECT COALESCE (office_phone, cell_phone, home_phone, pager, fax_num, '411')
-from emptbl;
-```
-
-<<<
-[[code_value_function]]
-== CODE_VALUE Function
-
-The CODE_VALUE function returns an unsigned integer (INTEGER UNSIGNED)
-that is the code point of the first character in a character value
-expression that can be associated with one of the supported character
-sets.
-
-CODE_VALUE is a {project-name} SQL extension.
-
-```
-CODE_VALUE(character-value-expression)
-```
-
-* `_character-value-expression_`
-+
-is a character string.
-
-
-[[examples_of_code_value_function]]
-=== Examples of CODE_VALUE Function
-
-* This function returns 97 as the result:
-+
-```
->>select code_value('abc') from (values(1))x;
-
-(EXPR)
-----------
-        97
-```
-
-<<<
-[[concat_function]]
-== CONCAT Function
-
-The CONCAT function returns the concatenation of two character value
-expressions as a character string value. You can also use the
-concatenation operator (\|\|).
-
-CONCAT is a {project-name} SQL extension.
-
-```
-CONCAT (character-expr-1, character-expr-2)
-```
-
-* `_character-expr-1_, _character-expr-2_`
-+
-are SQL character value expressions (of data type CHAR or VARCHAR) that
-specify two strings of characters. Both character value expressions must
-be either ISO8859-1 character expressions or UTF8 character expressions.
-The result of the CONCAT function is the concatenation of
-_character-expr-1_ with _character-expr-2_. The result type is CHAR if
-both expressions are of type CHAR and it is VARCHAR if either of the
-expressions is of type VARCHAR.
-See <<character_value_expressions,Character Value Expressions>>.
-
-
-[[concatenation_operator]]
-=== Concatenation Operator (||)
-
-The concatenation operator, denoted by two vertical bars (||),
-concatenates two string values to form a new string value. To indicate
-that two strings are concatenated, connect the strings with two vertical
-bars (\|\|):
-
-```
-character-expr-1 || character-expr-2
-```
-
-An operand can be any SQL value expression of data type CHAR or VARCHAR.
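-
-For example, this expression evaluates to the string value 'ROBERT SMITH':
-
-```
-'ROBERT' || ' ' || 'SMITH'
-```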
-
-[[considerations_for_concat]]
-=== Considerations for CONCAT
-
-[[operands]]
-==== Operands
-
-
-A string value can be specified by any character value expression, such
-as a character string literal, character string function, column
-reference, aggregate function, scalar subquery, CASE expression, or CAST
-expression. The value of the operand must be of type CHAR or VARCHAR.
-
-If you use the CAST expression, you must specify the length of CHAR or
-VARCHAR.
-
-
-[[sql-parameters]]
-==== SQL Parameters
-
-You can concatenate an SQL parameter and a character value expression.
-The concatenated parameter takes on the data type attributes of the
-character value expression. Consider this example, where ?p is assigned
-a string value of '5 March':
-
-?p || ' 2002'
-
-The type assignment of the parameter ?p becomes CHAR(5), the same data
-type as the character literal ' 2002'. Because you assigned a string
-value of more than five characters to ?p, {project-name} SQL returns a
-truncation warning, and the result of the concatenation is 5 Mar 2002.
-
-To specify the type assignment of the parameter, use the CAST expression
-on the parameter as:
-
-CAST(?p AS CHAR(7)) || ' 2002'
-
-In this example, the parameter is not truncated, and the result of the
-concatenation is 5 March 2002.
-
-[[examples_of_concat]]
-=== Examples of CONCAT
-
-* Insert information consisting of a single character string. Use the
-CONCAT function to construct and insert the value:
-+
-```
-INSERT INTO stats.logfile (user_key, user_info)
-VALUES (001, CONCAT ('Executed at ', CAST (CURRENT_TIMESTAMP AS CHAR(26))));
-```
-
-* Use the concatenation operator || to construct and insert the value:
-+
-```
-INSERT INTO stats.logfile (user_key, user_info)
-VALUES (002, 'Executed at ' || CAST (CURRENT_TIMESTAMP AS CHAR(26)));
-```
-
-<<<
-[[converttohex_function]]
-== CONVERTTOHEX Function
-
-The CONVERTTOHEX function converts the specified value expression to
-hexadecimal for display purposes.
-
-CONVERTTOHEX is a {project-name} SQL extension.
-
-```
-CONVERTTOHEX (expression)
-```
-
-_expression_
-
-is any numeric, character, datetime, or interval expression.
-
-The primary purpose of the CONVERTTOHEX function is to eliminate any
-doubt as to the exact value in a column. It is particularly useful for
-character expressions where some characters may be from character sets
-that are not supported by the client terminal's locale or may be control
-codes or other non-displayable characters.
-
-[[considerations_for_converttohex]]
-=== Considerations for CONVERTTOHEX
-
-Although CONVERTTOHEX is usable on datetime and interval expressions,
-the displayed output shows the internal value and is, consequently, not
-particularly meaningful to general users and is subject to change in
-future releases.
-
-CONVERTTOHEX returns ASCII characters in ISO8859-1 encoding.
-
-<<<
-[[examples_of_converttohex]]
-=== Examples of CONVERTTOHEX
-
-* Display the contents of a smallint, integer, and largeint in
-hexadecimal:
-+
-```
-CREATE TABLE EG (S1 smallint, I1 int, L1 largeint);
-
-INSERT INTO EG VALUES( 37, 2147483647, 2305843009213693951);
-
-SELECT CONVERTTOHEX(S1), CONVERTTOHEX(I1), CONVERTTOHEX(L1) from EG;
-
-(EXPR) (EXPR)   (EXPR)
------- -------- ----------------
-0025   7FFFFFFF 1FFFFFFFFFFFFFFF
-```
-
-* Display the contents of a CHAR(4) column, a VARCHAR(4) column, and a
-CHAR(4) column that uses the UTF8 character set. The VARCHAR column does
-not have a trailing space character, unlike the fixed-length columns:
-+
-```
-CREATE TABLE EG_CH (FC4 CHAR(4), VC4 VARCHAR(4), FC4U CHAR(4) CHARACTER SET UTF8);
-
-INSERT INTO EG_CH values('ABC', 'abc', _UTF8'abc');
-
-SELECT CONVERTTOHEX(FC4), CONVERTTOHEX(VC4), CONVERTTOHEX(FC4U) from EG_CH;
-
-(EXPR)   (EXPR)   (EXPR)
--------- -------- ----------------
-41424320   616263 0061006200630020
-```
-
-* Display the internal values for a DATE column, a TIME column, a
-TIMESTAMP(2) column, and a TIMESTAMP(6) column:
-+
-```
-CREATE TABLE DT (D1 date, T1 time, TS1 timestamp(2), TS2 timestamp(6) );
-INSERT INTO DT values(current_date, current_time, current_timestamp, current_timestamp);
-
-SELECT CONVERTTOHEX(D1), CONVERTTOHEX(T1), CONVERTTOHEX(TS1), CONVERTTOHEX(TS2) from DT;
-
-(EXPR)      (EXPR)    (EXPR)                    (EXPR)
------------ --------- ------------------------- -------------------------
-   07D8040F    0E201E    07D8040F0E201E00000035    07D8040F0E201E00081ABB
-```
-
-<<<
-* Display the internal values for an INTERVAL YEAR column, an INTERVAL
-YEAR(2) TO MONTH column, and an INTERVAL DAY TO SECOND column:
-+
-```
-CREATE TABLE IVT ( IV1 interval year, IV2 interval year(2) to month, IV3 interval day to second);
-
-INSERT INTO IVT values( interval '1' year, interval '3-2' year(2) to month,
-interval '31:14:59:58' day to second);
-
-SELECT CONVERTTOHEX(IV1), CONVERTTOHEX(IV2), CONVERTTOHEX(IV3) from IVT;
-
-(EXPR) (EXPR)   (EXPR)
------- -------- -----------------------
-  0001     0026        0000027C2F9CB780
-```
-
-<<<
-[[converttimestamp_function]]
-== CONVERTTIMESTAMP Function
-
-The CONVERTTIMESTAMP function converts a Julian timestamp to a value
-with data type TIMESTAMP.
-
-CONVERTTIMESTAMP is a {project-name} SQL extension.
-
-```
-CONVERTTIMESTAMP (julian-timestamp)
-```
-
-* `_julian-timestamp_`
-+
-is an expression that evaluates to a Julian timestamp, which is a
-LARGEINT value.
-
-[[considerations_for_converttimestamp]]
-=== Considerations for CONVERTTIMESTAMP
-
-The _julian-timestamp_ value must be in the range from
-148731163200000000 to 274927348799999999.
-
-
-[[relationship_to_the_juliantimestamp_function]]
-==== Relationship to the JULIANTIMESTAMP Function
-
-The operand of CONVERTTIMESTAMP is a Julian timestamp, and the function
-result is a value of data type TIMESTAMP. The operand of the
-JULIANTIMESTAMP function is a value of data type TIMESTAMP, and the
-function result is a Julian timestamp. That is, the two functions have
-an inverse relationship to one another.
-
-[[use_of_converttimestamp]]
-==== Use of CONVERTTIMESTAMP
-
-You can use the inverse relationship between the JULIANTIMESTAMP and
-CONVERTTIMESTAMP functions to insert Julian timestamp columns into your
-database and display these column values in a TIMESTAMP format.
-
-<<<
-[[examples_of_converttimestamp]]
-=== Examples of CONVERTTIMESTAMP
-
-* Suppose that the EMPLOYEE table includes a column, named HIRE_DATE,
-which contains the hire date of each employee as a Julian timestamp.
-Convert the Julian timestamp into a TIMESTAMP value:
-+
-```
-SELECT CONVERTTIMESTAMP (hire_date) FROM persnl.employee;
-```
-
-* This example illustrates the inverse relationship between
-JULIANTIMESTAMP and CONVERTTIMESTAMP.
-+
-```
-SELECT CONVERTTIMESTAMP (JULIANTIMESTAMP (ship_timestamp)) FROM persnl.project;
-```
-+
-If, for example, the value of SHIP_TIMESTAMP is 2008-04-03
-21:05:36.143000, the result of CONVERTTIMESTAMP(JULIANTIMESTAMP(ship_timestamp))
-is the same value, 2008-04-03 21:05:36.143000.
-
-<<<
-[[cos_function]]
-== COS Function
-
-The COS function returns the cosine of a numeric value expression, where
-the expression is an angle expressed in radians.
-
-COS is a {project-name} SQL extension.
-
-```
-COS (numeric-expression)
-```
-
-* `_numeric-expression_`
-+
-is an SQL numeric value expression that specifies the value for the
-argument of the COS function.
-
-See <<numeric_value_expressions,Numeric Value Expressions>>.
-
-[[examples_of_cos]]
-=== Examples of COS
-
-* This function returns the value 9.39680940386503680E-001, or
-approximately 0.9397, the cosine of 0.3491 (which is 20 degrees):
-+
-```
-COS (0.3491)
-```
-
-<<<
-[[cosh_function]]
-== COSH Function
-
-The COSH function returns the hyperbolic cosine of a numeric value
-expression, where the expression is an angle expressed in radians.
-
-COSH is a {project-name} SQL extension.
-
-```
-COSH (numeric-expression)
-```
-
-* `_numeric-expression_`
-+
-is an SQL numeric value expression that specifies the value for the
-argument of the COSH function.
-See <<numeric_value_expressions,Numeric Value Expressions>>.
-
-[[examples_of_cosh]]
-=== Examples of COSH
-
-* This function returns the value 1.88842387716101568E+000, or
-approximately 1.8884, the hyperbolic cosine of 1.25 in radians:
-+
-```
-COSH (1.25)
-```
-
-<<<
-[[count_function]]
-== COUNT Function
-
-The COUNT function counts the number of rows that result from a query or
-the number of rows that contain a distinct value in a specific column.
-The result of COUNT is data type LARGEINT. The result can never be NULL.
-
-```
-COUNT {(*) | ([ALL | DISTINCT] expression)}
-```
-
-* `COUNT (*)`
-+
-returns the number of rows in the table specified in the FROM clause of
-the SELECT statement that contains COUNT (\*). If the result table is
-empty (that is, no rows are returned by the query), COUNT (\*) returns
-zero.
-
-* `ALL | DISTINCT`
-+
-returns the number of all rows or the number of distinct rows in the
-one-column table derived from the evaluation of _expression_. The
-default option is ALL, which causes duplicate values to be included. If
-you specify DISTINCT, duplicate values are eliminated before the COUNT
-function is applied.
-
-* `_expression_`
-+
-specifies a value expression that determines the values to count. The
-_expression_ cannot contain an aggregate function or a subquery. The
-DISTINCT clause specifies that the COUNT function operates on distinct
-values from the one-column table derived from the evaluation of
-_expression_. See <<expressions,Expressions>>.
-
-[[considerations_for_count]]
-=== Considerations for COUNT
-
-[[operands-of-the-expression-1]]
-==== Operands of the Expression
-
-The operand of COUNT is either * or an expression that includes columns
-from the result table specified by the SELECT statement that contains
-COUNT. However, the expression cannot include an aggregate function or a
-subquery. These expressions are valid:
-
-```
-COUNT (*)
-COUNT (DISTINCT JOBCODE)
-COUNT (UNIT_PRICE * QTY_ORDERED)
-```
-
-<<<
-[[count_nulls]]
-==== Nulls
-
-COUNT is evaluated after eliminating all nulls from the one-column table
-specified by the operand. If the table has no rows, COUNT returns zero.
-
-COUNT(\*) does not eliminate null rows from the table specified in the
-FROM clause of the SELECT statement. If all rows in a table are null,
-COUNT(\*) returns the number of rows in the table.
-
-[[examples_of_count]]
-=== Examples of COUNT
-
-* Count the number of rows in the EMPLOYEE table:
-+
-```
-SELECT COUNT (*) FROM persnl.employee;
-
-(EXPR)
------------
-         62
-
---- 1 row(s) selected.
-```
-
-* Count the number of employees who have a job code in the EMPLOYEE
-table:
-+
-```
-SELECT COUNT (jobcode) FROM persnl.employee;
-
-(EXPR)
------------
-         56
-
---- 1 row(s) selected.
-
-SELECT COUNT(*)
-FROM persnl.employee
-WHERE jobcode IS NOT NULL;
-
-(EXPR)
------------
-         56
-
---- 1 row(s) selected.
-```
-
-<<<
-* Count the number of distinct departments in the EMPLOYEE table:
-+
-```
-SELECT COUNT (DISTINCT deptnum) FROM persnl.employee;
-
-(EXPR)
------------
-         11
-
---- 1 row(s) selected.
-```
-
-<<<
-[[current_function]]
-== CURRENT Function
-
-The CURRENT function returns a value of type TIMESTAMP based on the
-current local date and time.
-
-The function is evaluated once when the query starts execution and is
-not reevaluated (even if it is a long running query).
-
-You can also use <<current_timestamp_function,CURRENT_TIMESTAMP Function>>.
-
-```
-CURRENT [(precision)]
-```
-
-* `_precision_`
-+
-is an integer value in the range 0 to 6 that specifies the precision of
-(the number of decimal places in) the fractional seconds in the returned
-value. The default is 6.
-+
-For example, the function CURRENT (2) returns the current date and time
-as a value of data type TIMESTAMP, where the precision of the fractional
-seconds is 2, for example, 2008-06-26 09:01:20.89. The value returned is
-not a string value.
-
-[[examples_of_current]]
-=== Examples of CURRENT
-
-* The PROJECT table contains a column SHIP_TIMESTAMP of data type
-TIMESTAMP. Update a row by using the CURRENT value:
-+
-```
-UPDATE persnl.project
-SET ship_timestamp = CURRENT WHERE projcode = 1000;
-```
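-
-* This hypothetical query returns the current date and time with
-two-digit fractional seconds; the output shown is illustrative, using
-the value from the description above:
-+
-```
-SELECT CURRENT(2) FROM (values(1)) x(a);
-
-(EXPR)
-----------------------
-2008-06-26 09:01:20.89
-
---- 1 row(s) selected.
-```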
-
-<<<
-[[current_date_function]]
-== CURRENT_DATE Function
-
-The CURRENT_DATE function returns the local current date as a value of
-type DATE.
-
-The function is evaluated once when the query starts execution and is
-not reevaluated (even if it is a long running query).
-
-```
-CURRENT_DATE
-```
-
-The CURRENT_DATE function returns the current date, such as 2008-09-28.
-The value returned is a value of type DATE, not a string value.
-
-[[examples_of_current_date]]
-=== Examples of CURRENT_DATE
-
-* Select rows from the ORDERS table based on the current date:
-+
-```
-SELECT * FROM sales.orders
-WHERE deliv_date >= CURRENT_DATE;
-```
-
-* The PROJECT table has a column EST_COMPLETE of type INTERVAL DAY. If
-the current date is the start date of your project, determine the
-estimated date of completion:
-+
-```
-SELECT projdesc, CURRENT_DATE + est_complete FROM persnl.project;
-
-Project/Description (EXPR)
-------------------- ----------
-SALT LAKE CITY      2008-01-18
-ROSS PRODUCTS       2008-02-02
-MONTANA TOOLS       2008-03-03
-AHAUS TOOL/SUPPLY   2008-03-03
-THE WORKS           2008-02-02
-THE WORKS           2008-02-02
-
---- 6 row(s) selected.
-```
-
-<<<
-[[current_time_function]]
-== CURRENT_TIME Function
-
-The CURRENT_TIME function returns the current local time as a value of
-type TIME.
-
-The function is evaluated once when the query starts execution and is
-not reevaluated (even if it is a long running query).
-
-```
-CURRENT_TIME [(precision)]
-```
-
-* `_precision_`
-+
-is an integer value in the range 0 to 6 that specifies the precision of
-(the number of decimal places in) the fractional seconds in the returned
-value. The default is 0.
-+
-For example, the function CURRENT_TIME (2) returns the current time as a
-value of data type TIME, where the precision of the fractional seconds
-is 2, for example, 14:01:59.30. The value returned is not a string
-value.
-
-[[examples_of_current_time]]
-=== Examples of CURRENT_TIME
-
-* Use CURRENT_DATE and CURRENT_TIME as a value in an inserted row:
-+
-```
-INSERT INTO stats.logfile (user_key, run_date, run_time, user_name)
-VALUES (001, CURRENT_DATE, CURRENT_TIME, 'JuBrock');
-```
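-
-* This hypothetical query returns the current time with two-digit
-fractional seconds; the output shown is illustrative, using the value
-from the description above:
-+
-```
-SELECT CURRENT_TIME(2) FROM (values(1)) x(a);
-
-(EXPR)
------------
-14:01:59.30
-
---- 1 row(s) selected.
-```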
-
-<<<
-[[current_timestamp_function]]
-== CURRENT_TIMESTAMP Function
-
-The CURRENT_TIMESTAMP function returns a value of type TIMESTAMP based
-on the current local date and time.
-
-The function is evaluated once when the query starts execution and is
-not reevaluated (even if it is a long running query).
-
-You can also use the <<current_function,CURRENT Function>>.
-
-```
-CURRENT_TIMESTAMP [(precision)]
-```
-
-* `_precision_`
-+
-is an integer value in the range 0 to 6 that specifies the precision of
-(the number of decimal places in) the fractional seconds in the returned
-value. The default is 6.
-+
-For example, the function CURRENT_TIMESTAMP (2) returns the current date
-and time as a value of data type TIMESTAMP, where the precision of the
-fractional seconds is 2; for example, 2008-06-26 09:01:20.89. The value
-returned is not a string value.
-
-
-[[examples_of_current_timestamp]]
-=== Examples of CURRENT_TIMESTAMP
-
-* The PROJECT table contains a column SHIP_TIMESTAMP of data type
-TIMESTAMP. Update a row by using the CURRENT_TIMESTAMP value:
-+
-```
-UPDATE persnl.project
-SET ship_timestamp = CURRENT_TIMESTAMP WHERE projcode = 1000;
-```
-
-<<<
-[[current_user_function]]
-== CURRENT_USER Function
-
-The CURRENT_USER function returns the database user name of the current
-user who invoked the function. The current user is the authenticated
-user who started the session. That database user name is used for
-authorization of SQL statements in the current session.
-
-```
-CURRENT_USER
-```
-
-The CURRENT_USER function is similar to the <<user_function,USER Function>>.
-
-[[considerations_for_current_user]]
-=== Considerations for CURRENT_USER
-
-* This function can be specified only in the top level of a SELECT statement.
-* The value returned is string data type VARCHAR(128) and is in ISO8859-1 encoding.
-
-
-[[examples_of_current_user]]
-=== Examples of CURRENT_USER
-
-* This example retrieves the database user name for the current user:
-+
-```
-SELECT CURRENT_USER FROM (values(1)) x(a);
-
-(EXPR)
------------------------
-TSHAW
-
---- 1 row(s) selected.
-```
-
-<<<
-[[date_add_function]]
-== DATE_ADD Function
-
-The DATE_ADD function adds the interval specified by
-_interval-expression_ to _datetime-expr_. If the specified interval is
-in years or months, DATE_ADD normalizes the result. See
-<<standard_normalization,Standard Normalization>>. The result has the
-type of _datetime-expr_, unless _interval-expression_ contains
-any time components, in which case a TIMESTAMP is returned.
-
-DATE_ADD is a {project-name} SQL extension.
-
-```
-DATE_ADD (datetime-expr, interval-expression)
-```
-
-* `_datetime-expr_`
-+
-is an expression that evaluates to a datetime value of type DATE or
-TIMESTAMP. See <<datetime_value_expressions,Datetime Value Expressions>>.
-
-* `_interval-expression_`
-+
-is an expression that can be combined in specific ways with addition
-operators. The _interval-expression_ accepts all interval expression
-types that the {project-name} database software considers as valid interval
-expressions. See <<interval_value_expressions,Interval Value Expressions>>.
-
-<<<
-[[examples_of_date_add]]
-=== Examples of DATE_ADD
-
-* This function returns the value DATE '2007-03-07'
-+
-```
-DATE_ADD(DATE '2007-02-28', INTERVAL '7' DAY)
-```
-
-* This function returns the value DATE '2008-03-06'
-+
-```
-DATE_ADD(DATE '2008-02-28', INTERVAL '7' DAY)
-```
-
-* This function returns the timestamp '2008-03-07 00:00:00'
-+
-```
-DATE_ADD(timestamp '2008-02-29 00:00:00', INTERVAL '7' DAY)
-```
-
-* This function returns the timestamp '2008-02-28 23:59:59'
-+
-```
-DATE_ADD(timestamp '2007-02-28 23:59:59', INTERVAL '12' MONTH)
-```
-+
-NOTE: Compare this example with the last example under DATE_SUB.
-
-<<<
-[[date_sub_function]]
-== DATE_SUB Function
-
-The DATE_SUB function subtracts the specified _interval-expression_ from
-_datetime-expr_. If the specified interval is in years or months,
-DATE_SUB normalizes the result. See <<standard_normalization,Standard Normalization>>.
-
-The result has the type of _datetime-expr_, unless _interval-expression_ contains
-any time components, in which case a TIMESTAMP is returned.
-
-DATE_SUB is a {project-name} SQL extension.
-
-```
-DATE_SUB (datetime-expr, interval-expression)
-```
-
-* `_datetime-expr_`
-+
-is an expression that evaluates to a datetime value of type DATE or
-TIMESTAMP. See <<datetime_value_expressions,Datetime Value Expressions>>.
-
-* `_interval-expression_`
-+
-is an expression that can be combined in specific ways with subtraction
-operators. The _interval-expression_ accepts all interval expression
-types that the {project-name} database software considers as valid interval
-expressions. See <<interval_value_expressions,Interval Value Expressions>>.
-
-<<<
-[[examples_of_date_sub]]
-=== Examples of DATE_SUB
-
-* This function returns the value DATE '2009-02-28'
-+
-```
-DATE_SUB(DATE '2009-03-07', INTERVAL '7' DAY)
-```
-
-* This function returns the value DATE '2008-02-29'
-+
-```
-DATE_SUB(DATE '2008-03-07', INTERVAL '7' DAY)
-```
-
-* This function returns the timestamp '2008-02-29 00:00:00'
-+
-```
-DATE_SUB(timestamp '2008-03-31 00:00:00', INTERVAL '31' DAY)
-```
-
-* This function returns the timestamp '2007-02-28 23:59:59'
-+
-```
-DATE_SUB(timestamp '2008-02-29 23:59:59', INTERVAL '12' MONTH)
-```
-
-
-<<<
-[[dateadd_function]]
-== DATEADD Function
-
-The DATEADD function adds the interval of time specified by _datepart_
-and _num-expr_ to _datetime-expr_. If the specified interval is in
-years or months, DATEADD normalizes the result. See
-<<standard_normalization,Standard Normalization>>. The result has the
-type of _datetime-expr_, unless the interval expression contains any
-time components, in which case a TIMESTAMP is returned.
-
-DATEADD is a {project-name} SQL extension.
-
-```
-DATEADD(datepart, num-expr, datetime-expr)
-```
-
-* `_datepart_`
-+
-is YEAR, MONTH, DAY, HOUR, MINUTE, SECOND, QUARTER, WEEK, or one of the
-following abbreviations:
-+
-[cols="15%,85%"]
-|===
-| YEAR    | _YY_ and _YYYY_
-| MONTH   | _M_ and _MM_
-| DAY     | _D_ and _DD_
-| HOUR    | _HH_
-| MINUTE  | _MI_ and _M_
-| SECOND  | _SS_ and _S_
-| QUARTER | _Q_ and _QQ_
-| WEEK    | _WW_ and _WK_
-|===
-
-
-* `_num-expr_`
-+
-is an SQL exact numeric value expression that specifies how many
-_datepart_ units of time are to be added to _datetime-expr_. If
-_num-expr_ has a fractional portion, it is ignored. If _num-expr_ is
-negative, the return value precedes _datetime-expr_ by the specified
-amount of time. See <<numeric_value_expressions,Numeric Value Expressions>>.
-
-* `_datetime-expr_`
-+
-is an expression that evaluates to a datetime value of type DATE or
-TIMESTAMP. The result has the type of _datetime-expr_, unless the
-interval expression contains any time components, in which case a
-TIMESTAMP is returned. See <<datetime_value_expressions,Datetime Value Expressions>>.
-
-<<<
-[[examples_of_dateadd]]
-=== Examples of DATEADD
-
-* This function adds seven days to the date specified in _start_date_
-+
-```
-DATEADD(DAY, 7, start_date)
-```
-
-* This function returns the value DATE '2009-03-07'
-+
-```
-DATEADD(DAY, 7, DATE '2009-02-28')
-```
-
-* This function returns the value DATE '2008-03-06'
-+
-```
-DATEADD(DAY, 7, DATE '2008-02-28')
-```
-
-* This function returns the timestamp '2008-03-07 00:00:00'
-+
-```
-DATEADD(DAY, 7, timestamp '2008-02-29 00:00:00')
-```
-
-<<<
-[[datediff_function]]
-== DATEDIFF Function
-
-The DATEDIFF function returns the integer value for the number of
-_datepart_ units of time between _startdate_ and _enddate_. If
-_enddate_ precedes _startdate_, the return value is negative or zero.
-
-DATEDIFF is a {project-name} SQL extension.
-
-```
-DATEDIFF (datepart, startdate, enddate)
-```
-
-* `_datepart_`
-+
-is YEAR, MONTH, DAY, HOUR, MINUTE, SECOND, QUARTER, WEEK, or one of the
-following abbreviations:
-+
-[cols="15%,85%"]
-|===
-| YEAR    | _YY_ and _YYYY_
-| MONTH   | _M_ and _MM_
-| DAY     | _D_ and _DD_
-| HOUR    | _HH_
-| MINUTE  | _MI_ and _M_
-| SECOND  | _SS_ and _S_
-| QUARTER | _Q_ and _QQ_
-| WEEK    | _WW_ and _WK_
-|===
-
-* `_startdate_`
-+
-may be of type DATE or TIMESTAMP.
-See <<datetime_value_expressions,Datetime Value Expressions>>.
-
-* `_enddate_`
-+
-may be of type DATE or TIMESTAMP.
-See <<datetime_value_expressions,Datetime Value Expressions>>.
-
-The method of counting crossed boundaries such as days, minutes, and
-seconds makes the result given by DATEDIFF consistent across all data
-types. The result is a signed integer value equal to the number of
-datepart boundaries crossed between the first and second date.
-
-For example, the number of weeks between Sunday, January 4, and Sunday,
-January 11, is 1. The number of months between March 31 and April 1
-would be 1 because the month boundary is crossed from March to April.
-The DATEDIFF function generates an error if the result is out of range
-for integer values. For seconds, the maximum number is equivalent to
-approximately 68 years. The DATEDIFF function generates an error if a
-difference in weeks is requested and one of the two dates precedes
-January 7 of the year 0001.
-
-<<<
-[[examples_of_datediff]]
-=== Examples of DATEDIFF
-
-* This function returns the value of 0 because no one-second boundaries
-are crossed.
-+
-```
-DATEDIFF( SECOND
-        , TIMESTAMP '2006-09-12 11:59:58.999998'
-        , TIMESTAMP '2006-09-12 11:59:58.999999'
-        )
-```
-
-* This function returns the value 1 because a one-second boundary is
-crossed even though the two timestamps differ by only one microsecond.
-+
-```
-DATEDIFF( SECOND
-        , TIMESTAMP '2006-09-12 11:59:58.999999'
-        , TIMESTAMP '2006-09-12 11:59:59.000000'
-        )
-```
-
-* This function returns the value of 0.
-+
-```
-DATEDIFF( YEAR
-        , TIMESTAMP '2006-12-31 23:59:59.999998'
-        , TIMESTAMP '2006-12-31 23:59:59.999999'
-        )
-```
-
-* This function returns the value of 1 because a year boundary is
-crossed.
-+
-```
-DATEDIFF( YEAR
-        , TIMESTAMP '2006-12-31 23:59:59.999999'
-        , TIMESTAMP '2007-01-01 00:00:00.000000'
-        )
-```
-
-* This function returns the value of 2 because two WEEK boundaries are
-crossed.
-+
-```
-DATEDIFF(WEEK, DATE '2006-01-01', DATE '2006-01-09')
-```
-
-* This function returns the value of -29.
-+
-```
-DATEDIFF(DAY, DATE '2008-03-01', DATE '2008-02-01')
-```
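-
-* As described above, this function returns the value of 1 because a
-month boundary is crossed between March 31 and April 1.
-+
-```
-DATEDIFF(MONTH, DATE '2008-03-31', DATE '2008-04-01')
-```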
-
-<<<
-[[dateformat_function]]
-== DATEFORMAT Function
-
-The DATEFORMAT function returns a datetime value as a character string
-literal in the DEFAULT, USA, or EUROPEAN format. The data type of the
-result is CHAR.
-
-DATEFORMAT is a {project-name} SQL extension.
-
-```
-DATEFORMAT (datetime-expression,{DEFAULT | USA | EUROPEAN})
-```
-
-* `_datetime-expression_`
-+
-is an expression that evaluates to a datetime value of type DATE, TIME,
-or TIMESTAMP. See <<datetime_value_expressions,Datetime Value Expressions>>.
-
-* `DEFAULT | USA | EUROPEAN`
-+
-specifies a format for a datetime value. See <<datetime_literals,Datetime Literals>>.
-
-[[considerations_for_dateformat]]
-=== Considerations for DATEFORMAT
-
-The DATEFORMAT function returns the datetime value in ISO8859-1
-encoding.
-
-[[examples_of_dateformat]]
-=== Examples of DATEFORMAT
-
-* Convert a datetime literal in DEFAULT format to a string in USA
-format using `DATEFORMAT (TIMESTAMP '2008-06-20 14:20:20.00', USA)`.
-The function returns this string literal:
-+
-```
-'06/20/2008 02:20:20.00 PM'
-```
-
-* Convert a datetime literal in DEFAULT format to a string in EUROPEAN
-format using `DATEFORMAT (TIMESTAMP '2008-06-20 14:20:20.00', EUROPEAN)`.
-The function returns this string literal:
-+
-```
-'20.06.2008 14.20.20.00'
-```
-
-<<<
-[[date_part_function_of_an_interval]]
-== DATE_PART Function (of an Interval)
-
-The DATE_PART function extracts the datetime field specified by _text_
-from the _interval_ value specified by _interval_ and returns the result
-as an exact numeric value. The DATE_PART function accepts the
-specification of 'YEAR', 'MONTH', 'DAY', 'HOUR', 'MINUTE', or 'SECOND'
-for text.
-
-DATE_PART is a {project-name} SQL extension.
-
-```
-DATE_PART (text, interval)
-```
-
-* `_text_`
-+
-specifies YEAR, MONTH, DAY, HOUR, MINUTE, or SECOND. The value must be
-enclosed in single quotes.
-
-* `_interval_`
-+
-_interval_ accepts all interval expression types that the {project-name}
-database software considers as valid interval expressions. See
-<<interval_value_expressions,Interval Value Expressions>>.
-
-DATE_PART(_text_, _interval_) is equivalent to EXTRACT(_text_,
-_interval_), except that the DATE_PART function requires single quotes
-around the text specification, whereas EXTRACT does not allow single
-quotes.
-
-When SECOND is specified, the fractional part of the second is returned.
-
-[[examples_of_date_part_of_an_interval]]
-=== Examples of DATE_PART
-
-* This function returns the value of 7.
-+
-```
-DATE_PART('DAY', INTERVAL '07:04' DAY TO HOUR)
-```
-
-* This function returns the value of 6.
-+
-```
-DATE_PART('MONTH', INTERVAL '6' MONTH)
-```
-
-* This function returns the value of 36.33.
-+
-```
-DATE_PART('SECOND', INTERVAL '5:2:15:36.33' DAY TO SECOND(2))
-```
-
-<<<
-[[date_part_function_of_a_timestamp]]
-== DATE_PART Function (of a Timestamp)
-
-The DATE_PART function extracts the datetime field specified by _text_
-from the datetime value specified by _datetime_expr_ and returns the
-result as an exact numeric value. The DATE_PART function accepts the
-specification of 'YEAR', 'YEARQUARTER', 'YEARMONTH', 'YEARWEEK',
-'MONTH', 'DAY', 'HOUR', 'MINUTE', or 'SECOND' for text.
-
-The second argument of this function can be either a timestamp or a
-date expression, so DATE_PART of a timestamp also operates on DATE
-values.
-
-DATE_PART is a {project-name} SQL extension.
-
-```
-DATE_PART(text, datetime-expr)
-```
-
-* `_text_`
-+
-specifies YEAR, YEARQUARTER, YEARMONTH, YEARWEEK, MONTH, DAY, HOUR,
-MINUTE, or SECOND. The value must be enclosed in single quotes.
-
-** *YEARMONTH*: Extracts the year and the month, as a 6-digit integer of
-the form yyyymm (100 \* year + month).
-** *YEARQUARTER*: Extracts the year and quarter, as a 5-digit integer of
-the form yyyyq (10 \* year + quarter), with q being 1 for the first
-quarter, 2 for the second, and so on.
-** *YEARWEEK*: Extracts the year and week of the year, as a 6-digit integer
-of the form yyyyww (100 \* year + week). The week number will be computed
-in the same way as in the WEEK function.
-
-* `_datetime-expr_`
-+
-is an expression that evaluates to a datetime value of type DATE or
-TIMESTAMP. See <<datetime_value_expressions,Datetime Value Expressions>>.
-
-DATE_PART(_text_, _datetime-expr_) is mostly equivalent to
-EXTRACT(_text_, _datetime-expr_), except that DATE_PART requires
-single quotes around the text specification where EXTRACT does not allow
-single quotes. In addition, you cannot use the YEARQUARTER, YEARMONTH,
-and YEARWEEK text specifications with EXTRACT.
-
-<<<
-[[examples_of_date_part_of_a_timestamp]]
-=== Examples of DATE_PART
-
-* This function returns the value of 12.
-+
-```
-DATE_PART('month', date'12/05/2006')
-```
-
-* This function returns the value of 2006.
-+
-```
-DATE_PART('year', date'12/05/2006')
-```
-
-* This function returns the value of 31.
-+
-```
-DATE_PART('day', TIMESTAMP '2006-12-31 11:59:59.999999')
-```
-
-* This function returns the value 201107.
-+
-```
-DATE_PART('YEARMONTH', date '2011-07-25')
-```
-
-<<<
-[[date_trunc_function]]
-== DATE_TRUNC Function
-
-The DATE_TRUNC function returns a value of type TIMESTAMP, which has all
-fields of lesser precision than _text_ set to zero (or 1 in the case of
-months or days).
-
-DATE_TRUNC is a {project-name} SQL extension.
-
-```
-DATE_TRUNC(text, datetime-expr)
-```
-
-* `_text_`
-+
-specifies 'YEAR', 'MONTH', 'DAY', 'HOUR', 'MINUTE', or 'SECOND'. The
-DATE_TRUNC function also accepts the specification of 'CENTURY' or 'DECADE'.
-
-* `_datetime-expr_`
-+
-is an expression that evaluates to a datetime value of type DATE or
-TIMESTAMP. DATE_TRUNC returns a value of type TIMESTAMP which has all
-fields of lesser precision than _text_ set to zero (or 1 in the case of
-months or days). See <<datetime_value_expressions,Datetime Value Expressions>>.
-
-<<<
-[[examples_of_date_trunc]]
-=== Examples of DATE_TRUNC
-
-* This function returns the value of TIMESTAMP '2006-12-31 00:00:00'.
-+
-```
-DATE_TRUNC('day', TIMESTAMP '2006-12-31 11:59:59')
-```
-
-* This function returns the value of TIMESTAMP '2006-01-01 00:00:00'
-+
-```
-DATE_TRUNC('YEAR', TIMESTAMP '2006-12-31 11:59:59')
-```
-
-* This function returns the value of TIMESTAMP '2006-12-01 00:00:00'
-+
-```
-DATE_TRUNC('MONTH', DATE '2006-12-31')
-```
-
-Restrictions:
-
-* DATE_TRUNC( 'DECADE', &#8230;) cannot be used on years less than 10.
-* DATE_TRUNC( 'CENTURY', &#8230;) cannot be used on years less than 100.
-
-<<<
-[[day_function]]
-== DAY Function
-
-The DAY function converts a DATE or TIMESTAMP expression into an INTEGER
-value in the range 1 through 31 that represents the corresponding day of
-the month. The result returned by the DAY function is equal to the
-result returned by the DAYOFMONTH function.
-
-DAY is a {project-name} SQL extension.
-
-```
-DAY (datetime-expression)
-```
-
-* `_datetime-expression_`
-+
-is an expression that evaluates to a datetime value of type DATE or
-TIMESTAMP. See <<datetime_value_expressions,Datetime Value Expressions>>.
-
-[[examples_of_day]]
-=== Examples of DAY
-
-* Return an integer that represents the day of the month from the
-start date column of the project table:
-+
-```
-SELECT start_date, ship_timestamp, DAY(start_date)
-FROM persnl.project
-WHERE projcode = 1000;
-
-Start/Date Time/Shipped               (EXPR)
----------- -------------------------- ------
-2008-04-10 2008-04-21 08:15:00.000000     10
-```
-
-<<<
-[[dayname_function]]
-== DAYNAME Function
-
-The DAYNAME function converts a DATE or TIMESTAMP expression into a
-character literal that is the name of the day of the week (Sunday,
-Monday, and so on).
-
-DAYNAME is a {project-name} SQL extension.
-
-```
-DAYNAME (datetime-expression)
-```
-
-* `_datetime-expression_`
-+
-is an expression that evaluates to a datetime value of type DATE or
-TIMESTAMP. See <<datetime_value_expressions,Datetime Value Expressions>>.
-
-[[considerations_for_dayname]]
-=== Considerations for DAYNAME
-
-The DAYNAME function returns the name of the day in ISO8859-1.
-
-[[examples_of_dayname]]
-=== Examples of DAYNAME
-
-* Return the name of the day of the week from the start date column in
-the project table:
-+
-```
-SELECT start_date, ship_timestamp, DAYNAME(start_date)
-FROM persnl.project
-WHERE projcode = 1000;
-
-Start/Date Time/Shipped               (EXPR)
----------- -------------------------- ---------
-2008-04-10 2008-04-21 08:15:00.000000 Thursday
-```
-
-<<<
-[[dayofmonth_function]]
-== DAYOFMONTH Function
-
-The DAYOFMONTH function converts a DATE or TIMESTAMP expression into an
-INTEGER value in the range 1 through 31 that represents the
-corresponding day of the month. The result returned by the DAYOFMONTH
-function is equal to the result returned by the DAY function.
-
-DAYOFMONTH is a {project-name} SQL extension.
-
-```
-DAYOFMONTH (datetime-expression)
-```
-
-* `_datetime-expression_`
-+
-is an expression that evaluates to a datetime value of type DATE or
-TIMESTAMP. See <<

<TRUNCATED>


[14/15] incubator-trafodion git commit: Added links to the QQ Chinese Trafodion discussion group.

Posted by gt...@apache.org.
Added links to the QQ Chinese Trafodion discussion group.


Project: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/commit/9d3ad455
Tree: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/tree/9d3ad455
Diff: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/diff/9d3ad455

Branch: refs/heads/master
Commit: 9d3ad45550524e27910f486c6417cae04a384bd6
Parents: da748b4
Author: Gunnar Tapper <ta...@gmail.com>
Authored: Wed Nov 2 19:47:38 2016 +0000
Committer: Gunnar Tapper <ta...@gmail.com>
Committed: Wed Nov 2 19:47:38 2016 +0000

----------------------------------------------------------------------
 docs/src/site/markdown/index.md | 2 ++
 docs/src/site/site.xml          | 3 ++-
 2 files changed, 4 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/9d3ad455/docs/src/site/markdown/index.md
----------------------------------------------------------------------
diff --git a/docs/src/site/markdown/index.md b/docs/src/site/markdown/index.md
index 875b8e5..4206129 100644
--- a/docs/src/site/markdown/index.md
+++ b/docs/src/site/markdown/index.md
@@ -44,6 +44,8 @@ Trafodion provides SQL access to structured, semi-structured, and unstructured d
 <table><tr><td>
   <p><h5>We're working on release 2.1!</h5></p> 
   <p>Check out the <a href="https://cwiki.apache.org/confluence/display/TRAFODION/Roadmap">Roadmap</a> page for planned content.</p>
+  <p><h5>Want to discuss Trafodion in Chinese? Join the Trafodion discussion on Tencent QQ!</h5></p>
+  <p><a href="http://im.qq.com/">QQ</a> Group ID: 176011868.</p>
 </td></tr></table>
 
 <!-- 20160524 GTA Need more logos before using this part.

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/9d3ad455/docs/src/site/site.xml
----------------------------------------------------------------------
diff --git a/docs/src/site/site.xml b/docs/src/site/site.xml
index c7dfd8b..64328df 100644
--- a/docs/src/site/site.xml
+++ b/docs/src/site/site.xml
@@ -217,7 +217,8 @@
       <item href="team-redirect.html" name="Team"/>
       <item href="presentations.html" name="Presentations"/>
       <item href="logo.html" name="Logo"/>
-      <item href="mail-lists.html" name="Mailing List"/>
+      <item href="mail-lists.html" name="Mailing Lists"/>
+      <item href="http://im.qq.com" name="QQ (Group ID:176011868)"/>
       <item href="http:divider" name=""/>
       <item href="source-repository.html" name="Source Repository"/>
       <item href="issue-tracking.html" name="Issue Tracking"/>


[15/15] incubator-trafodion git commit: Merge [TRAFODION-2323] PR-808 Major reorganization of the Client Installation Guide

Posted by gt...@apache.org.
Merge [TRAFODION-2323] PR-808 Major reorganization of the Client Installation Guide


Project: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/commit/e26b2060
Tree: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/tree/e26b2060
Diff: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/diff/e26b2060

Branch: refs/heads/master
Commit: e26b20601134e99bbb4a0986e8ec43153260c632
Parents: 6862bf7 9d3ad45
Author: Gunnar Tapper <gt...@apache.org>
Authored: Thu Nov 3 05:29:06 2016 +0000
Committer: Gunnar Tapper <gt...@apache.org>
Committed: Thu Nov 3 05:29:06 2016 +0000

----------------------------------------------------------------------
 .../jdbcT4/src/main/samples/t4jdbc.properties   |     2 +-
 docs/client_install/pom.xml                     |   600 +-
 .../src/asciidoc/_chapters/SQuirrel.adoc        |   148 +-
 .../src/asciidoc/_chapters/about.adoc           |   331 +-
 .../src/asciidoc/_chapters/dbviz.adoc           |   193 +-
 .../src/asciidoc/_chapters/howto.adoc           |   164 +
 .../src/asciidoc/_chapters/introduction.adoc    |   217 +-
 .../src/asciidoc/_chapters/jdbct4.adoc          |   742 +-
 .../src/asciidoc/_chapters/odb.adoc             |   311 +-
 .../src/asciidoc/_chapters/odbc_linux.adoc      |   701 +-
 .../src/asciidoc/_chapters/odbc_windows.adoc    |   490 +-
 .../src/asciidoc/_chapters/preparation.adoc     |   273 +
 .../src/asciidoc/_chapters/sample_prog.adoc     |   150 +-
 .../src/asciidoc/_chapters/tableau.adoc         |    83 +
 .../src/asciidoc/_chapters/trafci.adoc          |   984 +-
 docs/client_install/src/asciidoc/index.adoc     |   138 +-
 .../Database_Connection_in_DbVisualizer.jpg     |   Bin 58043 -> 63604 bytes
 .../src/images/DbVisualizer_Driver_Manager.jpg  |   Bin 80198 -> 79645 bytes
 .../src/images/Extracted_Files.jpg              |   Bin 28327 -> 26389 bytes
 .../src/images/InstallComplete.jpg              |   Bin 47365 -> 73963 bytes
 .../src/images/Physical_Connection.jpg          |   Bin 185241 -> 71998 bytes
 .../src/images/SQuirrel_Add_Alias.jpg           |   Bin 0 -> 50396 bytes
 .../src/images/SQuirrel_Extra_Class_Path.jpg    |   Bin 0 -> 54897 bytes
 .../src/images/SQuirrel_New_Driver.jpg          |   Bin 0 -> 31639 bytes
 .../src/images/tableau_connect.jpg              |   Bin 0 -> 39547 bytes
 .../src/images/trafci_Installation_Choices.jpg  |   Bin 0 -> 78358 bytes
 .../src/images/winodbc_admin_add.jpg            |   Bin 0 -> 60817 bytes
 .../src/images/winodbc_admin_add_general.jpg    |   Bin 0 -> 48930 bytes
 .../images/winodbc_admin_add_general_edited.jpg |   Bin 0 -> 50337 bytes
 .../src/images/winodbc_admin_add_network.jpg    |   Bin 0 -> 79462 bytes
 .../src/images/winodbc_admin_add_schema.jpg     |   Bin 0 -> 32986 bytes
 .../winodbc_admin_add_test_connection.jpg       |   Bin 0 -> 43949 bytes
 .../winodbc_admin_add_tested_connection.jpg     |   Bin 0 -> 44916 bytes
 .../images/winodbc_admin_add_translate_dll.jpg  |   Bin 0 -> 46249 bytes
 .../src/images/winodbc_admin_intro.jpg          |   Bin 0 -> 59907 bytes
 .../src/images/winodbc_destination.jpg          |   Bin 0 -> 36788 bytes
 .../src/images/winodbc_install_finished.jpg     |   Bin 0 -> 36769 bytes
 .../src/images/winodbc_license.jpg              |   Bin 0 -> 58430 bytes
 .../src/images/winodbc_ready_to_install.jpg     |   Bin 0 -> 34193 bytes
 .../src/images/winodbc_welcome.jpg              |   Bin 0 -> 39496 bytes
 .../src/resources/source/basicsql.cpp           |   850 +-
 .../src/resources/source/build.bat              |    50 +-
 .../client_install/src/resources/source/run.bat |    46 +-
 .../src/resources/tableau/trafodion.tdc         |    16 +
 .../resources/tableau/trafodion.tdc.template    |    16 +
 .../src/asciidoc/_chapters/binder_msgs.adoc     |     6 +-
 .../src/asciidoc/_chapters/install.adoc         |   Bin 13248 -> 2252 bytes
 .../src/resources/source/partLocations.java     |    42 +
 .../src/resources/source/partlocations.java     |    42 -
 .../src/resources/source/supplierInfo.java      |    38 +
 .../src/resources/source/supplierinfo.java      |    38 -
 .../src/resources/source/supplyQuantities.java  |    32 +
 .../src/resources/source/supplyquantities.java  |    32 -
 docs/sql_reference/pom.xml                      |   602 +-
 .../src/asciidoc/_chapters/about.adoc           |   424 +-
 .../src/asciidoc/_chapters/introduction.adoc    |  1036 +-
 .../src/asciidoc/_chapters/limits.adoc          |    74 +-
 .../src/asciidoc/_chapters/olap_functions.adoc  |  2156 +--
 .../src/asciidoc/_chapters/reserved_words.adoc  |   572 +-
 .../src/asciidoc/_chapters/runtime_stats.adoc   |  2706 +--
 .../src/asciidoc/_chapters/sql_clauses.adoc     |  2864 +--
 .../sql_functions_and_expressions.adoc          | 15770 +++++++--------
 .../_chapters/sql_language_elements.adoc        |  8176 ++++----
 .../src/asciidoc/_chapters/sql_statements.adoc  | 17004 +++++++++--------
 .../src/asciidoc/_chapters/sql_utilities.adoc   |  2382 +--
 docs/sql_reference/src/asciidoc/index.adoc      |   137 +-
 docs/src/site/markdown/index.md                 |     2 +
 docs/src/site/site.xml                          |     3 +-
 68 files changed, 30733 insertions(+), 29910 deletions(-)
----------------------------------------------------------------------



[09/15] incubator-trafodion git commit: Major reorganization of the Client Installation Guide.

Posted by gt...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/sql_reference/src/asciidoc/_chapters/introduction.adoc
----------------------------------------------------------------------
diff --git a/docs/sql_reference/src/asciidoc/_chapters/introduction.adoc b/docs/sql_reference/src/asciidoc/_chapters/introduction.adoc
index b22f498..abbb0a3 100644
--- a/docs/sql_reference/src/asciidoc/_chapters/introduction.adoc
+++ b/docs/sql_reference/src/asciidoc/_chapters/introduction.adoc
@@ -1,518 +1,518 @@
-////
-/**
-* @@@ START COPYRIGHT @@@
-*
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*   http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing,
-* software distributed under the License is distributed on an
-* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-* KIND, either express or implied.  See the License for the
-* specific language governing permissions and limitations
-* under the License.
-*
-* @@@ END COPYRIGHT @@@
-*/
-////
-
-[[introduction]]
-= Introduction
-
-The {project-name} SQL database software allows you to use SQL statements, which comply closely with
-ANSI SQL:1999, to access data in {project-name} SQL tables, which map to HBase tables, and to access
-native HBase tables and Hive tables.
-
-This introduction describes:
-
-* <<sql_language,SQL Language>>
-* <<using_trafodion_sql_to_access_hbase_tables,Using {project-name} SQL to Access HBase Tables>>
-* <<using_trafodion_sql_to_access_hive_tables,Using {project-name} SQL to Access Hive Tables>>
-* <<data_consistency_and_access_options,Data Consistency and Access Options>>
-* <<transaction_management,Transaction Management>>
-* <<ansi_compliance_and_trafodion_sql_extensions,ANSI Compliance and {project-name} SQL Extensions>>
-* <<trafodion_sql_error_messages,{project-name} SQL Error Messages>>
-
-Other sections of this manual describe the syntax and semantics of individual statements, commands, and language elements.
-
-[[sql_language]]
-== SQL Language
-
-The SQL language consists of statements and other language elements that you can use to access SQL
-databases. For descriptions of individual SQL statements, see <<sql_statements,SQL Statements>>.
-
-SQL language elements are part of statements and commands and include data types, expressions, functions,
-identifiers, literals, and predicates. For more information, see:
-
-* <<sql_language,SQL Language>>
-* <<elements,Elements>>
-* <<sql_clauses,SQL Clauses>>
-
-For information on specific functions and expressions, see:
-
-* <<sql_functions_and_expressions,SQL Functions and Expressions>>
-* <<olap_functions,OLAP Functions>>
-
-<<<
-[[using_trafodion_sql_to_access_hbase_tables]]
-== Using {project-name} SQL to Access HBase Tables
-
-You can use {project-name} SQL statements to read, update, and create HBase tables.
-
-* <<initializing_the_trafodion_metadata,Initializing the {project-name} Metadata>>
-* <<ways_to_access_hbase_tables,Ways to Access HBase Tables>>
-* <<trafodion_sql_tables_versus_native_hbase_tables,{project-name} SQL Tables Versus Native HBase Tables>>
-* <<supported_sql_statements_with_hbase_tables,Supported SQL Statements With HBase Tables>>
-
-For a list of Control Query Default (CQD) settings for the HBase environment, see the
-{docs-url}/cqd_reference/index.html[{project-name} Control Query Default (CQD) Reference Guide].
-
-[[ways_to_access_hbase_tables]]
-=== Ways to Access HBase Tables
-{project-name} SQL supports these ways to access HBase tables:
-
-* <<accessing_trafodion_sql_tables,Accessing {project-name} SQL Tables>>
-* <<cell_per_row_access_to_hbase_tables,Cell-Per-Row Access to HBase Tables (Technology Preview)>>
-* <<rowwise_access_to_hbase_tables,Rowwise Access to HBase Tables (Technology Preview)>>
-
-<<<
-[[accessing_trafodion_sql_tables]]
-==== Accessing {project-name} SQL Tables
-
-A {project-name} SQL table is a relational SQL table generated by a `CREATE TABLE` statement and mapped
-to an HBase table. {project-name} SQL tables have regular ANSI names in the catalog `TRAFODION`.
-A {project-name} SQL table name can be a fully qualified ANSI name of the form
-`TRAFODION._schema-name.object-name_`.
-
-To access a {project-name} SQL table, specify its ANSI table name in a {project-name} SQL statement, similar
-to how you would specify an ANSI table name when running SQL statements in a relational database.
-
-*Example*
-
-```
-CREATE TABLE trafodion.sales.odetail
-( ordernum NUMERIC (6) UNSIGNED NO DEFAULT NOT NULL
-, partnum NUMERIC (4) UNSIGNED NO DEFAULT NOT NULL
-, unit_price NUMERIC (8,2) NO DEFAULT NOT NULL
-, qty_ordered NUMERIC (5) UNSIGNED NO DEFAULT NOT NULL
-, PRIMARY KEY (ordernum, partnum)
-);
-
-INSERT INTO trafodion.sales.odetail VALUES ( 900000, 7301, 425.00, 100 );
-
-SET SCHEMA trafodion.sales;
-
-SELECT * FROM odetail;
-```
-
-For more information about {project-name} SQL tables, see
-<<trafodion_sql_tables_versus_native_hbase_tables,{project-name} SQL Tables Versus Native HBase Tables>>.
-
-<<<
-[[cell_per_row_access_to_hbase_tables]]
-==== Cell-Per-Row Access to HBase Tables (Technology Preview)
-
-NOTE: This is a _Technology Preview (Complete But Not Tested)_ feature, meaning that it is functionally
-complete but has not been tested or debugged. 
-
-To access HBase data using cell-per-row mode, specify the schema `HBASE."_CELL_"` and the full ANSI
-name of the table as a delimited table name. You can specify the name of any HBase table, regardless of whether
-it was created through {project-name} SQL.
-
-*Example*
-
-```
-select * from hbase."_CELL_"."TRAFODION.MYSCH.MYTAB";
-select * from hbase."_CELL_"."table_created_in_HBase";
-```
-
-All tables accessed through this schema have the same column layout:
-
-```
->>invoke hbase."_CELL_"."table_created_in_HBase";
-  (
-  ROW_ID        VARCHAR(100)    ...
-, COL_FAMILY    VARCHAR(100)    ...
-, COL_NAME      VARCHAR(100)    ...
-, COL_TIMESTAMP LARGEINT        ...
-, COL_VALUE     VARCHAR(1000) ...
-)
-PRIMARY KEY (ROW_ID)
-
->>select * from hbase."_CELL_"."mytab";
-```
-
-<<<
-[[rowwise_access_to_hbase_tables]]
-==== Rowwise Access to HBase Tables (Technology Preview)
-
-NOTE: This is a _Technology Preview (Complete But Not Tested)_ feature, meaning that it is functionally
-complete but has not been tested or debugged.
-
-To access HBase data using rowwise mode, specify the schema `HBASE."_ROW_"` and the full ANSI name of the
-table as a delimited table name. You can specify the name of any HBase table, regardless of whether
-it was created through {project-name} SQL.
-
-*Example*
-
-```
-select * from hbase."_ROW_"."TRAFODION.MYSCH.MYTAB";
-select * from hbase."_ROW_"."table_created_in_HBase";
-```
-
-All column values of the row are returned as a single, big varchar:
-
-```
->>invoke hbase."_ROW_"."mytab";
-(
-  ROW_ID VARCHAR(100) ...
-, COLUMN_DETAILS VARCHAR(10000) ...
-)
-PRIMARY KEY (ROW_ID)
-
->>select * from hbase."_ROW_"."mytab";
-```
-
-<<<
-[[trafodion_sql_tables_versus_native_hbase_tables]]
-=== {project-name} SQL Tables Versus Native HBase Tables
-
-{project-name} SQL tables have many advantages over regular HBase tables:
-
-* They can be made to look like regular, structured SQL tables with fixed columns.
-* They support the usual SQL data types supported in relational databases.
-* They support compound keys, unlike HBase tables that have a single row key (a string).
-* They support indexes.
-* They support _salting_, which is a technique of adding a hash value of the row key as a
-key prefix to avoid hot spots for sequential keys (see the example below). For the syntax,
-see the <<create_table_statement,CREATE TABLE Statement>>.
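-
-For example, salting can be specified when the table is created. This
-sketch uses an illustrative single-column layout; see the
-<<create_table_statement,CREATE TABLE Statement>> for the full syntax:
-
-```
-CREATE TABLE trafodion.sales.orders_salted
-( ordernum NUMERIC (6) UNSIGNED NO DEFAULT NOT NULL
-, PRIMARY KEY (ordernum)
-)
-SALT USING 8 PARTITIONS;
-```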
-
-The problem with {project-name} SQL tables is that they use a fixed format to represent column values,
-making it harder for native HBase applications to access them. Also, they have a fixed structure,
-so users lose the flexibility of dynamic columns that comes with HBase.
-
-[[supported_sql_statements_with_hbase_tables]]
-=== Supported SQL Statements With HBase Tables
-
-You can use these SQL statements with HBase tables:
-
-|===
-| <<select_statement,SELECT Statement>>             | <<insert_statement,INSERT Statement>>
-| <<update_statement,UPDATE Statement>>             | <<delete_statement,DELETE Statement>>
-| <<merge_statement,MERGE Statement>>               | <<get_statement,GET Statement>>
-| <<invoke_statement,INVOKE Statement>>             | <<alter_table_statement,ALTER TABLE Statement>>
-| <<create_index_statement,CREATE INDEX Statement>> | <<create_table_statement,CREATE TABLE Statement>>
-| <<create_view_statement,CREATE VIEW Statement>>   | <<drop_index_statement,DROP INDEX Statement>>
-| <<drop_table_statement,DROP TABLE Statement>>     | <<drop_view_statement,DROP VIEW Statement>>
-| <<grant_statement,GRANT Statement>>               | <<revoke_statement,REVOKE Statement>>
-|===
-
-<<<
-[[using_trafodion_sql_to_access_hive_tables]]
-== Using {project-name} SQL to Access Hive Tables
-
-You can use {project-name} SQL statements to access Hive tables.
-
-* <<ansi_names_for_hive_tables,ANSI Names for Hive Tables>>
-* <<type_mapping_from_hive_to_trafodion_sql,Type Mapping From Hive to {project-name} SQL>>
-* <<supported_sql_statements_with_hive_tables,Supported SQL Statements With Hive Tables>>
-
-For a list of Control Query Default (CQD) settings for the Hive environment, see the
-{docs-url}/cqd_reference/index.html[{project-name} Control Query Default (CQD) Reference Guide].
-
-[[ansi_names_for_hive_tables]]
-=== ANSI Names for Hive Tables
-
-Hive tables appear in the {project-name} Hive ANSI name space in a special catalog and schema named `HIVE.HIVE`.
-
-To select from a Hive table named `T`, specify an implicit or explicit name, such as `HIVE.HIVE.T`,
-in a {project-name} SQL statement.
-
-*Example*
-
-This example should work if a Hive table named `T` has already been defined:
-
-```
-set schema hive.hive;
-
-CQD HIVE_MAX_STRING_LENGTH '20'; -- creates a more readable display
-select * from t; -- implicit table name
-
-set schema trafodion.seabase;
-
-select * from hive.hive.t; -- explicit table name
-```
-
-
-<<<
-[[type_mapping_from_hive_to_trafodion_sql]]
-=== Type Mapping From Hive to {project-name} SQL
-
-{project-name} performs the following data-type mappings:
-
-[cols="2*",options="header"]
-|===
-| Hive Type             | {project-name} SQL Type
-| `tinyint`             | `smallint`
-| `smallint`            | `smallint`
-| `int`                 | `int`
-| `bigint`              | `largeint`
-| `string`              | `varchar(_n_ bytes) character set utf8`^1^
-| `float`               | `real`
-| `double`              | `float(54)`
-| `timestamp`           | `timestamp(6)`^2^
-|===
-
-1. The value `_n_` is determined by `CQD HIVE_MAX_STRING_LENGTH`. See the
-{docs-url}/cqd_reference/index.html[{project-name} Control Query Default (CQD) Reference Guide].
-2. Hive supports timestamps with nanosecond resolution (precision of 9). {project-name} SQL supports only microsecond resolution (precision 6).
-
-[[supported_sql_statements_with_hive_tables]]
-=== Supported SQL Statements With Hive Tables
-
-You can use these SQL statements with Hive tables:
-
-* <<select_statement,SELECT Statement>>
-* <<load_statement,LOAD Statement>>
-* GET TABLES (See the <<get_statement,GET Statement>>.)
-* <<invoke_statement,INVOKE Statement>>
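-
-For example, assuming a Hive table named `t` has already been defined,
-this sketch lists the Hive tables in the current schema and counts the
-rows in `t` (output not shown):
-
-```
-SET SCHEMA hive.hive;
-
-GET TABLES;
-
-SELECT COUNT(*) FROM t;
-```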
-
-<<<
-[[data_consistency_and_access_options]]
-== Data Consistency and Access Options
-
-Access options for DML statements affect the consistency of the data that your query accesses.
-
-For any DML statement, you specify access options by using the `FOR _option_ ACCESS` clause and,
-for a `SELECT` statement, by using this same clause, you can also specify access options for individual
-tables and views referenced in the FROM clause.
-
-The possible settings for `_option_` in a DML statement are:
-
-* <<read_committed,READ COMMITTED>>
-+
-Specifies that the data accessed by the DML statement must be from committed rows.
-
-The SQL default access option for DML statements is `READ COMMITTED`.
-
-For related information about transactions, see
-<<transaction_isolation_levels,Transaction Isolation Levels>>.
-
-[[read_committed]]
-=== READ COMMITTED
-
-This option allows you to access only committed data.
-
-The implementation requires that a lock can be acquired on the data requested by the DML statement, but
-does not actually lock the data, thereby reducing lock request conflicts. If a lock cannot be granted
-(implying that the row contains uncommitted data), the DML statement request waits until the lock in
-place is released.
-
-READ COMMITTED provides the next higher level of data consistency (compared to READ UNCOMMITTED).
-A statement executing with this access option does not allow dirty reads, but both non-repeatable reads
-and phantoms are possible.
-
-READ COMMITTED provides sufficient consistency for any process that does not require a repeatable read
-capability.
-
-READ COMMITTED is the default isolation level.
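-
-For example, this query (the table name is illustrative) explicitly
-requests the default access option:
-
-```
-SELECT * FROM persnl.employee FOR READ COMMITTED ACCESS;
-```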
-
-<<<
-[[transaction_management]]
-== Transaction Management
-
-A transaction (a set of database changes that must be completed as a group) is the basic recoverable unit
-in case of a failure or transaction interruption. Transactions are controlled through client tools that
-interact with the database using ODBC or JDBC.
-
-The typical order of events is:
-
-1.  Transaction is started.
-2.  Database changes are made.
-3.  Transaction is committed.
-
-If, however, the changes cannot be made or if you do not want to complete the transaction, then you can abort
-the transaction so that the database is rolled back to its original state.
-
-This subsection discusses these considerations for transaction management:
-
-* <<user_defined_and_system_defined_transactions,User-Defined and System-Defined Transactions>>
-* <<rules_for_dml_statements,Rules for DML Statements>>
-* <<effect_of_autocommit_option,Effect of AUTOCOMMIT Option>>
-* <<concurrency,Concurrency>>
-* <<transaction_isolation_levels,Transaction Isolation Levels>>
-
-[[user_defined_and_system_defined_transactions]]
-=== User-Defined and System-Defined Transactions
-Transactions you define are called _user-defined transactions_. To be sure that a sequence of statements executes
-successfully or not at all, you can define one transaction consisting of these statements by using the BEGIN WORK
-and COMMIT WORK statements. You can abort a transaction by using the ROLLBACK WORK statement.
-
-If AUTOCOMMIT is on, you do not have to end the transaction explicitly; {project-name} SQL ends the transaction
-automatically. Sometimes an error occurs that requires the user-defined transaction to be aborted. In that case,
-{project-name} SQL automatically aborts the transaction and returns an error indicating that the transaction was rolled back.
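-
-For example, this sketch of a user-defined transaction (table and
-column names are illustrative) makes two updates commit or roll back as
-a unit:
-
-```
-BEGIN WORK;
-
-UPDATE persnl.project SET start_date = CURRENT_DATE WHERE projcode = 1000;
-UPDATE persnl.project SET ship_timestamp = CURRENT WHERE projcode = 1000;
-
-COMMIT WORK;
--- Use ROLLBACK WORK instead to abort the transaction.
-```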
-
-<<<
-[[system_defined_transactions]]
-==== System-Defined Transactions
-
-In some cases, {project-name} SQL defines transactions for you. These transactions are called _system-defined transactions_.
-Most DML statements initiate transactions implicitly at the start of execution.
-See <<implicit_transactions,Implicit Transactions>>.
-
-However, even if a transaction is initiated implicitly, you must end a transaction explicitly with the COMMIT WORK
-statement or the ROLLBACK WORK statement. If AUTOCOMMIT is on, you do not need to end a transaction explicitly.
-
-[[rules_for_dml_statements]]
-=== Rules for DML Statements
-
-If a deadlock occurs, the DML statement times out and receives an error.
-
-[[effect_of_autocommit_option]]
-=== Effect of AUTOCOMMIT Option
-
-AUTOCOMMIT is an option that can be set in a SET TRANSACTION statement. It specifies whether, at the end of statement
-execution, {project-name} SQL automatically commits changes or, if an error occurs, rolls them back. This option applies to any
-statement for which the system initiates a transaction. See <<set_transaction_statement,SET TRANSACTION Statement>>.
-
-If this option is set to ON, {project-name} SQL automatically commits or rolls back any changes made to the
-database at the end of statement execution.
-
-[[concurrency]]
-=== Concurrency
-
-Concurrency occurs when two or more processes access the same data at the same time. The degree of concurrency
-available (whether a process that requests access to data that is already being accessed is given access or placed
-in a wait queue) depends on the access mode (read or update) and the isolation level. Currently, the only
-isolation level is READ COMMITTED.
-
-{project-name} SQL provides concurrent database access for most operations and controls database access through concurrency
-control and the mechanism for opening and closing tables. For DML operations, the access option affects the degree of
-concurrency. See <<data_consistency_and_access_options,Data Consistency and Access Options>>.
-
-<<<
-[[transaction_isolation_levels]]
-=== Transaction Isolation Levels
-
-Every transaction runs at the <<read_committed_isolation_level,READ COMMITTED>> isolation level, currently the only level supported.
-
-[[read_committed_isolation_level]]
-==== READ COMMITTED
-
-This option, which is ANSI compliant, allows your transaction to access only committed data. No row locks are acquired
-when READ COMMITTED is the specified isolation level.
-
-READ COMMITTED provides the next higher level of data consistency above READ UNCOMMITTED. A transaction executing with this
-isolation level does not allow dirty reads, but both non-repeatable reads and phantoms are possible.
-
-READ COMMITTED provides sufficient consistency for any transaction that does not require a repeatable-read capability.
-
-The default isolation level is READ COMMITTED.
-
-<<<
-[[ansi_compliance_and_trafodion_sql_extensions]]
-== ANSI Compliance and {project-name} SQL Extensions
-
-{project-name} SQL complies most closely with Core SQL:1999. It also includes some features from SQL:1999, parts of
-the SQL:2003 standard, and special {project-name} SQL extensions to the SQL language.
-
-Statements and SQL elements in this manual are ANSI compliant unless specified as {project-name} SQL extensions.
-
-[[ansi_compliant_statements]]
-=== ANSI-Compliant Statements
-
-These statements are ANSI compliant, but some might contain {project-name} SQL extensions:
-
-|===
-| <<alter_table_statement,ALTER TABLE Statement>>           | <<call_statement,CALL Statement>>
-| <<commit_work_statement,COMMIT WORK Statement>>           | <<create_function_statement,CREATE FUNCTION Statement>>
-| <<create_procedure_statement,CREATE PROCEDURE Statement>> | <<create_role_statement,CREATE ROLE Statement>>
-| <<create_schema_statement,CREATE SCHEMA Statement>>       | <<create_table_statement,CREATE TABLE Statement>>
-| <<create_view_statement,CREATE VIEW Statement>>           | <<delete_statement,DELETE Statement>>
-| <<drop_function_statement,DROP FUNCTION Statement>>       | <<drop_procedure_statement,DROP PROCEDURE Statement>>
-| <<drop_role_statement,DROP ROLE Statement>>               | <<drop_schema_statement,DROP SCHEMA Statement>>
-| <<drop_table_statement,DROP TABLE Statement>>             | <<drop_view_statement,DROP VIEW Statement>>
-| <<execute_statement,EXECUTE Statement>>                   | <<grant_statement,GRANT Statement>>
-| <<grant_role_statement,GRANT ROLE Statement>>             | <<insert_statement,INSERT Statement>>
-| <<merge_statement,MERGE Statement>>                       | <<prepare_statement,PREPARE Statement>>
-| <<revoke_statement,REVOKE Statement>>                     | <<revoke_role_statement,REVOKE ROLE Statement>>
-| <<rollback_work_statement,ROLLBACK WORK Statement>>       | <<select_statement,SELECT Statement>>
-| <<set_schema_statement,SET SCHEMA Statement>>             | <<set_transaction_statement,SET TRANSACTION Statement>>
-| <<table_statement,TABLE Statement>>                       | <<update_statement,UPDATE Statement>>
-| <<values_statement,VALUES Statement>>
-|===
-
-<<<
-[[statements_that_are_trafodion_sql_extensions]]
-=== Statements That Are {project-name} SQL Extensions
-
-These statements are {project-name} SQL extensions to the ANSI standard.
-
-|===
-| <<alter_library_statement,ALTER LIBRARY Statement>>                           | <<alter_user_statement,ALTER USER Statement>>
-| <<begin_work_statement,BEGIN WORK Statement>>                                 | <<control_query_cancel_statement,CONTROL QUERY CANCEL Statement>>
-| <<control_query_default_statement,CONTROL QUERY DEFAULT Statement>>           | <<create_index_statement,CREATE INDEX Statement>>
-| <<create_library_statement,CREATE LIBRARY Statement>>                         | <<drop_index_statement,DROP INDEX Statement>>
-| <<drop_library_statement,DROP LIBRARY Statement>>                             | <<explain_statement,EXPLAIN Statement>>
-| <<get_statement,GET Statement>>                                               | <<get_hbase_objects_statement,GET HBASE OBJECTS Statement>>
-| <<get_version_of_metadata_statement,GET VERSION OF METADATA Statement>>       | <<get_version_of_software_statement,GET VERSION OF SOFTWARE Statement>>
-| <<grant_component_privilege_statement,GRANT COMPONENT PRIVILEGE Statement>>   | <<invoke_statement,INVOKE Statement>>
-| <<load_statement,LOAD Statement>>                                             | <<register_user_statement,REGISTER USER Statement>>
-| <<revoke_component_privilege_statement,REVOKE COMPONENT PRIVILEGE Statement>> | <<showcontrol_statement,SHOWCONTROL Statement>>
-| <<showddl_statement,SHOWDDL Statement>>                                       | <<showddl_schema_statement,SHOWDDL SCHEMA Statement>>
-| <<showstats_statement,SHOWSTATS Statement>>                                   | <<unload_statement,UNLOAD Statement>>
-| <<unregister_user_statement,UNREGISTER USER Statement>>                       | <<update_statistics_statement,UPDATE STATISTICS Statement>>
-| <<upsert_statement,UPSERT Statement>>
-|===
-
-<<<
-[[ansi_compliant_functions]]
-=== ANSI-Compliant Functions
-
-These functions are ANSI compliant, but some might contain {project-name} SQL extensions:
-
-|===
-| <<avg,AVG function>>          | <<case, CASE expression>>
-| <<cast,CAST expression>>      | <<char_length,CHAR_LENGTH>>
-| <<coalesce,COALESCE>>         | <<count,COUNT Function>>
-| <<current,CURRENT>>           | <<current_date,CURRENT_DATE>>
-| <<current_time,CURRENT_TIME>> | <<current_timestamp,CURRENT_TIMESTAMP>>
-| <<current_user,CURRENT_USER>> | <<extract,EXTRACT>>
-| <<lower,LOWER>>               | <<max,MAX>>
-| <<min,MIN>>                   | <<nullif,NULLIF>>
-| <<octet_length,OCTET_LENGTH>> | <<position,POSITION>>
-| <<session_user,SESSION_USER>> | <<substring,SUBSTRING>>
-| <<sum,SUM>>                   | <<trim,TRIM>>
-| <<upper,UPPER>>
-|===
-
-All other functions are {project-name} SQL extensions.
-
-<<<
-[[trafodion_sql_error_messages]]
-== {project-name} SQL Error Messages
-
-{project-name} SQL reports error messages and exception conditions. When an error condition occurs,
-{project-name} SQL returns a message number and a brief description of the condition.
-
-*Example*
-
-{project-name} SQL might display this error message:
-
-```
-*** ERROR[1000] A syntax error occurred.
-```
-
-The message number is the SQLCODE value (without the sign). In this example, the SQLCODE value is `1000`.
-
-
-
-
-
-
-
-
+////
+/**
+* @@@ START COPYRIGHT @@@
+*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*   http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing,
+* software distributed under the License is distributed on an
+* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+* KIND, either express or implied.  See the License for the
+* specific language governing permissions and limitations
+* under the License.
+*
+* @@@ END COPYRIGHT @@@
+*/
+////
+
+[[introduction]]
+= Introduction
+
+The {project-name} SQL database software allows you to use SQL statements, which comply closely with
+ANSI SQL:1999, to access data in {project-name} SQL tables, which map to HBase tables, and to access
+native HBase tables and Hive tables.
+
+This introduction describes:
+
+* <<sql_language,SQL Language>>
+* <<using_trafodion_sql_to_access_hbase_tables,Using {project-name} SQL to Access HBase Tables>>
+* <<using_trafodion_sql_to_access_hive_tables,Using {project-name} SQL to Access Hive Tables>>
+* <<data_consistency_and_access_options,Data Consistency and Access Options>>
+* <<transaction_management,Transaction Management>>
+* <<ansi_compliance_and_trafodion_sql_extensions,ANSI Compliance and {project-name} SQL Extensions>>
+* <<trafodion_sql_error_messages,{project-name} SQL Error Messages>>
+
+Other sections of this manual describe the syntax and semantics of individual statements, commands, and language elements.
+
+[[sql_language]]
+== SQL Language
+
+The SQL language consists of statements and other language elements that you can use to access SQL
+databases. For descriptions of individual SQL statements, see <<sql_statements,SQL Statements>>.
+
+SQL language elements are part of statements and commands and include data types, expressions, functions,
+identifiers, literals, and predicates. For more information, see:
+
+* <<sql_language,SQL Language>>
+* <<elements,Elements>>
+* <<sql_clauses,SQL Clauses>>
+
+For information on specific functions and expressions, see:
+
+* <<sql_functions_and_expressions,SQL Functions and Expressions>>
+* <<olap_functions,OLAP Functions>>
+
+<<<
+[[using_trafodion_sql_to_access_hbase_tables]]
+== Using {project-name} SQL to Access HBase Tables
+
+You can use {project-name} SQL statements to read, update, and create HBase tables.
+
+* <<initializing_the_trafodion_metadata,Initializing the {project-name} Metadata>>
+* <<ways_to_access_hbase_tables,Ways to Access HBase Tables>>
+* <<trafodion_sql_tables_versus_native_hbase_tables,{project-name} SQL Tables Versus Native HBase Tables>>
+* <<supported_sql_statements_with_hbase_tables,Supported SQL Statements With HBase Tables>>
+
+For a list of Control Query Default (CQD) settings for the HBase environment, see the
+{docs-url}/cqd_reference/index.html[{project-name} Control Query Default (CQD) Reference Guide].
+
+[[ways_to_access_hbase_tables]]
+=== Ways to Access HBase Tables
+{project-name} SQL supports these ways to access HBase tables:
+
+* <<accessing_trafodion_sql_tables,Accessing {project-name} SQL Tables>>
+* <<cell_per_row_access_to_hbase_tables,Cell-Per-Row Access to HBase Tables (Technology Preview)>>
+* <<rowwise_access_to_hbase_tables,Rowwise Access to HBase Tables (Technology Preview)>>
+
+<<<
+[[accessing_trafodion_sql_tables]]
+==== Accessing {project-name} SQL Tables
+
+A {project-name} SQL table is a relational SQL table generated by a `CREATE TABLE` statement and mapped
+to an HBase table. {project-name} SQL tables have regular ANSI names in the catalog `TRAFODION`.
+A {project-name} SQL table name can be a fully qualified ANSI name of the form
+`TRAFODION._schema-name.object-name_`.
+
+To access a {project-name} SQL table, specify its ANSI table name in a {project-name} SQL statement, similar
+to how you would specify an ANSI table name when running SQL statements in a relational database.
+
+*Example*
+
+```
+CREATE TABLE trafodion.sales.odetail
+( ordernum NUMERIC (6) UNSIGNED NO DEFAULT NOT NULL
+, partnum NUMERIC (4) UNSIGNED NO DEFAULT NOT NULL
+, unit_price NUMERIC (8,2) NO DEFAULT NOT NULL
+, qty_ordered NUMERIC (5) UNSIGNED NO DEFAULT NOT NULL
+, PRIMARY KEY (ordernum, partnum)
+);
+
+INSERT INTO trafodion.sales.odetail VALUES ( 900000, 7301, 425.00, 100 );
+
+SET SCHEMA trafodion.sales;
+
+SELECT * FROM odetail;
+```
+
+For more information about {project-name} SQL tables, see
+<<trafodion_sql_tables_versus_native_hbase_tables,{project-name} SQL Tables Versus Native HBase Tables>>.
+
+<<<
+[[cell_per_row_access_to_hbase_tables]]
+==== Cell-Per-Row Access to HBase Tables (Technology Preview)
+
+NOTE: This is a _Technology Preview (Complete But Not Tested)_ feature, meaning that it is functionally
+complete but has not been tested or debugged. 
+
+To access HBase data using cell-per-row mode, specify the schema `HBASE."_CELL_"` and the full ANSI
+name of the table as a delimited table name. You can specify the name of any HBase table, regardless of whether
+it was created through {project-name} SQL.
+
+*Example*
+
+```
+select * from hbase."_CELL_"."TRAFODION.MYSCH.MYTAB";
+select * from hbase."_CELL_"."table_created_in_HBase";
+```
+
+All tables accessed through this schema have the same column layout:
+
+```
+>>invoke hbase."_CELL_"."table_created_in_HBase";
+  (
+  ROW_ID        VARCHAR(100)    ...
+, COL_FAMILY    VARCHAR(100)    ...
+, COL_NAME      VARCHAR(100)    ...
+, COL_TIMESTAMP LARGEINT        ...
+, COL_VALUE     VARCHAR(1000) ...
+)
+PRIMARY KEY (ROW_ID)
+
+>>select * from hbase."_CELL_"."mytab";
+```
+
+<<<
+[[rowwise_access_to_hbase_tables]]
+==== Rowwise Access to HBase Tables (Technology Preview)
+
+NOTE: This is a _Technology Preview (Complete But Not Tested)_ feature, meaning that it is functionally
+complete but has not been tested or debugged.
+
+To access HBase data using rowwise mode, specify the schema `HBASE."_ROW_"` and the full ANSI name of the
+table as a delimited table name. You can specify the name of any HBase table, regardless of whether
+it was created through {project-name} SQL.
+
+*Example*
+
+```
+select * from hbase."_ROW_"."TRAFODION.MYSCH.MYTAB";
+select * from hbase."_ROW_"."table_created_in_HBase";
+```
+
+All column values of the row are returned as a single, wide VARCHAR value:
+
+```
+>>invoke hbase."_ROW_"."mytab";
+(
+  ROW_ID VARCHAR(100) ...
+, COLUMN_DETAILS VARCHAR(10000) ...
+)
+PRIMARY KEY (ROW_ID)
+
+>>select * from hbase."_ROW_"."mytab";
+```
+
+<<<
+[[trafodion_sql_tables_versus_native_hbase_tables]]
+=== {project-name} SQL Tables Versus Native HBase Tables
+
+{project-name} SQL tables have many advantages over regular HBase tables:
+
+* They can be made to look like regular, structured SQL tables with fixed columns.
+* They support the usual SQL data types supported in relational databases.
+* They support compound keys, unlike HBase tables that have a single row key (a string).
+* They support indexes.
+* They support _salting_, which is a technique of adding a hash value of the row key as a
+key prefix to avoid hot spots for sequential keys. For the syntax,
+see the <<create_table_statement,CREATE TABLE Statement>>; a short sketch follows this list.
+
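+For illustration, here is a minimal sketch of a salted table; the table itself is hypothetical, and the
+full syntax of the SALT clause is in the <<create_table_statement,CREATE TABLE Statement>>:
+
+```
+CREATE TABLE trafodion.sales.events
+( event_id   LARGEINT NO DEFAULT NOT NULL
+, event_time TIMESTAMP(6) NO DEFAULT NOT NULL
+, payload    VARCHAR(200)
+, PRIMARY KEY (event_id)
+)
+SALT USING 4 PARTITIONS;
+```
+
+Salting spreads sequentially increasing `event_id` values across four HBase regions instead of
+concentrating all inserts in a single region.
+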
+The problem with {project-name} SQL tables is that they use a fixed format to represent column values,
+making it harder for native HBase applications to access them. Also, they have a fixed structure,
+so users lose the flexibility of dynamic columns that comes with HBase.
+
+[[supported_sql_statements_with_hbase_tables]]
+=== Supported SQL Statements With HBase Tables
+
+You can use these SQL statements with HBase tables:
+
+|===
+| <<select_statement,SELECT Statement>>             | <<insert_statement,INSERT Statement>>
+| <<update_statement,UPDATE Statement>>             | <<delete_statement,DELETE Statement>>
+| <<merge_statement,MERGE Statement>>               | <<get_statement,GET Statement>>
+| <<invoke_statement,INVOKE Statement>>             | <<alter_table_statement,ALTER TABLE Statement>>
+| <<create_index_statement,CREATE INDEX Statement>> | <<create_table_statement,CREATE TABLE Statement>>
+| <<create_view_statement,CREATE VIEW Statement>>   | <<drop_index_statement,DROP INDEX Statement>>
+| <<drop_table_statement,DROP TABLE Statement>>     | <<drop_view_statement,DROP VIEW Statement>>
+| <<grant_statement,GRANT Statement>>               | <<revoke_statement,REVOKE Statement>>
+|===
+
+<<<
+[[using_trafodion_sql_to_access_hive_tables]]
+== Using {project-name} SQL to Access Hive Tables
+
+You can use {project-name} SQL statements to access Hive tables.
+
+* <<ansi_names_for_hive_tables,ANSI Names for Hive Tables>>
+* <<type_mapping_from_hive_to_trafodion_sql,Type Mapping From Hive to {project-name} SQL>>
+* <<supported_sql_statements_with_hive_tables,Supported SQL Statements With Hive Tables>>
+
+For a list of Control Query Default (CQD) settings for the Hive environment, see the
+{docs-url}/cqd_reference/index.html[{project-name} Control Query Default (CQD) Reference Guide].
+
+[[ansi_names_for_hive_tables]]
+=== ANSI Names for Hive Tables
+
+Hive tables appear in the {project-name} Hive ANSI name space in a special catalog and schema named `HIVE.HIVE`.
+
+To select from a Hive table named `T`, specify an implicit or explicit name, such as `HIVE.HIVE.T`,
+in a {project-name} SQL statement.
+
+*Example*
+
+This example should work if a Hive table named `T` has already been defined:
+
+```
+set schema hive.hive;
+
+CQD HIVE_MAX_STRING_LENGTH '20'; -- creates a more readable display
+select * from t; -- implicit table name
+
+set schema trafodion.seabase;
+
+select * from hive.hive.t; -- explicit table name
+```
+
+
+<<<
+[[type_mapping_from_hive_to_trafodion_sql]]
+=== Type Mapping From Hive to {project-name} SQL
+
+{project-name} performs the following data-type mappings:
+
+[cols="2*",options="header"]
+|===
+| Hive Type             | {project-name} SQL Type
+| `tinyint`             | `smallint`
+| `smallint`            | `smallint`
+| `int`                 | `int`
+| `bigint`              | `largeint`
+| `string`              | `varchar(_n_ bytes) character set utf8`^1^
+| `float`               | `real`
+| `double`              | `float(54)`
+| `timestamp`           | `timestamp(6)`^2^
+|===
+
+1. The value `_n_` is determined by `CQD HIVE_MAX_STRING_LENGTH`. See the
+{docs-url}/cqd_reference/index.html[{project-name} Control Query Default (CQD) Reference Guide].
+2. Hive supports timestamps with nanosecond resolution (precision of 9). {project-name} SQL supports only microsecond resolution (precision 6).
+
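+For example, you can check these mappings with an INVOKE statement. A minimal sketch, assuming a
+hypothetical Hive table `t` with one `string` column and one `int` column:
+
+```
+CQD HIVE_MAX_STRING_LENGTH '20';
+
+INVOKE hive.hive.t;
+-- the string column is reported as VARCHAR(20 BYTES) CHARACTER SET UTF8
+-- the int column is reported as INT
+```
+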
+[[supported_sql_statements_with_hive_tables]]
+=== Supported SQL Statements With Hive Tables
+
+You can use these SQL statements with Hive tables:
+
+* <<select_statement,SELECT Statement>>
+* <<load_statement,LOAD Statement>>
+* GET TABLES (See the <<get_statement,GET Statement>>.)
+* <<invoke_statement,INVOKE Statement>>
+
+<<<
+[[data_consistency_and_access_options]]
+== Data Consistency and Access Options
+
+Access options for DML statements affect the consistency of the data that your query accesses.
+
+For any DML statement, you specify access options by using the `FOR _option_ ACCESS` clause. For a
+`SELECT` statement, you can also use this clause to specify access options for individual
+tables and views referenced in the FROM clause.
+
+The possible settings for `_option_` in a DML statement are:
+
+* <<read_committed,READ COMMITTED>>
+
+Specifies that the data accessed by the DML statement must be from committed rows.
+
+The SQL default access option for DML statements is `READ COMMITTED`.
+
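+For example, this statement requests committed data explicitly (a sketch that reuses the sample
+`odetail` table from <<accessing_trafodion_sql_tables,Accessing {project-name} SQL Tables>>):
+
+```
+SELECT * FROM trafodion.sales.odetail FOR READ COMMITTED ACCESS;
+```
+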
+For related information about transactions, see
+<<transaction_isolation_levels,Transaction Isolation Levels>>.
+
+[[read_committed]]
+=== READ COMMITTED
+
+This option allows you to access only committed data.
+
+The implementation requires that a lock can be acquired on the data requested by the DML statement, but
+it does not actually lock the data, thereby reducing lock-request conflicts. If a lock cannot be granted
+(implying that the row contains uncommitted data), the DML statement request waits until the lock in
+place is released.
+
+READ COMMITTED provides the next higher level of data consistency (compared to READ UNCOMMITTED).
+A statement executing with this access option does not allow dirty reads, but both non-repeatable reads
+and phantoms are possible.
+
+READ COMMITTED provides sufficient consistency for any process that does not require a repeatable read
+capability.
+
+READ COMMITTED is the default isolation level.
+
+<<<
+[[transaction_management]]
+== Transaction Management
+
+A transaction (a set of database changes that must be completed as a group) is the basic recoverable unit
+in case of a failure or transaction interruption. Transactions are controlled through client tools that
+interact with the database using ODBC or JDBC.
+
+The typical order of events is:
+
+1.  Transaction is started.
+2.  Database changes are made.
+3.  Transaction is committed.
+
+If, however, the changes cannot be made or if you do not want to complete the transaction, then you can abort
+the transaction so that the database is rolled back to its original state.
+
+This subsection discusses these considerations for transaction management:
+
+* <<user_defined_and_system_defined_transactions,User-Defined and System-Defined Transactions>>
+* <<rules_for_dml_statements,Rules for DML Statements>>
+* <<effect_of_autocommit_option,Effect of AUTOCOMMIT Option>>
+* <<concurrency,Concurrency>>
+* <<transaction_isolation_levels,Transaction Isolation Levels>>
+
+[[user_defined_and_system_defined_transactions]]
+=== User-Defined and System-Defined Transactions
+Transactions you define are called _user-defined transactions_. To be sure that a sequence of statements executes
+successfully or not at all, you can define one transaction consisting of these statements by using the BEGIN WORK
+statement and COMMIT WORK statement. You can abort a transaction by using the ROLLBACK WORK statement.
+
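+A minimal sketch of a user-defined transaction, reusing the sample `odetail` table from earlier in
+this introduction; the two changes commit together or, with ROLLBACK WORK, not at all:
+
+```
+BEGIN WORK;
+
+INSERT INTO trafodion.sales.odetail VALUES ( 900001, 7301, 425.00, 50 );
+UPDATE trafodion.sales.odetail SET qty_ordered = 200 WHERE ordernum = 900000;
+
+COMMIT WORK;  -- or ROLLBACK WORK; to undo both changes
+```
+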
+If AUTOCOMMIT is on, then you do not have to end the transaction explicitly as {project-name} SQL will end the transaction
+automatically. Sometimes an error occurs that requires the user-defined transaction to be aborted. {project-name} SQL
+will automatically abort the transaction and return an error indicating that the transaction was rolled back.
+
+<<<
+[[system_defined_transactions]]
+==== System-Defined Transactions
+
+In some cases, {project-name} SQL defines transactions for you. These transactions are called _system-defined transactions_.
+Most DML statements initiate transactions implicitly at the start of execution.
+See <<implicit_transactions,Implicit Transactions>>.
+
+However, even if a transaction is initiated implicitly, you must end a transaction explicitly with the COMMIT WORK
+statement or the ROLLBACK WORK statement. If AUTOCOMMIT is on, you do not need to end a transaction explicitly.
+
+[[rules_for_dml_statements]]
+=== Rules for DML Statements
+
+If a deadlock occurs, the DML statement times out and returns an error.
+
+[[effect_of_autocommit_option]]
+=== Effect of AUTOCOMMIT Option
+
+AUTOCOMMIT is an option that can be set in a SET TRANSACTION statement. It specifies whether, at the end of statement
+execution, {project-name} SQL automatically commits changes or, if an error occurs, rolls them back. This option applies to any
+statement for which the system initiates a transaction. See <<set_transaction_statement,SET TRANSACTION Statement>>.
+
+If this option is set to ON, {project-name} SQL automatically commits or rolls back any changes made to the
+database at the end of statement execution.
+
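+A minimal sketch, assuming the AUTOCOMMIT option of the <<set_transaction_statement,SET TRANSACTION Statement>>;
+with AUTOCOMMIT turned off, the delete is not committed until the explicit COMMIT WORK:
+
+```
+SET TRANSACTION AUTOCOMMIT OFF;
+
+DELETE FROM trafodion.sales.odetail WHERE ordernum = 900001;
+
+COMMIT WORK;
+```
+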
+[[concurrency]]
+=== Concurrency
+
+Concurrency occurs when two or more processes access the same data at the same time. The degree of concurrency
+available (whether a process that requests access to data that is already being accessed is given access or placed
+in a wait queue) depends on the access mode (read or update) and the isolation level. Currently, the only
+isolation level is READ COMMITTED.
+
+{project-name} SQL provides concurrent database access for most operations and controls database access through concurrency
+control and the mechanism for opening and closing tables. For DML operations, the access option affects the degree of
+concurrency. See <<data_consistency_and_access_options,Data Consistency and Access Options>>.
+
+<<<
+[[transaction_isolation_levels]]
+=== Transaction Isolation Levels
+
+Every transaction runs at the <<read_committed_isolation_level,READ COMMITTED>> isolation level, currently the only level supported.
+
+[[read_committed_isolation_level]]
+==== READ COMMITTED
+
+This option, which is ANSI compliant, allows your transaction to access only committed data. No row locks are acquired
+when READ COMMITTED is the specified isolation level.
+
+READ COMMITTED provides the next higher level of data consistency above READ UNCOMMITTED. A transaction executing with this
+isolation level does not allow dirty reads, but both non-repeatable reads and phantoms are possible.
+
+READ COMMITTED provides sufficient consistency for any transaction that does not require a repeatable-read capability.
+
+The default isolation level is READ COMMITTED.
+
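+You can also state the level explicitly for the next transaction; a minimal sketch using the
+<<set_transaction_statement,SET TRANSACTION Statement>>:
+
+```
+SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
+```
+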
+<<<
+[[ansi_compliance_and_trafodion_sql_extensions]]
+== ANSI Compliance and {project-name} SQL Extensions
+
+{project-name} SQL complies most closely with Core SQL:1999. It also includes some features from SQL:1999, parts of
+the SQL:2003 standard, and special {project-name} SQL extensions to the SQL language.
+
+Statements and SQL elements in this manual are ANSI compliant unless specified as {project-name} SQL extensions.
+
+[[ansi_compliant_statements]]
+=== ANSI-Compliant Statements
+
+These statements are ANSI compliant, but some might contain {project-name} SQL extensions:
+
+|===
+| <<alter_table_statement,ALTER TABLE Statement>>           | <<call_statement,CALL Statement>>
+| <<commit_work_statement,COMMIT WORK Statement>>           | <<create_function_statement,CREATE FUNCTION Statement>>
+| <<create_procedure_statement,CREATE PROCEDURE Statement>> | <<create_role_statement,CREATE ROLE Statement>>
+| <<create_schema_statement,CREATE SCHEMA Statement>>       | <<create_table_statement,CREATE TABLE Statement>>
+| <<create_view_statement,CREATE VIEW Statement>>           | <<delete_statement,DELETE Statement>>
+| <<drop_function_statement,DROP FUNCTION Statement>>       | <<drop_procedure_statement,DROP PROCEDURE Statement>>
+| <<drop_role_statement,DROP ROLE Statement>>               | <<drop_schema_statement,DROP SCHEMA Statement>>
+| <<drop_table_statement,DROP TABLE Statement>>             | <<drop_view_statement,DROP VIEW Statement>>
+| <<execute_statement,EXECUTE Statement>>                   | <<grant_statement,GRANT Statement>>
+| <<grant_role_statement,GRANT ROLE Statement>>             | <<insert_statement,INSERT Statement>>
+| <<merge_statement,MERGE Statement>>                       | <<prepare_statement,PREPARE Statement>>
+| <<revoke_statement,REVOKE Statement>>                     | <<revoke_role_statement,REVOKE ROLE Statement>>
+| <<rollback_work_statement,ROLLBACK WORK Statement>>       | <<select_statement,SELECT Statement>>
+| <<set_schema_statement,SET SCHEMA Statement>>             | <<set_transaction_statement,SET TRANSACTION Statement>>
+| <<table_statement,TABLE Statement>>                       | <<update_statement,UPDATE Statement>>
+| <<values_statement,VALUES Statement>>
+|===
+
+<<<
+[[statements_that_are_trafodion_sql_extensions]]
+=== Statements That Are {project-name} SQL Extensions
+
+These statements are {project-name} SQL extensions to the ANSI standard.
+
+|===
+| <<alter_library_statement,ALTER LIBRARY Statement>>                           | <<alter_user_statement,ALTER USER Statement>>
+| <<begin_work_statement,BEGIN WORK Statement>>                                 | <<control_query_cancel_statement,CONTROL QUERY CANCEL Statement>>
+| <<control_query_default_statement,CONTROL QUERY DEFAULT Statement>>           | <<create_index_statement,CREATE INDEX Statement>>
+| <<create_library_statement,CREATE LIBRARY Statement>>                         | <<drop_index_statement,DROP INDEX Statement>>
+| <<drop_library_statement,DROP LIBRARY Statement>>                             | <<explain_statement,EXPLAIN Statement>>
+| <<get_statement,GET Statement>>                                               | <<get_hbase_objects_statement,GET HBASE OBJECTS Statement>>
+| <<get_version_of_metadata_statement,GET VERSION OF METADATA Statement>>       | <<get_version_of_software_statement,GET VERSION OF SOFTWARE Statement>>
+| <<grant_component_privilege_statement,GRANT COMPONENT PRIVILEGE Statement>>   | <<invoke_statement,INVOKE Statement>>
+| <<load_statement,LOAD Statement>>                                             | <<register_user_statement,REGISTER USER Statement>>
+| <<revoke_component_privilege_statement,REVOKE COMPONENT PRIVILEGE Statement>> | <<showcontrol_statement,SHOWCONTROL Statement>>
+| <<showddl_statement,SHOWDDL Statement>>                                       | <<showddl_schema_statement,SHOWDDL SCHEMA Statement>>
+| <<showstats_statement,SHOWSTATS Statement>>                                   | <<unload_statement,UNLOAD Statement>>
+| <<unregister_user_statement,UNREGISTER USER Statement>>                       | <<update_statistics_statement,UPDATE STATISTICS Statement>>
+| <<upsert_statement,UPSERT Statement>>
+|===
+
+<<<
+[[ansi_compliant_functions]]
+=== ANSI-Compliant Functions
+
+These functions are ANSI compliant, but some might contain {project-name} SQL extensions:
+
+|===
+| <<avg,AVG function>>          | <<case, CASE expression>>
+| <<cast,CAST expression>>      | <<char_length,CHAR_LENGTH>>
+| <<coalesce,COALESCE>>         | <<count,COUNT Function>>
+| <<current,CURRENT>>           | <<current_date,CURRENT_DATE>>
+| <<current_time,CURRENT_TIME>> | <<current_timestamp,CURRENT_TIMESTAMP>>
+| <<current_user,CURRENT_USER>> | <<extract,EXTRACT>>
+| <<lower,LOWER>>               | <<max,MAX>>
+| <<min,MIN>>                   | <<nullif,NULLIF>>
+| <<octet_length,OCTET_LENGTH>> | <<position,POSITION>>
+| <<session_user,SESSION_USER>> | <<substring,SUBSTRING>>
+| <<sum,SUM>>                   | <<trim,TRIM>>
+| <<upper,UPPER>>
+|===
+
+All other functions are {project-name} SQL extensions.
+
+<<<
+[[trafodion_sql_error_messages]]
+== {project-name} SQL Error Messages
+
+{project-name} SQL reports error messages and exception conditions. When an error condition occurs,
+{project-name} SQL returns a message number and a brief description of the condition.
+
+*Example*
+
+{project-name} SQL might display this error message:
+
+```
+*** ERROR[1000] A syntax error occurred.
+```
+
+The message number is the SQLCODE value (without the sign). In this example, the SQLCODE value is `1000`.
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/sql_reference/src/asciidoc/_chapters/limits.adoc
----------------------------------------------------------------------
diff --git a/docs/sql_reference/src/asciidoc/_chapters/limits.adoc b/docs/sql_reference/src/asciidoc/_chapters/limits.adoc
index 5bbe2f4..6c107da 100644
--- a/docs/sql_reference/src/asciidoc/_chapters/limits.adoc
+++ b/docs/sql_reference/src/asciidoc/_chapters/limits.adoc
@@ -1,37 +1,37 @@
-////
-/**
-* @@@ START COPYRIGHT @@@
-*
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*   http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing,
-* software distributed under the License is distributed on an
-* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-* KIND, either express or implied.  See the License for the
-* specific language governing permissions and limitations
-* under the License.
-*
-* @@@ END COPYRIGHT @@@
-*/
-////
-
-[[limits]]
-= Limits
-
-This section lists limits for various parts of {project-name} SQL.
-
-[cols="30%h,70%"]
-|===
-| Column Names | Up to 128 characters long, or 256 bytes of UTF8 text, whichever is less.
-| Schema Names | Up to 128 characters long, or 256 bytes of UTF8 text, whichever is less.
-| Table Names  | ANSI names are of the form _schema.object_, where each part can be up to 128 characters long,
-or 256 bytes of UTF8 text, whichever is less.
-|===
+////
+/**
+* @@@ START COPYRIGHT @@@
+*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*   http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing,
+* software distributed under the License is distributed on an
+* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+* KIND, either express or implied.  See the License for the
+* specific language governing permissions and limitations
+* under the License.
+*
+* @@@ END COPYRIGHT @@@
+*/
+////
+
+[[limits]]
+= Limits
+
+This section lists limits for various parts of {project-name} SQL.
+
+[cols="30%h,70%"]
+|===
+| Column Names | Up to 128 characters long, or 256 bytes of UTF8 text, whichever is less.
+| Schema Names | Up to 128 characters long, or 256 bytes of UTF8 text, whichever is less.
+| Table Names  | ANSI names are of the form _schema.object_, where each part can be up to 128 characters long,
+or 256 bytes of UTF8 text, whichever is less.
+|===


[03/15] incubator-trafodion git commit: Major reorganization of the Client Installation Guide.

Posted by gt...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/sql_reference/src/asciidoc/_chapters/sql_statements.adoc
----------------------------------------------------------------------
diff --git a/docs/sql_reference/src/asciidoc/_chapters/sql_statements.adoc b/docs/sql_reference/src/asciidoc/_chapters/sql_statements.adoc
index 85e27f9..2bc2a6d 100644
--- a/docs/sql_reference/src/asciidoc/_chapters/sql_statements.adoc
+++ b/docs/sql_reference/src/asciidoc/_chapters/sql_statements.adoc
@@ -1,8495 +1,8509 @@
-////
-/**
-* @@@ START COPYRIGHT @@@
-*
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*   http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing,
-* software distributed under the License is distributed on an
-* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-* KIND, either express or implied.  See the License for the
-* specific language governing permissions and limitations
-* under the License.
-*
-* @@@ END COPYRIGHT @@@
-*/
-////
-
-[[sql_statements]]
-= SQL Statements
-
-This section describes the syntax and semantics of {project-name} SQL statements.
-
-{project-name} SQL statements are entered interactively or from script files using a client-based tool, such as the
-{project-name} Command Interface (TrafCI). To install and configure a client application that enables you to connect
-to and use a {project-name} database, see the
-{docs-url}/client_install/index.html[_{project-name} Client Installation_ _Guide_].
-
-[[sql_statements_categories]]
-== Categories
-
-The statements are categorized according to their functionality:
-
-* <<data_definition_language_statements,Data Definition Language (DDL) Statements>>
-* <<data_manipulation_language_statements,Data Manipulation Language (DML) Statements>>
-* <<transaction_control_statements,Transaction Control Statements>>
-* <<data_control_and_security_statements,Data Control and Security Statements>>
-* <<stored_procedure_and_user_defined_function_statements,Stored Procedure and User-Defined Function Statements>>
-* <<prepared_statements,Prepared Statements>>
-* <<control_statements,Control Statements>>
-* <<object_naming_statements,Object Naming Statements>>
-* <<show_get_and_explain_statements,"SHOW, GET, and EXPLAIN Statements">>
-
-<<<
-[[data_definition_language_statements]]
-=== Data Definition Language (DDL) Statements
-
-Use these DDL statements to create, drop, or alter the definition of a {project-name} SQL schema or object.
-
-NOTE: DDL statements are not currently supported in transactions. That means that you cannot run DDL statements inside a user-defined
-transaction (BEGIN WORK&#8230;COMMIT WORK) or when AUTOCOMMIT is OFF. To run these statements, AUTOCOMMIT must be turned ON
-(the default) for the session.
-
-[cols="2*", options="head{docs-url}/sql_reference/index.html#limitser"]
-|===
-| Statement                                                  | What It Does
-// | <<alter_library_statement,ALTER LIBRARY Statement>>        | Updates the physical filename for a library object in a {project-name} database.
-| <<alter_table_statement,ALTER TABLE Statement>>            | Changes attributes for a table.
-| <<alter_user_statement,ALTER USER Statement>>              | Changes attributes for a user.
-| <<create_function_statement,CREATE FUNCTION Statement>>    | Registers a user-defined function (UDF) written in C as a function within a {project-name} database.
-| <<create_index_statement,CREATE INDEX Statement>>          | Creates an index on a table.
-| <<create_library_statement,CREATE LIBRARY Statement>>      | Registers a library object in a {project-name} database.
-| <<create_procedure_statement,CREATE PROCEDURE Statement>>  | Registers a Java method as a stored procedure in Java (SPJ) within a {project-name} database.
-| <<create_role_statement,CREATE ROLE Statement>>            | Creates a role.
-| <<create_schema_statement,CREATE SCHEMA Statement>>        | Creates a schema in the database.
-| <<create_table_statement,CREATE TABLE Statement>>          | Creates a table.
-| <<create_view_statement,CREATE VIEW Statement>>            | Creates a view.
-| <<drop_function_statement,DROP FUNCTION Statement>>        | Removes a user-defined function (UDF) from the {project-name} database.
-| <<drop_index_statement,DROP INDEX Statement>>              | Drops an index.
-| <<drop_library_statement,DROP LIBRARY Statement>>          | Removes a library object from the {project-name} database and also removes the library file
-referenced by the library object.
-| <<drop_procedure_statement,DROP PROCEDURE Statement>>      | Removes a stored procedure in Java (SPJ) from the {project-name} database.
-| <<drop_role_statement,DROP ROLE Statement>>                | Drops a role.
-| <<drop_schema_statement,DROP SCHEMA Statement>>            | Drops a schema from the database.
-| <<drop_table_statement,DROP TABLE Statement>>              | Drops a table.
-| <<drop_view_statement,DROP VIEW Statement>>                | Drops a view.
-| <<register_user_statement,REGISTER USER Statement>>        | Registers a user in the SQL database, associating the user's login name
-with a database user name.
-| <<unregister_user_statement, UNREGISTER USER Statement>>   | Removes a database user name from the SQL database.
-|===
-
-
-<<<
-[[data_manipulation_language_statements]]
-=== Data Manipulation Language (DML) Statements
-
-Use these DML statements to delete, insert, select, or update rows in one or more tables:
-
-[cols="2*", options="header"]
-|===
-| Statement                               | What It Does
-| <<delete_statement,DELETE Statement>> | Deletes rows from a table or view.
-| <<insert_statement,INSERT Statement>> | Inserts data into tables and views.
-| <<merge_statement,MERGE Statement>>   | Either performs an upsert operation (that is, updates a table if the row
-exists or inserts into a table if the row does not exist) or updates (merges) matching rows from one table to another.
-| <<select_statement,SELECT Statement>> | Retrieves data from tables and views.
-| <<table_statement,TABLE Statement>>   | Equivalent to the query specification SELECT * FROM _table_.
-| <<update_statement,UPDATE Statement>> | Updates values in columns of a table or view.
-| <<upsert_statement,UPSERT Statement>> | Updates a table if the row exists or inserts into a table if the row does not exist.
-| <<values_statement,VALUES Statement>> | Displays the results of the evaluation of the expressions and the results of row subqueries
-within the row value constructors.
-|===
-
-[[transaction_control_statements]]
-=== Transaction Control Statements
-
-Use these statements to specify user-defined transactions and to set attributes for the next transaction:
-
-[cols="2*",options="header"]
-|===
-| Statement                                                 | What It Does
-| <<begin_work_statement,BEGIN WORK Statement>>           | Starts a transaction.
-| <<commit_work_statement,COMMIT WORK Statement>>         | Commits changes made during a transaction and ends the transaction.
-| <<rollback_work_statement,ROLLBACK WORK Statement>>     | Undoes changes made during a transaction and ends the transaction.
-| <<set_transaction_statement,SET TRANSACTION Statement>> | Sets attributes for the next SQL transaction, such as whether to automatically
-commit database changes.
-|===
-
-<<<
-[[data_control_and_security_statements]]
-=== Data Control and Security Statements
-
-Use these statements to register users, create roles, and grant and revoke privileges:
-
-[cols="2*",options="header"]
-|===
-| Statement                                                                     | What It Does
-| <<alter_user_statement,ALTER USER Statement>>                                 | Changes attributes associated with a user who is registered in the database.
-| <<create_role_statement,CREATE ROLE Statement>>                               | Creates an SQL role.
-| <<drop_role_statement,DROP ROLE Statement>>                                   | Deletes an SQL role.
-| <<grant_statement,GRANT Statement>>                                           | Grants access privileges on an SQL object to specified users or roles.
-| <<grant_component_privilege_statement,GRANT COMPONENT PRIVILEGE Statement>>   | Grants one or more component privileges to a user or role.
-| <<grant_role_statement,GRANT ROLE Statement>>                                 | Grants one or more roles to a user.
-| <<register_user_statement,REGISTER USER Statement>>                           | Registers a user in the SQL database, associating the user's login name with a database user name.
-| <<revoke_statement,REVOKE Statement>>                                         | Revokes access privileges on an SQL object from specified users or roles.
-| <<revoke_component_privilege_statement,REVOKE COMPONENT PRIVILEGE Statement>> | Removes one or more component privileges from a user or role.
-| <<revoke_role_statement,REVOKE ROLE Statement>>                               | Removes one or more roles from a user.
-| <<unregister_user_statement,UNREGISTER USER Statement>>                       | Removes a database user name from the SQL database.
-|===
-
-<<<
-[[stored_procedure_and_user_defined_function_statements]]
-=== Stored Procedure and User-Defined Function Statements
-
-Use these statements to create and execute stored procedures in Java (SPJs), to create user-defined functions (UDFs), and to modify
-authorization to access libraries or to execute SPJs or UDFs:
-
-[cols="2*",options="header"]
-|===
-| Statement                                                 | What It Does
-// | <<alter_library_statement,ALTER LIBRARY Statement>>       | Updates the physical filename for a library object in a {project-name} database.
-| <<call_statement,CALL Statement>>                         | Initiates the execution of a stored procedure in Java (SPJ) in a {project-name} database.
-| <<create_function_statement,CREATE FUNCTION Statement>>   | Registers a user-defined function (UDF) written in C as a function within a {project-name} database.
-| <<create_library_statement,CREATE LIBRARY Statement>>     | Registers a library object in a {project-name} database.
-| <<create_procedure_statement,CREATE PROCEDURE Statement>> | Registers a Java method as a stored procedure in Java (SPJ) within a {project-name} database.
-| <<drop_function_statement,DROP FUNCTION Statement>>       | Removes a user-defined function (UDF) from the {project-name} database.
-| <<drop_library_statement,DROP LIBRARY Statement>>         | Removes a library object from the {project-name} database and also removes the library file
-referenced by the library object.
-| <<drop_procedure_statement,DROP PROCEDURE Statement>>     | Removes a stored procedure in Java (SPJ) from the {project-name} database.
-| <<grant_statement,GRANT Statement>>                       | Grants privileges for accessing a library object or executing an SPJ or UDF to specified users.
-| <<revoke_statement,REVOKE Statement>>                     | Revokes privileges for accessing a library object or executing an SPJ or UDF from specified users.
-|===
-
-[[prepared_statements]]
-=== Prepared Statements
-
-Use these statements to prepare and execute an SQL statement:
-
-[cols="2*",options="header"]
-|===
-| Statement                                                 | What It Does
-| <<execute_statement,EXECUTE Statement>>                   | Executes an SQL statement previously compiled by a PREPARE statement.
-| <<prepare_statement,PREPARE Statement>>                   | Compiles an SQL statement for later use with the EXECUTE statement in the same session.
-|===
-
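-For example, a statement can be compiled once and executed repeatedly in the same session. A minimal
-sketch (the statement name `findorder`, the table, and the parameter value are illustrative):
-
-```
-PREPARE findorder FROM
-  SELECT * FROM trafodion.sales.odetail WHERE ordernum = ?;
-
-EXECUTE findorder USING 900000;
-```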
-
-<<<
-[[control_statements]]
-=== Control Statements
-
-Use these statements to control the execution, default options, plans, and performance of DML statements:
-
-[cols="2*",options="header"]
-|===
-| Statement                                                                     | What It Does
-| <<control_query_cancel_statement,CONTROL QUERY CANCEL Statement>>             | Cancels an executing query that you identify with a query ID.
-| <<control_query_default_statement,CONTROL QUERY DEFAULT Statement>>           | Changes a default attribute to influence a query plan.
-|===
-
-[[object_naming_statements]]
-=== Object Naming Statements
-
-Use this statement to specify the default ANSI schema for unqualified object names:
-
-[cols="2*",options="header"]
-|===
-| Statement                                        | What It Does
-| <<set_schema_statement,SET SCHEMA Statement>>    | Sets the default ANSI schema for unqualified object names for the current session.
-|===
-
-<<<
-[[show_get_and_explain_statements]]
-=== SHOW, GET, and EXPLAIN Statements
-
-Use these statements to display information about database objects or query execution plans:
-
-[cols="2*",options="header"]
-|===
-| Statement                                                               | What It Does
-| <<explain_statement,EXPLAIN Statement>>                                 | Displays information contained in the query execution plan.
-| <<get_statement,GET Statement>>                                         | Displays the names of database objects, components, component
-privileges, roles, or users that exist in the {project-name} instance.
-| <<get_hbase_objects_statement,GET HBASE OBJECTS Statement>>             | Displays a list of HBase objects through an SQL interface.
-| <<get_version_of_metadata_statement,GET VERSION OF METADATA Statement>> | Displays the version of the metadata in the {project-name} instance and
-indicates if the metadata is current.
-| <<get_version_of_software_statement,GET VERSION OF SOFTWARE Statement>> | Displays the version of the {project-name} software that is installed on the
-system and indicates if it is current.
-| <<invoke_statement,INVOKE Statement>>                                   | Generates a record description that corresponds to a row in the
-specified table or view.
-| <<showcontrol_statement,SHOWCONTROL Statement>>                         | Displays the CONTROL QUERY DEFAULT attributes in effect.
-| <<showddl_statement,SHOWDDL Statement>>                                 | Describes the DDL syntax used to create an object as it exists in the
-metadata, or it returns a description of a user, role, or component in the form of a GRANT statement.
-| <<showddl_schema_statement,SHOWDDL SCHEMA Statement>>                   | Displays the DDL syntax used to create a schema as it exists in the
-metadata and shows the authorization ID that owns the schema.
-| <<showstats_statement,SHOWSTATS Statement>>                             | Displays the histogram statistics for one or more groups of columns
-within a table. These statistics are used to devise optimized access plans.
-
-|===
-
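-For example, to list the tables in a schema and then display the DDL for one of them (a sketch; the
-schema and table names are illustrative):
-
-```
-GET TABLES IN SCHEMA trafodion.sales;
-
-SHOWDDL trafodion.sales.odetail;
-```
-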
-////
-<<<
-[[alter_library_statement]]
-== ALTER LIBRARY Statement
-
-The ALTER LIBRARY statement updates the physical filename for a library object in a {project-name} database.
-A library object can be an SPJ's JAR file or a UDF's library file.
-
-ALTER LIBRARY is a {project-name} SQL extension.
-
-NOTE: DDL statements are not currently supported in transactions. That means that you cannot run this statement inside a
-user-defined transaction (`BEGIN WORK&#8230;COMMIT WORK`) or when AUTOCOMMIT is OFF. To run this statement, AUTOCOMMIT must be
-turned ON (the default) for the session.
-
-```
-ALTER LIBRARY [[catalog-name.]schema-name.]library-name
-   FILE library-filename
-   [HOST NAME host-name]
-   [LOCAL FILE host-filename]
-```
-
-[[alter_library_syntax]]
-=== Syntax Description of ALTER LIBRARY
-
-* `\[[_catalog-name_.]_schema-name_.]_library-name_`
-+
-specifies the ANSI logical name of the library object, where each part of the name is a valid SQL identifier with a maximum of 128 characters.
-Specify the name of a library object that has already been registered in the schema. If you do not fully qualify the library name, {project-name} SQL
-qualifies it according to the schema of the current session. For more information, see <<identifiers,Identifiers>> and
-<<database_object_names,Database Object Names>>.
-
-* `FILE _library-filename_`
-+
-specifies the full path of the redeployed library file, which is either an SPJ's JAR file or a UDF's library file.
-
-* `HOST NAME _host-name_`
-+
-specifies the name of the client host machine where the deployed file resides.
-
-* `LOCAL FILE _host-filename_`
-+
-specifies the path on the client host machine where the deployed file is stored.
-
-<<<
-[[alter_library_considerations]]
-=== Considerations for ALTER LIBRARY
-
-* HOST NAME and LOCAL FILE are position dependent.
-
-==== Required Privileges
-
-To issue an ALTER LIBRARY statement, one of the following must be true:
-
-* You are DB ROOT.
-* You are the owner of the library.
-* You have the ALTER or ALTER_LIBRARY component privilege for the SQL_OPERATIONS component.
-
-[[alter_library_examples]]
-=== Examples of ALTER LIBRARY
-
-* This ALTER LIBRARY statement updates the JAR file (SPJs) for a library named SALESLIB in the SALES schema:
-+
-```
-ALTER LIBRARY sales.saleslib FILE Sales2.jar;
-```
-
-* This ALTER LIBRARY statement updates the library file (UDFs) for a library named MYUDFS in the default schema:
-+
-```
-ALTER LIBRARY myudfs FILE $TMUDFLIB;
-```
-////
-
-<<<
-[[alter_table_statement]]
-== ALTER TABLE Statement
-
-The ALTER TABLE statement changes a {project-name} SQL table. See <<Tables,Tables>>.
-
-NOTE: DDL statements are not currently supported in transactions. That means that you cannot run this
-statement inside a user-defined transaction (BEGIN WORK&#8230;COMMIT WORK) or when AUTOCOMMIT is OFF.
-To run this statement, AUTOCOMMIT must be turned ON (the default) for the session.
-
-```
-ALTER TABLE name alter-action
-
-alter-action is:
-
-     ADD [IF NOT EXISTS][COLUMN] column-definition
-   | ADD [CONSTRAINT constraint-name] table-constraint
-   | DROP CONSTRAINT constraint-name [RESTRICT]
-   | RENAME TO new-name
-   | DROP COLUMN [IF EXISTS] column-name
-
-column-definition is:
-
-   column-name data-type
-      [DEFAULT default]
-      [[CONSTRAINT constraint-name] column-constraint]
-
-data-type is:
-
-     CHAR[ACTER] [(length) [CHARACTERS]]
-         [CHARACTER SET char-set-name]
-         [UPSHIFT] [[NOT] CASESPECIFIC]
-   | CHAR[ACTER] VARYING (length)
-         [CHARACTER SET char-set-name]
-         [UPSHIFT] [[NOT] CASESPECIFIC]
-   | VARCHAR (length) [CHARACTER SET char-set-name]
-         [UPSHIFT] [[NOT] CASESPECIFIC]
-   | NUMERIC [(precision [,scale])] [SIGNED|UNSIGNED]
-   | NCHAR [(length)] [CHARACTER SET char-set-name]
-         [UPSHIFT] [[NOT] CASESPECIFIC]
-   | NCHAR VARYING (length) [CHARACTER SET char-set-name]
-         [UPSHIFT] [[NOT] CASESPECIFIC]
-   | SMALLINT [SIGNED|UNSIGNED]
-   | INT[EGER] [SIGNED|UNSIGNED]
-   | LARGEINT
-   | DEC[IMAL] [(precision [,scale])] [SIGNED|UNSIGNED]
-   | FLOAT [(precision)]
-   | REAL
-   | DOUBLE PRECISION
-   | DATE
-   | TIME [(time-precision)]
-   | TIMESTAMP [(timestamp-precision)]
-   | INTERVAL { start-field TO end-field | single-field }
-
-default is:
-
-     literal
-   | NULL
-   | CURRENT_DATE
-   | CURRENT_TIME
-   | CURRENT_TIMESTAMP
-
-column-constraint is:
-
-     NOT NULL
-   | UNIQUE
-   | CHECK (condition)
-   | REFERENCES ref-spec
-
-table-constraint is:
-
-     UNIQUE (column-list)
-   | CHECK (condition)
-   | FOREIGN KEY (column-list) REFERENCES ref-spec
-
-ref-spec is:
-
-   referenced-table [(column-list)]
-
-column-list is:
-
-   column-name[, column-name]...
-```
-
-<<<
-[[alter_table_syntax]]
-=== Syntax Description of ALTER TABLE
-
-* `_name_`
-+
-specifies the current name of the object. See <<database_object_names,Database Object Names>>.
-
-* `ADD [COLUMN] _column-definition_`
-+
-adds a column to _table_.
-+
-The clauses for the _column-definition_ are:
-
-** `_column-name_`
-+
-specifies the name for the new column in the table. _column-name_ is an SQL identifier. _column-name_ must be
-unique among column names in the table. If the column name is a {project-name} SQL reserved word, you must
-delimit it by enclosing it in double quotes. For example: `"sql".myview`. See <<Identifiers,Identifiers>>.
-
-** `_data-type_`
-+
-specifies the data type of the values that can be stored in _column-name_. See <<Data_Types,Data Types>>.
-If a default is not specified, NULL is used.
-
-** `DEFAULT _default_`
-+
-specifies a default value for the column or specifies that the column does not have a default value. You can declare the default value
-explicitly by using the DEFAULT clause, or you can enable null to be used as the default by omitting both the DEFAULT and NOT NULL clauses.
-If you omit the DEFAULT clause and specify NOT NULL, {project-name} SQL returns an error. For existing rows of the table, the added column takes
-on its default value.
-+
-If you set the default to the datetime value CURRENT_DATE, CURRENT_TIME, or CURRENT_TIMESTAMP, {project-name} SQL uses January 1, 1 A.D.
-12:00:00.000000 as the default date and time for the existing rows.
-+
-For any row that you add after the column is added, if no value is specified for the column as part of the add row operation, the column
-receives a default value based on the current timestamp at the time the row is added.
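-+
-For example, a sketch (the table and column names are illustrative) that adds a nullable column whose
-default is CURRENT_DATE; existing rows get the January 1, 1 A.D. default described above, while rows
-added later default to the current date:
-+
-```
-ALTER TABLE trafodion.sales.odetail
-  ADD COLUMN ship_date DATE DEFAULT CURRENT_DATE;
-```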
-
-<<<
-** `[[constraint _constraint-name_] _column-constraint_]`
-+
-specifies a name for the column or table constraint. _constraint-name_ must have the same schema as _table_ and must be
-unique among constraint names in its schema. If you omit the schema portions of the name you specify in _constraint-name_,
-{project-name} SQL expands the constraint name by using the schema for _table_. See <<database_object_names,Database Object Names>>.
-+
-If you do not specify a constraint name, {project-name} SQL constructs an SQL identifier as the name for the constraint in the schema
-for _table_. The identifier consists of the fully qualified table name concatenated with a system-generated unique identifier.
-For example, a constraint on table A.B.C might be assigned a name such as A.B.C_123&#8230;_01&#8230;.
-
-*** `_column-constraint_` options:
-
-**** `NOT NULL`
-+
-is a column constraint that specifies that the column cannot contain nulls. If you omit NOT NULL, nulls are allowed in the column.
-If you specify both NOT NULL and NO DEFAULT, then each row inserted in the table must include a value for the column. See <<null,NULL>>.
-
-**** `unique`
-+
-is a column constraint that specifies that the column cannot contain more than one occurrence of the same value. if you omit unique,
-duplicate values are allowed unless the column is part of the primary key. columns that you define as unique must be specified as not null.
-
-**** `check (_condition_)`
-+
-is a constraint that specifies a condition that must be satisfied for each row in the table. see <<search_condition,search condition>>.
-you cannot refer to the current_date, current_time, or current_timestamp function in a check constraint, and you cannot use
-subqueries in a check constraint.
-
-<<<
-**** `REFERENCES _ref-spec_`
-+
-specifies a REFERENCES column constraint. The maximum combined length of the columns for a REFERENCES constraint is 2048 bytes. +
-
-***** `_ref-spec_` is:
-+
-`_referenced-table_ [(_column-list_)]`
-+
-`_referenced-table_` is the table referenced by the foreign key in a referential constraint. _referenced-table_ cannot be a view.
-_referenced-table_ cannot be the same as _table_. _referenced-table_ corresponds to the foreign key in the _table_.
- +
-`_column-list_` specifies the column or set of columns in the _referenced-table_ that corresponds to the foreign key in _table_. The
-columns in the column list associated with REFERENCES must be in the same order as the columns in the column list associated with FOREIGN
-KEY. If _column-list_ is omitted, the referenced table's primary key columns are the referenced columns.
-+
-A table can have an unlimited number of referential constraints, and you can specify the same foreign key in more than one referential
-constraint, but you must define each referential constraint separately. You cannot create self-referencing foreign key constraints.
-
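-As an illustrative sketch (the column and referenced column are hypothetical), a column added with a REFERENCES
-constraint might look like this:
-
-```
-ALTER TABLE invent.partloc
-   ADD COLUMN supplier_num NUMERIC (4) UNSIGNED
-      REFERENCES invent.supplier (suppnum);
-```
-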
-* `ADD [CONSTRAINT _constraint-name_] _table-constraint_`
-+
-adds a constraint to the table and optionally specifies _constraint-name_ as the name for the constraint. The new constraint
-must be consistent with any data already present in the table.
-
-<<<
-** `CONSTRAINT _constraint-name_`
-+
-specifies a name for the column or table constraint. _constraint-name_ must have the same schema as _table_ and must be unique among constraint
-names in its schema. If you omit the schema portions of the name you specify in _constraint-name_, {project-name} SQL expands the constraint
-name by using the schema for _table_. See <<database_object_names,Database Object Names>>.
-+
-If you do not specify a constraint name, {project-name} SQL constructs an SQL identifier as the name for the constraint in the schema for _table_. The
-identifier consists of the fully qualified table name concatenated with a system-generated unique identifier. For example, a constraint on table
-A.B.C might be assigned a name such as A.B.C_123&#8230;_01&#8230;.
-
-** `_table-constraint_` options:
-
-*** `UNIQUE (_column-list_)`
-+
-is a table constraint that specifies that the column or set of columns cannot contain more
-than one occurrence of the same value or set of values.
-+
-`_column-list_` cannot include more than one occurrence of the same column. In addition, the set of columns that you specify on a UNIQUE
-constraint cannot match the set of columns on any other UNIQUE constraint for the table or on the PRIMARY KEY constraint for the table.
-All columns defined as unique must be specified as NOT NULL.
-+
-A UNIQUE constraint is enforced with a unique index. If there is already a unique index on _column-list_, {project-name} SQL uses that index. If a
-unique index does not exist, the system creates a unique index.
-
-*** `CHECK (_condition_)`
-+
-is a constraint that specifies a condition that must be satisfied for each row in the table.
-See <<search_condition,Search Condition>>. You cannot refer to the CURRENT_DATE, CURRENT_TIME, or CURRENT_TIMESTAMP function in a check
-constraint, and you cannot use subqueries in a check constraint.
-
-*** `FOREIGN KEY (_column-list_) REFERENCES _ref-spec_ NOT ENFORCED`
-+
-is a table constraint that specifies a referential constraint for the table, declaring that a column or set of columns (called a foreign key)
-in _table_ can contain only values that match those in a column or set of columns in the table specified in the REFERENCES
-clause. However, because NOT ENFORCED is specified, this relationship is not checked.
-+
-The two columns or sets of columns must have the same characteristics (data type, length, scale, precision). Without the FOREIGN KEY clause,
-the foreign key in _table_ is the column being defined; with the FOREIGN KEY clause, the foreign key is the column or set of columns specified in
-the FOREIGN KEY clause. For information about _ref-spec_, see REFERENCES _ref-spec_ NOT ENFORCED.
-
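-For example, these sketches (the constraint names are hypothetical) add a check constraint and an unenforced
-foreign key constraint:
-
-```
-ALTER TABLE sales.odetail
-   ADD CONSTRAINT qty_positive CHECK (qty_ordered > 0);
-
-ALTER TABLE sales.odetail
-   ADD CONSTRAINT odetail_orders FOREIGN KEY (ordernum)
-      REFERENCES sales.orders (ordernum) NOT ENFORCED;
-```
-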
-<<<
-* `DROP CONSTRAINT _constraint-name_ [RESTRICT]`
-+
-drops a constraint from the table. +
-+
-If you drop a constraint, {project-name} SQL drops its dependent index if {project-name} SQL originally created the same index. If the constraint uses
-an existing index, the index is not dropped. +
-
-** `CONSTRAINT _constraint-name_`
-+
-specifies a name for the column or table constraint. _constraint-name_ must have the same schema as _table_ and must be unique among constraint
-names in its schema. If you omit the schema portions of the name you specify in _constraint-name_, {project-name} SQL expands the constraint
-name by using the schema for _table_. See <<database_object_names,Database Object Names>>.
-+
-If you do not specify a constraint name, {project-name} SQL constructs an SQL identifier as the name for the constraint in the schema for _table_. The
-identifier consists of the fully qualified table name concatenated with a system-generated unique identifier. For example, a constraint on table
-A.B.C might be assigned a name such as A.B.C_123&#8230;_01&#8230;.
-
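-For example, a sketch that drops the hypothetical constraint added above:
-
-```
-ALTER TABLE sales.odetail DROP CONSTRAINT qty_positive;
-```
-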
-* `RENAME TO _new-name_`
-+
-changes the logical name of the object within the same schema.
-
-** `_new-name_`
-+
-specifies the new name of the object after the RENAME TO operation occurs.
-
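-For example, a sketch (the new name is illustrative):
-
-```
-ALTER TABLE persnl.project RENAME TO project_history;
-```
-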
-<<<
-* `ADD IF NOT EXISTS _column-definition_`
-+
-adds a column to _table_ if it does not already exist in the table.
-+
-The clauses for the _column-definition_ are the same as described in ADD [COLUMN] _column-definition_.
-
-* `DROP COLUMN [IF EXISTS] _column-name_`
-+
-drops the specified column from _table_, including the column's data. You cannot drop a primary key column.
-
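-For example, these sketches (the column name is illustrative) add a column only if it is absent and later drop it:
-
-```
-ALTER TABLE persnl.project ADD IF NOT EXISTS priority SMALLINT DEFAULT 1;
-
-ALTER TABLE persnl.project DROP COLUMN IF EXISTS priority;
-```
-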
-<<<
-[[alter_table_considerations]]
-=== Considerations for ALTER TABLE
-
-[[effect_of_adding_a_column_on_view_definitions]]
-==== Effect of Adding a Column on View Definitions
-
-The addition of a column to a table has no effect on existing view definitions. Implicit column references specified by SELECT * in view
-definitions are replaced by explicit column references when the definition clauses are originally evaluated.
-
-[[authorization_and_availability_requirements]]
-==== Authorization and Availability Requirements
-
-ALTER TABLE works only on user-created tables.
-
-===== Required Privileges
-
-To issue an ALTER TABLE statement, one of the following must be true:
-
-* You are DB ROOT.
-* You are the owner of the table.
-* You have the ALTER or ALTER_TABLE component privilege for the SQL_OPERATIONS component.
-
-===== Privileges Needed to Create a Referential Integrity Constraint
-
-To create a referential integrity constraint (that is, a constraint on the table that refers to a column in another table), one of the
-following must be true:
-
-* You are DB ROOT.
-* You are the owner of the referencing and referenced tables.
-* You have these privileges on the referencing and referenced table:
-** For the referencing table, you have the ALTER or ALTER_TABLE component privilege for the SQL_OPERATIONS component.
-** For the referenced table, you have the REFERENCES (or ALL) privilege on the referenced table through your user name or through a granted role.
-
-If the constraint refers to the other table in a query expression, you must also have SELECT privileges on the other table.
-
-[[alter_table_examples]]
-=== Example of ALTER TABLE
-
-This example adds a column:
-
-```
-ALTER TABLE persnl.project
-   ADD COLUMN projlead
-      NUMERIC (4) UNSIGNED
-```
-
-<<<
-[[alter_user_statement]]
-== ALTER USER Statement
-
-The ALTER USER statement changes attributes associated with a user who is registered in the database.
-
-ALTER USER is a {project-name} SQL extension.
-
-```
-ALTER USER database-username alter-action[, alter-action]
-
-alter-action is:
-     SET EXTERNAL NAME directory-service-username
-   | SET { ONLINE | OFFLINE }
-```
-
-[[alter_user_syntax]]
-=== Syntax Description of ALTER USER
-
-* `_database-username_`
-+
-is the name of a currently registered database user.
-
-* `SET EXTERNAL NAME`
-+
-changes the name that identifies the user in the directory service. This is also the name the user specifies when
-connecting to the database.
-
-** `_directory-service-username_`
-+
-specifies the new name of the user in the directory service.
-+
-_directory-service-username_ is a regular or delimited case-insensitive
-identifier. See <<Case_Insensitive_Delimited_Identifiers,Case-Insensitive Delimited Identifiers>>.
-
-* `SET { ONLINE | OFFLINE }`
-+
-changes the attribute that controls whether the user is allowed to connect to the database. +
-
-** `ONLINE`
-+
-specifies that the user is allowed to connect to the database.
-
-** `OFFLINE`
-+
-specifies that the user is not allowed to connect to the database.
-
-<<<
-[[alter_user_considerations]]
-=== Considerations for ALTER USER
-
-Only a user with user administrative privileges (that is, a user who has been granted the MANAGE_USERS component privilege)
-can do the following:
-
-* Set the EXTERNAL NAME for any user
-* Set the ONLINE | OFFLINE attribute for any user
-
-Initially, DB_ROOT is the only database user who has been granted the MANAGE_USERS component privilege.
-
-[[alter_user_examples]]
-=== Examples of ALTER USER
-
-* To change a user's external name:
-+
-```
-ALTER USER ajones SET EXTERNAL NAME "Americas\ArturoJones";
-```
-
-* To change a user's attribute to allow the user to connect to the database:
-+
-```
-ALTER USER ajones SET ONLINE;
-```
-
-<<<
-[[begin_work_statement]]
-== BEGIN WORK Statement
-
-The BEGIN WORK statement enables you to start a transaction explicitly. The transaction consists of the set of operations
-defined by the sequence of SQL statements that begins immediately after BEGIN WORK and ends with the next COMMIT or ROLLBACK
-statement. See <<Transaction_Management,Transaction Management>>. BEGIN WORK raises an error if a transaction is currently active.
-
-BEGIN WORK is a {project-name} SQL extension.
-
-```
-BEGIN WORK
-```
-
-[[begin_work_considerations]]
-=== Considerations for BEGIN WORK
-
-BEGIN WORK starts a transaction. COMMIT WORK or ROLLBACK WORK ends a transaction.
-
-[[begin_work_examples]]
-=== Example of BEGIN WORK
-
-Group three separate statements (two INSERT statements and an UPDATE statement) that update the database within a single transaction:
-
-```
---- This statement initiates a transaction.
-BEGIN WORK;
-
---- SQL operation complete.
-
-INSERT INTO sales.orders VALUES (125, DATE '2008-03-23', DATE '2008-03-30', 75, 7654);
-
---- 1 row(s) inserted.
-
-INSERT INTO sales.odetail VALUES (125, 4102, 25000, 2);
-
---- 1 row(s) inserted.
-
-UPDATE invent.partloc SET qty_on_hand = qty_on_hand - 2 WHERE partnum = 4102 AND loc_code = 'G45';
-
---- 1 row(s) updated.
-
---- This statement ends a transaction.
-COMMIT WORK;
-
---- SQL operation complete.
-```
-
-<<<
-[[call_statement]]
-== CALL Statement
-
-The CALL statement invokes a stored procedure in Java (SPJ) in a {project-name} SQL database.
-
-```
-CALL procedure-ref ([argument-list])
-
-procedure-ref is:
-   [[catalog-name.]schema-name.]procedure-name
-
-argument-list is:
-   sql-expression[, sql-expression]...
-```
-
-[[call_syntax]]
-=== Syntax Description of CALL
-
-* `_procedure-ref_`
-+
-specifies an ANSI logical name of the form:
-+
-`\[[_catalog-name_.]_schema-name_.]_procedure-name_`
-+
-where each part of the name is a valid SQL identifier with a maximum of 128 characters. For more information, see
-<<identifiers,Identifiers>> and <<database_object_names,Database Object Names>>.
-+
-If you do not fully qualify the procedure name, {project-name} SQL qualifies it according to the schema of the current session.
-
-* `_argument-list_`
-+
-accepts arguments for IN, INOUT, or OUT parameters. The arguments consist of SQL expressions, including dynamic parameters,
-separated by commas:
-+
-`_sql-expression_[{, _sql-expression_}&#8230;]`
-+
-<<<
-+
-Each expression must evaluate to a value of one of these data types:
-+
-** Character value
-** Date-time value
-** Numeric value
-+
-Interval value expressions are disallowed in SPJs. For more information, see
-<<call_input_parameter_arguments,Input Parameter Arguments>> and
-<<call_output_parameter_arguments,Output Parameter Arguments>>.
-+
-Do not specify result sets in the argument list.
-
-[[call_considerations]]
-=== Considerations for CALL
-
-[[call_usage_restrictions]]
-==== Usage Restrictions
-
-You can use a CALL statement as a stand-alone SQL statement in applications or command-line interfaces,
-such as TrafCI. You cannot use a CALL statement inside a compound statement or with row sets.
-
-[[call_required_privileges]]
-==== Required Privileges
-
-To issue a CALL statement, one of the following must be true:
-
-* You are DB ROOT.
-* You are the owner of the stored procedure.
-* You have the EXECUTE (or ALL) privilege, either directly through your user name or through a granted role.
-For more information, see the <<GRANT_Statement,GRANT Statement>>.
-
-When the stored procedure executes, it executes as the {project-name} ID.
-
-<<<
-[[call_input_parameter_arguments]]
-==== Input Parameter Arguments
-
-You pass data to an SPJ by using IN or INOUT parameters. For an IN
-parameter argument, use one of these SQL expressions:
-
-* Literal
-* SQL function (including CASE and CAST expressions)
-* Arithmetic or concatenation operation
-* Scalar subquery
-* Dynamic parameter (for example, ?) in an application
-* Named (for example, ?param) or unnamed (for example, ?) parameter in TrafCI
-
-For an INOUT parameter argument, you can use only a dynamic, named, or unnamed parameter. For more information, see
-<<Expressions,Expressions>>.
-
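-For instance, using the MONTHLYORDERS procedure from the examples at the end of this section, the IN argument can be
-a literal, an arithmetic expression, or a CAST (a sketch):
-
-```
-CALL sales.monthlyorders(2 + 1, ?);
-CALL sales.monthlyorders(CAST('3' AS INT), ?);
-```
-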
-[[call_output_parameter_arguments]]
-==== Output Parameter Arguments
-
-An SPJ returns values in OUT and INOUT parameters. Output parameter arguments must be dynamic parameters in an
-application (for example, ?) or named or unnamed parameters in TrafCI (for example, ?param or ?). Each
-calling application defines the semantics of the OUT and INOUT parameters in its environment.
-
-[[call_data_conversion_parameter_arguments]]
-==== Data Conversion of Parameter Arguments
-
-{project-name} SQL performs an implicit data conversion when the data type of a parameter argument is compatible with
-but does not match the formal data type of the stored procedure. For stored procedure input values,
-the conversion is from the actual argument value to the formal parameter type. For stored procedure output values,
-the conversion is from the actual output value, which has the data type of the formal parameter, to the declared
-type of the dynamic parameter.
-
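-For instance, assuming the formal IN parameter of MONTHLYORDERS is declared INTEGER, a SMALLINT argument is
-compatible and is converted implicitly to the formal type (a sketch):
-
-```
-CALL sales.monthlyorders(CAST(3 AS SMALLINT), ?);
-```
-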
-[[call_null_input_and_output]]
-==== Null Input and Output
-
-You can pass a null value as input to or output from an SPJ, provided that the corresponding Java data type of the
-parameter supports nulls. If a null is input or output for a parameter that does not support nulls, {project-name} SQL
-returns an error.
-
-<<<
-[[call_transaction_semantics]]
-==== Transaction Semantics
-
-The CALL statement automatically initiates a transaction if no active transaction exists. However, the failure of
-a CALL statement does not always automatically abort the transaction.
-
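-For example, a calling application can bracket the CALL statement in an explicit transaction so that it controls the
-commit or rollback itself (a sketch using the MONTHLYORDERS procedure from the examples below):
-
-```
-BEGIN WORK;
-CALL sales.monthlyorders(3, ?);
-COMMIT WORK;
-```
-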
-[[call_examples]]
-=== Examples of CALL
-
-* In TrafCI, execute an SPJ named MONTHLYORDERS, which has one IN parameter represented by a literal and one OUT
-parameter represented by an unnamed parameter, ?:
-+
-```
-CALL sales.monthlyorders(3,?);
-```
-
-<<<
-* This CALL statement executes a stored procedure, which accepts one IN parameter (a date literal), returns one OUT
-parameter (a row from the column, NUM_ORDERS), and returns two result sets:
-+
-```
-CALL sales.ordersummary('01/01/2001', ?);
-
-NUM_ORDERS
---------------------
-                  13
-
-ORDERNUM   NUM_PARTS      AMOUNT          ORDER_DATE LAST_NAME
----------- -------------- --------------- ---------- ------------------
-    100210              4        19020.00 2006-04-10 HUGHES
-    100250              4        22625.00 2006-01-23 HUGHES
-    101220              4        45525.00 2006-07-21 SCHNABL
-    200300              3        52000.00 2006-02-06 SCHAEFFER
-    200320              4         9195.00 2006-02-17 KARAJAN
-    200490              2         1065.00 2006-03-19 WEIGL
-.
-.
-.
-
---- 13 row(s) selected.
-
-ORDERNUM   PARTNUM  UNIT_PRICE   QTY_ORDERED PARTDESC
----------- -------- ------------ ----------- ------------------
-    100210     2001      1100.00           3 GRAPHIC PRINTER,M1
-    100210     2403       620.00           6 DAISY PRINTER,T2
-    100210      244      3500.00           3 PC GOLD, 30 MB
-    100210     5100       150.00          10 MONITOR BW, TYPE 1
-    100250     6500        95.00          10 DISK CONTROLLER
-    100250     6301       245.00          15 GRAPHIC CARD, HR
-.
-.
-.
-
---- 70 row(s) selected.
-
---- SQL operation complete.
-```
-
-<<<
-[[commit_work_statement]]
-== COMMIT WORK Statement
-
-The COMMIT WORK statement commits any changes to objects made during the current transaction and ends
-the transaction. See <<Transaction_Management,Transaction Management>>.
-
-WORK is an optional keyword that has no effect.
-
-COMMIT WORK issued outside of an active transaction generates error 8605.
-
-```
-COMMIT [WORK]
-```
-
-[[commit_work_considerations]]
-=== Considerations for COMMIT WORK
-
-BEGIN WORK starts a transaction. COMMIT WORK or ROLLBACK WORK ends a transaction.
-
-<<<
-[[commit_work_examples]]
-=== Example of COMMIT WORK
-
-Suppose that your application adds information to the inventory. You have received 24 terminals from
-a new supplier and want to add the supplier and update the quantity on hand. The part number for the
-terminals is 5100, and the supplier is assigned supplier number 17. The cost of each terminal is $800.
-
-The transaction must add the order for terminals to PARTSUPP, add the supplier to the SUPPLIER table,
-and update QTY_ON_HAND in PARTLOC. After the INSERT and UPDATE statements execute successfully,
-you commit the transaction, as shown:
-
-```
--- This statement initiates a transaction.
-BEGIN WORK;
-
---- SQL operation complete.
-
--- This statement inserts a new entry into PARTSUPP.
-INSERT INTO invent.partsupp
-VALUES (5100, 17, 800.00, 24);
-
---- 1 row(s) inserted.
-
--- This statement inserts a new entry into SUPPLIER.
-INSERT INTO invent.supplier
-VALUES (17, 'Super Peripherals','751 Sanborn Way',
- 'Santa Rosa', 'California', '95405');
-
---- 1 row(s) inserted.
-
--- This statement updates the quantity in PARTLOC.
-UPDATE invent.partloc
-SET qty_on_hand = qty_on_hand + 24
-WHERE partnum = 5100 AND loc_code = 'G43';
-
---- 1 row(s) updated.
-
--- This statement ends a transaction.
-COMMIT WORK;
-
---- SQL operation complete.
-```
-
-<<<
-[[control_query_cancel_statement]]
-== CONTROL QUERY CANCEL Statement
-
-The CONTROL QUERY CANCEL statement cancels an executing query that you identify with a query ID.
-You can execute the CONTROL QUERY CANCEL statement in a client-based tool like TrafCI or through any ODBC or JDBC
-application.
-
-CONTROL QUERY CANCEL is a {project-name} SQL extension.
-
-```
-CONTROL QUERY CANCEL QID query-id [COMMENT 'comment-text']
-```
-
-[[control_query_cancel_syntax]]
-=== Syntax Description of CONTROL QUERY CANCEL
-
-* `_query-id_`
-+
-specifies the query ID of an executing query, which is a unique identifier generated by the SQL compiler.
-
-* `'_comment-text_'`
-+
-specifies an optional comment to be displayed in the canceled query's error message.
-
-[[control_query_cancel_considerations]]
-=== Considerations for CONTROL QUERY CANCEL
-
-[[control_query_cancel_benefitsl]]
-==== Benefits of CONTROL QUERY CANCEL
-
-For many queries, the CONTROL QUERY CANCEL statement allows the termination of the query without stopping the
-master executor process (MXOSRVR). This type of cancellation has these benefits over standard ODBC/JDBC cancel
-methods:
-
-* An ANSI-defined error message is returned to the client session, and SQLSTATE is set to HY008.
-* Important cached objects persist after the query is canceled, including the master executor process and its
-compiler, the compiled statements cached in the master, and the compiler's query cache and its cached metadata
-and histograms.
-* The client does not need to reestablish its connection, and its prepared statements are preserved.
-* When clients share connections using a middle-tier application server, the effects of canceling one client's
-executing query no longer affect other clients sharing the same connection.
-
-[[control_query_cancel_restrictions]]
-==== Restrictions on CONTROL QUERY CANCEL
-
-Some executing queries may not respond to a CONTROL QUERY CANCEL statement within a 60-second interval. For those
-queries, {project-name} SQL stops their ESP processes if there are any. If this action allows the query to be canceled,
-you will see all the benefits listed above.
-
-If the executing query does not terminate within 120 seconds after the CONTROL QUERY CANCEL statement is issued,
-{project-name} SQL stops the master executor process, terminating the query and generating a lost connection error.
-In this case, you will not see any of the benefits listed above. Instead, you will lose your connection and will
-need to reconnect and re-prepare the query. This situation often occurs with the CALL, DDL, and utility statements
-and rarely with other statements.
-
-The CONTROL QUERY CANCEL statement does not work with these statements:
-
-* Unique queries, which operate on a single row and a single partition
-* Queries that are not executing, such as a query that is being compiled
-* CONTROL QUERY DEFAULT, BEGIN WORK, COMMIT WORK, ROLLBACK WORK, and EXPLAIN statements
-* Statically compiled metadata queries
-* Queries executed in anomalous conditions, such as queries without runtime statistics or without a query ID
-
-[[control_query_cancel_required_privileges]]
-==== Required Privileges
-
-To issue a CONTROL QUERY CANCEL statement, one of the following must be true:
-
-* You are DB ROOT.
-* You own (that is, issued) the query.
-* You have the QUERY_CANCEL component privilege for the SQL_OPERATIONS component.
-
-<<<
-[[control_query_cancel_examples]]
-=== Example of CONTROL QUERY CANCEL
-
-This CONTROL QUERY CANCEL statement cancels a specified query and provides a comment concerning the cancel
-operation:
-
-```
-control query cancel qid
-MXID11000010941212288634364991407000000003806U3333300_156016_S1 comment
-'Query is consuming too many resources.';
-```
-
-In a separate session, the client that issued the query will see this
-error message indicating that the query has been canceled:
-
-```
->>execute s1;
-
-*** ERROR[8007] The operation has been canceled. Query is consuming too many resources.
-```
-
-<<<
-[[control_query_default_statement]]
-== CONTROL QUERY DEFAULT Statement
-
-The CONTROL QUERY DEFAULT statement changes the default settings for the current process. You can execute
-the CONTROL QUERY DEFAULT statement in a client-based tool like TrafCI or through any ODBC or JDBC application.
-
-CONTROL QUERY DEFAULT is a {project-name} SQL extension.
-
-```
-{ CONTROL QUERY DEFAULT | CQD } control-default-option
-
-control-default-option is:
-  attribute {'attr-value' | RESET}
-```
-
-[[control_query_default_syntax]]
-=== Syntax Description of CONTROL QUERY DEFAULT
-
-* `_attribute_`
-+
-is a character string that represents an attribute name. For descriptions of these attributes,
-see the {docs-url}/cqd_reference/index.html[{project-name} Control Query Default (CQD) Reference Guide].
-
-* `_attr-value_`
-+
-is a character string that specifies an attribute value. You must specify _attr-value_ as a quoted string, even
-if the value is a number.
-
-* `RESET`
-+
-specifies that the attribute that you set by using a CONTROL QUERY DEFAULT statement in the current session is
-to be reset to the value or values in effect at the start of the current session.
-
-<<<
-[[control_query_default_considerations]]
-=== Considerations for CONTROL QUERY DEFAULT
-
-[[control_query_default_scope]]
-==== Scope of CONTROL QUERY DEFAULT
-
-The result of the execution of a CONTROL QUERY DEFAULT statement stays in effect until the current process
-terminates or until the execution of another statement for the same attribute overrides it.
-
-CQDs are applied at compile time, so CQDs do not affect any statements that are already prepared. For example:
-
-```
-PREPARE x FROM SELECT * FROM t;
-CONTROL QUERY DEFAULT SCHEMA 'myschema';
-EXECUTE x;                              -- uses the default schema SEABASE
-SELECT * FROM t2;                       -- uses MYSCHEMA;
-PREPARE y FROM SELECT * FROM t3;
-CONTROL QUERY DEFAULT SCHEMA 'seabase';
-EXECUTE y;                              -- uses MYSCHEMA;
-```
-
-[[control_query_default_examples]]
-=== Examples of CONTROL QUERY DEFAULT
-
-* Increase the cache refresh time for the histogram cache to two hours (7,200 minutes).
-+
-```
-CONTROL QUERY DEFAULT CACHE_HISTOGRAMS_REFRESH_INTERVAL '7200';
-```
-
-* Reset the CACHE_HISTOGRAMS_REFRESH_INTERVAL attribute to its initial value in the current process:
-+
-```
-CONTROL QUERY DEFAULT CACHE_HISTOGRAMS_REFRESH_INTERVAL RESET;
-```
-
-<<<
-[[create_function_statement]]
-== CREATE FUNCTION Statement
-
-The CREATE FUNCTION statement registers a user-defined function (UDF) written in C as a function within
-a {project-name} database. Currently, {project-name} supports the creation of _scalar UDFs_, which return a single
-value or row when invoked. Scalar UDFs are invoked as SQL expressions in the SELECT list or WHERE clause
-of a SELECT statement.
-
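-For instance, the ADD2 function registered in the examples at the end of this section could be invoked as follows
-(a sketch; the table and column names are hypothetical):
-
-```
-SELECT add2(salary, bonus)
-FROM persnl.payroll
-WHERE add2(salary, bonus) > 50000;
-```
-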
-NOTE: DDL statements are not currently supported in transactions. That means that you cannot run this
-statement inside a user-defined transaction (BEGIN WORK&#8230;COMMIT WORK) or when AUTOCOMMIT is OFF. To run
-this statement, AUTOCOMMIT must be turned ON (the default) for the session.
-
-```
-CREATE FUNCTION function-ref ([parameter-declaration[, parameter-declaration]...])
-    {RETURN | RETURNS}
-       (return-parameter-declaration[, return-parameter-declaration]...)
-    EXTERNAL NAME 'character-string-literal'
-    LIBRARY [[catalog-name.]schema-name.]library-name
-    [LANGUAGE C]
-    [PARAMETER STYLE SQL]
-    [NO SQL]
-    [NOT DETERMINISTIC | DETERMINISTIC]
-    [FINAL CALL | NO FINAL CALL]
-    [NO STATE AREA | STATE AREA size]
-    [NO PARALLELISM | ALLOW ANY PARALLELISM]
-
-function-ref is:
-   [[catalog-name.]schema-name.]function-name
-
-parameter-declaration is:
-   [IN] [sql-parameter-name] sql-datatype
-
-return-parameter-declaration is:
-   [OUT] [sql-parameter-name] sql-datatype
-```
-
-<<<
-[[create_function_syntax]]
-=== Syntax Description of CREATE FUNCTION
-
-* `_function-ref_ ( [_parameter-declaration_[,_parameter-declaration_]&#8230;] )`
-+
-specifies the name of the function and any SQL parameters that correspond to the signature of the external function.
-
-** `_function-ref_`
-+
-specifies an ANSI logical name of the form:
-+
-`\[[_catalog-name_.]_schema-name_.]_function-name_`
-+
-where each part of the name is a valid SQL identifier with a maximum of 128 characters. For more information, see
-<<identifiers,Identifiers>> and <<database_object_names,Database Object Names>>.
-+
-Specify a name that is unique and does not exist for any procedure or function in the same schema.
-+
-If you do not fully qualify the function name, {project-name} SQL qualifies it according to the schema of the current session.
-
-** `_parameter-declaration_`
-+
-specifies an SQL parameter that corresponds to the signature of the external function:
-+
-`[IN] [_sql-parameter-name_] _sql-datatype_`
-
-*** `IN`
-+
-specifies that the parameter passes data to the function.
-
-*** `_sql-parameter-name_`
-+
-specifies an SQL identifier for the parameter. For more information, see <<identifiers,Identifiers>>.
-
-<<<
-*** `_sql-datatype_`
-+
-specifies an SQL data type that corresponds to the data type of the parameter in the signature of the
-external function. _sql-datatype_ is one of the supported SQL data types in {project-name}. See
-<<data_types,Data Types>>.
-
-* `{RETURN | RETURNS} (_return-parameter-declaration_[,_return-parameter-declaration_]&#8230;)`
-+
-specifies the type of output of the function.
-
-** `_return-parameter-declaration_`
-+
-specifies an SQL parameter for an output value:
-+
-`[OUT] [_sql-parameter-name_] _sql-datatype_`
-
-*** `OUT`
-+
-specifies that the parameter accepts data from the function.
-
-*** `_sql-parameter-name_`
-+
-specifies an SQL identifier for the return parameter. For more information, see <<identifiers,Identifiers>>.
-
-*** `_sql-datatype_`
-+
-specifies an SQL data type for the return parameter. _sql-datatype_ is one of the supported SQL data types in
-{project-name}. See <<data_types,Data Types>>.
-
-* `EXTERNAL NAME '_method-name_'`
-+
-specifies the case-sensitive name of the external function's method.
-
-* `LIBRARY \[[_catalog-name_.]_schema-name_.]_library-name_`
-+
-specifies the ANSI logical name of a library containing the external function. If you do not fully qualify the
-library name, {project-name} SQL qualifies it according to the schema of the current session.
-
-* `LANGUAGE C`
-+
-specifies that the external function is written in the C language. This clause is optional.
-
-* `PARAMETER STYLE SQL`
-+
-specifies that the run-time conventions for arguments passed to the external function are those of the SQL
-language. This clause is optional.
-
-* `NO SQL`
-+
-specifies that the function does not perform SQL operations. This clause is optional.
-
-* `DETERMINISTIC | NOT DETERMINISTIC`
-+
-specifies whether the function always returns the same values for OUT parameters for a given set of argument
-values (DETERMINISTIC, the default behavior) or does not return the same values (NOT DETERMINISTIC). If the
-function is deterministic, {project-name} SQL is not required to execute the function each time to produce results;
-instead, {project-name} SQL caches the results and reuses them during subsequent executions, thus optimizing the execution.
-
-* `FINAL CALL | NO FINAL CALL`
-+
-specifies whether or not a final call is made to the function. A final call enables the function to free up
-system resources. The default is FINAL CALL.
-
-* `NO STATE AREA | STATE AREA _size_`
-+
-specifies whether or not a state area is allocated to the function. _size_ is an integer denoting memory in
-bytes. Acceptable values range from 0 to 16000. The default is NO STATE AREA.
-
-* `NO PARALLELISM | ALLOW ANY PARALLELISM`
-+
-specifies whether or not parallelism is applied when the function is invoked. The default is ALLOW ANY PARALLELISM.
-
-<<<
-[[create_function_considerations]]
-=== Considerations for CREATE FUNCTION
-
-[[create_function_required_privileges]]
-==== Required Privileges
-
-To issue a CREATE FUNCTION statement, one of the following must be true:
-
-* You are DB ROOT.
-* You are creating the function in a shared schema, and you have the USAGE (or ALL) privilege on the library that
-will be used in the creation of the function. The USAGE privilege provides you with read access to the library's
-underlying library file.
-* You are the private schema owner and have the USAGE (or ALL) privilege on the library that will be used in the
-creation of the function. The USAGE privilege provides you with read access to the library's underlying library file.
-* You have the CREATE or CREATE_ROUTINE component level privilege for the SQL_OPERATIONS component and have the
-USAGE (or ALL) privilege on the library that will be used in the creation of the function. The USAGE
-privilege provides you with read access to the library's underlying library file.
-+
-NOTE: In this case, if you create a function in a private schema, it will be owned by the schema owner.
-
-<<<
-[[create_function_examples]]
-=== Examples of CREATE FUNCTION
-
-* This CREATE FUNCTION statement creates a function that adds two integers:
-+
-```
-create function add2 (int, int)
-       returns (total_value int)
-       external name 'add2'
-       library myudflib;
-```
-
-* This CREATE FUNCTION statement creates a function that returns the minimum, maximum, and average values of
-five input integers:
-+
-```
-create function mma5 (int, int, int, int, int)
-       returns (min_value int, max_value int, avg_value int)
-       external name 'mma5'
-       library myudflib;
-```
-
-* This CREATE FUNCTION statement creates a function that reverses an input string of at most 32 characters:
-+
-```
-create function reverse (varchar(32))
-       returns (reversed_string varchar(32))
-       external name 'reverse'
-       library myudflib;
-```
-
-<<<
-[[create_index_statement]]
-== CREATE INDEX Statement
-
-The CREATE INDEX statement creates an SQL index based on one or more columns of a table or table-like object.
-The CREATE VOLATILE INDEX statement creates an SQL index with a lifespan that is limited to the SQL session in which
-the index is created. Volatile indexes are dropped automatically when the session ends. See <<Indexes,Indexes>>.
-
-CREATE INDEX is a {project-name} SQL extension.
-
-NOTE: DDL statements are not currently supported in transactions. That means that you cannot run this statement
-inside a user-defined transaction (BEGIN WORK&#8230;COMMIT WORK) or when AUTOCOMMIT is OFF. To run this statement,
-AUTOCOMMIT must be turned ON (the default) for the session.
-
-```
-CREATE [VOLATILE] INDEX index ON table
-   (column-name [ASC[ENDING] | DESC[ENDING]]
-   [,column-name [ASC[ENDING] | DESC[ENDING]]]...)
-   [HBASE_OPTIONS (hbase-options-list)]
-   [SALT LIKE TABLE]
-
-hbase-options-list is:
-   hbase-option = 'value'[, hbase-option = 'value']...
-```
-
-[[create_index_syntax]]
-=== Syntax Description of CREATE INDEX
-
-* `_index_`
-+
-is an SQL identifier that specifies the simple name for the new index. You cannot qualify _index_ with its schema
-name. Indexes have their own name space within a schema, so an index name might be the same as a table or constraint
-name. However, no two indexes in a schema can have the same name.
-
-* `_table_`
-+
-is the name of the table for which to create the index. See <<database_object_names,Database Object Names>>.
-
-* `_column-name_ [ASC[ENDING] | DESC[ENDING]] [,_column-name_ [ASC[ENDING] | DESC[ENDING]]]&#8230;`
-+
-specifies the columns in _table_ to include in the index. The order of the columns in the index need not correspond
-to the order of the columns in the table.
-+
-ASCENDING or DESCENDING specifies the storage and retrieval order for rows in the index. The default is ASCENDING.
-+
-Rows are ordered by values in the first column specified for the index. If multiple index rows share the same value
-for the first column, the values in the second column are used to order the rows, and so forth. If duplicate index
-rows occur in a non-unique index, their order is based on the sequence specified for the columns of the key of the
-underlying table. For ordering (but not for other purposes), nulls are greater than other values.
-
-* `HBASE_OPTIONS (_hbase-option_ = '_value_'[, _hbase-option_ = '_value_']&#8230;)`
-+
-a list of HBase options to set for the index. These options are applied independently of any HBase options set for
-the index's table.
-
-// TODO: The Word document did not list all default values. 
-** `_hbase-option_ = '_value_'`
-+
-is one of the these HBase options and its assigned value:
-+
-[cols="35%,65%",options="header"]
-|===
-| HBase Option           | Accepted Values^1^
-| BLOCKCACHE             | 'true' \| 'false'
-| BLOCKSIZE              | *'65536'* \| '_positive-integer_'
-| BLOOMFILTER            | 'NONE' \| 'ROW' \| 'ROWCOL'
-| CACHE_BLOOMS_ON_WRITE  | 'true' \| 'false'
-| CACHE_DATA_ON_WRITE    | 'true' \| 'false'
-| CACHE_INDEXES_ON_WRITE | 'true' \| 'false'
-| COMPACT                | 'true' \| 'false'
-| COMPACT_COMPRESSION    | 'GZ' \| 'LZ4' \| 'LZO' \| 'NONE' \| 'SNAPPY'
-| COMPRESSION            | 'GZ' \| 'LZ4' \| 'LZO' \| 'NONE' \| 'SNAPPY'
-| DATA_BLOCK_ENCODING    | 'DIFF' \| 'FAST_DIFF' \| 'NONE' \| 'PREFIX'
-| DURABILITY             | 'USE_DEFAULT' \| 'SKIP_WAL' \| 'ASYNC_WAL' \| 'SYNC_WAL' \| 'FSYNC_WAL'
-| EVICT_BLOCKS_ON_CLOSE  | *'true'* \| 'false'
-| IN_MEMORY              | *'true'* \| 'false'
-| KEEP_DELETED_CELLS     | *'true'* \| 'false'
-| MAX_FILESIZE           | '_positive-integer_'
-| MAX_VERSIONS           | '1' \| '_positive-integer_'
-| MEMSTORE_FLUSH_SIZE    | '_positive-integer_'
-| MIN_VERSIONS           | '0' \| '_positive-integer_'
-| PREFIX_LENGTH_KEY      | '_positive-integer_', which should be less than the maximum length of the key for the table.
-It applies only if the SPLIT_POLICY is `KeyPrefixRegionSplitPolicy`.
-| REPLICATION_SCOPE      | '0' \| *'1'*
-| SPLIT_POLICY           | 'org.apache.hadoop.hbase.regionserver. +
-ConstantSizeRegionSplitPolicy' \| +
-'org.apache.hadoop.hbase.regionserver. +
-IncreasingToUpperBoundRegionSplitPolicy' \| +
-'org.apache.hadoop.hbase.regionserver. +
-KeyPrefixRegionSplitPolicy'
-| TTL                    | '-1' (forever) \| '_positive-integer_'
-|===
-+
-^1^ Values in boldface are default values.
-
-* `SALT LIKE TABLE`
-+
-causes the index to use the same salting scheme (that is,
-`SALT USING _num_ PARTITIONS [ON (_column_[, _column_]&#8230;)])` as its base table.
-
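-For example, this sketch (the index name, column, and option values are illustrative) combines both clauses:
-
-```
-CREATE INDEX xordcust ON sales.orders (custnum)
-   HBASE_OPTIONS (BLOCKSIZE = '131072', MAX_VERSIONS = '1')
-   SALT LIKE TABLE;
-```
-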
-<<<
-[[create_index_considerations]]
-=== Considerations for CREATE INDEX
-
-Indexes are created under a single transaction. When an index is created, the following steps occur:
-
-* Transaction begins (either a user-started transaction or a system-started transaction).
-* Rows are written to the metadata.
-* Physical labels are created to hold the index (as non audited).
-* The base table is locked for read shared access, which prevents inserts, updates, and deletes on the base table from occurring.
-* The index is loaded by reading the base table for read uncommitted access using side tree inserts.
-+
-NOTE: A side tree insert is a fast way of loading data that can perform specialized optimizations because the
-partitions are not audited and empty.
-
-* After load is complete, the index audit attribute is turned on and it is attached to the base table (to bring the index on-line).
-* The transaction is committed, either by the system or later by the requester.
-
-If the operation fails after basic semantic checks are performed, the index no longer exists and the entire transaction
-is rolled back even if it is a user-started transaction.
-
-[[create_index_authorization_and_availability_requirements]]
-==== Authorization and Availability Requirements
-
-An index always has the same security as the table it indexes.
-
-CREATE INDEX locks out INSERT, DELETE, and UPDATE operations on the table being indexed. If other processes have rows in the table locked
-when the operation begins, CREATE INDEX waits until its lock request is granted or timeout occurs.
-
-You cannot access an index directly.
-
-<<<
-[[create_index_required_privileges]]
-==== Required Privileges
-
-To issue a CREATE INDEX statement, one of the following must be true:
-
-* You are DB ROOT.
-* You are creating the table in a shared schema.
-* You are the private schema owner.
-* You are the owner of the table.
-* You have the ALTER, ALTER_TABLE, CREATE, or CREATE_INDEX component privilege for the SQL_OPERATIONS component.
-+
-NOTE: In this case, if you create an index in a private schema, it will be owned by the schema owner.
-
-[[create_index_limits]]
-==== Limits on Indexes
-
-For non-unique indexes, the sum of the lengths of the columns in the index plus the sum of the length of
-the clustering key of the underlying table cannot exceed 2048 bytes.
-
-No restriction exists on the number of indexes per table.
-
-[[create_index_examples]]
-=== Examples of CREATE INDEX
-
-* This example creates an index on two columns of a table:
-+
-```
-CREATE INDEX xempname
-ON persnl.employee (last_name, first_name);
-```
-
-<<<
-[[create_library_statement]]
-== CREATE LIBRARY Statement
-
-The CREATE LIBRARY statement registers a library object in a {project-name} database. A library object
-can be an SPJ's JAR file or a UDF's library file.
-
-CREATE LIBRARY is a {project-name} SQL extension.
-
-NOTE: DDL statements are not currently supported in transactions. That means that you cannot run
-this statement inside a user-defined transaction (BEGIN WORK&#8230;COMMIT WORK) or when AUTOCOMMIT
-is OFF. To run this statement, AUTOCOMMIT must be turned ON (the default) for the session.
-
-```
-CREATE LIBRARY [[catalog-name.]schema-name.]library-name
-   file 'library-filename'
-   [host name 'host-name']
-   [local file 'host-filename']
-```
-
-[[create_library_syntax]]
-=== Syntax Description of CREATE LIBRARY
-
-* `\[[_catalog-name_.]_schema-name_.]_library-name_`
-+
-specifies the ANSI logical name of the library object, where each part of the name is a valid SQL
-identifier with a maximum of 128 characters. Specify a name that is unique and does not exist for
-libraries in the same schema. If you do not fully qualify the library name, {project-name} SQL qualifies
-it according to the schema of the current session. For more information, see <<identifiers,Identifiers>>
-and <<database_object_names,Database Object Names>>.
-
-<<<
-* `FILE '_library-filename_'`
-+
-specifies the full path of a deployed library file, which is either an SPJ's JAR file or a UDF's library file.
-+
-NOTE: Make sure to upload the library file to the {project-name} cluster and then copy the library file to the
-same directory on all the nodes in the cluster before running the CREATE LIBRARY statement. Otherwise, you
-will see an error message indicating that the JAR or DLL file was not found.
-
-* `HOST NAME '_host-name_'`
-+
-specifies the name of the client host machine where the deployed file resides.
-
-* `LOCAL FILE '_host-filename_'`
-+
-specifies the path on the client host machine where the deployed file is stored.
-
-[[create_library_considerations]]
-=== Considerations for CREATE LIBRARY
-
-* A library object cannot refer to a library file referenced by another library object. If the _library-filename_
-is in use by another library object, the CREATE LIBRARY command will fail.
-* The _library-filename_ must specify an existing file. Otherwise, the CREATE LIBRARY command will fail.
-* The CREATE LIBRARY command does not verify that the specified _library-filename_ is a valid executable file.
-* HOST NAME and LOCAL FILE are position dependent.
-
-<<<
-[[create_library_required_privileges]]
-==== Required Privileges
-
-To issue a CREATE LIBRARY statement, one of the following must be true:
-
-* You are DB ROOT.
-* You are creating the library in a shared schema and have the MANAGE_LIBRARY privilege.
-* You are the private schema owner and have the MANAGE_LIBRARY privilege.
-* You have the CREATE or CREATE_LIBRARY component privilege for the SQL_OPERATIONS component and have
-the MANAGE_LIBRARY privilege.
-+
-NOTE: In this case, if you create a library in a private schema, it will be owned by the schema owner.
-
-[[create_library_examples]]
-=== Examples of CREATE LIBRARY
-
-* This CREATE LIBRARY statement registers a library named SALESLIB in the SALES schema for a JAR file (SPJs):
-+
-```
-CREATE LIBRARY sales.saleslib FILE '/opt/home/trafodion/spjjars/Sales.jar';
-```
-
-* This CREATE LIBRARY statement registers a library named MYUDFS in the default schema for a library file (UDFs):
-+
-```
-CREATE LIBRARY myudfs FILE $UDFLIB;
-```
-
-<<<
-[[create_procedure_statement]]
-== CREATE PROCEDURE Statement
-
-The CREATE PROCEDURE statement registers a Java method as a stored procedure in Java (SPJ) within a {project-name} database.
-
-NOTE: DDL statements are not currently supported in transactions. That means that you cannot run this statement
-inside a user-defined transaction (BEGIN WORK&#8230;COMMIT WORK) or when AUTOCOMMIT is OFF. To run this statement,
-AUTOCOMMIT must be turned ON (the default) for the session.
-
-```
-CREATE PROCEDURE procedure-ref([sql-parameter-list])
-   EXTERNAL NAME 'java-method-name [java-signature]'
-   LIBRARY [[catalog-name.]schema-name.]library-name
-   [EXTERNAL SECURITY external-security-type]
-   LANGUAGE JAVA
-   PARAMETER STYLE JAVA
-   [NO SQL | CONTAINS SQL | MODIFIES SQL DATA | READS SQL DATA]
-   [DYNAMIC RESULT SETS integer]
-   [TRANSACTION REQUIRED | NO TRANSACTION REQUIRED]
-   [DETERMINISTIC | NOT DETERMINISTIC]
-   [NO ISOLATE | ISOLATE]
-
-procedure-ref is:
-   [[catalog-name.]schema-name.]procedure-name
-
-sql-parameter-list is:
-   sql-parameter[, sql-parameter]...
-
-sql-parameter is:
-   [parameter-mode] [sql-identifier] sql-datatype
-
-parameter-mode is:
-   IN
- | OUT
- | INOUT
-
-java-method-name is:
-   [package-name.]class-name.method-name
-
-java-signature is:
-   ([java-parameter-list])
-
-java-parameter-list is:
-   java-datatype[, java-datatype]...
-
-external-security-type is:
-   DEFINER
- | INVOKER
-
-NOTE: Delimited variables in this syntax diagram are case-sensitive. Case-sensitive variables include _java-method-name_,
-_java-signature_, and _class-file-path_, and any delimited part of the _procedure-ref_.
-The remaining syntax is not case-sensitive.
-
-[[create_procedure_syntax]]
-=== Syntax Description of CREATE PROCEDURE
-
-* `_procedure-ref_([_sql-parameter_[, _sql-parameter_]&#8230;])`
-+
-specifies the name of the stored procedure in Java (SPJ) and any SQL parameters that correspond to the signature of
-the SPJ method.
-
-** `_procedure-ref_`
-+
-specifies an ANSI logical name of the form:
-+
-`\[[_catalog-name_.]_schema-name_.]_procedure-name_`
-+
-where each part of the name is a valid SQL identifier with a maximum of 128 characters. For more information,
-see <<identifiers,Identifiers>> and <<database_object_names,Database Object Names>>.
-+
-specify a name that is unique and does not exist for any procedure or function in the same schema. {project-name}
-does not support the overloading of procedure names. That is, you cannot register the same procedure name more than
-once with different underlying SPJ methods.
-+
-If you do not fully qualify the procedure name, then {project-name} qualifies it according to the schema of the current session.
-
-** `_sql-parameter_`
-+
-specifies an SQL parameter that corresponds to the signature of the SPJ method:
-+
-`[_parameter-mode_] [_sql-identifier_] _sql-datatype_`
-
-*** `_parameter-mode_`
-+
-specifies the mode `IN`, `OUT`, or `INOUT` of a parameter. The default is `IN`.
-
-**** `IN`
-+
-specifies a parameter that passes data to an SPJ.
-
-**** `OUT`
-+
-specifies a parameter that accepts data from an SPJ. The parameter must be an array.
-
-**** `INOUT`
-+
-specifies a parameter that passes data to and accepts data from an SPJ. The parameter must be an array.
-
-*** `_sql-identifier_`
-+
-specifies an SQL identifier for the parameter. For more information, see <<identifiers,Identifiers>>.
-
-*** `_sql-datatype_`
-+
-specifies an SQL data type that corresponds to the Java parameter of the SPJ method.
-+
-_sql-datatype_ can be:
-+
-[cols="60%,40%",options="header"]
-|===
-| sql data type | maps to java data type&#8230;
-| char[acter] +
-char[acter] varying +
-varchar +
-pic[ture] x^1^ +
-nchar +
-nchar varying +
-national char[acter] +
-national char[acter] varying | java.lang.string
-| date | java.sql.date
-| time | java.sql.time
-| timestamp | java.sql.timestamp
-| dec[imal]^2^ +
-pic[ture] s9^3^ +
-numeric (including numeric with a precision greater than eighteen)^2^ | java.math.bigdecimal
-| smallint^2^ | short
-| int[eger]^2^ | int or java.lang.integer^4^
-| largeint^2^ | long or java.lang.long^4^
-| float | double or java.lang.double^4^
-| real | float or java.lang.float^4^
-| double precision | double or java.lang.double^4^
-|===
-+
-1. the trafodion database stores pic x as a char data type.
-2. numeric data types of sql parameters must be signed, which is the default in the trafodion database.
-3. the trafodion database stores pic s9 as a decimal or numeric data type.
-4. by default, the sql data type maps to a java primitive type. the sql data type maps to a java wrapper class
-only if you specify the wrapper class in the java signature of the external name clause.
-+
-for more information, see <<data_types,data types>>.
-
-* `EXTERNAL NAME '_java-method-name_ [_java-signature_]'`
-
-** `_java-method-name_`
-+
-specifies the case-sensitive name of the SPJ method of the form:
-+
-`[_package-name_.]_class-name_._method-name_`
-+
-The Java method must exist in a Java class file, _class-name_.class, within a library registered in the database.
-The Java method must be defined as `public` and `static` and have a return type of `void`.
-+
-If the class file that contains the SPJ method is part of a package, then you must also specify the package name.
-If you do not specify the package name, the CREATE PROCEDURE statement fails to register the SPJ.
-
-** `_java-signature_`
-+
-specifies the signature of the SPJ method and consists of:
-+
-`([_java-datatype_[, _java-datatype_]&#8230;])`
-+
-The Java signature is necessary only if you want to specify a Java wrapper class (for example, `java.lang.Integer`) instead of a Java
-primitive data type (for example, `int`). An SQL data type maps to a Java primitive data type by default.
-+
-The Java signature is case-sensitive and must be placed within parentheses, such as `(java.lang.Integer, java.lang.Integer)`.
-The signature must specify each of the parameter data types in the order they appear in the Java method definition within
-the class file. Each Java data type that corresponds to an OUT or INOUT parameter must be followed by empty square
-brackets (`[ ]`), such as `java.lang.Integer[]`.
-+
-<<<
-*** `_java-datatype_`
-+
-Specifies a mappable Java data type. For the mapping of the Java data types to SQL data types, see _sql-datatype_.
-
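-For example, this sketch (the method name and signature are illustrative, modeled on the MONTHLYORDERS procedure
-called in the CALL statement examples) supplies a Java signature so that the OUT parameter maps to the wrapper class
-`java.lang.Integer[]` rather than the primitive `int[]`:
-
-```
-CREATE PROCEDURE sales.monthlyorders(IN monthnum INTEGER, OUT numorders INTEGER)
-   EXTERNAL NAME 'Sales.monthlyOrders (int, java.lang.Integer[])'
-   LIBRARY sales.saleslib
-   LANGUAGE JAVA
-   PARAMETER STYLE JAVA
-   READS SQL DATA;
-```
-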
-* `LIBRARY \[[_catalog-name_.]_schema-name_.]_library-name_`
-+
-specifies the ANSI logical name of a library containing the SPJ method. If you do not fully qualify the library name,
-then {project-name} qualifies it according to the schema of the current session.
-
-* `EXTERNAL SECURITY _external-security-type_`
-+
-determines the privileges, or rights, that users have when executing (or calling) the SPJ. An SPJ can have one of these
-types of external security:
-
-** `INVOKER` determines that users can execute, or invoke, the stored procedure using the privileges of the user who invokes
-the stored procedure. This behavior is referred to as _invoker rights_ and is the default behavior if EXTERNAL SECURITY is
-not specified. Invoker rights allow a user who has the EXECUTE privilege on the SPJ to call the SPJ using his or her existing
-privileges. In this case, the user must be granted privileges to access the underlying database objects on which the SPJ operates.
-+
-NOTE: Granting a user privileges to the underlying database objects gives the user direct access to those database objects,
-which could pose a risk to more sensitive or critical data to which users should not have access. For example, an SPJ
-might operate on a subset of the data in an underlying database object but that database object might contain other
-more sensitive or critical data to which users should not have access.
-
-** `DEFINER` determines that users can execute, or invoke, the stored procedure using the privileges of the user who created
-the stored procedure. This behavior is referred to as _definer rights_. The advantage of definer rights is that users are
-allowed to manipulate data by invoking the stored procedure without having to be granted privileges to the underlying
-database objects. That way, users are restricted from directly accessing or manipulating more sensitive or critical data in
-the database. However, be careful about the users to whom you grant the EXECUTE privilege on an SPJ with DEFINER external security
-because those users will be able to execute the SPJ without requiring privileges to the underlying database objects.
-
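-For example, a sketch (the procedure and method names are hypothetical) that registers a procedure with definer
-rights:
-
-```
-CREATE PROCEDURE sales.adjustprice(IN percent NUMERIC(5,2))
-   EXTERNAL NAME 'Sales.adjustPrice'
-   LIBRARY sales.saleslib
-   EXTERNAL SECURITY DEFINER
-   LANGUAGE JAVA
-   PARAMETER STYLE JAVA
-   MODIFIES SQL DATA;
-```
-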
-<<<
-* `LANGUAGE JAVA`
-+
-specifies that the external user-defined routine is written in the Java language.
-
-* `PARAMETER STYLE JAVA`
-+
-specifies that the run-time conventions for arguments passed to the external user-defined routine are those of the Java language.
-
-* `NO SQL`
-+
-specifies that the SPJ cannot perform SQL operations.
-
-* `CONTAINS SQL | MODIFIES SQL DATA | READS SQL DATA`
-+
-specifies that the SPJ can perform SQL operations. All these options behave the same as `CONTAINS SQL`, meaning that the SPJ
-can read and modify SQL data. Use one of these options to register a method that contains SQL statements. If you do not specify
-an SQL access mode, then the default is `CONTAINS SQL`.
-
-* `DYNAMIC RESULT SETS _integer_`
-+
-specifies the maximum number of result sets that the SPJ can return. This option is applicable only if the method signature
-contains a `java.sql.ResultSet[]` object. If the method contains a result set object, then the valid range is 1 to 255 inclusive.
-The actual number of result sets returned by the SPJ method can be fewer than or equal to this number. If you do not specify
-this option, then the default value is 0 (zero), meaning that the SPJ does not return result sets.
-
-* `TRANSACTION REQUIRED | NO TRANSACTION REQUIRED`
-+
-determines whether the SPJ must run in a transaction inherited from the calling application (`TRANSACTION REQUIRED`, the default
-option) or whether the SPJ runs without inheriting the calling application's transaction (`NO TRANSACTION REQUIRED`). Typically,
-you want the stored procedure to inherit the transaction from the calling application. However, if the SPJ method does
-not access the database or if you want the stored procedure to manage its own transactions, then you should set the stored
-procedure's transaction attribute to NO TRANSACTION REQUIRED. For more information, see
-<<effects_of_the_transaction_attribute_on_spjs,Effects of the Transaction Attribute on SPJs>>.
-
-<<<
-* `DETERMINISTIC | NOT DETERMINISTIC`
-+
-specifies whether the SPJ always returns the same values for OUT and INOUT parameters for a given set of argument values
-(`DETERMINISTIC`) or does not return the same values (`NOT DETERMINISTIC`, the default option). If you specify `DETERMINISTIC`,
-{project-name} is not required to call the SPJ each time to produce results; instead, {project-name} caches the results and
-reuses them during subsequent calls, thus optimizing the CALL statement.
-
-* `NO ISOLATE | ISOLATE`
-+
-specifies that the SPJ executes either in the environment of the database server (`NO ISOLATE`) or in an isolated environment
-(`ISOLATE`, the default option). {project-name} allows both options but always executes the SPJ in the UDR server process (`ISOLATE`).
-
-[[create_procedure_considerations]]
-=== Considerations for CREATE PROCEDURE
-
-[[create_procedure_required_privileges]]
-==== Required Privileges
-
-To issue a CREATE PROCEDURE statement, one of the following must be true:
-
-* You are DB ROOT.
-* You are creating the procedure in a shared schema, and you have the USAGE (or ALL) privilege on the library that will be
-used in the creation of the stored procedure. The USAGE privilege provides you with read access to the library's underlying
-JAR file, which contains the SPJ Java method.
-* You are the private schema owner and have the USAGE (or ALL) privilege on the library that will be used in the creation of
-the stored procedure. The USAGE privilege provides you with read access to the library's underlying JAR file, which contains
-the SPJ Java method.
-* You have the CREATE or CREATE_ROUTINE component level privilege for the SQL_OPERATIONS component and have the USAGE (or ALL)
-privilege on the library that will be used in the creation of the stored procedure. The USAGE privilege provides you with read
-access to the library's underlying JAR file, which contains the SPJ Java method.
-+
-NOTE: In this case, if you create a stored procedure in a private schema, it will be owned by the schema owner.
-
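-For example, read access to a library's JAR file can be granted with a statement along these
-lines (the grantee jsmith is hypothetical; SALESLIB is the library used in the examples below):
-
-```
-GRANT USAGE ON LIBRARY sales.saleslib TO jsmith;
-```
-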
-<<<
-[[effects_of_the_transaction_attribute_on_spjs]]
-==== Effects of the Transaction Attribute on SPJs
-
-===== Transaction Required
-
-_Using Transaction Control Statements or Methods_
-
-If you specify TRANSACTION REQUIRED (the default option), a CALL statement automatically initiates a transaction if there is
-no active transaction. In this case, you should not use transaction control statements (or equivalent JDBC transaction methods)
-in the SPJ method. Transaction control statements include COMMIT WORK and ROLLBACK WORK, and the equivalent JDBC transaction
-methods are `Connection.commit()` and `Connection.rollback()`. If you try to use transaction control statements or methods in an
-SPJ method when the stored procedure's transaction attribute is set to TRANSACTION REQUIRED, then the transaction control statements
-or methods in the SPJ method are ignored, and the Java virtual machine (JVM) does not report any errors or warnings. When the
-stored procedure's transaction attribute is set to TRANSACTION REQUIRED, then you should rely on the transaction control statements
-or methods in the application that calls the stored procedure and allow the calling application to manage the transactions.
-
-_Committing or Rolling Back a Transaction_
-
-If you do not use transaction control statements in the calling application, then the transaction initiated by the CALL statement
-might not automatically commit or roll back changes to the database. When AUTOCOMMIT is ON (the default setting), the database
-engine automatically commits or rolls back any changes made to the database at the end of the CALL statement execution. However,
-when AUTOCOMMIT is OFF, the current transaction remains active until the end of the client session or until you explicitly commit
-or roll back the transaction. To ensure an atomic unit of work when calling an SPJ, use the COMMIT WORK statement in the calling
-application to commit the transaction when the CALL statement succeeds, and use the ROLLBACK WORK statement to roll back the
-transaction when the CALL statement fails.
-
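-For example, with AUTOCOMMIT OFF, a calling application might bracket the CALL statement as in
-this sketch (LOWERPRICE is the procedure registered in the examples below):
-
-```
-BEGIN WORK;
-CALL sales.lowerprice();
-COMMIT WORK;
-```
-
-If the CALL statement fails, the calling application issues ROLLBACK WORK instead of COMMIT WORK.
-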
-<<<
-===== No Transaction Required
-
-In some cases, you might not want the SPJ method to inherit the transaction from the calling application. Instead, you might want
-the stored procedure to manage its own transactions or to run without a transaction. Not inheriting the calling application's
-transaction is useful in these cases:
-
-* The stored procedure performs several long-running operations, such as multiple DDL or table maintenance operations, on the
-database. In this case, you might want to commit those operations periodically from within the SPJ method to avoid locking tables
-for a long time.
-* The stored procedure performs certain SQL operations that must run without an active transaction. For example, INSERT, UPDATE,
-and DELETE statements with the WITH NO ROLLBACK option are rejected when a transaction is already active, as is the case when a
-stored procedure inherits a transaction from the calling application. The PURGEDATA utility is also rejected when a transaction
-is already active.
-* The stored procedure does not access the database. In this case, the stored procedure does not need to inherit the transaction
-from the calling application. By setting the stored procedure's transaction attribute to NO TRANSACTION REQUIRED, you can avoid
-the overhead of the calling application's transaction being propagated to the stored procedure.
-
-In these cases, you should set the stored procedure's transaction attribute to NO TRANSACTION REQUIRED when creating the stored
-procedure.
-
-If you specify NO TRANSACTION REQUIRED and if the SPJ method creates a JDBC default connection, that connection will have autocommit
-enabled by default. You can either use autocommit transactions or disable autocommit (`conn.setAutoCommit(false)`) and use the
-JDBC transaction methods, `Connection.commit()` and `Connection.rollback()`, to commit or roll back work where needed.
-
-<<<
-[[create_procedure_examples]]
-=== Examples of CREATE PROCEDURE
-
-* This CREATE PROCEDURE statement registers an SPJ named LOWERPRICE, which does not accept any arguments:
-+
-```
-SET SCHEMA SALES;
-
-CREATE PROCEDURE lowerprice()
-   EXTERNAL NAME 'Sales.lowerPrice'
-   LIBRARY saleslib
-   LANGUAGE JAVA
-   PARAMETER STYLE JAVA
-   MODIFIES SQL DATA;
-```
-+
-Because the procedure name is not qualified by a catalog and schema, {project-name} qualifies it according to the current
-session settings, where the catalog is TRAFODION (by default) and the schema is set to SALES. Since the procedure needs
-to be able to read and modify SQL data, MODIFIES SQL DATA is specified in the CREATE PROCEDURE statement.
-+
-To call this SPJ, use this CALL statement:
-+
-```
-CALL lowerprice();
-```
-+
-The LOWERPRICE procedure lowers, by 10 percent, the price of items in the database that have 50 or fewer orders.
-
-* This CREATE PROCEDURE statement registers an SPJ named TOTALPRICE, which accepts three parameters (two IN and one INOUT) and
-returns a numeric value, the total price, in the INOUT parameter:
-+
-```
-CREATE PROCEDURE trafodion.sales.totalprice(IN qty NUMERIC (18),
-                                            IN rate VARCHAR (10),
-                                            INOUT price NUMERIC (18,2))
-   EXTERNAL NAME 'Sales.totalPrice'
-   LIBRARY sales.saleslib
-   LANGUAGE JAVA
-   PARAMETER STYLE JAVA
-   NO SQL;
-```
-+
-<<<
-+
-To call this SPJ in TrafCI, use these statements:
-+
-```
-SET PARAM ?p 10.00;
-CALL sales.totalprice(23, 'standard', ?p);
-
-p
---------------------
-              253.97
-
---- SQL operation complete.
-```
-+
-Since the procedure does not read and modify any SQL data, NO SQL is specified in the CREATE PROCEDURE statement.
-
-* This CREATE PROCEDURE statement registers an SPJ named MONTHLYORDERS, which accepts an integer value for the month
-and returns the number of orders:
-+
-```
-CREATE PROCEDURE sales.monthlyorders(IN INT, OUT number INT)
-   EXTERNAL NAME 'Sales.numMonthlyOrders (int, java.lang.Integer[])'
-   LIBRARY sales.saleslib
-   LANGUAGE JAVA
-   PARAMETER STYLE JAVA
-   READS SQL DATA;
-```
-+
-Because the OUT parameter must map to the Java wrapper class `java.lang.Integer`, you must specify the Java
-signature in the EXTERNAL NAME clause. To invoke this SPJ, use this CALL statement:
-+
-```
-CALL sales.monthlyorders(3, ?);
-
-ORDERNUM
------------
-          4
-
---- SQL operation complete.
-```
-
-<<<
-* This CREATE PROCEDURE statement registers an SPJ named ORDERSUMMARY, which accepts a date (formatted as a string) and
-returns information about the orders on or after that date.
-+
-```
-CREATE PROCEDURE sales.ordersummary(IN on_or_after_date VARCHAR (20),
-                                    OUT num_orders LARGEINT)
-   EXTERNAL NAME 'Sales.orderSummary (java.lang.String, long[])'
-   LIBRARY sales.saleslib
-   EXTERNAL SECURITY invoker
-   LANGUAGE JAVA
-   PARAMETER STYLE JAVA
-   READS SQL DATA
-   DYNAMIC RESULT SETS 2;
-```
-+
-To invoke this SPJ, use this CALL statement:
-+
-```
-CALL trafodion.sales.ordersummary('01-01-2014', ?);
-```
-+
-The ORDERSUMMARY procedure returns this information about the orders on or after the specified date, 01-01-2014:
-+
-```
-NUM_ORDERS
---------------------
-                  13
-
-ORDERNUM NUM_PARTS            AMOUNT               ORDER_DATE LAST_NAME
--------- -------------------- -------------------- ---------- --------------------
-  100210                    4             19020.00 2014-04-10 HUGHES
-  100250                    4             22625.00 2014-01-23 HUGHES
-  101220                    4             45525.00 2014-07-21 SCHNABL
-  ... ... ... ... ...
-
---- 13 row(s) selected.
-
-ORDERNUM PARTNUM UNIT_PRICE QTY_ORDERED PARTDESC
--------- ------- ---------- ----------- ------------------
-  100210     244    3500.00           3 PC GOLD, 30 MB
-  100210    2001    1100.00           3 GRAPHIC PRINTER,M1
-  100210    2403     620.00           6 DAISY PRINTER,T2
-  ... ... ... ... ...
-
---- 70 row(s) selected.
-
---- SQL operation complete.
-```
-
-<<<
-[[create_role_statement]]
-== CREATE ROLE Statement
-
-The CREATE ROLE statement creates an SQL role. See <<Roles,Roles>>.
-
-```
-CREATE ROLE role-name [ WITH ADMIN grantor ]
-
-grantor is:
-   database-username
-```
-
-[[create_role_syntax]]
-=== Syntax Description of CREATE ROLE
-
-* `_role-name_`
-+
-is an SQL identifier that specifies the new role. _role-name_ is a regular or delimited
-case-insensitive identifier.
-See <<Case_Insensitive_Delimited_Identifiers,Case-Insensitive Delimited Identifiers>>.
-_role-name_ cannot be an existing role name, and it cannot be a registered database username. However,
-_role-name_ can be a configured directory-service username.
-
-* `WITH ADMIN _grantor_`
-+
-specifies a role owner other than the current user. This is an optional clause.
-
-* `_grantor_`
-+
-specifies a registered database username who is assigned ownership of the role, as in the sketch below.
-
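-For example (the role name SALES_ADMIN and the role owner JSMITH are hypothetical):
-
-```
-CREATE ROLE sales_admin WITH ADMIN jsmith;
-```
-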
-<<<
-[[create_role_considerations]]
-=== Considerations for CREATE ROLE
-
-* To create a role, you must either be DB ROOT or have been granted the MANAGE_ROLES component privilege for SQL_OPERATIONS.
-* PUBLIC, _SYSTEM, NONE, and database user names beginning with DB are reserved. You cannot specify a _role-name_ with any such name.
-
-[[create_role_ownership]]
-==== Role Ownership
-
-You can give role ownership to a user by naming that user as the _grantor_ in the WITH ADMIN clause.
-
-The role owner can perform these operations:
-
-* Grant and revoke the role to users.
-* Drop the role.
-
-Role ownership is permanent. After you create the role, the ownership of the role cannot b

<TRUNCATED>


[08/15] incubator-trafodion git commit: Major reorganization of the Client Installation Guide.

Posted by gt...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/sql_reference/src/asciidoc/_chapters/olap_functions.adoc
----------------------------------------------------------------------
diff --git a/docs/sql_reference/src/asciidoc/_chapters/olap_functions.adoc b/docs/sql_reference/src/asciidoc/_chapters/olap_functions.adoc
index 8923214..d6ace1b 100644
--- a/docs/sql_reference/src/asciidoc/_chapters/olap_functions.adoc
+++ b/docs/sql_reference/src/asciidoc/_chapters/olap_functions.adoc
@@ -1,1078 +1,1078 @@
-////
-/**
-* @@@ START COPYRIGHT @@@
-*
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*   http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing,
-* software distributed under the License is distributed on an
-* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-* KIND, either express or implied.  See the License for the
-* specific language governing permissions and limitations
-* under the License.
-*
-* @@@ END COPYRIGHT @@@
-*/
-////
-
-[[olap_functions]]
-= OLAP Functions
-
-This section describes the syntax and semantics of the Online
-Analytical Processing (OLAP) window functions. The OLAP window functions
-are ANSI compliant.
-
-[[considerations_for_window_functions]]
-== Considerations for Window Functions
-
-These considerations apply to all window functions.
-
-* `_inline-window-specification_`
-+
-The window defined by the _inline-window-specification_ consists of the
-rows specified by the _window-frame-clause_, bounded by the current
-partition. If no PARTITION BY clause is specified, the partition is
-defined to be all the rows of the intermediate result. If a PARTITION BY
-clause is specified, the partition is the set of rows that have the
-same values for the expressions specified in the PARTITION BY clause.
-
-* `_window-frame-clause_`
-+
-DISTINCT is not supported for window functions.
-+
-Use of a FOLLOWING term is not supported. Using a FOLLOWING term results
-in an error.
-+
-If no _window-frame-clause_ is specified, "ROWS BETWEEN UNBOUNDED
-PRECEDING AND UNBOUNDED FOLLOWING" is assumed. This clause is not
-supported because it involves a FOLLOWING term and will result in an
-error.
-+
-"ROWS CURRENT ROW" is equivalent to "ROWS BETWEEN CURRENT ROW AND
-CURRENT ROW".
-+
-"ROWS _preceding-row_" is equivalent to "ROWS BETWEEN _preceding-row_
-AND CURRENT ROW".
-
-=== Nulls
-
-All nulls are eliminated before the function is applied to the set of
-values. If the window contains all NULL values, the result of the window
-function is NULL.
-
-If the specified window for a particular row consists of rows that are
-all before the first row of the partition (no rows in the window), the
-result of the window function is NULL.
-
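-For example, using the persnl.employee sample table from the examples later in this section,
-the window below contains no rows for the first row of the result, so the window function
-returns NULL for that row:
-
-```
-SELECT
-  empnum
-, MIN(salary) OVER (ORDER BY empnum ROWS BETWEEN 3 PRECEDING AND 1 PRECEDING)
-FROM persnl.employee;
-```
-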
-<<<
-[[order_by_clause_supports_expressions_for_olap_functions]]
-== ORDER BY Clause Supports Expressions For OLAP Functions
-
-The ORDER BY clause of the OLAP functions now supports expressions.
-However, use of multiple OLAP functions with different expressions in
-the same query is not supported. The following examples show how
-expressions may be used in the ORDER BY clause.
-
-```
-SELECT
-  -1 * annualsalary neg_total
-, RANK() OVER (ORDER BY -1 * annualsalary) olap_rank
-FROM employee;
-```
-
-Using an aggregate in the ORDER BY clause:
-
-```
-SELECT
-  num
-, RANK() OVER (ORDER BY SUM(annualsalary)) olap_rank
-FROM employee
-GROUP BY num;
-```
-
-Using multiple functions with the same expression in the ORDER BY clause:
-
-```
-SELECT
-  num
-, workgroupnum
-, RANK() OVER (ORDER BY SUM (annualsalary)*num) olap_rank
-, DENSE_RANK() OVER (ORDER BY SUM (annualsalary)*num) olap_drank
-, ROW_NUMBER() OVER (ORDER BY SUM (annualsalary)*num) olap_num
-FROM employee
-GROUP BY num, workgroupnum, annualsalary;
-```
-
-Using more functions with the same expression in the ORDER BY clause:
-
-```
-SELECT
-  num
-, workgroupnum
-, annualsalary
-, SUM(AnnualSalary) OVER (ORDER BY SUM(annualsalary)*num ROWS UNBOUNDED PRECEDING)
-, AVG(AnnualSalary) OVER (ORDER BY SUM(annualsalary)*num ROWS UNBOUNDED PRECEDING)
-, MIN(AnnualSalary) OVER (ORDER BY SUM(annualsalary)*num ROWS UNBOUNDED PRECEDING)
-, MAX(AnnualSalary) OVER (ORDER BY SUM(annualsalary)*num ROWS UNBOUNDED PRECEDING)
-, VARIANCE(AnnualSalary) OVER (ORDER BY SUM(annualsalary)*num ROWS UNBOUNDED PRECEDING)
-, STDDEV(AnnualSalary) OVER (ORDER BY SUM(annualsalary)*num ROWS UNBOUNDED PRECEDING)
-, COUNT(AnnualSalary) OVER (ORDER BY SUM(annualsalary)*num ROWS UNBOUNDED PRECEDING)
-FROM employee
-GROUP BY num, workgroupnum, annualsalary;
-```
-
-<<<
-[[limitations_for_window_functions]]
-== Limitations for Window Functions
-
-These limitations apply to all window functions.
-
-* The ANSI _window-clause_ is not supported by {project-name}. Only the
-_inline-window-specification_ is supported. An attempt to use an ANSI
-_window-clause_ will result in a syntax error.
-
-* The _window-frame-clause_ cannot contain a FOLLOWING term, either
-explicitly or implicitly. Because the default window frame clause
-contains an implicit FOLLOWING ("ROWS BETWEEN UNBOUNDED PRECEDING AND
-UNBOUNDED FOLLOWING"), the default is not supported. So, practically,
-the _window-frame-clause_ is not optional. An attempt to use a FOLLOWING
-term, either explicitly or implicitly, will result in the "4343" error
-message.
-
-* The window frame units can only be ROWS. RANGE is not supported by
-{project-name}. An attempt to use RANGE will result in a syntax error.
-
-* The ANSI _window-frame-exclusion-specification_ is not supported by
-{project-name}. An attempt to use a _window-frame-exclusion-specification_
-will result in a syntax error.
-
-* Multiple _inline-window-specifications_ in a single SELECT clause are
-not supported. For each window function within a SELECT clause, the
-ORDER BY clause and PARTITION BY specifications must be identical. The
-window frame can vary within a SELECT clause, as the sketch following
-this list shows. An attempt to use multiple
-_inline-window-specifications_ in a single SELECT clause will result in
-the "4340" error message.
-
-* The ANSI _null-ordering-specification_ within the ORDER BY clause is
-not supported by {project-name}. Null values will always be sorted as if they
-are greater than all non-null values. This is slightly different from a
-null ordering of NULLS LAST. An attempt to use a
-_null-ordering-specification_ will result in a syntax error.
-
-* The ANSI _filter-clause_ is not supported for window functions by
-{project-name}. In ANSI SQL, the _filter-clause_ applies to all aggregate functions
-(grouped and windowed); the _filter-clause_ is not currently
-supported for grouped aggregate functions either. An attempt to use a
-_filter-clause_ will result in a syntax error.
-
-* The DISTINCT value for the _set-qualifier-clause_ within a window
-function is not supported. Only the ALL value is supported for the
-_set-qualifier-clause_ within a window function. An attempt to use
-DISTINCT in a window function will result in the "4341" error message.
-
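-For example, this query is allowed because both window functions share the same PARTITION BY
-and ORDER BY specifications and only the window frames differ (persnl.employee is the sample
-table used later in this section):
-
-```
-SELECT
-  empnum
-, SUM(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS UNBOUNDED PRECEDING)
-, AVG(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS 3 PRECEDING)
-FROM persnl.employee;
-```
-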
-<<<
-[[avg_window_function]]
-== AVG Window Function
-
-AVG is a window function that returns the average of non-null values of
-the given expression for the current window specified by the
-_inline-window-specification_.
-
-```
-AVG ([ALL] expression) OVER (inline-window-specification)
-```
-
-* `_inline-window-specification_` is:
-+
-```
-[PARTITION BY expression [, expression]...]
-[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
-                       [,expression [ASC[ENDING] | DESC[ENDING]]]...]
-[ window-frame-clause ]
-```
-
-* `_window-frame-clause_` is:
-+
-```
-  ROWS CURRENT ROW
-| ROWS preceding-row
-| ROWS BETWEEN preceding-row AND preceding-row
-| ROWS BETWEEN preceding-row AND CURRENT ROW
-| ROWS BETWEEN preceding-row AND following-row
-| ROWS BETWEEN CURRENT ROW AND CURRENT ROW
-| ROWS BETWEEN CURRENT ROW AND following-row
-| ROWS BETWEEN following-row AND following-row
-```
-
-* `_preceding-row_` is:
-+
-```
-  UNBOUNDED PRECEDING
-| unsigned-integer PRECEDING
-```
-
-* `_following-row_` is:
-+
-```
-  UNBOUNDED FOLLOWING
-| unsigned-integer FOLLOWING
-```
-
-<<<
-* `ALL`
-+
-specifies whether duplicate values are included in the computation of
-the AVG of the _expression_. The default option is ALL, which causes
-duplicate values to be included.
-
-* `_expression_`
-+
-specifies a numeric or interval value _expression_ that determines the
-values to average. See <<numeric_value_expressions,Numeric Value Expressions>>
-and <<interval_value_expressions,Interval Value Expressions>>.
-
-* `_inline-window-specification_`
-+
-specifies the window over which the AVG is computed. The
-_inline-window-specification_ can contain an optional PARTITION BY
-clause, an optional ORDER BY clause and an optional window frame clause.
-The PARTITION BY clause specifies how the intermediate result is
-partitioned and the ORDER BY clause specifies how the rows are ordered
-within each partition.
-
-* `_window-frame-clause_`
-+
-specifies the window within the partition over which the AVG is
-computed.
-
-<<<
-[[examples_of_avg_window_function]]
-=== Examples of AVG Window Function
-
-* Return the running average value of the SALARY column:
-+
-```
-SELECT
-  empnum
-, AVG(salary) OVER (ORDER BY empnum ROWS UNBOUNDED PRECEDING)
-FROM persnl.employee;
-```
-
-* Return the running average value of the SALARY column within each
-department:
-+
-```
-SELECT
-  deptnum
-, empnum
-, AVG(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS UNBOUNDED PRECEDING)
-FROM persnl.employee;
-```
-
-* Return the moving average of salary within each department over a
-window of the last 4 rows:
-+
-```
-SELECT
-  deptnum
-, empnum
-, AVG(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS 3 PRECEDING)
-FROM persnl.employee;
-```
-
-<<<
-[[count_window_function]]
-== COUNT Window Function
-
-COUNT is a window function that returns the count of the non-null values
-of the given expression for the current window specified by the
-inline-window-specification.
-
-```
-COUNT {(*) | ([ALL] expression) } OVER (inline-window-specification)
-```
-
-* `_inline-window-specification_` is:
-+
-```
-[PARTITION BY expression [, expression]...]
-[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
-          [,expression [ASC[ENDING] | DESC[ENDING]]]...]
-[ window-frame-clause ]
-```
-
-* `_window-frame-clause_` is:
-+
-```
-  ROWS CURRENT ROW
-| ROWS preceding-row
-| ROWS BETWEEN preceding-row AND preceding-row
-| ROWS BETWEEN preceding-row AND CURRENT ROW
-| ROWS BETWEEN preceding-row AND following-row
-| ROWS BETWEEN CURRENT ROW AND CURRENT ROW
-| ROWS BETWEEN CURRENT ROW AND following-row
-| ROWS BETWEEN following-row AND following-row
-
-* `_preceding-row_` is:
-+
-```
-  UNBOUNDED PRECEDING
-| unsigned-integer PRECEDING
-```
-
-* `_following-row_` is:
-+
-```
-  UNBOUNDED FOLLOWING
-| unsigned-integer FOLLOWING
-```
-
-* `ALL`
-+
-specifies whether duplicate values are included in the computation of
-the COUNT of the _expression_. The default option is ALL, which causes
-duplicate values to be included.
-
-<<<
-* `_expression_`
-+
-specifies a value _expression_ that is to be counted. See
-<<expressions,Expressions>>.
-
-* `_inline-window-specification_`
-+
-specifies the window over which the COUNT is computed. The
-_inline-window-specification_ can contain an optional PARTITION BY
-clause, an optional ORDER BY clause and an optional window frame clause.
-The PARTITION BY clause specifies how the intermediate result is
-partitioned and the ORDER BY clause specifies how the rows are ordered
-within each partition.
-
-* `_window-frame-clause_`
-+
-specifies the window within the partition over which the COUNT is
-computed.
-
-<<<
-[[examples_of_count_window_function]]
-=== Examples of COUNT Window Function
-
-* Return the running count of the SALARY column:
-+
-```
-SELECT
-  empnum
-, COUNT(salary) OVER (ORDER BY empnum ROWS UNBOUNDED PRECEDING)
-FROM persnl.employee;
-```
-
-* Return the running count of the SALARY column within each department:
-+
-```
-SELECT
-  deptnum
-, empnum
-, COUNT(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS UNBOUNDED PRECEDING)
-FROM persnl.employee;
-```
-
-* Return the moving count of salary within each department over a window
-of the last 4 rows:
-+
-```
-SELECT
-  deptnum
-, empnum
-, COUNT(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS 3 PRECEDING)
-FROM persnl.employee;
-```
-
-* Return the running count of employees within each department:
-+
-```
-SELECT
-  deptnum
-, empnum
-, COUNT(*) OVER (PARTITION BY deptnum ORDER BY empnum ROWS UNBOUNDED PRECEDING)
-FROM persnl.employee;
-```
-
-
-<<<
-[[dense_rank_window_function]]
-== DENSE_RANK Window Function
-
-DENSE_RANK is a window function that returns the ranking of each row of
-the current partition specified by the inline-window-specification. The
-ranking is relative to the ordering specified in the
-inline-window-specification. The return value of DENSE_RANK starts at 1
-for the first row of the window. Values of the given expression that are
-equal have the same rank. The value of DENSE_RANK advances 1 when the
-value of the given expression changes.
-
-```
-DENSE_RANK() OVER (inline-window-specification)
-```
-
-* `_inline-window-specification_` is:
-+
-```
-[PARTITION BY expression [, expression]...]
-[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
-          [,expression [ASC[ENDING] | DESC[ENDING]]]...]
-```
-
-* `_inline-window-specification_`
-+
-specifies the window over which the DENSE_RANK is computed. The
-_inline-window-specification_ can contain an optional PARTITION BY
-clause and an optional ORDER BY clause. The PARTITION BY clause
-specifies how the intermediate result is partitioned and the ORDER BY
-clause specifies how the rows are ordered within each partition.
-
-[[examples_of_dense_rank_window_function]]
-=== Examples of DENSE_RANK Window Function
-
-* Return the dense rank for each employee based on employee number:
-+
-```
-SELECT
-  DENSE_RANK() OVER (ORDER BY empnum)
-, *
-FROM persnl.employee;
-```
-
-* Return the dense rank for each employee within each department based
-on salary:
-+
-```
-SELECT
-  DENSE_RANK() OVER (PARTITION BY deptnum ORDER BY salary)
-, *
-FROM persnl.employee;
-```
-
-<<<
-[[max_window_function]]
-== MAX Window Function
-
-MAX is a window function that returns the maximum value of all non-null
-values of the given expression for the current window specified by the
-inline-window-specification.
-
-```
-MAX ([ALL] expression) OVER (inline-window-specification)
-```
-
-* `_inline-window-specification_` is:
-+
-```
-[PARTITION BY expression [, expression]...]
-[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
-          [,expression [ASC[ENDING] | DESC[ENDING]]]...]
-[ window-frame-clause ]
-```
-
-* `_window-frame-clause_` is:
-+
-```
-  ROWS CURRENT ROW
-| ROWS preceding-row
-| ROWS BETWEEN preceding-row AND preceding-row
-| ROWS BETWEEN preceding-row AND CURRENT ROW
-| ROWS BETWEEN preceding-row AND following-row
-| ROWS BETWEEN CURRENT ROW AND CURRENT ROW
-| ROWS BETWEEN CURRENT ROW AND following-row
-| ROWS BETWEEN following-row AND following-row
-```
-
-* `_preceding-row_` is:
-+
-```
-  UNBOUNDED PRECEDING
-| unsigned-integer PRECEDING
-```
-
-* `_following-row_` is:
-+
-```
-  UNBOUNDED FOLLOWING
-| unsigned-integer FOLLOWING
-```
-
-* `ALL`
-+
-specifies whether duplicate values are included in the computation of
-the MAX of the _expression_. The default option is ALL, which causes
-duplicate values to be included.
-
-<<<
-* `_expression_`
-+
-specifies an expression that determines the values over which the MAX is
-computed. See <<expressions,Expressions>>.
-
-* `_inline-window-specification_`
-+
-specifies the window over which the MAX is computed. The
-_inline-window-specification_ can contain an optional PARTITION BY
-clause, an optional ORDER BY clause and an optional window frame clause.
-The PARTITION BY clause specifies how the intermediate result is
-partitioned and the ORDER BY clause specifies how the rows are ordered
-within each partition.
-
-* `_window-frame-clause_`
-+
-specifies the window within the partition over which the MAX is
-computed.
-
-<<<
-[[examples_of_max_window_function]]
-=== Examples of MAX Window Function
-
-* Return the running maximum of the SALARY column:
-+
-```
-SELECT
-  empnum
-, MAX(salary) OVER (ORDER BY empnum ROWS UNBOUNDED PRECEDING)
-FROM persnl.employee;
-```
-
-* Return the running maximum of the SALARY column within each department:
-+
-```
-SELECT
-  deptnum
-, empnum
-, MAX(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS UNBOUNDED PRECEDING)
-FROM persnl.employee;
-```
-
-* Return the moving maximum of salary within each department over a window of the last 4 rows:
-+
-```
-SELECT
-  deptnum
-, empnum
-, MAX(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS 3 PRECEDING)
-FROM persnl.employee;
-```
-
-<<<
-[[min_window_function]]
-== MIN Window Function
-
-MIN is a window function that returns the minimum value of all non-null
-values of the given expression for the current window specified by the
-inline-window-specification.
-
-```
-MIN ([ALL] expression) OVER (inline-window-specification)
-```
-
-* `_inline-window-specification_` is:
-+
-```
-[PARTITION BY expression [, expression]...]
-[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
-          [,expression [ASC[ENDING] | DESC[ENDING]]]...]
-[ window-frame-clause ]
-```
-
-* `_window-frame-clause_` is:
-+
-```
-  ROWS CURRENT ROW
-| ROWS preceding-row
-| ROWS BETWEEN preceding-row AND preceding-row
-| ROWS BETWEEN preceding-row AND CURRENT ROW
-| ROWS BETWEEN preceding-row AND following-row
-| ROWS BETWEEN CURRENT ROW AND CURRENT ROW
-| ROWS BETWEEN CURRENT ROW AND following-row
-| ROWS BETWEEN following-row AND following-row
-```
-
-* `_preceding-row_` is:
-+
-```
-  UNBOUNDED PRECEDING
-| unsigned-integer PRECEDING
-```
-
-* `_following-row_` is:
-+
-```
-  UNBOUNDED FOLLOWING
-| unsigned-integer FOLLOWING
-```
-
-* `ALL`
-+
-specifies whether duplicate values are included in the computation of
-the MIN of the _expression_. The default option is ALL, which causes
-duplicate values to be included.
-
-<<<
-* `_expression_`
-+
-specifies an expression that determines the values over which the MIN is
-computed. See <<expressions,Expressions>>.
-
-* `_inline-window-specification_`
-+
-specifies the window over which the MIN is computed. The
-_inline-window-specification_ can contain an optional PARTITION BY
-clause, an optional ORDER BY clause and an optional window frame clause.
-The PARTITION BY clause specifies how the intermediate result is
-partitioned and the ORDER BY clause specifies how the rows are ordered
-within each partition.
-
-* `_window-frame-clause_`
-+
-specifies the window within the partition over which the MIN is
-computed.
-
-<<<
-[[examples_of_min_window_function]]
-=== Examples of MIN Window Function
-
-* Return the running minimum of the SALARY column:
-+
-```
-SELECT
-  empnum
-, MIN(salary) OVER (ORDER BY empnum ROWS UNBOUNDED PRECEDING)
-FROM persnl.employee;
-```
-
-* Return the running minimum of the SALARY column within each department:
-+
-```
-SELECT
-  deptnum
-, empnum
-, MIN(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS UNBOUNDED PRECEDING)
-FROM persnl.employee;
-```
-
-* Return the moving minimum of salary within each department over a window of the last 4 rows:
-+
-```
-SELECT
-  deptnum
-, empnum
-, MIN(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS 3 PRECEDING)
-FROM persnl.employee;
-```
-
-<<<
-[[rank_window_function]]
-== RANK Window Function
-
-RANK is a window function that returns the ranking of each row of the
-current partition specified by the inline-window-specification. The
-ranking is relative to the ordering specified in the
-_inline-window-specification_. The return value of RANK starts at 1 for
-the first row of the window. Values that are equal have the same rank.
-The value of RANK advances to the relative position of the row in the
-window when the value changes.
-
-```
-RANK() OVER (inline-window-specification)
-```
-
-* `_inline-window-specification_` is:
-+
-```
-[PARTITION BY expression [, expression]...]
-[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
-          [,expression [ASC[ENDING] | DESC[ENDING]]]...]
-```
-
-* `_inline-window-specification_`
-+
-specifies the window over which the RANK is computed. The
-_inline-window-specification_ can contain an optional PARTITION BY
-clause and an optional ORDER BY clause. The PARTITION BY clause
-specifies how the intermediate result is partitioned and the ORDER BY
-clause specifies how the rows are ordered within each partition.
-
-[[examples_of_rank_window_function]]
-=== Examples of RANK Window Function
-
-* Return the rank for each employee based on employee number:
-+
-```
-SELECT
-  RANK() OVER (ORDER BY empnum)
-, *
-FROM persnl.employee;
-```
-
-* Return the rank for each employee within each department based on salary:
-+
-```
-SELECT
-  RANK() OVER (PARTITION BY deptnum ORDER BY salary)
-, *
-FROM persnl.employee;
-```
-
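-RANK and DENSE_RANK can appear in the same SELECT clause because they share the same
-_inline-window-specification_. With tied salaries, RANK skips positions after a tie (for
-example, 1, 2, 2, 4), whereas DENSE_RANK does not (1, 2, 2, 3):
-
-```
-SELECT
-  RANK() OVER (ORDER BY salary)
-, DENSE_RANK() OVER (ORDER BY salary)
-, salary
-FROM persnl.employee;
-```
-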
-<<<
-[[row_number_window_function]]
-== ROW_NUMBER Window Function
-
-ROW_NUMBER is a window function that returns the row number of each row
-of the current window specified by the inline-window-specification.
-
-```
-ROW_NUMBER () OVER (inline-window-specification)
-```
-
-* `_inline-window-specification_` is:
-+
-```
-[PARTITION BY expression [, expression]...]
-[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
-          [,expression [ASC[ENDING] | DESC[ENDING]]]...]
-```
-
-* `_inline-window-specification_`
-+
-specifies the window over which the ROW_NUMBER is computed. The
-_inline-window-specification_ can contain an optional PARTITION BY
-clause and an optional ORDER BY clause. The PARTITION BY clause
-specifies how the intermediate result is partitioned and the ORDER BY
-clause specifies how the rows are ordered within each partition.
-
-[[examples_of_row_number_window_function]]
-=== Examples of ROW_NUMBER Window Function
-
-* Return the row number for each row of the employee table:
-+
-```
-SELECT
-  ROW_NUMBER () OVER(ORDER BY empnum)
-, *
-FROM persnl.employee;
-```
-
-* Return the row number for each row within each department:
-+
-```
-SELECT
-  ROW_NUMBER () OVER(PARTITION BY deptnum ORDER BY empnum)
-, *
-FROM persnl.employee;
-```
-
-<<<
-[[stddev_window_function]]
-== STDDEV Window Function
-
-STDDEV is a window function that returns the standard deviation of the
-non-null values of the given expression for the current window specified by
-the inline-window-specification.
-
-```
-STDDEV ([ALL] expression) OVER (inline-window-specification)
-```
-
-* `_inline-window-specification_` is:
-+
-```
-[PARTITION BY expression [, expression]...]
-[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
-          [,expression [ASC[ENDING] | DESC[ENDING]]]...]
-[ window-frame-clause ]
-```
-
-* `_window-frame-clause_` is:
-+
-```
-  ROWS CURRENT ROW
-| ROWS preceding-row
-| ROWS BETWEEN preceding-row AND preceding-row
-| ROWS BETWEEN preceding-row AND CURRENT ROW
-| ROWS BETWEEN preceding-row AND following-row
-| ROWS BETWEEN CURRENT ROW AND CURRENT ROW
-| ROWS BETWEEN CURRENT ROW AND following-row
-| ROWS BETWEEN following-row AND following-row
-```
-
-* `_preceding-row_` is:
-+
-```
-  UNBOUNDED PRECEDING
-| unsigned-integer PRECEDING
-```
-
-* `_following-row_` is:
-+
-```
-  UNBOUNDED FOLLOWING
-| unsigned-integer FOLLOWING
-```
-
-<<<
-* `ALL`
-+
-specifies whether duplicate values are included in the computation of
-the STDDEV of the _expression_. The default option is ALL, which causes
-duplicate values to be included.
-
-* `_expression_`
-+
-specifies a numeric or interval value _expression_ that determines the
-values over which STDDEV is computed.
-
-* `_inline-window-specification_`
-+
-specifies the window over which the STDDEV is computed. The
-_inline-window-specification_ can contain an optional PARTITION BY
-clause, an optional ORDER BY clause and an optional window frame clause.
-The PARTITION BY clause specifies how the intermediate result is
-partitioned and the ORDER BY clause specifies how the rows are ordered
-within each partition.
-
-* `_window-frame-clause_`
-+
-specifies the window within the partition over which the STDDEV is
-computed.
-
-[[examples_of_stddev]]
-=== Examples of STDDEV Window Function
-
-* Return the standard deviation of the salary for each row of the
-employee table:
-+
-```
-SELECT
-  STDDEV(salary) OVER(ORDER BY empnum ROWS UNBOUNDED PRECEDING)
-, *
-FROM persnl.employee;
-```
-
-* Return the standard deviation for each row within each department:
-+
-```
-SELECT
-  STDDEV(salary) OVER(PARTITION BY deptnum ORDER BY empnum ROWS UNBOUNDED PRECEDING)
-, *
-FROM persnl.employee;
-```
-
-<<<
-[[sum_window_function]]
-== SUM Window Function
-
-SUM is a window function that returns the sum of non-null values of the
-given expression for the current window specified by the
-inline-window-specification.
-
-```
-SUM ([ALL] expression) OVER (inline-window-specification)
-```
-
-* `_inline-window-specification_` is:
-+
-```
-[PARTITION BY expression [, expression]...]
-[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
-          [,expression [ASC[ENDING] | DESC[ENDING]]]...]
-[ window-frame-clause ]
-```
-
-* `_window-frame-clause_` is:
-+
-```
-  ROWS CURRENT ROW
-| ROWS preceding-row
-| ROWS BETWEEN preceding-row AND preceding-row
-| ROWS BETWEEN preceding-row AND CURRENT ROW
-| ROWS BETWEEN preceding-row AND following-row
-| ROWS BETWEEN CURRENT ROW AND CURRENT ROW
-| ROWS BETWEEN CURRENT ROW AND following-row
-| ROWS BETWEEN following-row AND following-row
-```
-* `_preceding-row_` is:
-+
-```
-  UNBOUNDED PRECEDING
-| unsigned-integer PRECEDING
-```
-
-* `_following-row_` is:
-+
-```
-  UNBOUNDED FOLLOWING
-| unsigned-integer FOLLOWING
-```
-
-<<<
-* `ALL`
-+
-specifies whether duplicate values are included in the computation of
-the SUM of the _expression_. The default option is ALL, which causes
-duplicate values to be included.
-
-* `_expression_`
-+
-specifies a numeric or interval value expression that determines the
-values to sum. See <<expressions,Expressions>>.
-
-* `_inline-window-specification_`
-+
-specifies the window over which the SUM is computed. The
-_inline-window-specification_ can contain an optional PARTITION BY
-clause, an optional ORDER BY clause and an optional window frame clause.
-The PARTITION BY clause specifies how the intermediate result is
-partitioned and the ORDER BY clause specifies how the rows are ordered
-within each partition.
-
-* `_window-frame-clause_`
-+
-specifies the window within the partition over which the SUM is computed.
-
-<<<
-[[examples_of_sum_window_function]]
-=== Examples of SUM Window Function
-
-* Return the running sum value of the SALARY column:
-+
-```
-SELECT
-  empnum
-, SUM (salary) OVER (ORDER BY empnum ROWS UNBOUNDED PRECEDING)
-FROM persnl.employee;
-```
-
-* Return the running sum of the SALARY column within each department:
-+
-```
-SELECT
-  deptnum
-, empnum
-, SUM (salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS UNBOUNDED PRECEDING)
-FROM persnl.employee;
-```
-
-* Return the moving sum of the SALARY column within each department over a window of the last 4 rows:
-+
-```
-SELECT
-  deptnum
-, empnum
-, SUM (salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS 3 PRECEDING)
-FROM persnl.employee;
-```
-
-<<<
-[[variance_window_function]]
-== VARIANCE Window Function
-
-VARIANCE is a window function that returns the variance of non-null
-values of the given expression for the current window specified by the
-inline-window-specification.
-
-```
-VARIANCE ([ALL] expression) OVER (inline-window-specification)
-```
-
-* `_inline-window-specification_` is:
-+
-```
-[PARTITION BY expression [, expression]...]
-[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
-          [,expression [ASC[ENDING] | DESC[ENDING]]]...]
-[ window-frame-clause ]
-```
-* `_window-frame-clause_` is:
-+
-```
-  ROWS CURRENT ROW
-| ROWS preceding-row
-| ROWS BETWEEN preceding-row AND preceding-row
-| ROWS BETWEEN preceding-row AND CURRENT ROW
-| ROWS BETWEEN preceding-row AND following-row
-| ROWS BETWEEN CURRENT ROW AND CURRENT ROW
-| ROWS BETWEEN CURRENT ROW AND following-row
-| ROWS BETWEEN following-row AND following-row
-```
-
-* `_preceding-row_` is:
-+
-```
-  UNBOUNDED PRECEDING
-| unsigned-integer PRECEDING
-```
-
-* `_following-row_` is:
-+
-```
-  UNBOUNDED FOLLOWING
-| unsigned-integer FOLLOWING
-```
-
-<<<
-* `ALL`
-+
-specifies whether duplicate values are included in the computation of
-the VARIANCE of the _expression_. The default option is ALL, which causes
-duplicate values to be included.
-
-* `_expression_`
-+
-specifies a numeric or interval value expression that determines the
-values over which the variance is computed.
-See <<expressions,Expressions>>.
-
-* `_inline-window-specification_`
-+
-specifies the window over which the VARIANCE is computed. The
-_inline-window-specification_ can contain an optional PARTITION BY
-clause, an optional ORDER BY clause and an optional window frame clause.
-The PARTITION BY clause specifies how the intermediate result is
-partitioned and the ORDER BY clause specifies how the rows are ordered
-within each partition.
-
-* `_window-frame-clause_`
-+
-specifies the window within the partition over which the VARIANCE is
-computed.
-
-[[examples_of_variance_window_function]]
-=== Examples of VARIANCE Window Function
-
-* Return the variance of the SALARY column:
-+
-```
-SELECT
-  empnum
-, VARIANCE (salary) OVER (ORDER BY empnum ROWS UNBOUNDED PRECEDING)
-FROM persnl.employee;
-```
-
-* Return the variance of the SALARY column within each department:
-+
-```
-SELECT
-  deptnum
-, empnum
-, VARIANCE (salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS UNBOUNDED PRECEDING)
-FROM persnl.employee;
-```
-
-
+////
+/**
+* @@@ START COPYRIGHT @@@
+*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*   http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing,
+* software distributed under the License is distributed on an
+* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+* KIND, either express or implied.  See the License for the
+* specific language governing permissions and limitations
+* under the License.
+*
+* @@@ END COPYRIGHT @@@
+*/
+////
+
+[[olap_functions]]
+= OLAP Functions
+
+This section describes the syntax and semantics of the Online
+Analytical Processing (OLAP) window functions. The OLAP window functions
+are ANSI compliant.
+
+[[considerations_for_window_functions]]
+== Considerations for Window Functions
+
+These considerations apply to all window functions.
+
+* `_inline-window-specification_`
++
+The window defined by the _inline-window-specification_ consists of the
+rows specified by the _window-frame-clause_, bounded by the current
+partition. If no PARTITION BY clause is specified, the partition is
+defined to be all the rows of the intermediate result. If a PARTITION BY
+clause is specified, the partition is the set of rows that have the
+same values for the expressions specified in the PARTITION BY clause.
+
+* `_window-frame-clause_`
++
+DISTINCT is not supported for window functions.
++
+Use of a FOLLOWING term is not supported. Using a FOLLOWING term results
+in an error.
++
+If no _window-frame-clause_ is specified, "ROWS BETWEEN UNBOUNDED
+PRECEDING AND UNBOUNDED FOLLOWING" is assumed. This clause is not
+supported because it involves a FOLLOWING term and will result in an
+error.
++
+"ROWS CURRENT ROW" is equivalent to "ROWS BETWEEN CURRENT ROW AND
+CURRENT ROW".
++
+"ROWS _preceding-row_" is equivalent to "ROWS BETWEEN _preceding-row_
+AND CURRENT ROW".
+
+=== Nulls
+
+All nulls are eliminated before the function is applied to the set of
+values. If the window contains all NULL values, the result of the window
+function is NULL.
+
+If the specified window for a particular row consists of rows that are
+all before the first row of the partition (no rows in the window), the
+result of the window function is NULL.
+
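+For example, using the persnl.employee sample table from the examples later in this section,
+the window below contains no rows for the first row of the result, so the window function
+returns NULL for that row:
+
+```
+SELECT
+  empnum
+, MIN(salary) OVER (ORDER BY empnum ROWS BETWEEN 3 PRECEDING AND 1 PRECEDING)
+FROM persnl.employee;
+```
+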
+<<<
+[[order_by_clause_supports_expressions_for_olap_functions]]
+== ORDER BY Clause Supports Expressions For OLAP Functions
+
+The ORDER BY clause of the OLAP functions now supports expressions.
+However, use of multiple OLAP functions with different expressions in
+the same query is not supported. The following examples show how
+expressions may be used in the ORDER BY clause.
+
+```
+SELECT
+  -1 * annualsalary neg_total
+, RANK() OVER (ORDER BY -1 * annualsalary) olap_rank
+FROM employee;
+```
+
+Using an aggregate in the ORDER BY clause:
+
+```
+SELECT
+  num
+, RANK() OVER (ORDER BY SUM(annualsalary)) olap_rank
+FROM employee
+GROUP BY num;
+```
+
+Using multiple functions with the same expression in the ORDER BY clause:
+
+```
+SELECT
+  num
+, workgroupnum
+, RANK() OVER (ORDER BY SUM (annualsalary)*num) olap_rank
+, DENSE_RANK() OVER (ORDER BY SUM (annualsalary)*num) olap_drank
+, ROW_NUMBER() OVER (ORDER BY SUM (annualsalary)*num) olap_num
+FROM employee
+GROUP BY num, workgroupnum, annualsalary;
+```
+
+Using more functions with the same expression in the ORDER BY clause:
+
+```
+SELECT
+  num
+, workgroupnum
+, annualsalary
+, SUM(AnnualSalary) OVER (ORDER BY SUM(annualsalary)*num ROWS UNBOUNDED PRECEDING)
+, AVG(AnnualSalary) OVER (ORDER BY SUM(annualsalary)*num ROWS UNBOUNDED PRECEDING)
+, MIN(AnnualSalary) OVER (ORDER BY SUM(annualsalary)*num ROWS UNBOUNDED PRECEDING)
+, MAX(AnnualSalary) OVER (ORDER BY SUM(annualsalary)*num ROWS UNBOUNDED PRECEDING)
+, VARIANCE(AnnualSalary) OVER (ORDER BY SUM(annualsalary)*num ROWS UNBOUNDED PRECEDING)
+, STDDEV(AnnualSalary) OVER (ORDER BY SUM(annualsalary)*num ROWS UNBOUNDED PRECEDING)
+, COUNT(AnnualSalary) OVER (ORDER BY SUM(annualsalary)*num ROWS UNBOUNDED PRECEDING)
+FROM employee
+GROUP BY num, workgroupnum, annualsalary;
+```
+
+<<<
+[[limitations_for_window_functions]]
+== Limitations for Window Functions
+
+These limitations apply to all window functions.
+
+* The ANSI _window-clause_ is not supported by {project-name}. Only the
+_inline-window-specification_ is supported. An attempt to use an ANSI
+_window-clause_ will result in a syntax error.
+
+* The _window-frame-clause_ cannot contain a FOLLOWING term, either
+explicitly or implicitly. Because the default window frame clause
+contains an implicit FOLLOWING ("ROWS BETWEEN UNBOUNDED PRECEDING AND
+UNBOUNDED FOLLOWING"), the default is not supported. So, practically,
+the _window-frame-clause_ is not optional. An attempt to use a FOLLOWING
+term, either explicitly or implicitly, will result in the "4343" error
+message.
+
+* The window frame units can only be ROWS. RANGE is not supported by
+{project-name}. An attempt to use RANGE will result in a syntax error.
+
+* The ANSI _window-frame-exclusion-specification_ is not supported by
+{project-name}. An attempt to use a _window-frame-exclusion-specification_
+will result in a syntax error.
+
+* Multiple _inline-window-specifications_ in a single SELECT clause are
+not supported. For each window function within a SELECT clause, the
+ORDER BY clause and PARTITION BY specifications must be identical. The
+window frame can vary within a SELECT clause, as the sketch following
+this list shows. An attempt to use multiple
+_inline-window-specifications_ in a single SELECT clause will result in
+the "4340" error message.
+
+* The ANSI _null-ordering-specification_ within the ORDER BY clause is
+not supported by {project-name}. Null values will always be sorted as if they
+are greater than all non-null values. This is slightly different from a
+null ordering of NULLS LAST. An attempt to use a
+_null-ordering-specification_ will result in a syntax error.
+
+* The ANSI _filter-clause_ is not supported for window functions by
+{project-name}. In ANSI SQL, the _filter-clause_ applies to all aggregate functions
+(grouped and windowed); the _filter-clause_ is not currently
+supported for grouped aggregate functions either. An attempt to use a
+_filter-clause_ will result in a syntax error.
+
+* The DISTINCT value for the _set-qualifier-clause_ within a window
+function is not supported. Only the ALL value is supported for the
+_set-qualifier-clause_ within a window function. An attempt to use
+DISTINCT in a window function will result in the "4341" error message.
+
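+For example, this query is allowed because both window functions share the same PARTITION BY
+and ORDER BY specifications and only the window frames differ (persnl.employee is the sample
+table used later in this section):
+
+```
+SELECT
+  empnum
+, SUM(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS UNBOUNDED PRECEDING)
+, AVG(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS 3 PRECEDING)
+FROM persnl.employee;
+```
+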
+<<<
+[[avg_window_function]]
+== AVG Window Function
+
+AVG is a window function that returns the average of non-null values of
+the given expression for the current window specified by the
+_inline-window-specification_.
+
+```
+AVG ([ALL] expression) OVER (inline-window-specification)
+```
+
+* `_inline-window-specification_` is:
++
+```
+[PARTITION BY expression [, expression]...]
+[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
+                       [,expression [ASC[ENDING] | DESC[ENDING]]]...]
+[ window-frame-clause ]
+```
+
+* `_window-frame-clause_` is:
++
+```
+  ROWS CURRENT ROW
+| ROWS preceding-row
+| ROWS BETWEEN preceding-row AND preceding-row
+| ROWS BETWEEN preceding-row AND CURRENT ROW
+| ROWS BETWEEN preceding-row AND following-row
+| ROWS BETWEEN CURRENT ROW AND CURRENT ROW
+| ROWS BETWEEN CURRENT ROW AND following-row
+| ROWS BETWEEN following-row AND following-row
+```
+
+* `_preceding-row_` is:
++
+```
+  UNBOUNDED PRECEDING
+| unsigned-integer PRECEDING
+```
+
+* `_following-row_` is:
++
+```
+  UNBOUNDED FOLLOWING
+| unsigned-integer FOLLOWING
+```
+
+<<<
+* `ALL`
++
+specifies whether duplicate values are included in the computation of
+the AVG of the _expression_. The default option is ALL, which causes
+duplicate values to be included.
+
+* `_expression_`
++
+specifies a numeric or interval value _expression_ that determines the
+values to average. See <<numeric_value_expressions,Numeric Value Expressions>>
+and <<interval_value_expressions,Interval Value Expressions>>.
+
+* `_inline-window-specification_`
++
+specifies the window over which the AVG is computed. The
+_inline-window-specification_ can contain an optional PARTITION BY
+clause, an optional ORDER BY clause and an optional window frame clause.
+The PARTITION BY clause specifies how the intermediate result is
+partitioned and the ORDER BY clause specifies how the rows are ordered
+within each partition.
+
+* `_window-frame-clause_`
++
+specifies the window within the partition over which the AVG is
+computed.
+
+<<<
+[[examples_of_avg_window_function]]
+=== Examples of AVG Window Function
+
+* Return the running average value of the SALARY column:
++
+```
+SELECT
+  empnum
+, AVG(salary) OVER (ORDER BY empnum ROWS UNBOUNDED PRECEDING)
+FROM persnl.employee;
+```
+
+* Return the running average value of the SALARY column within each
+department:
++
+```
+SELECT
+  deptnum
+, empnum
+, AVG(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS UNBOUNDED PRECEDING)
+FROM persnl.employee;
+```
+
+* Return the moving average of salary within each department over a
+window of the last 4 rows:
++
+```
+SELECT
+  deptnum
+, empnum
+, AVG(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS 3 PRECEDING)
+FROM persnl.employee;
+```
+
+<<<
+[[count_window_function]]
+== COUNT Window Function
+
+COUNT is a window function that returns the count of the non-null values
+of the given expression for the current window specified by the
+inline-window-specification.
+
+```
+COUNT {(*) | ([ALL] expression) } OVER (inline-window-specification)
+```
+
+* `_inline-window-specification_` is:
++
+```
+[PARTITION BY expression [, expression]...]
+[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
+          [,expression [ASC[ENDING] | DESC[ENDING]]]...]
+[ window-frame-clause ]
+```
+
+* `_window-frame-clause_` is:
++
+```
+  ROWS CURRENT ROW
+| ROWS preceding-row
+| ROWS BETWEEN preceding-row AND preceding-row
+| ROWS BETWEEN preceding-row AND CURRENT ROW
+| ROWS BETWEEN preceding-row AND following-row
+| ROWS BETWEEN CURRENT ROW AND CURRENT ROW
+| ROWS BETWEEN CURRENT ROW AND following-row
+| ROWS BETWEEN following-row AND following-row
+
+* `_preceding-row_` is:
++
+```
+  UNBOUNDED PRECEDING
+| unsigned-integer PRECEDING
+```
+
+* `_following-row_` is:
++
+```
+  UNBOUNDED FOLLOWING
+| unsigned-integer FOLLOWING
+```
+
+* `ALL`
++
+specifies whether duplicate values are included in the computation of
+the COUNT of the _expression_. The default option is ALL, which causes
+duplicate values to be included.
+
+<<<
+* `_expression_`
++
+specifies a value _expression_ that is to be counted. See
+<<expressions,Expressions>>.
+
+* `_inline-window-specification_`
++
+specifies the window over which the COUNT is computed. The
+_inline-window-specification_ can contain an optional PARTITION BY
+clause, an optional ORDER BY clause and an optional window frame clause.
+The PARTITION BY clause specifies how the intermediate result is
+partitioned and the ORDER BY clause specifies how the rows are ordered
+within each partition.
+
+* `_window-frame-clause_`
++
+specifies the window within the partition over which the COUNT is
+computed.
+
+<<<
+[[examples_of_count_window_function]]
+=== Examples of COUNT Window Function
+
+* Return the running count of the SALARY column:
++
+```
+SELECT
+  empnum
+, COUNT(salary) OVER (ORDER BY empnum ROWS UNBOUNDED PRECEDING)
+FROM persnl.employee;
+```
+
+* Return the running count of the SALARY column within each department:
++
+```
+SELECT
+  deptnum
+, empnum
+, COUNT(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS UNBOUNDED PRECEDING)
+FROM persnl.employee;
+```
+
+* Return the moving count of salary within each department over a window
+of the last 4 rows:
++
+```
+SELECT
+  deptnum
+, empnum
+, COUNT(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS 3 PRECEDING)
+FROM persnl.employee;
+```
+
+* Return the running count of employees within each department:
++
+```
+SELECT
+  deptnum
+, empnum
+, COUNT(*) OVER (PARTITION BY deptnum ORDER BY empnum ROWS UNBOUNDED PRECEDING)
+FROM persnl.employee;
+```
+
+
+<<<
+[[dense_rank_window_function]]
+== DENSE_RANK Window Function
+
+DENSE_RANK is a window function that returns the ranking of each row of
+the current partition specified by the inline-window-specification. The
+ranking is relative to the ordering specified in the
+inline-window-specification. The return value of DENSE_RANK starts at 1
+for the first row of the window. Values of the given expression that are
+equal have the same rank. The value of DENSE_RANK advances 1 when the
+value of the given expression changes.
+
+```
+DENSE_RANK() OVER (inline-window-specification)
+```
+
+* `_inline-window-specification_` is:
++
+```
+[PARTITION BY expression [, expression]...]
+[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
+          [,expression [ASC[ENDING] | DESC[ENDING]]]...]
+```
+
+* `_inline-window-specification_`
++
+specifies the window over which the DENSE_RANK is computed. The
+_inline-window-specification_ can contain an optional PARTITION BY
+clause and an optional ORDER BY clause. The PARTITION BY clause
+specifies how the intermediate result is partitioned and the ORDER BY
+clause specifies how the rows are ordered within each partition.
+
+[[examples_of_dense_rank_window_function]]
+=== Examples of DENSE_RANK Window Function
+
+* Return the dense rank for each employee based on employee number:
++
+```
+SELECT
+  DENSE_RANK() OVER (ORDER BY empnum)
+, *
+FROM persnl.employee;
+```
+
+* Return the dense rank for each employee within each department based
+on salary:
++
+```
+SELECT
+  DENSE_RANK() OVER (PARTITION BY deptnum ORDER BY salary)
+, *
+FROM persnl.employee;
+```
+
+<<<
+[[max_window_function]]
+== MAX Window Function
+
+MAX is a window function that returns the maximum value of all non-null
+values of the given expression for the current window specified by the
+inline-window-specification.
+
+```
+MAX ([ALL] expression) OVER (inline-window-specification)
+```
+
+* `_inline-window-specification_` is:
++
+```
+[PARTITION BY expression [, expression]...]
+[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
+          [,expression [ASC[ENDING] | DESC[ENDING]]]...]
+[ window-frame-clause ]
+```
+
+* `_window-frame-clause_` is:
++
+```
+  ROWS CURRENT ROW
+| ROWS preceding-row
+| ROWS BETWEEN preceding-row AND preceding-row
+| ROWS BETWEEN preceding-row AND CURRENT ROW
+| ROWS BETWEEN preceding-row AND following-row
+| ROWS BETWEEN CURRENT ROW AND CURRENT ROW
+| ROWS BETWEEN CURRENT ROW AND following-row
+| ROWS BETWEEN following-row AND following-row
+```
+
+* `_preceding-row_` is:
++
+```
+  UNBOUNDED PRECEDING
+| unsigned-integer PRECEDING
+```
+
+* `_following-row_` is:
++
+```
+  UNBOUNDED FOLLOWING
+| unsigned-integer FOLLOWING
+```
+
+* `ALL`
++
+specifies whether duplicate values are included in the computation of
+the MAX of the _expression_. The default option is ALL, which causes
+duplicate values to be included.
+
+<<<
+* `_expression_`
++
+specifies an expression that determines the values over which the MAX is
+computed. See <<expressions,Expressions>>.
+
+* `_inline-window-specification_`
++
+specifies the window over which the MAX is computed. The
+_inline-window-specification_ can contain an optional PARTITION BY
+clause, an optional ORDER BY clause and an optional window frame clause.
+The PARTITION BY clause specifies how the intermediate result is
+partitioned and the ORDER BY clause specifies how the rows are ordered
+within each partition.
+
+* `_window-frame-clause_`
++
+specifies the window within the partition over which the MAX is
+computed.
+
+<<<
+[[examples_of_max_window_function]]
+=== Examples of MAX Window Function
+
+* Return the running maximum of the SALARY column:
++
+```
+SELECT
+  empnum
+, MAX(salary) OVER (ORDER BY empnum ROWS UNBOUNDED PRECEDING)
+FROM persnl.employee;
+```
+
+* Return the running maximum of the SALARY column within each department:
++
+```
+SELECT
+  deptnum
+, empnum
+, MAX(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS UNBOUNDED PRECEDING)
+FROM persnl.employee;
+```
+
+* Return the moving maximum of salary within each department over a window of the last 4 rows:
++
+```
+SELECT
+  deptnum
+, empnum
+, MAX(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS 3 PRECEDING)
+FROM persnl.employee;
+```
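+
+* As a further sketch (using a frame with both PRECEDING and FOLLOWING
+bounds, per the grammar above), return a centered moving maximum over the
+previous row, the current row, and the next row within each department:
++
+```
+SELECT
+  deptnum
+, empnum
+, MAX(salary) OVER (PARTITION BY deptnum ORDER BY empnum
+                    ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING)
+FROM persnl.employee;
+```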
+
+<<<
+[[min_window_function]]
+== MIN Window Function
+
+MIN is a window function that returns the minimum of all non-null
+values of the given expression for the current window specified by the
+_inline-window-specification_.
+
+```
+MIN ([ALL] expression) OVER (inline-window-specification)
+```
+
+* `_inline-window-specification_` is:
++
+```
+[PARTITION BY expression [, expression]...]
+[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
+          [,expression [ASC[ENDING] | DESC[ENDING]]]...]
+[ window-frame-clause ]
+```
+
+* `_window-frame-clause_` is:
++
+```
+  ROWS CURRENT ROW
+| ROWS preceding-row
+| ROWS BETWEEN preceding-row AND preceding-row
+| ROWS BETWEEN preceding-row AND CURRENT ROW
+| ROWS BETWEEN preceding-row AND following-row
+| ROWS BETWEEN CURRENT ROW AND CURRENT ROW
+| ROWS BETWEEN CURRENT ROW AND following-row
+| ROWS BETWEEN following-row AND following-row
+```
+
+* `_preceding-row_` is:
++
+```
+  UNBOUNDED PRECEDING
+| unsigned-integer PRECEDING
+```
+
+* `_following-row_` is:
++
+```
+  UNBOUNDED FOLLOWING
+| unsigned-integer FOLLOWING
+```
+
+* `ALL`
++
+specifies whether duplicate values are included in the computation of
+the MIN of the _expression_. The default option is ALL, which causes
+duplicate values to be included.
+
+<<<
+* `_expression_`
++
+specifies an expression that determines the values over which the MIN is
+computed. See <<expressions,Expressions>>.
+
+* `_inline-window-specification_`
++
+specifies the window over which the MIN is computed. The
+_inline-window-specification_ can contain an optional PARTITION BY
+clause, an optional ORDER BY clause and an optional window frame clause.
+The PARTITION BY clause specifies how the intermediate result is
+partitioned and the ORDER BY clause specifies how the rows are ordered
+within each partition.
+
+* `_window-frame-clause_`
++
+specifies the window within the partition over which the MIN is
+computed.
+
+<<<
+[[examples_of_min_window_function]]
+=== Examples of MIN Window Function
+
+* Return the running minimum of the SALARY column:
++
+```
+SELECT
+  empnum
+, MIN(salary) OVER (ORDER BY empnum ROWS UNBOUNDED PRECEDING)
+FROM persnl.employee;
+```
+
+* Return the running minimum of the SALARY column within each department:
++
+```
+SELECT
+  deptnum
+, empnum
+, MIN(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS UNBOUNDED PRECEDING)
+FROM persnl.employee;
+```
+
+* Return the moving minimum of salary within each department over a window of the last 4 rows:
++
+```
+SELECT
+  deptnum
+, empnum
+, MIN(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS 3 PRECEDING)
+FROM persnl.employee;
+```
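+
+* As a further sketch, return the minimum salary from the current row
+through the end of each department partition (a "remaining minimum"),
+using a frame that ends at UNBOUNDED FOLLOWING:
++
+```
+SELECT
+  deptnum
+, empnum
+, MIN(salary) OVER (PARTITION BY deptnum ORDER BY empnum
+                    ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)
+FROM persnl.employee;
+```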
+
+<<<
+[[rank_window_function]]
+== RANK Window Function
+
+RANK is a window function that returns the ranking of each row of the
+current partition specified by the _inline-window-specification_. The
+ranking is relative to the ordering specified in the
+_inline-window-specification_. The return value of RANK starts at 1 for
+the first row of the window. Rows with equal values receive the same
+rank. When the ordering value changes, RANK advances to the relative
+position of the row in the window, so tied values leave gaps in the
+ranking sequence (unlike DENSE_RANK).
+
+```
+RANK() OVER (inline-window-specification)
+```
+
+* `_inline-window-specification_` is:
++
+```
+[PARTITION BY expression [, expression]...]
+[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
+          [,expression [ASC[ENDING] | DESC[ENDING]]]...]
+```
+
+* `_inline-window-specification_`
++
+specifies the window over which the RANK is computed. The
+_inline-window-specification_ can contain an optional PARTITION BY
+clause and an optional ORDER BY clause. The PARTITION BY clause
+specifies how the intermediate result is partitioned and the ORDER BY
+clause specifies how the rows are ordered within each partition.
+
+[[examples_of_rank_window_function]]
+=== Examples of RANK Window Function
+
+* Return the rank for each employee based on employee number:
++
+```
+SELECT
+  RANK() OVER (ORDER BY empnum)
+, *
+FROM persnl.employee;
+```
+
+* Return the rank for each employee within each department based on salary:
++
+```
+SELECT
+  RANK() OVER (PARTITION BY deptnum ORDER BY salary)
+, *
+FROM persnl.employee;
+```
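+
+* For illustration (a sketch), rank employees within each department from
+the highest salary down; tied salaries share a rank, and the next distinct
+salary skips ahead by the number of tied rows:
++
+```
+SELECT
+  RANK() OVER (PARTITION BY deptnum ORDER BY salary DESC)
+, *
+FROM persnl.employee;
+```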
+
+<<<
+[[row_number_window_function]]
+== ROW_NUMBER Window Function
+
+ROW_NUMBER is a window function that returns the sequential row number of
+each row within the current window specified by the
+_inline-window-specification_.
+
+```
+ROW_NUMBER () OVER (inline-window-specification)
+```
+
+* `_inline-window-specification_` is:
++
+```
+[PARTITION BY expression [, expression]...]
+[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
+          [,expression [ASC[ENDING] | DESC[ENDING]]]...]
+```
+
+* `_inline-window-specification_`
++
+specifies the window over which the ROW_NUMBER is computed. The
+_inline-window-specification_ can contain an optional PARTITION BY
+clause and an optional ORDER BY clause. The PARTITION BY clause
+specifies how the intermediate result is partitioned and the ORDER BY
+clause specifies how the rows are ordered within each partition.
+
+[[examples_of_row_number_window_function]]
+=== Examples of ROW_NUMBER Window Function
+
+* Return the row number for each row of the employee table:
++
+```
+SELECT
+  ROW_NUMBER () OVER(ORDER BY empnum)
+, *
+FROM persnl.employee;
+```
+
+* Return the row number for each row within each department:
++
+```
+SELECT
+  ROW_NUMBER () OVER(PARTITION BY deptnum ORDER BY empnum)
+, *
+FROM persnl.employee;
+```
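+
+* A common application (a sketch, assuming derived tables in the FROM
+clause are available): keep only the three highest-paid employees per
+department by filtering on the row number assigned in an inline view:
++
+```
+SELECT *
+FROM
+  (SELECT
+     ROW_NUMBER() OVER (PARTITION BY deptnum ORDER BY salary DESC) AS seq_num
+   , empnum
+   , deptnum
+   , salary
+   FROM persnl.employee) ranked
+WHERE seq_num <= 3;
+```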
+
+<<<
+[[stddev_window_function]]
+== STDDEV Window Function
+
+STDDEV is a window function that returns the standard deviation of the
+non-null values of the given expression for the current window specified
+by the _inline-window-specification_.
+
+```
+STDDEV ([ALL] expression) OVER (inline-window-specification)
+```
+
+* `_inline-window-specification_` is:
++
+```
+[PARTITION BY expression [, expression]...]
+[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
+          [,expression [ASC[ENDING] | DESC[ENDING]]]...]
+[ window-frame-clause ]
+```
+
+* `_window-frame-clause_` is:
++
+```
+  ROWS CURRENT ROW
+| ROWS preceding-row
+| ROWS BETWEEN preceding-row AND preceding-row
+| ROWS BETWEEN preceding-row AND CURRENT ROW
+| ROWS BETWEEN preceding-row AND following-row
+| ROWS BETWEEN CURRENT ROW AND CURRENT ROW
+| ROWS BETWEEN CURRENT ROW AND following-row
+| ROWS BETWEEN following-row AND following-row
+```
+
+* `_preceding-row_` is:
++
+```
+  UNBOUNDED PRECEDING
+| unsigned-integer PRECEDING
+```
+
+* `_following-row_` is:
++
+```
+  UNBOUNDED FOLLOWING
+| unsigned-integer FOLLOWING
+```
+
+<<<
+* `ALL`
++
+specifies whether duplicate values are included in the computation of
+the STDDEV of the _expression_. The default option is ALL, which causes
+duplicate values to be included.
+
+* `_expression_`
++
+specifies a numeric or interval value _expression_ that determines the
+values over which STDDEV is computed. See <<expressions,Expressions>>.
+
+* `_inline-window-specification_`
++
+specifies the window over which the STDDEV is computed. The
+_inline-window-specification_ can contain an optional PARTITION BY
+clause, an optional ORDER BY clause and an optional window frame clause.
+The PARTITION BY clause specifies how the intermediate result is
+partitioned and the ORDER BY clause specifies how the rows are ordered
+within each partition.
+
+* `_window-frame-clause_`
++
+specifies the window within the partition over which the STDDEV is
+computed.
+
+[[examples_of_stddev]]
+=== Examples of STDDEV Window Function
+
+* Return the running standard deviation of the SALARY column for each row
+of the employee table:
++
+```
+SELECT
+  STDDEV(salary) OVER(ORDER BY empnum ROWS UNBOUNDED PRECEDING)
+, *
+FROM persnl.employee;
+```
+
+* Return the running standard deviation of the SALARY column within each
+department:
++
+```
+SELECT
+  STDDEV(salary) OVER(PARTITION BY deptnum ORDER BY empnum ROWS UNBOUNDED PRECEDING)
+, *
+FROM persnl.employee;
+```
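+
+* As a further sketch, return a moving standard deviation of the SALARY
+column within each department over a window of the last 4 rows:
++
+```
+SELECT
+  deptnum
+, empnum
+, STDDEV(salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS 3 PRECEDING)
+FROM persnl.employee;
+```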
+
+<<<
+[[sum_window_function]]
+== SUM Window Function
+
+SUM is a window function that returns the sum of the non-null values of
+the given expression for the current window specified by the
+_inline-window-specification_.
+
+```
+SUM ([ALL] expression) OVER (inline-window-specification)
+```
+
+* `_inline-window-specification_` is:
++
+```
+[PARTITION BY expression [, expression]...]
+[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
+          [,expression [ASC[ENDING] | DESC[ENDING]]]...]
+[ window-frame-clause ]
+```
+
+* `_window-frame-clause_` is:
++
+```
+  ROWS CURRENT ROW
+| ROWS preceding-row
+| ROWS BETWEEN preceding-row AND preceding-row
+| ROWS BETWEEN preceding-row AND CURRENT ROW
+| ROWS BETWEEN preceding-row AND following-row
+| ROWS BETWEEN CURRENT ROW AND CURRENT ROW
+| ROWS BETWEEN CURRENT ROW AND following-row
+| ROWS BETWEEN following-row AND following-row
+```
+
+* `_preceding-row_` is:
++
+```
+  UNBOUNDED PRECEDING
+| unsigned-integer PRECEDING
+```
+
+* `_following-row_` is:
++
+```
+  UNBOUNDED FOLLOWING
+| unsigned-integer FOLLOWING
+```
+
+<<<
+* `ALL`
++
+specifies whether duplicate values are included in the computation of
+the SUM of the _expression_. The default option is ALL, which causes
+duplicate values to be included.
+
+* `_expression_`
++
+specifies a numeric or interval value expression that determines the
+values to sum. See <<expressions,Expressions>>.
+
+* `_inline-window-specification_`
++
+specifies the window over which the SUM is computed. The
+_inline-window-specification_ can contain an optional PARTITION BY
+clause, an optional ORDER BY clause and an optional window frame clause.
+The PARTITION BY clause specifies how the intermediate result is
+partitioned and the ORDER BY clause specifies how the rows are ordered
+within each partition.
+
+* `_window-frame-clause_`
++
+specifies the window within the partition over which the SUM is computed.
+
+<<<
+[[examples_of_sum_window_function]]
+=== Examples of SUM Window Function
+
+* Return the running sum of the SALARY column:
++
+```
+SELECT
+  empnum
+, SUM (salary) OVER (ORDER BY empnum ROWS UNBOUNDED PRECEDING)
+FROM persnl.employee;
+```
+
+* Return the running sum of the SALARY column within each department:
++
+```
+SELECT
+  deptnum
+, empnum, SUM (salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS UNBOUNDED PRECEDING)
+FROM persnl.employee;
+```
+
+* Return the moving sum of the SALARY column within each department over a window of the last 4 rows:
++
+```
+SELECT
+  deptnum
+, empnum
+, SUM (salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS 3 PRECEDING)
+FROM persnl.employee;
+```
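+
+* As a further sketch, put the department-wide total on every row by using
+a frame that spans the whole partition; dividing salary by this total
+would give each employee's share of the department payroll:
++
+```
+SELECT
+  deptnum
+, empnum
+, salary
+, SUM (salary) OVER (PARTITION BY deptnum ORDER BY empnum
+                     ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
+FROM persnl.employee;
+```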
+
+<<<
+[[variance_window_function]]
+== VARIANCE Window Function
+
+VARIANCE is a window function that returns the variance of the non-null
+values of the given expression for the current window specified by the
+_inline-window-specification_.
+
+```
+VARIANCE ([ALL] expression) OVER (inline-window-specification)
+```
+
+* `_inline-window-specification_` is:
++
+```
+[PARTITION BY expression [, expression]...]
+[ORDER BY expression [ASC[ENDING] | DESC[ENDING]]
+          [,expression [ASC[ENDING] | DESC[ENDING]]]...]
+[ window-frame-clause ]
+```
+
+* `_window-frame-clause_` is:
++
+```
+  ROWS CURRENT ROW
+| ROWS preceding-row
+| ROWS BETWEEN preceding-row AND preceding-row
+| ROWS BETWEEN preceding-row AND CURRENT ROW
+| ROWS BETWEEN preceding-row AND following-row
+| ROWS BETWEEN CURRENT ROW AND CURRENT ROW
+| ROWS BETWEEN CURRENT ROW AND following-row
+| ROWS BETWEEN following-row AND following-row
+```
+
+* `_preceding-row_` is:
++
+```
+  UNBOUNDED PRECEDING
+| unsigned-integer PRECEDING
+```
+
+* `_following-row_` is:
++
+```
+  UNBOUNDED FOLLOWING
+| unsigned-integer FOLLOWING
+```
+
+<<<
+* `ALL`
++
+specifies whether duplicate values are included in the computation of
+the VARIANCE of the _expression_. The default option is ALL, which causes
+duplicate values to be included.
+
+* `_expression_`
++
+specifies a numeric or interval value expression that determines the
+values over which the variance is computed.
+See <<expressions,Expressions>>.
+
+* `_inline-window-specification_`
++
+specifies the window over which the VARIANCE is computed. The
+_inline-window-specification_ can contain an optional PARTITION BY
+clause, an optional ORDER BY clause and an optional window frame clause.
+The PARTITION BY clause specifies how the intermediate result is
+partitioned and the ORDER BY clause specifies how the rows are ordered
+within each partition.
+
+* `_window-frame-clause_`
++
+specifies the window within the partition over which the VARIANCE is
+computed.
+
+[[examples_of_variance_window_function]]
+=== Examples of VARIANCE Window Function
+
+* Return the running variance of the SALARY column:
++
+```
+SELECT
+  empnum
+, VARIANCE (salary) OVER (ORDER BY empnum ROWS UNBOUNDED PRECEDING)
+FROM persnl.employee;
+```
+
+* Return the running variance of the SALARY column within each department:
++
+```
+SELECT
+  deptnum
+, empnum
+, VARIANCE (salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS UNBOUNDED PRECEDING)
+FROM persnl.employee;
+```
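+
+* As a further sketch, return a moving variance of the SALARY column
+within each department over a window of the last 4 rows:
++
+```
+SELECT
+  deptnum
+, empnum
+, VARIANCE (salary) OVER (PARTITION BY deptnum ORDER BY empnum ROWS 3 PRECEDING)
+FROM persnl.employee;
+```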
+
+

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/da748b4d/docs/sql_reference/src/asciidoc/_chapters/reserved_words.adoc
----------------------------------------------------------------------
diff --git a/docs/sql_reference/src/asciidoc/_chapters/reserved_words.adoc b/docs/sql_reference/src/asciidoc/_chapters/reserved_words.adoc
index 0362601..0da11ce 100644
--- a/docs/sql_reference/src/asciidoc/_chapters/reserved_words.adoc
+++ b/docs/sql_reference/src/asciidoc/_chapters/reserved_words.adoc
@@ -1,286 +1,286 @@
-////
-/**
-* @@@ START COPYRIGHT @@@
-*
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*   http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing,
-* software distributed under the License is distributed on an
-* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-* KIND, either express or implied.  See the License for the
-* specific language governing permissions and limitations
-* under the License.
-*
-* @@@ END COPYRIGHT @@@
-*/
-////
-
-[[reserved_words]]
-= Reserved Words
-The words listed in this appendix are reserved for use by {project-name} SQL.
-To prevent syntax errors, avoid using these words as identifiers in
-{project-name} SQL. In {project-name} SQL, if an operating system name contains a
-reserved word, you must enclose the reserved word in double quotes (")
-to access that column or object.
-
-NOTE: In {project-name} SQL, ABSOLUTE, DATA, EVERY, INITIALIZE, OPERATION,
-PATH, SPACE, STATE, STATEMENT, STATIC, and START are not reserved words.
-
-{project-name} SQL treats these words as reserved when they are part of
-{project-name} SQL stored text. They cannot be used as identifiers unless you
-enclose them in double quotes.
-
-[[reserved_sql_identifiers_a]]
-== Reserved SQL Identifiers: A
-
-
-[cols="5*l"]
-|===
-| ACTION   | ADD   | ADMIN    | AFTER         | AGGREGATE
-| ALIAS|   | ALL   | ALLOCATE | ALTER         | AND
-| ANY      | ARE   | ARRAY    | AS            | ASC
-| ASSERTION| ASYNC | AT       | AUTHORIZATION | AVG
-|===
-
-
-[[reserved_sql_identifiers_b]]
-== Reserved SQL Identifiers: B
-
-
-[cols="5*l"]
-|===
-| BEFORE     | BEGIN | BETWEEN | BINARY | BIT
-| BIT_LENGTH | BLOB  | BOOLEAN | BOTH   | BREADTH
-| BY         |       |         |        |
-|===
-
-[[reserved_sql_identifiers_c]]
-== Reserved SQL Identifiers: C
-
-
-[cols="5*l"]
-|===
-| CALL         | CASCADE      | CASCADED          | CASE             | CAST
-| CATALOG      | CHAR         | CHARACTER         | CHARACTER_LENGTH | CHAR_LENGTH
-| CHECK        | CLASS        | CLOB              | CLOSE            | COALESCE
-| COLLATE      | COLLATION    | COLUMN            | COMMIT           | COMPLETION
-| CONNECT      | CONNECTION   | CONSTRAINT        | CONSTRAINTS      | CONSTRUCTOR
-| CONTINUE     | CONVERT      | CORRESPONDING     | COUNT            | CREATE
-| CROSS        | CUBE         | CURRENT           | CURRENT_DATE     | CURRENT_PATH
-| CURRENT_ROLE | CURRENT_TIME | CURRENT_TIMESTAMP | CURRENT_USER     | CURRNT_USR_INTN
-| CURSOR       | CYCLE        |                   |                  |
-|===
-
-
-[[reserved_sql_identifiers_d]]
-== Reserved SQL Identifiers: D
-
-[cols="5*l"]
-|===
-| DATE       | DATETIME   | DAY        | DEALLOCATE    | DEC
-| DECIMAL    | DECLARE    | DEFAULT    | DEFERRABLE    | DEFERRED
-| DELETE     | DEPTH      | DEREF      | DESC          | DESCRIBE
-| DESCRIPTOR | DESTROY    | DESTRUCTOR | DETERMINISTIC | DIAGNOSTICS
-| DICTIONARY | DISCONNECT | DISTINCT   | DOMAIN        | DOUBLE
-| DROP       | DYNAMIC    |            |               |
-|===
-
-
-[[reserved_sql_identifiers_e]]
-== Reserved SQL Identifiers: E
-
-
-[cols="5*l"]
-|===
-| EACH    | ELSE   | ELSEIF   | END       | END-EXEC
-| EQUALS  | ESCAPE | EXCEPT   | EXCEPTION | EXEC
-| EXECUTE | EXISTS | EXTERNAL | EXTRACT   |
-|===
-
-
-== Reserved SQL Identifers:  F
-
-[cols="5*l"]
-|===
-| FALSE   | FETCH    | FIRST    | FLOAT | FOR
-| FOREIGN | FOUND    | FRACTION | FREE  | FROM
-| FULL    | FUNCTION |          |       |
-|===
-
-
-[[reserved_sql_identifiers_g]]
-== Reserved SQL Identifiers G
-
-[cols="5*l"]
-|===
-| GENERAL | GET   | GLOBAL   | GO | GOTO
-| GRANT   | GROUP | GROUPING |    |
-|===  
-
-[[reserved_sql_identifiers_h]]
-== Reserved SQL Identifiers: H
-
-[[reserved_sql_identifiers_i]]
-== Reserved SQL Identifiers: I
-
-
-[cols="5*l"]
-|===
-| IDENTITY    | IF        | IGNORE | IMMEDIATE | IN
-| INDICATOR   | INITIALLY | INNER  | INOUT     | INPUT
-| INSENSITIVE | INSERT    | INT    | INTEGER   | INTERSECT
-| INTERVAL    | INTO      | IS     | ISOLATION | ITERATE
-|===
-
-
-[[reserved_sql_identifiers_j]]
-== Reserved SQL Identifiers J
-
-[[reserved_sql_identifiers_k]]
-== Reserved SQL Identifiers: K
-
-[[reserved_sql_identifiers_l]]
-== Reserved SQL Identifiers: L
-
-[cols="5*l"]
-|===
-| LANGUAGE | LARGE | LAST      | LATERAL        | LEADING
-| LEAVE    | LEFT  | LESS      | LEVEL          | LIKE
-| LIMIT    | LOCAL | LOCALTIME | LOCALTIMESTAMP | LOCATOR
-| LOOP     | LOWER |           |                |
-|===
-
-
-[[reserved_sql_identifiers_m]]
-== Reserved SQL Identifiers: M
-
-[cols="5*l"]
-|===
-| MAINTAIN | MAP   | MATCH  | MATCHED  | MAX
-| MERGE    | MIN   | MINUTE | MODIFIES | MODIFY
-| MODULE   | MONTH |        |          |
-|===
-
-
-[[reserved_sql_identifiers_n]]
-== Reserved SQL Identifiers: N
-
-[cols="5*l"]
-|===
-| NAMES | NATIONAL | NATURAL | NCHAR | NCLOB
-| NEW   | NEXT     | NO      | NONE  | NOT
-| NULL  | NULLIF   | NUMERIC |       |
-|===
-
-[[reserved_sql_identifiers_o]]
-== Reserved SQL Identifiers: O
-
-[cols="5*l"]
-|===
-| OCTET_LENGTH | OF    | OFF    | OID        | OLD
-| ON           | ONLY  | OPEN   | OPERATORS  | OPTION
-| OPTIONS      | OR    | ORDER  | ORDINALITY | OTHERS
-| OUT          | OUTER | OUTPUT | OVERLAPS   |
-|===
-
-
-[[reserved_sql_identifiers_p]]
-== Reserved SQL Identifiers: P
-
-[cols="5*l"]
-|===
-| PAD        | PARAMETER | PARAMETERS | PARTIAL    | PENDANT
-| POSITION   | POSTFIX   | PRECISION  | PREFIX    | PREORDER
-| PREPARE    | PRESERVE  | PRIMARY    | PRIOR     | PRIVATE
-| PRIVILEGES | PROCEDURE | PROTECTED  | PROTOTYPE | PUBLIC
-|===
-
-
-[[reserved_sql_identifiers_q]]
-== Reserved SQL Identifiers: Q
-
-[[reserved_sql_identifiers_r]]
-== Reserved SQL Identifiers: R
-
-[cols="5*l"]
-|===
-| READ       | READS       | REAL     | RECURSIVE | REF
-| REFERENCES | REFERENCING | RELATIVE | REORG     | REORGANIZE
-| REPLACE    | RESIGNAL    | RESTRICT | RESULT    | RETURN
-| RETURNS    | REVOKE      | RIGHT    | ROLLBACK  | ROLLUP
-| ROUTINE    | ROW         | ROWS     |           |
-|===
-
-[[reserved_sql_identifiers_s]]
-== Reserved SQL Identifiers: S
-
-[cols="5*l"]
-|===
-| SAVEPOINT    | SCHEMA         | SCOPE         | SCROLL      | SEARCH
-| SECOND       | SECTION        | SELECT        | SENSITIVE   | SESSION
-| SESSION_USER | SESSN_USR_INTN | SET           | SETS        | SIGNAL
-| SIMILAR      | SIZE           | SMALLINT      | SOME        | SPECIFIC
-| SPECIFICTYPE | SQL            | SQL_CHAR      | SQL_DATE    | SQL_DECIMAL
-| SQL_DOUBLE   | SQL_FLOAT      | SQL_INT       | SQL_INTEGER | SQL_REAL
-| SQL_SMALLINT | SQL_TIME       | SQL_TIMESTAMP | SQL_VARCHAR | SQLCODE
-| SQLERROR     | SQLEXCEPTION   | SQLSTATE      | SQLWARNING  | STRUCTURE
-| SUBSTRING    | SUM            | SYNONYM       | SYSTEM_USER |
-|===
-
-
-[[reserved_sql_identifiers_t]]
-== Reserved SQL Identifiers: T
-
-[cols="5*l"]
-|===
-| TABLE           | TEMPORARY | TERMINATE | TEST        | THAN
-| THEN            | THERE     | TIME      | TIMESTAMP   | TIMEZONE_HOUR
-| TIMEZONE_MINUTE | TO        | TRAILING  | TRANSACTION | TRANSLATE
-| TRANSLATION     | TRANSPOSE | TREAT     | TRIGGER     | TRIM
-| TRUE            |           |           |             |
-|===
-
-[[reserved_sql_identifiers_u]]
-== Reserved SQL Identifiers: U
-
-[cols="5*l"]
-|===
-| UNDER  | UNION | UNIQUE  | UNKNOWN | UNNEST
-| UPDATE | UPPER | UPSHIFT | USAGE   | USER
-| USING  |       |         |         |
-|===
-
-[[reserved_sql_identifiers_v]]
-== Reserved SQL Identifiers: V
-
-[cols="5*l"]
-|===
-| VALUE | VALUES  | VARCHAR | VARIABLE | VARYING
-| VIEW  | VIRTUAL | VISIBLE |          |
-|===
-
-[[reserved_sql_identifiers_w]]
-== Reserved SQL Identifiers: W
-
-[cols="5*l"]
-|===
-| WAIT | WHEN    | WHENEVER | WHERE | WHILE
-| WITH | WITHOUT | WORK     | WRITE |
-|===
-
-
-[[reserved_sql_identifiers_y]]
-== Reserved SQL Identifiers Y
-
-[[reserved_sql_identifiers_z]]
-== Reserved SQL Identifiers: Z
-
-
+////
+/**
+* @@@ START COPYRIGHT @@@
+*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*   http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing,
+* software distributed under the License is distributed on an
+* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+* KIND, either express or implied.  See the License for the
+* specific language governing permissions and limitations
+* under the License.
+*
+* @@@ END COPYRIGHT @@@
+*/
+////
+
+[[reserved_words]]
+= Reserved Words
+The words listed in this appendix are reserved for use by {project-name} SQL.
+To prevent syntax errors, avoid using these words as identifiers in
+{project-name} SQL. In {project-name} SQL, if a column or object name contains a
+reserved word, you must enclose the reserved word in double quotes (")
+to access that column or object.
+
+NOTE: In {project-name} SQL, ABSOLUTE, DATA, EVERY, INITIALIZE, OPERATION,
+PATH, SPACE, STATE, STATEMENT, STATIC, and START are not reserved words.
+
+{project-name} SQL treats these words as reserved when they are part of
+{project-name} SQL stored text. They cannot be used as identifiers unless you
+enclose them in double quotes.
+
+[[reserved_sql_identifiers_a]]
+== Reserved SQL Identifiers: A
+
+
+[cols="5*l"]
+|===
+| ACTION   | ADD   | ADMIN    | AFTER         | AGGREGATE
+| ALIAS    | ALL   | ALLOCATE | ALTER         | AND
+| ANY      | ARE   | ARRAY    | AS            | ASC
+| ASSERTION| ASYNC | AT       | AUTHORIZATION | AVG
+|===
+
+
+[[reserved_sql_identifiers_b]]
+== Reserved SQL Identifiers: B
+
+
+[cols="5*l"]
+|===
+| BEFORE     | BEGIN | BETWEEN | BINARY | BIT
+| BIT_LENGTH | BLOB  | BOOLEAN | BOTH   | BREADTH
+| BY         |       |         |        |
+|===
+
+[[reserved_sql_identifiers_c]]
+== Reserved SQL Identifiers: C
+
+
+[cols="5*l"]
+|===
+| CALL         | CASCADE      | CASCADED          | CASE             | CAST
+| CATALOG      | CHAR         | CHARACTER         | CHARACTER_LENGTH | CHAR_LENGTH
+| CHECK        | CLASS        | CLOB              | CLOSE            | COALESCE
+| COLLATE      | COLLATION    | COLUMN            | COMMIT           | COMPLETION
+| CONNECT      | CONNECTION   | CONSTRAINT        | CONSTRAINTS      | CONSTRUCTOR
+| CONTINUE     | CONVERT      | CORRESPONDING     | COUNT            | CREATE
+| CROSS        | CUBE         | CURRENT           | CURRENT_DATE     | CURRENT_PATH
+| CURRENT_ROLE | CURRENT_TIME | CURRENT_TIMESTAMP | CURRENT_USER     | CURRNT_USR_INTN
+| CURSOR       | CYCLE        |                   |                  |
+|===
+
+
+[[reserved_sql_identifiers_d]]
+== Reserved SQL Identifiers: D
+
+[cols="5*l"]
+|===
+| DATE       | DATETIME   | DAY        | DEALLOCATE    | DEC
+| DECIMAL    | DECLARE    | DEFAULT    | DEFERRABLE    | DEFERRED
+| DELETE     | DEPTH      | DEREF      | DESC          | DESCRIBE
+| DESCRIPTOR | DESTROY    | DESTRUCTOR | DETERMINISTIC | DIAGNOSTICS
+| DICTIONARY | DISCONNECT | DISTINCT   | DOMAIN        | DOUBLE
+| DROP       | DYNAMIC    |            |               |
+|===
+
+
+[[reserved_sql_identifiers_e]]
+== Reserved SQL Identifiers: E
+
+
+[cols="5*l"]
+|===
+| EACH    | ELSE   | ELSEIF   | END       | END-EXEC
+| EQUALS  | ESCAPE | EXCEPT   | EXCEPTION | EXEC
+| EXECUTE | EXISTS | EXTERNAL | EXTRACT   |
+|===
+
+
+[[reserved_sql_identifiers_f]]
+== Reserved SQL Identifiers: F
+
+[cols="5*l"]
+|===
+| FALSE   | FETCH    | FIRST    | FLOAT | FOR
+| FOREIGN | FOUND    | FRACTION | FREE  | FROM
+| FULL    | FUNCTION |          |       |
+|===
+
+
+[[reserved_sql_identifiers_g]]
+== Reserved SQL Identifiers: G
+
+[cols="5*l"]
+|===
+| GENERAL | GET   | GLOBAL   | GO | GOTO
+| GRANT   | GROUP | GROUPING |    |
+|===  
+
+[[reserved_sql_identifiers_h]]
+== Reserved SQL Identifiers: H
+
+[[reserved_sql_identifiers_i]]
+== Reserved SQL Identifiers: I
+
+
+[cols="5*l"]
+|===
+| IDENTITY    | IF        | IGNORE | IMMEDIATE | IN
+| INDICATOR   | INITIALLY | INNER  | INOUT     | INPUT
+| INSENSITIVE | INSERT    | INT    | INTEGER   | INTERSECT
+| INTERVAL    | INTO      | IS     | ISOLATION | ITERATE
+|===
+
+
+[[reserved_sql_identifiers_j]]
+== Reserved SQL Identifiers: J
+
+[[reserved_sql_identifiers_k]]
+== Reserved SQL Identifiers: K
+
+[[reserved_sql_identifiers_l]]
+== Reserved SQL Identifiers: L
+
+[cols="5*l"]
+|===
+| LANGUAGE | LARGE | LAST      | LATERAL        | LEADING
+| LEAVE    | LEFT  | LESS      | LEVEL          | LIKE
+| LIMIT    | LOCAL | LOCALTIME | LOCALTIMESTAMP | LOCATOR
+| LOOP     | LOWER |           |                |
+|===
+
+
+[[reserved_sql_identifiers_m]]
+== Reserved SQL Identifiers: M
+
+[cols="5*l"]
+|===
+| MAINTAIN | MAP   | MATCH  | MATCHED  | MAX
+| MERGE    | MIN   | MINUTE | MODIFIES | MODIFY
+| MODULE   | MONTH |        |          |
+|===
+
+
+[[reserved_sql_identifiers_n]]
+== Reserved SQL Identifiers: N
+
+[cols="5*l"]
+|===
+| NAMES | NATIONAL | NATURAL | NCHAR | NCLOB
+| NEW   | NEXT     | NO      | NONE  | NOT
+| NULL  | NULLIF   | NUMERIC |       |
+|===
+
+[[reserved_sql_identifiers_o]]
+== Reserved SQL Identifiers: O
+
+[cols="5*l"]
+|===
+| OCTET_LENGTH | OF    | OFF    | OID        | OLD
+| ON           | ONLY  | OPEN   | OPERATORS  | OPTION
+| OPTIONS      | OR    | ORDER  | ORDINALITY | OTHERS
+| OUT          | OUTER | OUTPUT | OVERLAPS   |
+|===
+
+
+[[reserved_sql_identifiers_p]]
+== Reserved SQL Identifiers: P
+
+[cols="5*l"]
+|===
+| PAD        | PARAMETER | PARAMETERS | PARTIAL    | PENDANT
+| POSITION   | POSTFIX   | PRECISION  | PREFIX    | PREORDER
+| PREPARE    | PRESERVE  | PRIMARY    | PRIOR     | PRIVATE
+| PRIVILEGES | PROCEDURE | PROTECTED  | PROTOTYPE | PUBLIC
+|===
+
+
+[[reserved_sql_identifiers_q]]
+== Reserved SQL Identifiers: Q
+
+[[reserved_sql_identifiers_r]]
+== Reserved SQL Identifiers: R
+
+[cols="5*l"]
+|===
+| READ       | READS       | REAL     | RECURSIVE | REF
+| REFERENCES | REFERENCING | RELATIVE | REORG     | REORGANIZE
+| REPLACE    | RESIGNAL    | RESTRICT | RESULT    | RETURN
+| RETURNS    | REVOKE      | RIGHT    | ROLLBACK  | ROLLUP
+| ROUTINE    | ROW         | ROWS     |           |
+|===
+
+[[reserved_sql_identifiers_s]]
+== Reserved SQL Identifiers: S
+
+[cols="5*l"]
+|===
+| SAVEPOINT    | SCHEMA         | SCOPE         | SCROLL      | SEARCH
+| SECOND       | SECTION        | SELECT        | SENSITIVE   | SESSION
+| SESSION_USER | SESSN_USR_INTN | SET           | SETS        | SIGNAL
+| SIMILAR      | SIZE           | SMALLINT      | SOME        | SPECIFIC
+| SPECIFICTYPE | SQL            | SQL_CHAR      | SQL_DATE    | SQL_DECIMAL
+| SQL_DOUBLE   | SQL_FLOAT      | SQL_INT       | SQL_INTEGER | SQL_REAL
+| SQL_SMALLINT | SQL_TIME       | SQL_TIMESTAMP | SQL_VARCHAR | SQLCODE
+| SQLERROR     | SQLEXCEPTION   | SQLSTATE      | SQLWARNING  | STRUCTURE
+| SUBSTRING    | SUM            | SYNONYM       | SYSTEM_USER |
+|===
+
+
+[[reserved_sql_identifiers_t]]
+== Reserved SQL Identifiers: T
+
+[cols="5*l"]
+|===
+| TABLE           | TEMPORARY | TERMINATE | TEST        | THAN
+| THEN            | THERE     | TIME      | TIMESTAMP   | TIMEZONE_HOUR
+| TIMEZONE_MINUTE | TO        | TRAILING  | TRANSACTION | TRANSLATE
+| TRANSLATION     | TRANSPOSE | TREAT     | TRIGGER     | TRIM
+| TRUE            |           |           |             |
+|===
+
+[[reserved_sql_identifiers_u]]
+== Reserved SQL Identifiers: U
+
+[cols="5*l"]
+|===
+| UNDER  | UNION | UNIQUE  | UNKNOWN | UNNEST
+| UPDATE | UPPER | UPSHIFT | USAGE   | USER
+| USING  |       |         |         |
+|===
+
+[[reserved_sql_identifiers_v]]
+== Reserved SQL Identifiers: V
+
+[cols="5*l"]
+|===
+| VALUE | VALUES  | VARCHAR | VARIABLE | VARYING
+| VIEW  | VIRTUAL | VISIBLE |          |
+|===
+
+[[reserved_sql_identifiers_w]]
+== Reserved SQL Identifiers: W
+
+[cols="5*l"]
+|===
+| WAIT | WHEN    | WHENEVER | WHERE | WHILE
+| WITH | WITHOUT | WORK     | WRITE |
+|===
+
+
+[[reserved_sql_identifiers_y]]
+== Reserved SQL Identifiers: Y
+
+[[reserved_sql_identifiers_z]]
+== Reserved SQL Identifiers: Z
+
+