Posted to commits@trafodion.apache.org by sa...@apache.org on 2017/03/03 07:48:36 UTC

[1/3] incubator-trafodion git commit: [TRAFODION-2482] documentation for python installer

Repository: incubator-trafodion
Updated Branches:
  refs/heads/release2.1 58caf9fd3 -> b02f1973d


http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/92c80fd3/docs/provisioning_guide/src/asciidoc/_chapters/script_install.adoc
----------------------------------------------------------------------
diff --git a/docs/provisioning_guide/src/asciidoc/_chapters/script_install.adoc b/docs/provisioning_guide/src/asciidoc/_chapters/script_install.adoc
index ba0f933..f80c260 100644
--- a/docs/provisioning_guide/src/asciidoc/_chapters/script_install.adoc
+++ b/docs/provisioning_guide/src/asciidoc/_chapters/script_install.adoc
@@ -43,7 +43,7 @@ The first step in the installation process is to unpack the {project-name} Insta
 ```
 $ mkdir $HOME/trafodion-installer
 $ cd $HOME/trafodion-downloads
-$ tar -zxf apache-trafodion-installer-1.3.0-incubating-bin.tar.gz -C $HOME/trafodion-installer
+$ tar -zxf apache-trafodion-pyinstaller-2.1.0-incubating.tar.gz -C $HOME/trafodion-installer
 $
 ```
 
@@ -61,236 +61,145 @@ The following example shows a guided install of {project-name} on a two-node Clo
 1. Run the {project-name} Installer in guided mode.
 +
 ```
-$ cd $HOME/trafodion-installer/installer
-$ ./trafodion_install
-
-******************************
- TRAFODION INSTALLATION START
-******************************
-
-***INFO: testing sudo access
-***INFO: Log file located at /var/log/trafodion/trafodion_install_2016-02-15-04-45-30.log
-***INFO: Config directory: /etc/trafodion
-***INFO: Working directory: /usr/lib/trafodion
-
-*******************************
- Trafodion Configuration Setup
-*******************************
-
-***INFO: Please press [Enter] to select defaults.
-
-Enter trafodion password, default is [traf123]: traf123
-Enter list of nodes (blank separated), default []: trafodion-1 trafodion-2
-Enter Trafodion userid's home directory prefix, default is [/home]: /home
-Specify full path to EPEL RPM (including .rpm), default is None:
-***INFO: Will attempt to download RPM if EPEL is not installed on all nodes.
-Specify location of Java 1.7.0_65 or higher (JDK), default is []: /usr/java/jdk1.7.0_67-cloudera
-Enter full path (including .tar or .tar.gz) of trafodion tar file []: /home/centos/trafodion-download/apache-trafodion-1.3.0-incubating-bin.tar.gz
-Enter Hadoop admin username, default is [admin]:
-Enter Hadoop admin password, default is [admin]:
-Enter Hadoop external network URL:port (no 'http://' needed), default is []: trafodion-1.apache.org:7180
-Enter HDFS username, default is [hdfs]:
-Enter HBase username, default is [hbase]:
-Enter HBase group, default is [hbase]:
-Enter directory to install trafodion to, default is [/home/trafodion/apache-trafodion-1.3.0-incubating-bin]:
-Total number of client connections per node, default [16]: 8
-Enable simple LDAP security (Y/N), default is N: N
-***INFO: Configuration file: /etc/trafodion/trafodion_config
-***INFO: Trafodion configuration setup complete
-
-************************************
- Trafodion Configuration File Check
-************************************
-
-
-The authenticity of host 'trafodion-1 (10.1.30.71)' can't be established.
-RSA key fingerprint is 83:96:d4:5e:c1:b8:b1:62:8d:c6:78:a7:7f:1f:6a:d7.
-Are you sure you want to continue connecting (yes/no)? yes
-***INFO: Testing sudo access on node trafodion-1
-***INFO: Testing sudo access on node trafodion-2
-***INFO: Testing ssh on trafodion-1
-***INFO: Testing ssh on trafodion-2
-***INFO: Getting list of all cloudera nodes
-***INFO: Getting list of all cloudera nodes
-***INFO: cloudera list of nodes:  trafodion-1 trafodion-2
-***INFO: Testing ssh on trafodion-1
-***INFO: Testing ssh on trafodion-2
-***INFO: Testing sudo access on trafodion-1
-***INFO: Testing sudo access on trafodion-2
-***DEBUG: trafodionFullName=trafodion_server-1.3.0.tgz
-***INFO: Trafodion version = 1.3.0
-***DEBUG: HBase's java_exec=/usr/java/jdk1.7.0_67-cloudera/bin/java
-
-******************************
- TRAFODION SETUP
-******************************
-
-***INFO: Starting Trafodion environment setup (2016-02-15-07-09-58)
-=== 2016-02-15-07-09-58 ===
-# @@@ START COPYRIGHT @@@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-.
-.
-.
-and hold each Contributor harmless for any liability incurred by,
-or claims asserted against, such Contributor by reason of your
-accepting any such warranty or additional liability.
-
-END OF TERMS AND CONDITIONS
-
-BY TYPING "ACCEPT" YOU AGREE TO THE TERMS OF THIS AGREEMENT:ACCEPT
-***INFO: testing sudo access
-***INFO: Checking all nodes in specified node list
-trafodion-1
-trafodion-2
-***INFO: Total number of nodes = 2
-***INFO: Starting Trafodion Package Setup (2016-02-15-07-11-09)
-***INFO: Installing required packages
-***INFO: Log file located in /var/log/trafodion
-***INFO: ... pdsh on node trafodion-1
-***INFO: ... pdsh on node trafodion-2
-***INFO: Checking if log4cxx is installed ...
-***INFO: Checking if sqlite is installed ...
-***INFO: Checking if expect is installed ...
-***INFO: Installing expect on all nodes
-.
-.
-.
-***INFO: modifying limits in /usr/lib/trafodion/trafodion.conf on all nodes
-***INFO: create Trafodion userid "trafodion"
-***INFO: Trafodion userid's (trafodion) home directory: /home/trafodion
-***INFO: testing sudo access
-Generating public/private rsa key pair.
-Created directory '/home/trafodion/.ssh'.
-Your identification has been saved in /home/trafodion/.ssh/id_rsa.
-Your public key has been saved in /home/trafodion/.ssh/id_rsa.pub.
-The key fingerprint is:
-4b:b3:60:38:c9:9d:19:f8:cd:b1:c8:cd:2a:6e:4e:d0 trafodion@trafodion-1
-The key's randomart image is:
-+--[ RSA 2048]----+
-|                 |
-|     .           |
-|    . . .        |
-|   o * X o       |
-|  . E X S        |
-|   . o + +       |
-|    o . o        |
-|   o..           |
-|   oo            |
-+-----------------+
-***INFO: creating .bashrc file
-***INFO: Setting up userid trafodion on all other nodes in cluster
-***INFO: Creating known_hosts file for all nodes
-trafodion-1
-trafodion-2
-***INFO: trafodion user added successfully
-***INFO: Trafodion environment setup completed
-***INFO: creating sqconfig file
-***INFO: Reserving DCS ports
-
-******************************
- TRAFODION MODS
-******************************
-
-***INFO: Cloudera installed will run traf_cloudera_mods98
-***INFO: Detected JAVA version 1.7
-***INFO: copying hbase-trx-cdh5_3-1.3.0.jar to all nodes
-***INFO: Cloudera Manager is on trafodion-1
-***INFO: Detected JAVA version 1.7
-***INFO: copying hbase-trx-cdh5_3-1.3.0.jar to all nodes
-***INFO: Cloudera Manager is on trafodion-1
-  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
-.
-.
-.
-***INFO: Hadoop restart completed successfully
-***INFO: waiting for HDFS to exit safemode
-Safe mode is OFF
-***INFO: Setting HDFS ACLs for snapshot scan support
-***INFO: Trafodion Mods ran successfully.
-
-******************************
- TRAFODION START
-******************************
-
-/usr/lib/trafodion/installer/..
-***INFO: Log file location /var/log/trafodion/trafodion_install_2016-02-15-07-08-07.log
-***INFO: traf_start
-******************************************
-******************************************
-******************************************
-******************************************
-/home/trafodion/apache-trafodion-1.3.0-incubating-bin
-***INFO: untarring build file /usr/lib/trafodion/apache-trafodion-1.3.0-incubating-bin/trafodion_server-1.3.0.tgz to /home/trafodion/apache-trafodion-1.3.0-incubating-bin
-.
-.
-.
-******* Generate public/private certificates *******
-
- Cluster Name : Cluster%201
-Generating Self Signed Certificate....
-***********************************************************
- Certificate file :server.crt
- Private key file :server.key
- Certificate/Private key created in directory :/home/trafodion/sqcert
-***********************************************************
-
-***********************************************************
- Updating Authentication Configuration
-***********************************************************
-Creating folders for storing certificates
-
-***INFO: copying /home/trafodion/sqcert directory to all nodes
-***INFO: copying install to all nodes
-***INFO: starting Trafodion instance
-Checking orphan processes.
-Removing old mpijob* files from /home/trafodion/apache-trafodion-1.3.0-incubating-bin/tmp
-
-Removing old monitor.port* files from /home/trafodion/apache-trafodion-1.3.0-incubating-bin/tmp
-
-Executing sqipcrm (output to sqipcrm.out)
-Starting the SQ Environment (Executing /home/trafodion/apache-trafodion-1.3.0-incubating-bin/sql/scripts/gomon.cold)
-Background SQ Startup job (pid: 7276)
-.
-.
-.
-Zookeeper is listening on port 2181
-DcsMaster is listening on port 23400
-
-Process         Configured      Actual          Down
----------       ----------      ------          ----
-DcsMaster       1               1
-DcsServer       2               2
-mxosrvr         8               8
+$ cd $HOME/trafodion-installer/python-installer
+$ ./db_install.py
+**********************************
+  Trafodion Installation ToolKit
+**********************************
+Enter HDP/CDH web manager URL:port, (full URL, if no http/https prefix, default prefix is http://): 192.168.0.31:7180
+Enter HDP/CDH web manager user name [admin]:
+Enter HDP/CDH web manager user password:
+Confirm Enter HDP/CDH web manager user password:
 
+TASK: Environment Discover ***************************************************************
 
-You can monitor the SQ shell log file : /home/trafodion/apache-trafodion-1.3.0-incubating-bin/logs/sqmon.log
+Time Cost: 0 hour(s) 0 minute(s) 4 second(s)
+Enter full path to Trafodion tar file [/data/python-installer/apache-trafodion_server-2.1.0-RH-x86_64-incubating.tar.gz]:
+Enter directory name to install trafodion to [apache-trafodion-2.1.0]:
+Enter trafodion user password:
+Confirm Enter trafodion user password:
+Enter number of DCS client connections per node [4]:
+Enter trafodion scratch file folder location(should be a large disk),
+if more than one folder, use comma seperated [$TRAF_HOME/tmp]:
+Start instance after installation (Y/N)  [Y]:
+Enable LDAP security (Y/N)  [N]:
+Enable DCS High Avalability (Y/N)  [N]:
+*****************
+  Final Configs
+*****************
++------------------+-----------------------------------------------------------------------------------+
+| config type      | value                                                                             |
++------------------+-----------------------------------------------------------------------------------+
+| dcs_cnt_per_node | 4                                                                                 |
+| dcs_ha           | N                                                                                 |
+| first_rsnode     | node-1                                                                            |
+| hbase_user       | hbase                                                                             |
+| hdfs_user        | hdfs                                                                              |
+| home_dir         | /home                                                                             |
+| java_home        | /usr/lib/jvm/java-1.7.0-openjdk.x86_64                                            |
+| ldap_security    | N                                                                                 |
+| mgr_url          | http://192.168.0.31:7180                                                          |
+| mgr_user         | admin                                                                             |
+| node_list        | node-1,node-2                                                                     |
+| scratch_locs     | $TRAF_HOME/tmp                                                                    |
+| traf_dirname     | apache-trafodion-2.1.0                                                            |
+| traf_package     | /data/python-installer/apache-trafodion_server-2.1.0-RH-x86_64-incubating.tar.gz  |
+| traf_start       | Y                                                                                 |
+| traf_user        | trafodion                                                                         |
++------------------+-----------------------------------------------------------------------------------+
+Confirm result (Y/N) [N]: y
 
+** Generating config file to save configs ...
 
-Startup time  0 hour(s) 1 minute(s) 9 second(s)
-Apache Trafodion Conversational Interface 1.3.0
-Copyright (c) 2015 Apache Software Foundation
->> initialize trafodion;
---- SQL operation complete.
->>
+**********************
+  Installation Start
+**********************
 
-End of MXCI Session
+  TASK: Environment Check ******************************************************************
 
-***INFO: Installation completed successfully.
+  Host [node-2]: Script [traf_check.py] .......................................... [  OK  ]
 
-*********************************
- TRAFODION INSTALLATION COMPLETE
-*********************************
 
-$
+  Host [node-1]: Script [traf_check.py] .......................................... [  OK  ]
+
+
+  TASK: Copy Trafodion package file ********************************************************
+
+  Script [copy_files.py] ......................................................... [  OK  ]
+
+
+  TASK: Trafodion user Setup ***************************************************************
+
+  Host [node-2]: Script [traf_user.py] ........................................... [  OK  ]
+
+
+  Host [node-1]: Script [traf_user.py] ........................................... [  OK  ]
+
+
+  TASK: Install Trafodion dependencies *****************************************************
+
+  Host [node-2]: Script [traf_dep.py] ............................................ [  OK  ]
+
+
+  Host [node-1]: Script [traf_dep.py] ............................................ [  OK  ]
+
+
+  TASK: Install Trafodion package **********************************************************
+
+  Host [node-2]: Script [traf_package.py] ........................................ [  OK  ]
+
+
+  Host [node-1]: Script [traf_package.py] ........................................ [  OK  ]
+
+
+  TASK: Environment Setup ******************************************************************
+
+  Host [node-1]: Script [traf_setup.py] .......................................... [  OK  ]
+
+
+  Host [node-2]: Script [traf_setup.py] .......................................... [  OK  ]
+
+
+  TASK: DCS/REST Setup *********************************************************************
+
+  Host [node-2]: Script [dcs_setup.py] ........................................... [  OK  ]
+
+
+  Host [node-1]: Script [dcs_setup.py] ........................................... [  OK  ]
+
+
+  TASK: Hadoop modification and restart ****************************************************
+
+  ***[INFO]: Restarting CDH services ...
+  Check CDH services restart status (timeout: 600 secs) .................
+  ***[OK]: CDH services restart successfully!
+
+  ***[INFO]: Deploying CDH client configs ...
+  Check CDH services deploy status (timeout: 300 secs) ..
+  ***[OK]: CDH services deploy successfully!
+
+  Script [hadoop_mods.py] ......................................................... [  OK  ]
+
+
+  TASK: Set permission of HDFS folder for Trafodion user ***********************************
+
+  Host [node-1]: Script [hdfs_cmds.py] ............................................ [  OK  ]
+
+
+  TASK: Sqconfig Setup *********************************************************************
+
+  Host [node-1]: Script [traf_sqconfig.py] ........................................ [  OK  ]
+
+
+  TASK: Start Trafodion ********************************************************************
+
+  Host [node-1]: Script [traf_start.py] ........................................... [  OK  ]
+
+
+  Time Cost: 0 hour(s) 7 minute(s) 45 second(s)
+  *************************
+    Installation Complete
+  *************************
 ```
 
 2. Switch to the {project-name} Runtime User and check the status of {project-name}.
@@ -298,18 +207,21 @@ $
 ```
 $ sudo su - trafodion
 $ sqcheck
+*** Checking Trafodion Environment ***
+
 Checking if processes are up.
 Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.
 
 The SQ environment is up!
 
-
 Process         Configured      Actual      Down
 -------         ----------      ------      ----
 DTM             2               2
 RMS             4               4
-MXOSRVR         8               8
-
+DcsMaster       1               1
+DcsServer       2               2
+mxosrvr         8               8
+RestServer      1               1
 $
 ```
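For scripted health checks, the tabular `sqcheck` output shown above can be parsed to confirm that no configured process is down. This is an illustrative sketch, not part of the installer; it assumes the whitespace-separated column layout in the sample output (Process, Configured, Actual, optional Down).

```python
# Parse the process table printed by `sqcheck` and flag any component
# whose Actual count is below its Configured count.
# Illustrative only: assumes the whitespace-separated table shown above.

def down_processes(sqcheck_output):
    rows = []
    in_table = False
    for line in sqcheck_output.splitlines():
        if line.startswith("Process"):
            in_table = True          # header row marks the start of the table
            continue
        if in_table:
            parts = line.split()
            # skip the dashed separator row and any short/blank lines
            if len(parts) < 3 or parts[0].startswith("-"):
                continue
            name, configured, actual = parts[0], int(parts[1]), int(parts[2])
            if actual < configured:
                rows.append((name, configured - actual))
    return rows

sample = """\
Process         Configured      Actual      Down
-------         ----------      ------      ----
DTM             2               2
RMS             4               4
DcsMaster       1               1
DcsServer       2               1               1
mxosrvr         8               8
RestServer      1               1
"""
print(down_processes(sample))  # → [('DcsServer', 1)]
```

A return value of an empty list means every configured process is running, which matches an exit-clean `sqcheck` run.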
 
@@ -321,11 +233,11 @@ operations.
 [[install-automated-install]]
 == Automated Install
 
-The `--config_file` option runs the {project-name} in Automated Setup mode. Refer to <<introduction-trafodion-installer,{project-name} Installer>>
+The `--config-file` option runs the {project-name} Installer in Automated Setup mode. Refer to <<introduction-trafodion-installer,{project-name} Installer>>
 in the <<introduction,Introduction>> chapter for instructions of how you edit your configuration file.
 
 Edit your config file using the information you collected in the <<prepare-gather-configuration-information,Gather Configuration Information>>
-step in the <<prepare,Prepare>> chapter. 
+step in the <<prepare,Prepare>> chapter.
 
 
 The following example shows an automated install of {project-name} on a two-node Hortonworks Hadoop cluster that does not have Kerberos nor LDAP enabled.
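Automated mode reads its settings from the file passed via `--config-file`. The file format is not shown in this commit; as a purely hypothetical sketch, a key/value file using the key names from the Final Configs table printed by the guided install might look like this (the actual keys and syntax expected by `db_install.py` may differ):

```
# Hypothetical config-file sketch; key names taken from the
# "Final Configs" table above, actual db_install.py format may differ.
node_list = node-1,node-2
mgr_url = http://192.168.0.31:7180
mgr_user = admin
traf_user = trafodion
traf_dirname = apache-trafodion-2.1.0
traf_package = /data/python-installer/apache-trafodion_server-2.1.0-RH-x86_64-incubating.tar.gz
dcs_cnt_per_node = 4
scratch_locs = $TRAF_HOME/tmp
traf_start = Y
ldap_security = N
dcs_ha = N
```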
@@ -334,164 +246,135 @@ The following example shows an automated install of {project-name} on a two-node
 
 1. Run the {project-name} Installer in Automated Setup mode.
 +
+
 ```
-$ cd $HOME/trafodion-installer/installer
-$ ./trafodion_install --config_file my
-******************************
- TRAFODION INSTALLATION START
-******************************
-
-***INFO: testing sudo access
-***INFO: Log file located at /var/log/trafodion/trafodion_install_2016-02-16-21-12-03.log
-***INFO: Config directory: /etc/trafodion
-***INFO: Working directory: /usr/lib/trafodion
-
-************************************
- Trafodion Configuration File Check
-************************************
-
-
-***INFO: Testing sudo access on node trafodion-1
-***INFO: Testing sudo access on node trafodion-2
-***INFO: Testing ssh on trafodion-1
-***INFO: Testing ssh on trafodion-2
-***INFO: Getting list of all hortonworks nodes
-***INFO: Getting list of all hortonworks nodes
-***INFO: hortonworks list of nodes:  trafodion-1 trafodion-2
-***INFO: Testing ssh on trafodion-1
-***INFO: Testing ssh on trafodion-2
-***INFO: Testing sudo access on trafodion-1
-***INFO: Testing sudo access on trafodion-2
-***DEBUG: trafodionFullName=trafodion_server-1.3.0.tgz
-***INFO: Trafodion version = 1.3.0
-***DEBUG: HBase's java_exec=/usr/jdk64/jdk1.7.0_67/bin/java
-
-******************************
- TRAFODION SETUP
-******************************
-
-***INFO: Starting Trafodion environment setup (2016-02-16-21-12-31)
-=== 2016-02-16-21-12-31 ===
-# @@@ START COPYRIGHT @@@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-.
-.
-.
-9. Accepting Warranty or Additional Liability. While redistributing
-the Work or Derivative Works thereof, You may choose to offer, and
-charge a fee for, acceptance of support, warranty, indemnity, or
-other liability obligations and/or rights consistent with this
-License. However, in accepting such obligations, You may act only
-on Your own behalf and on Your sole responsibility, not on behalf
-of any other Contributor, and only if You agree to indemnify, defend,
-and hold each Contributor harmless for any liability incurred by,
-or claims asserted against, such Contributor by reason of your
-accepting any such warranty or additional liability.
-
-END OF TERMS AND CONDITIONS
-
-BY TYPING "ACCEPT" YOU AGREE TO THE TERMS OF THIS AGREEMENT: ***INFO: testing sudo access
-***INFO: Checking all nodes in specified node list
-trafodion-1
-trafodion-2
-***INFO: Total number of nodes = 2
-***INFO: Starting Trafodion Package Setup (2016-02-16-21-12-35)
-***INFO: Installing required packages
-***INFO: Log file located in /var/log/trafodion
-***INFO: ... EPEL rpm
-***INFO: ... pdsh on node trafodion-1
-***INFO: ... pdsh on node trafodion-2
-***INFO: Checking if log4cxx is installed ...
-***INFO: Checking if sqlite is installed ...
-***INFO: Checking if expect is installed ...
-.
-.
-.
-***INFO: trafodion user added successfully
-***INFO: Trafodion environment setup completed
-***INFO: creating sqconfig file
-***INFO: Reserving DCS ports
-
-******************************
- TRAFODION MODS
-******************************
-
-***INFO: Hortonworks installed will run traf_hortonworks_mods98
-***INFO: Detected JAVA version 1.7
-***INFO: copying hbase-trx-hdp2_2-1.3.0.jar to all nodes
-PORT=:8080
-.
-.
-.
-Starting the REST environment now
-starting rest, logging to /home/trafodion/apache-trafodion-1.3.0-incubating-bin/rest-1.3.0/bin/../logs/rest-trafodion-1-rest-trafodion-1.out
-SLF4J: Class path contains multiple SLF4J bindings.
-SLF4J: Found binding in [jar:file:/home/trafodion/apache-trafodion-1.3.0-incubating-bin/rest-1.3.0/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
-SLF4J: Found binding in [jar:file:/usr/hdp/2.2.9.0-3393/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
-SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
-SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
-
-
-DcsMaster is not started. Please start DCS using 'dcsstart' command...
-
-Process         Configured      Actual          Down
----------       ----------      ------          ----
-DcsMaster       1               0               1
-DcsServer       2               0               2
-mxosrvr         8               8
+$ cd $HOME/trafodion-installer/python-installer
+$ ./db_install.py --config-file my_config --silent
+**********************************
+  Trafodion Installation ToolKit
+**********************************
+
+** Loading configs from config file ...
+
+TASK: Environment Discover ***************************************************************
+
+Time Cost: 0 hour(s) 0 minute(s) 4 second(s)
+
+
+**********************
+  Installation Start
+**********************
+
+  TASK: Environment Check ******************************************************************
+
+  Host [node-2]: Script [traf_check.py] .......................................... [  OK  ]
+
+
+  Host [node-1]: Script [traf_check.py] .......................................... [  OK  ]
+
+
+  TASK: Copy Trafodion package file ********************************************************
+
+  Script [copy_files.py] ......................................................... [  OK  ]
+
+
+  TASK: Trafodion user Setup ***************************************************************
+
+  Host [node-2]: Script [traf_user.py] ........................................... [  OK  ]
+
+
+  Host [node-1]: Script [traf_user.py] ........................................... [  OK  ]
+
+
+  TASK: Install Trafodion dependencies *****************************************************
+
+  Host [node-2]: Script [traf_dep.py] ............................................ [  OK  ]
+
+
+  Host [node-1]: Script [traf_dep.py] ............................................ [  OK  ]
+
+
+  TASK: Install Trafodion package **********************************************************
+
+  Host [node-2]: Script [traf_package.py] ........................................ [  OK  ]
+
+
+  Host [node-1]: Script [traf_package.py] ........................................ [  OK  ]
 
 
-You can monitor the SQ shell log file : /home/trafodion/apache-trafodion-1.3.0-incubating-bin/logs/sqmon.log
+  TASK: Environment Setup ******************************************************************
 
+  Host [node-1]: Script [traf_setup.py] .......................................... [  OK  ]
 
-Startup time  0 hour(s) 1 minute(s) 9 second(s)
-Apache Trafodion Conversational Interface 1.3.0
-Copyright (c) 2015 Apache Software Foundation
->> initialize trafodion;
---- SQL operation complete.
->>
 
-End of MXCI Session
+  Host [node-2]: Script [traf_setup.py] .......................................... [  OK  ]
 
-***INFO: Installation completed successfully.
 
-*********************************
- TRAFODION INSTALLATION COMPLETE
-*********************************
+  TASK: DCS/REST Setup *********************************************************************
 
-$ 
+  Host [node-2]: Script [dcs_setup.py] ........................................... [  OK  ]
+
+
+  Host [node-1]: Script [dcs_setup.py] ........................................... [  OK  ]
+
+
+  TASK: Hadoop modification and restart ****************************************************
+
+  ***[INFO]: Restarting CDH services ...
+  Check CDH services restart status (timeout: 600 secs) .................
+  ***[OK]: CDH services restart successfully!
+
+  ***[INFO]: Deploying CDH client configs ...
+  Check CDH services deploy status (timeout: 300 secs) ..
+  ***[OK]: CDH services deploy successfully!
+
+  Script [hadoop_mods.py] ......................................................... [  OK  ]
+
+
+  TASK: Set permission of HDFS folder for Trafodion user ***********************************
+
+  Host [node-1]: Script [hdfs_cmds.py] ............................................ [  OK  ]
+
+
+  TASK: Sqconfig Setup *********************************************************************
+
+  Host [node-1]: Script [traf_sqconfig.py] ........................................ [  OK  ]
+
+
+  TASK: Start Trafodion ********************************************************************
+
+  Host [node-1]: Script [traf_start.py] ........................................... [  OK  ]
+
+
+  Time Cost: 0 hour(s) 7 minute(s) 45 second(s)
+  *************************
+    Installation Complete
+  *************************
 ```
 
 2. Switch to the {project-name} Runtime User and check the status of {project-name}.
 +
-*Example*
-+
 ```
 $ sudo su - trafodion
 $ sqcheck
+*** Checking Trafodion Environment ***
+
 Checking if processes are up.
 Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.
 
 The SQ environment is up!
 
-
 Process         Configured      Actual      Down
 -------         ----------      ------      ----
 DTM             2               2
 RMS             4               4
-MXOSRVR         8               8
-
+DcsMaster       1               1
+DcsServer       2               2
+mxosrvr         8               8
+RestServer      1               1
 $
 ```
 
 {project-name} is now running on your Hadoop cluster. Please refer to the <<activate,Activate>> chapter for
 basic instructions on how to verify the {project-name} management and how to perform basic management
 operations.
-

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/92c80fd3/docs/provisioning_guide/src/asciidoc/_chapters/script_remove.adoc
----------------------------------------------------------------------
diff --git a/docs/provisioning_guide/src/asciidoc/_chapters/script_remove.adoc b/docs/provisioning_guide/src/asciidoc/_chapters/script_remove.adoc
index 16c8f48..5ad55a7 100644
--- a/docs/provisioning_guide/src/asciidoc/_chapters/script_remove.adoc
+++ b/docs/provisioning_guide/src/asciidoc/_chapters/script_remove.adoc
@@ -25,12 +25,12 @@
 
 [[remove]]
 = Remove
-You use the {project-name} Provisioning User for these instructions.	
+You use the {project-name} Provisioning User for these instructions.
 
-NOTE: You do not need to use the `trafodion_uninstaller` script if upgrading {project-name}. Instead, use the `trafodion_install` script,
+NOTE: You do not need to use the `db_uninstall.py` script before upgrading {project-name}. Instead, use the `db_install.py` script,
 which automatically upgrades the version of {project-name}. Please refer to the <<install,Install>> chapter for further instructions.
 
-Run the commands from the first node of the cluster. Do not run them from a machine that is not part of the {project-name} cluster.
+Run the commands from the first node of the cluster. You can also run them from any other node, but then you need to specify the hostnames of the {project-name} cluster.
 
 == Stop {project-name}
 
@@ -63,11 +63,7 @@ Processing cluster.conf on local host trafodion-1
 .
 .
 .
-```
-
-<<<
 
-```
 [$Z000HDS] 001,00024772 001 GEN  ES--A-- $Z010K7S    NONE        mxosrvr
 [$Z000HDS] 001,00024782 001 GEN  ES--U-- $ZLOBSRV1   NONE        mxlobsrvr
 shutdown
@@ -84,35 +80,25 @@ Mon Feb 15 07:49:26 UTC 2016
 [admin@trafodion-1 ~]$
 ```
 
-== Run `trafodion_uninstaller`
+== Run `db_uninstall.py`
 
-The `trafodion_uninstaller` completely removes {project-name}.
+The `db_uninstall.py` script completely removes {project-name}, including the {project-name} user's home directory.
 
 *Example*
 
 ```
-[admin@trafodion-1 ~]$ cd $HOME/trafodion-installer/installer
-[admin@trafodion-1 installer]$ ./trafodion_uninstaller
-Do you want to uninstall Trafodion (Everything will be removed)? (Y/N) y
-***INFO: testing sudo access
-***INFO: NOTE, rpms that were installed will not be removed.
-***INFO: stopping Trafodion instance
-SQ environment is not up.
-Going to execute ckillall
-
-Can't find file /home/trafodion/.vnc/trafodion-1:1.pid
-You'll have to kill the Xvnc process manually
-
-***INFO: restoring linux system files that were changed
-***INFO: removing hbase-trx* from Hadoop directories
-pdsh@trafodion-1: trafodion-1: ssh exited with exit code 1
-pdsh@trafodion-1: trafodion-2: ssh exited with exit code 1
-pdsh@trafodion-1: trafodion-1: ssh exited with exit code 1
-pdsh@trafodion-1: trafodion-2: ssh exited with exit code 1
-***INFO remove the Trafodion userid and group
-***INFO: removing all files from /home/trafodion/apache-trafodion-1.3.0-incubating-bin
-***INFO: removing all files from /usr/lib/trafodion and /var/log/trafodion
-***INFO: removing all files from /etc/trafodion
-***INFO: Trafodion uninstall complete.
+[admin@trafodion-1 ~]$ cd $HOME/trafodion-installer/python-installer
+[admin@trafodion-1 installer]$ ./db_uninstall.py
+*****************************
+  Trafodion Uninstall Start
+*****************************
+Uninstall Trafodion on [node-1 node-2] [N]: y
+
+***[INFO]: Remove Trafodion on node [node-1] ... 
+
+***[INFO]: Remove Trafodion on node [node-2] ... 
+*********************************
+  Trafodion Uninstall Completed
+*********************************
 [admin@trafodion-1 installer]$
 ```

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/92c80fd3/docs/provisioning_guide/src/asciidoc/_chapters/script_upgrade.adoc
----------------------------------------------------------------------
diff --git a/docs/provisioning_guide/src/asciidoc/_chapters/script_upgrade.adoc b/docs/provisioning_guide/src/asciidoc/_chapters/script_upgrade.adoc
index f7cb54a..c6a05f6 100644
--- a/docs/provisioning_guide/src/asciidoc/_chapters/script_upgrade.adoc
+++ b/docs/provisioning_guide/src/asciidoc/_chapters/script_upgrade.adoc
@@ -53,9 +53,9 @@ You unpack the updated {project-name} Installer into a new directory.
 *Example*
 
 ```
-$ mkdir $HOME/trafodion-installer-2.0
+$ mkdir $HOME/trafodion-installer
 $ cd $HOME/trafodion-downloads
-$ tar -zxf apache-trafodion-installer-2.0.0-incubating-bin.tar.gz -C $HOME/trafodion-installer
+$ tar -zxf apache-trafodion-pyinstaller-2.1.0-incubating.tar.gz -C $HOME/trafodion-installer
 $
 ```
 
@@ -80,12 +80,6 @@ Shutting down (normal) the SQ environment!
 Wed Feb 17 05:12:40 UTC 2016
 Processing cluster.conf on local host trafodion-1
 [$Z000KAE] Shell/shell Version 1.0.1 Apache_Trafodion Release 1.3.0 (Build release [1.3.0-0-g5af956f_Bld2], date 20160112_1927)
-ps
-```
-
-<<<
-
-```
 [$Z000KAE] %ps
 [$Z000KAE] NID,PID(os)  PRI TYPE STATES  NAME        PARENT      PROGRAM
 [$Z000KAE] ------------ --- ---- ------- ----------- ----------- ---------------
@@ -130,159 +124,9 @@ You perform this step as the {project-name} Provisioning User.
 
 As in the case with an installation, the {project-name} Installer prompts you for the information you collected in the
 <<prepare-gather-configuration-information, Gather Configuration Information>> step in the <<prepare,Prepare>> chapter.
-Some of the prompts are populated with the current values.
-
-The following example shows a guided upgrade of {project-name} on a two-node Cloudera Hadoop cluster without Kerberos nor LDAP enabled.
-
-*Example*
-
-1. Run the updated {project-name} Installer in Guided Setup mode to perform the upgrade. Change information
-at prompts as applicable.
-+
-```
-$ cd $HOME/trafodion-installer-2.0/installer
-$ ./trafodion_install 
-******************************
- TRAFODION INSTALLATION START
-******************************
-
-***INFO: testing sudo access
-***INFO: Log file located at /var/log/trafodion/trafodion_install_2016-02-17-08-15-33.log
-***INFO: Config directory: /etc/trafodion
-***INFO: Working directory: /usr/lib/trafodion
-
-*******************************
- Trafodion Configuration Setup
-*******************************
-
-***INFO: Please press [Enter] to select defaults.
-
-Enter trafodion password, default is [traf123]:
-Enter list of nodes (blank separated), default []: trafodion-1.apache.org trafodion-2.apache.org
-Specify location of Java 1.7.0_65 or higher (JDK), default is [/usr/java/jdk1.7.0_67-cloudera]:
-Enter full path (including .tar or .tar.gz) of trafodion tar file []: /home/centos/trafodion-download/apache-trafodion-2.0.0-incubating-bin.tar.gz
-Enter Hadoop admin username, default is [admin]:
-Enter Hadoop admin password, default is [admin]:
-Enter Hadoop external network URL:port (no 'http://' needed), default is []: trafodion-1.apache.org:7180
-Enter HDFS username, default is [hdfs]:
-Enter HBase username, default is [hbase]:
-Enter HBase group, default is [hbase]:
-Enter directory to install trafodion to, default is [/home/trafodion/apache-trafodion-1.3.0-incubating-bin]: /home/centos/apache-trafodion-2.0.0-incubating-bin
-Start Trafodion after install (Y/N), default is Y:
-Total number of client connections per node, default [16]: 8
-Enable simple LDAP security (Y/N), default is N:
-***INFO: Configuration file: /etc/trafodion/trafodion_config
-***INFO: Trafodion configuration setup complete
-
-************************************
- Trafodion Configuration File Check
-************************************
-
-
-***INFO: Testing sudo access on node trafodion-1
-***INFO: Testing sudo access on node trafodion-2
-***INFO: Testing ssh on trafodion-1
-***INFO: Testing ssh on trafodion-2
-***INFO: Getting list of all cloudera nodes
-***INFO: Getting list of all cloudera nodes
-***INFO: cloudera list of nodes:  trafodion-1 trafodion-2
-***INFO: Testing ssh on trafodion-1
-***INFO: Testing ssh on trafodion-2
-***INFO: Testing sudo access on trafodion-1
-***INFO: Testing sudo access on trafodion-2
-***INFO: Checking cloudera Version
-***INFO: nameOfVersion=cdh5.3.0
-***INFO: HADOOP_PATH=/usr/lib/hbase/lib
-***INFO: Trafodion scanner will not be run.
-***DEBUG: trafodionFullName=trafodion_server-1.3.0.tgz
-***INFO: Trafodion version = 1.3.0
-***DEBUG: HBase's java_exec=/usr/java/jdk1.7.0_67-cloudera/bin/java
-
-******************************
- TRAFODION SETUP
-******************************
-
-***INFO: Installing required RPM packages
-***INFO: Starting Trafodion Package Setup (2016-02-17-08-16-11)
-***INFO: Installing required packages
-***INFO: Log file located in /var/log/trafodion
-***INFO: ... pdsh on node trafodion-1
-***INFO: ... pdsh on node trafodion-2
-***INFO: Checking if log4cxx is installed ...
-***INFO: Checking if sqlite is installed ...
-***INFO: Checking if expect is installed ...
-***INFO: Checking if perl-DBD-SQLite* is installed ...
-***INFO: Checking if protobuf is installed ...
-***INFO: Checking if xerces-c is installed ...
-***INFO: Checking if perl-Params-Validate is installed ...
-***INFO: Checking if perl-Time-HiRes is installed ...
-***INFO: Checking if gzip is installed ...
-***INFO: creating sqconfig file
-***INFO: Reserving DCS ports
-
-******************************
- TRAFODION MODS
-******************************
-
-***INFO: Cloudera installed will run traf_cloudera_mods98
-***INFO: Detected JAVA version 1.7
-***INFO: copying hbase-trx-cdh5_3-1.3.0.jar to all nodes
-***INFO: Cloudera Manager is on trafodion-1
-.
-.
-.
-Zookeeper is listening on port 2181
-DcsMaster is listening on port 23400
-
-Process         Configured      Actual          Down
----------       ----------      ------          ----
-DcsMaster       1               1
-DcsServer       2               2
-mxosrvr         8               8
-
-
-You can monitor the SQ shell log file : /home/trafodion/apache-trafodion-2.0.0-incubating-bin/logs/sqmon.log
-
-
-Startup time  0 hour(s) 1 minute(s) 9 second(s)
-Apache Trafodion Conversational Interface 1.3.0
-Copyright (c) 2015 Apache Software Foundation
->>
-
-End of MXCI Session
-
-***INFO: Installation completed successfully.
-
-*********************************
- TRAFODION INSTALLATION COMPLETE
-*********************************
 
-$
-```
-
-2. Switch to the {project-name} Runtime User and check the status of {project-name}.
-+
-```
-$ sudo su - trafodion
-$ sqcheck
-Checking if processes are up.
-Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.
-
-The SQ environment is up!
-
-
-Process         Configured      Actual      Down
--------         ----------      ------      ----
-DTM             2               2
-RMS             4               4
-MXOSRVR         8               8
-
-$
-```
-
-{project-name} is now running on your Hadoop cluster. Please refer to the <<activate,Activate>> chapter for
-basic instructions on how to verify the {project-name} management and how to perform basic management
-operations.
+From the user's perspective, a guided upgrade does not prompt for the {project-name} runtime user's password; otherwise it is identical to a guided install.
+Please refer to <<install-guided-install, Guided Install>> for an *example* of installing {project-name} on a two-node Cloudera Hadoop cluster.
 
 
 <<<
@@ -291,14 +135,12 @@ operations.
 
 You perform this step as the {project-name} Provisioning User.
 
-The `--config_file` option runs the {project-name} in Automated Setup mode. Refer to <<introduction-trafodion-installer,{project-name} Installer>>
+The `--config-file` option runs the {project-name} Installer in Automated Setup mode. Refer to <<introduction-trafodion-installer,{project-name} Installer>>
 in the <<introduction,Introduction>> chapter for instructions of how you edit your configuration file.
 
 At a minimum, you need to change the following settings:
 
-* `LOCAL_WORKDIR`
-* `TRAF_PACKAGE`
-* `TRAF_HOME`
+* `traf_package`
 
 *Example*
 
@@ -307,126 +149,16 @@ $ cd $HOME/trafodion-configuration
 $ cp my_config my_config_2.0
 $ # Pre edit content
 
-export LOCAL_WORKDIR="/home/centos/trafodion-installer/installer"
-export TRAF_PACKAGE="/home/centos/trafodion-download/apache-trafodion-1.3.0-incubating-bin.tar.gz"
-export TRAF_HOME="/home/trafodion/apache-trafodion-1.3.0-incubating-bin"
+traf_package = "/home/centos/trafodion-download/apache-trafodion-2.0.0-incubating.tar.gz"
 
 $ # Use your favorite editor to modify my_config_2.0
 $ emacs my_config_2.0
 $ # Post edit changes
 
-export LOCAL_WORKDIR="/home/centos/trafodion-installer-2.0/installer"
-export TRAF_PACKAGE="/home/centos/trafodion-download/apache-trafodion-2.0.0-incubating-bin.tar.gz"
-export TRAF_HOME="/home/trafodion/apache-trafodion-2.0.0-incubating-bin"
+traf_package = "/home/centos/trafodion-download/apache-trafodion-2.1.0-incubating.tar.gz"
 ```
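Before rerunning the installer, the single-setting edit above can be sanity-checked with a short script. This is an illustrative sketch only: it assumes the ini file uses plain `key = value` lines as in the excerpt, and the `read_traf_package` helper plus the synthetic `[cfg]` section header (added only so `configparser` will accept a sectionless snippet) are not part of the installer:

```python
import configparser

def read_traf_package(text):
    """Parse sectionless key = value config text and return the
    traf_package value with surrounding quotes stripped."""
    cp = configparser.ConfigParser()
    # Prepend a dummy section header; the excerpt shows no section line.
    cp.read_string("[cfg]\n" + text)
    return cp.get("cfg", "traf_package", fallback="").strip('"')

sample = 'traf_package = "/home/centos/trafodion-download/apache-trafodion-2.1.0-incubating.tar.gz"\n'
print(read_traf_package(sample))
# → /home/centos/trafodion-download/apache-trafodion-2.1.0-incubating.tar.gz
```

Confirming the path before launch avoids a failed run against a stale package location.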
 
-
-The following example shows an upgrade of {project-name} on a two-node Hortonworks Hadoop cluster using
-Automated Setup mode without Kerberos nor LDAP enabled.
-
 NOTE: The {project-name} Installer performs the same configuration changes as it does for an installation,
 including restarting Hadoop services.
 
-*Example*
-
-1. Run the updated {project-name} Installer using the modified my_config_2.0 file.
-+
-```
-$ cd $HOME/trafodion-installer-2.0/installer
-$ ./trafodion_install --config_file $HOME/trafodion-configuration/my_config_2.0
-******************************
- TRAFODION INSTALLATION START
-******************************
-
-***INFO: Testing sudo access on node trafodion-1
-***INFO: Testing sudo access on node trafodion-2
-***INFO: Testing ssh on trafodion-1
-***INFO: Testing ssh on trafodion-2
-***INFO: Getting list of all hortonworks nodes
-***INFO: Getting list of all hortonworks nodes
-***INFO: hortonworks list of nodes:  trafodion-1 trafodion-2
-***INFO: Testing ssh on trafodion-1
-***INFO: Testing ssh on trafodion-2
-***INFO: Testing sudo access on trafodion-1
-***INFO: Testing sudo access on trafodion-2
-***INFO: Trafodion scanner will not be run.
-***DEBUG: trafodionFullName=trafodion_server-2.0.0.tgz
-***INFO: Trafodion version = 2.0.0
-***DEBUG: HBase's java_exec=/usr/jdk64/jdk1.7.0_67/bin/java
-
-******************************
- TRAFODION SETUP
-******************************
-
-***INFO: Installing required RPM packages
-***INFO: Starting Trafodion Package Setup (2016-02-17-05-33-29)
-***INFO: Installing required packages
-***INFO: Log file located in /var/log/trafodion
-***INFO: ... pdsh on node trafodion-1
-***INFO: ... pdsh on node trafodion-2
-***INFO: Checking if log4cxx is installed ...
-.
-.
-.
-DcsMaster is not started. Please start DCS using 'dcsstart' command...
-
-Process         Configured      Actual          Down
----------       ----------      ------          ----
-DcsMaster       1               0               1
-DcsServer       2               0               2
-mxosrvr         8               8
-
-
-You can monitor the SQ shell log file : /home/trafodion/apache-trafodion-2.0.0-incubating-bin/logs/sqmon.log
-
-
-Startup time  0 hour(s) 1 minute(s) 9 second(s)
-Apache Trafodion Conversational Interface 1.3.0
-Copyright (c) 2015 Apache Software Foundation
->>Metadata Upgrade: started
-
-Version Check: started
-  Metadata is already at Version 1.1.
-Version Check: done
-
-Metadata Upgrade: done
-
-
---- SQL operation complete.
->>
-
-End of MXCI Session
-
-***INFO: Installation completed successfully.
-
-*********************************
- TRAFODION INSTALLATION COMPLETE
-*********************************
-
-$
-```
-
-2. Switch to the {project-name} Runtime User and check the status of {project-name}.
-+
-```
-$ sudo su - trafodion
-$ sqcheck
-Checking if processes are up.
-Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.
-
-The SQ environment is up!
-
-
-Process         Configured      Actual      Down
--------         ----------      ------      ----
-DTM             2               2
-RMS             4               4
-MXOSRVR         8               8
-
-$
-```
-
-{project-name} is now running on your Hadoop cluster. Please refer to the <<activate,Activate>> chapter for
-basic instructions on how to verify the {project-name} management and how to perform basic management
-operations.
-
+Please refer to <<install-automated-install, Automated Install>> for an *example* of installing {project-name} on a two-node Cloudera Hadoop cluster.



[2/3] incubator-trafodion git commit: [TRAFODION-2482] documentation for python installer

Posted by sa...@apache.org.
[TRAFODION-2482] documentation for python installer


Project: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/commit/92c80fd3
Tree: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/tree/92c80fd3
Diff: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/diff/92c80fd3

Branch: refs/heads/release2.1
Commit: 92c80fd3750d7b90b147c62aa095b6d63ecf4a50
Parents: 3e0d531
Author: Eason <hf...@gmail.com>
Authored: Fri Feb 24 12:52:46 2017 +0800
Committer: Eason <hf...@gmail.com>
Committed: Fri Feb 24 12:52:46 2017 +0800

----------------------------------------------------------------------
 .../src/asciidoc/_chapters/enable_security.adoc |  19 +-
 .../src/asciidoc/_chapters/introduction.adoc    | 435 +++++---------
 .../src/asciidoc/_chapters/prepare.adoc         | 110 ++--
 .../src/asciidoc/_chapters/quickstart.adoc      | 485 +--------------
 .../src/asciidoc/_chapters/requirements.adoc    |  37 +-
 .../src/asciidoc/_chapters/script_install.adoc  | 599 ++++++++-----------
 .../src/asciidoc/_chapters/script_remove.adoc   |  50 +-
 .../src/asciidoc/_chapters/script_upgrade.adoc  | 286 +--------
 8 files changed, 520 insertions(+), 1501 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/92c80fd3/docs/provisioning_guide/src/asciidoc/_chapters/enable_security.adoc
----------------------------------------------------------------------
diff --git a/docs/provisioning_guide/src/asciidoc/_chapters/enable_security.adoc b/docs/provisioning_guide/src/asciidoc/_chapters/enable_security.adoc
index 7ce495f..df6ced7 100644
--- a/docs/provisioning_guide/src/asciidoc/_chapters/enable_security.adoc
+++ b/docs/provisioning_guide/src/asciidoc/_chapters/enable_security.adoc
@@ -86,28 +86,25 @@ TBD - A future update will include details on how tickets can be managed at the
 
 === Kerberos installation
 The {project-name} installation scripts automatically determine if Kerberos is enabled on the node.  If it is enabled,
-then the environment variable SECURE_HADOOP is set to "Y".  
+then the environment variable SECURE_HADOOP is set to "Y".
 
 The following are questions that will be asked related to Kerberos:
 
 * Enter KDC server address, default is []: - no default
 * Enter admin principal (include realm), default is []:  - no default
-* Enter fully qualified name for HBase keytab, default is []: - Installer searches for a valid keytab based on the distribution
-* Enter fully qualified name for HDFS keytab, default is []: - Installer searches for a valid keytab based on the distribution
-* Enter max lifetime for Trafodion principal (valid format required), default is [24hours]:
-* Enter renew lifetime for Trafodion principal (valid format required), default is [7days]:
-* Enter Trafodion keytab name, default is []:  - Installer determines default name based on the distribution
-* Enter keytab location, default is []:  - Installer determins default name based on the distribution
+* Enter password for admin principal:
 
-NOTE: The {project-name} installer always asked for the KDC admin password when enabling Kerberos independent on whether running in Automated
-of Guided mode. It does not save this password.
+NOTE: The KDC admin password is saved only in the configuration file `db_config.bakYYMMDD_HHMM` in the installer folder after installation completes.
+You can delete this file for security reasons.
+NOTE: Keytab files are auto-detected by the installer on CDH/HDP clusters.
+NOTE: The installer does not support Kerberos-enabled Apache Hadoop in this release.
 
 [[enable-security-ldap]]
 == Configuring LDAP
 {project-name} does not manage user names and passwords internally but supports authentication via directory servers using
 the OpenLDAP protocol, also known as LDAP servers. You can configure the LDAP servers during installation by answering the {project-name}
 Installer's prompts. To configure LDAP after installation run the {project-name} security installer directly.  Installing LDAP also enables database
-authorization (privilege support). 
+authorization (privilege support).
 
 Once authentication and authorization are enabled, {project-name} allows users to be registered in the database and allows privileges
 on objects to be granted to users and roles (which are granted to users). {project-name} also supports component-level (or system-level)
@@ -510,7 +507,7 @@ SQL USER CONNECTED user not connected
 SQL USER DB NAME   SQLUSER1
 SQL USER ID        33367
 TERMINAL CHARSET   ISO88591
-TRANSACTION ID     
+TRANSACTION ID
 TRANSACTION STATE  not in progress
 WARNINGS           on
 ```

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/92c80fd3/docs/provisioning_guide/src/asciidoc/_chapters/introduction.adoc
----------------------------------------------------------------------
diff --git a/docs/provisioning_guide/src/asciidoc/_chapters/introduction.adoc b/docs/provisioning_guide/src/asciidoc/_chapters/introduction.adoc
index 0351129..ca3e543 100644
--- a/docs/provisioning_guide/src/asciidoc/_chapters/introduction.adoc
+++ b/docs/provisioning_guide/src/asciidoc/_chapters/introduction.adoc
@@ -50,9 +50,9 @@ respective environment's configuration settings per {project-name} requirements.
 <<requirements-trafodion-provisioning-user,{project-name} Provisioning User>> for more information
 about the requirements and usage associated with this user ID.
 
-* *Runtime User*: A Linux-level user under which the {project-name} software runs. This user ID must be registered
-as a user in the Hadoop Distributed File System (HDFS) to store and  access objects in HDFS, HBase, and Hive.
-In addition, this  user ID requires passwordless access among the nodes where {project-name} is installed.
+* *Runtime User*: A Linux-level user under which the {project-name} software runs, default name is `trafodion`. This user ID must be registered
+as a user in the Hadoop Distributed File System (HDFS) to store and access objects in HDFS, HBase, and Hive.
+In addition, this user ID requires passwordless access among the nodes where {project-name} is installed.
 Refer to <<requirements-trafodion-runtime-user,{project-name} Runtime User>> for more information about this user ID.
 
 * *{project-name} Database Users*: {project-name} users are managed by {project-name} security features (grant, revoke, etc.),
@@ -142,13 +142,13 @@ include basic management tasks such as starting and checking the status of the {
 
 * *<<enable-security,Enable Security>>*: Activities related to enabling security features on an already installed
 {project-name} installation.  These activities include tasks such as adding Kerberos principals and keytabs,
-and setting up the LDAP configuration files.
+and setting up the LDAP configuration files. *Currently supported only by the bash installer.*
 
 [[introduction-provisioning-master-node]]
 == Provisioning Master Node
-All provisioning tasks are performed from a single node in the cluster, which must be part
-of the Hadoop environment you're adding {project-name} to. This node is referred to as the
-"*Provisioning Master Node*" in this guide.
+All provisioning tasks are performed from a single node in the cluster, which can be any node
+as long as it has access to the Hadoop environment you're adding {project-name} to.
+This node is referred to as the "*Provisioning Master Node*" in this guide.
 
 The {project-name} Provisioning User must have access to all other nodes from the Provisioning
 Master Node in order to perform provisioning tasks on the cluster.
@@ -165,24 +165,23 @@ Next, you unpack the tar file.
 ```
 $ mkdir $HOME/trafodion-installer
 $ cd $HOME/trafodion-downloads
-$ tar -zxf apache-trafodion-installer-1.3.0-incubating-bin.tar.gz -C $HOME/trafodion-installer
-$ 
+$ tar -zxf apache-trafodion-pyinstaller-2.1.0-incubating.tar.gz -C $HOME/trafodion-installer
+$
 ```
 
 <<<
 The {project-name} Installer supports two different modes:
 
 1. *Guided Setup*: Prompts for information as it works through the installation/upgrade process. This mode is recommended for new users.
-2. *Automated Setup*: Required information is provided in a pre-formatted bash-script configuration file, which is provided
-via a command argument when running the {project-name} Installer thereby suppressing all prompts. There is one exception, 
-if Kerberos is enabled on the cluster, then you will always be prompted for the KDC admin password.  We do not store the 
-KDC admin password as part of installation anywhere.
+2. *Automated Setup*: Required information is provided in a pre-formatted ini configuration file, which is provided
+via a command argument when running the {project-name} Installer, thereby suppressing all prompts. This ini configuration file exists only
+on the *Provisioning Master Node*; secure this file or delete it after {project-name} is installed successfully.
 +
-A template of the configuration file is available here within the installer directory: `trafodion_config_default`.
+A template of the configuration file is available within the installer directory: `configs/db_config_default.ini`.
 Make a copy of the file in your directory and populate the needed information.
 +
 Automated Setup is recommended since it allows you to record the required provisioning information ahead of time.
-Refer to <<introduction-trafodion-automated-setup,Automated Setup>> for information about how to
+Refer to <<introduction-trafodion-installer-automated-setup,Automated Setup>> for information about how to
 populate this file.
 
 [[introduction-trafodion-installer-usage]]
@@ -191,21 +190,30 @@ populate this file.
 The following shows help for the {project-name} Installer.
 
 ```
-./trafodion_install --help
-
-This script will install Trafodion. It will create a configuration
-file (if one has not been created), setup of the environment needed
-for Trafodion, configure HBase with Hbase-trx and co-processors needed,
-and install a specified Trafodion build.
+$ ./db_install.py -h
+**********************************
+  Trafodion Installation ToolKit
+**********************************
+Usage: db_install.py [options]
+  Trafodion install main script.
 
 Options:
-    --help             Print this message and exit
-    --accept_license   If provided, the user agrees to accept all the
-                       provisions in the Trafodion license.  This allows
-                       for automation by skipping the display and prompt of
-                       the Trafodion license.
-    --config_file      If provided, all install prompts will be
-                       taken from this file and not prompted for.
+  -h, --help            show this help message and exit
+  -c FILE, --config-file=FILE
+                        Json format file. If provided, all install prompts
+                        will be taken from this file and not prompted for.
+  -u USER, --remote-user=USER
+                        Specify ssh login user for remote server,
+                        if not provided, use current login user as default.
+  -v, --verbose         Verbose mode, will print commands.
+  --silent              Do not ask user to confirm configuration result
+  --enable-pwd          Prompt SSH login password for remote hosts.
+                        If set, 'sshpass' tool is required.
+  --build               Build the config file in guided mode only.
+  --reinstall           Reinstall Trafodion without restarting Hadoop.
+  --apache-hadoop       Install Trafodion on top of Apache Hadoop.
+  --offline             Enable local repository for offline installing
+                        Trafodion.
 ```
 
 <<<
@@ -217,6 +225,8 @@ or an upgrade by looking for the {project-name} Runtime User in the `/etc/passwd
 
 * If the user ID doesn't exist, then the {project-name} Installer runs in install mode.
 * If the user ID exists, then the {project-name} Installer runs in upgrade mode.
+* If the `--reinstall` option is specified, the {project-name} Installer does not restart Hadoop. This option is valid only when
+you reinstall the same release version; otherwise, an error is reported during installation.
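The mode selection described above — install when the runtime user is absent from `/etc/passwd`, upgrade when present — can be sketched as follows. This is a minimal illustration of the documented decision logic, not the installer's actual code; the `provisioning_mode` function name is hypothetical:

```python
def provisioning_mode(passwd_text, traf_user="trafodion", reinstall=False):
    """Mirror the documented mode selection: 'install' if the runtime
    user is absent from passwd-format text, otherwise 'upgrade', or
    'reinstall' when the --reinstall flag is given."""
    user_exists = any(line.split(":")[0] == traf_user
                      for line in passwd_text.splitlines() if line)
    if not user_exists:
        return "install"
    return "reinstall" if reinstall else "upgrade"
```

For example, a node whose `/etc/passwd` already lists `trafodion` is treated as an upgrade target.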
 
 
 [[introduction-trafodion-installer-guided-setup]]
@@ -233,16 +243,16 @@ Refer to the following sections for examples:
 [[introduction-trafodion-installer-automated-setup]]
 === Automated Setup
 
-The `--config_file` option runs the {project-name} in Automated Setup mode.
+The `--config-file` option runs the {project-name} Installer in Automated Setup mode.
 
 Before running the {project-name} Installer with this option, you do the following:
 
-1. Copy the `trafodion_config_default` file.
+1. Copy the `db_config_default.ini` file.
 +
 *Example*
 +
 ```
-cp trafodion_config_default my_config
+cp configs/db_config_default.ini my_config
 ```
 
 2. Edit the new file using information you collect in the
@@ -254,258 +264,137 @@ section in the <<prepare,Prepare>> chapter.
 *Example*
 +
 ```
-./trafodion_installer --config_file my_config
+./db_install.py --config-file my_config
 ```
 
 NOTE: Your {project-name} Configuration File contains the password for the {project-name} Runtime User
 and for the Distribution Manager. Therefore, we recommend that you secure the file in a manner
-that matches the security policies of your organization. 
+that matches the security policies of your organization.
 
-NOTE: If you are installing {project-name} on a version of Hadoop that has been instrumented with Kerberos,
-you will be asked for a password associated with a Kerberos administrator.  
+==== Example: Quick start using a {project-name} Configuration File
+The {project-name} Installer supports a minimal configuration that lets you quick-start your installation in two steps.
+1. Copy the {project-name} server binary file to your installer directory.
+```
+cp /path/to/apache-trafodion_server-2.1.0-RH-x86_64-incubating.tar.gz python-installer/
+```
+2. Modify the configuration file `my_config`, adding the Hadoop Distribution Manager URL to `mgr_url`.
+```
+mgr_url = 192.168.0.1:8080
+```
+Once completed, run the {project-name} Installer with the `--config-file` option.
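The two quick-start prerequisites above can be checked programmatically before invoking `db_install.py`. This sketch assumes the server package sits in the installer directory and that `mgr_url` is a bare `host:port` value as shown in the example; the `quickstart_ready` function is illustrative, not part of the installer:

```python
import re

def quickstart_ready(installer_dir_files, mgr_url):
    """Check the two quick-start prerequisites: a server tarball is
    present in the installer directory, and mgr_url looks like host:port."""
    has_package = any(re.search(r"apache-trafodion_server-.*\.tar\.gz$", f)
                      for f in installer_dir_files)
    url_ok = bool(re.match(r"^[\w.\-]+:\d+$", mgr_url))
    return has_package and url_ok
```

For instance, a directory listing containing `apache-trafodion_server-2.1.0-RH-x86_64-incubating.tar.gz` together with `mgr_url = 192.168.0.1:8080` satisfies both checks.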
 
 ==== Example: Creating a {project-name} Configuration File
 
 Using the instructions in <<prepare-gather-configuration-information,Gather Configuration Information>>
-in the <<prepare,Prepare>> chapter, you record the following information.
-
-[cols="30%l,50%,20%",options="header"]
-|===
-| ID                      | Information                                                                                | Setting                       
-| ADMIN                   | Administrator user name for Apache Ambari or Cloudera Manager.                             | admin                         
-| ADMIN_PRINCIPAL         | Kerberos principal for the KDC admin user including the realm.                             |
-| BACKUP_DCS_NODES        | List of nodes where to start the backup DCS Master components.                             | 
-| CLOUD_CONFIG            | Whether you're installing {project-name} on a cloud environment.                                | N 
-| CLOUD_TYPE              | What type of cloud environment you're installing {project-name} on.                             | 
-| CLUSTER_NAME            | The name of the Hadoop Cluster.                                                            | Cluster 1
-| DB_ROOT_NAME            | LDAP name used to connect as database root user | trafodion
-| DCS_BUILD               | Tar file containing the DCS component.                                                     | 
-| DCS_PRIMARY_MASTER_NODE | The node where the primary DCS should run.                                                 | 
-| DCS_SERVER_PARM         | Number of concurrent client sessions per node.                                             | 8
-| ENABLE_HA               | Whether to run DCS in high-availability (HA) mode.                                         | N
-| EPEL_RPM                | Location of EPEL RPM. Specify if you don't have access to the Internet.                    | 
-| FLOATING_IP             | IP address if running DCS in HA mode.                                                      | 
-| HADOOP_TYPE             | The type of Hadoop distribution you're installing {project-name} on.                            | cloudera
-| HBASE_GROUP             | Linux group name for the HBASE administrative user.                                         | hbase
-| HBASE_KEYTAB            | Kerberos service keytab for HBase admin principal.                                                      | Default based on distribution
-| HBASE_USER              | Linux user name for the HBASE administrative user.                                          | hbase
-| HDFS_KEYTAB             | Kerberos service keytab for HDFS admin principal.                                                       | Default based on distribution
-| HDFS_USER               | Linux user name for the HDFS administrative user.                                           | hdfs 
-| HOME_DIR                | Root directory under which the `trafodion` home directory should be created.               | /home 
-| INIT_TRAFODION          | Whether to automatically initialize the {project-name} database.                                | Y
-| INTERFACE               | Interface type used for $FLOATING_IP.                                                      | 
-| JAVA_HOME               | Location of Java 1.7.0_65 or higher (JDK).                                                 | /usr/java/jdk1.7.0_67-cloudera
-| KDC_SERVER              | Location of Kerberos server for admin access                                               |
-| LDAP_CERT               | Full path to TLS certificate.                                                              | 
-| LDAP_HOSTS              | List of nodes where LDAP Identity Store servers are running.                               | 
-| LDAP_ID                 | List of LDAP unique identifiers.                                                           | 
-| LDAP_LEVEL              | LDAP Encryption Level.                                                                     | 
-| LDAP_PASSWORD           | Password for LDAP_USER.                                                                    | 
-| LDAP_PORT               | Port used to communicate with LDAP Identity Store.                                         | 
-| LDAP_SECURITY           | Whether to enable LDAP authentication.                                                     | N   
-| LDAP_USER               | LDAP Search user name.                                                                     | 
-| LOCAL_WORKDIR           | The directory where the {project-name} Installer is located.                                    | /home/centos/trafodion-installer/installer
-| MANAGEMENT_ENABLED      | Whether your installation uses separate management nodes.                                  | N
-| MANAGEMENT_NODES        | The FQDN names of management nodes, if any.                                                | 
-| MAX_LIFETIME            | Kerberos ticket lifetime for {project-name} principal                                      | 24hours
-| NODE_LIST               | The FQDN names of the nodes where {project-name} will be installed.                             | trafodion-1 trafodion-2
-| PASSWORD                | Administrator password for Apache Ambari or Cloudera Manager.                              | admin
-| RENEW_LIFETIME          | Kerberos ticket renewal lifetime for {project-name} principal                              | 7days
-| REST_BUILD              | Tar file containing the REST component.                                                    | 
-| SECURE_HADOOP           | Indicates whether Hadoop has Kerberos enabled                                               | Based on whether Kerberos is enabled for your Hadoop installation
-| TRAF_HOME                 | Target directory for the {project-name} software.                                               | /home/trafodion/apache-trafodion-1.3.0-incubating-bin
-| START                   | Whether to start {project-name} after install/upgrade.                                          | Y
-| SUSE_LINUX              | Whether your installing {project-name} on SUSE Linux.                                           | false
-| TRAF_PACKAGE            | The location of the {project-name} installation package tar file or core installation tar file. | /home/centos/trafodion-download/apache-trafodion-1.3.0-incubating-bin.tar.gz
-| TRAF_KEYTAB             | Kerberos keytab for `trafodion` principal.                                                      | Default keytab based on distribution
-| TRAF_KEYTAB_DIR         | Location of Kerberos keytab for the `trafodion` principal.                                      | Default location based on distribution
-| TRAF_USER               | The {project-name} runtime user ID. Must be `trafodion` in this release.                         | trafodion
-| TRAF_USER_PASSWORD      | The password used for the `trafodion:trafodion` user ID.                                   | traf123
-| URL                     | FQDN and port for the Distribution Manager's REST API.                                     | trafodion-1.apache.org:7180
-|===
-
-Next, you edit `my_config` to contain the following:
+in the <<prepare,Prepare>> chapter, record the information and edit `my_config` to contain the following:
 
 ```
-#!/bin/bash
-# @@@ START COPYRIGHT @@@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-# @@@ END COPYRIGHT @@@
-
-#====================================================
-# Trafodion Configuration File
-# This file contains default values for the installer.
-
-# Users can also edit this file and provide values for all parameters
-# and then specify this file on the run line of trafodion_install.
-# Example:
-# ./trafodion_install --config_file <Trafodion-config-file>
-# WARNING: This mode is for advanced users!
-#
-#=====================================================
-
-
-#=====================================================
-#Must be set to 'true' if on a SUSE linux system. If on another type of system
-#this must be set to false.
-
-export SUSE_LINUX="false"
-
-# The working directory where Trafodion installer untars files, etc.
-# do not change this unless you really know what you are doing
-export TRAF_WORKDIR="/usr/lib/trafodion"
-
-# This is the directory where the installer scripts were untarred to
-export LOCAL_WORKDIR="/home/centos/trafodion-installer/installer"
-
-# The maximum number of dcs servers, i.e. client connections
-export DCS_SERVERS_PARM="8"
-
-# "true" if this is an upgrade
-export UPGRADE_TRAF="false"
-
-# Trafodion userid, This is the userid the Trafodion instance will run under
-export TRAF_USER="trafodion"
-
-# Trafodion userid's password
-export TRAF_USER_PASSWORD="traf123"
-
-# a blank separated list of nodes in your cluster
-# node names should include full domain names
-#This can not be left blank!
-export NODE_LIST="trafodion-1 trafodion-2"
-
-# count of nodes in node list
-export node_count="2"
-
-# another list of the same nodes in NODE_LIST but specified in a pdsh usable format
-# i.e.  "-w centos-cdh[1-6]"  or "-w node1 -w node2 -w node3"
-export MY_NODES="-w trafodion-[1-2]"
-
-# the directory prefix for the trafodion userid's $HOME directory
-# i.e. /opt/home, not /opt/home/trafodion
-export HOME_DIR="/home"
-
-#JAVA HOME must be a JDK. Must include FULL Path. Must be 1.7.0_65 or higher.
-
-export JAVA_HOME="/usr/java/jdk1.7.0_67-cloudera"
-
-# If your machine doesn't have external internet access then you must
-# specify the location of the EPEL rpm, otherwise leave blank and it
-# will be installed from the internet
-export EPEL_RPM=""
-
-# full path of the Trafodion package tar file
-export TRAF_PACKAGE="/home/centos/trafodion-download/apache-trafodion-1.3.0-incubating-bin.tar.gz"
-
-# if TRAF_PACKAGE wasn't specified then these two values must be specified
-# TRAF_BUILD is the trafodion_server tar file
-# DCS_BUILD is the DCS tar file
-# REST_BUILD is the REST tar file
-export TRAF_BUILD=""
-export DCS_BUILD=""
-export REST_BUILD=""
-# Either "cloudera" or "hortonworks" (all lowercase)
-export HADOOP_TYPE="cloudera"
-
-# The URL for Cloudera/Hortonworks REST API (i.e. node1.host.com:8080)
-export URL="trafodion-1.apache.org:7180"
-
-# Cloudera/Hortonworks UI admin's userid and password
-export ADMIN="admin"
-export PASSWORD="admin"
-
-# hadoop cluster name
-export CLUSTER_NAME=""
-
-# the Hadoop HDFS userid
-export HDFS_USER="hdfs"
-
-# the Hadoop HBase userid and group
-export HBASE_USER="hbase"
-export HBASE_GROUP="hbase"
-
-# The hadoop HBase service name
-export HBASE="hbase"
-
-# full path of where to install Trafodion to
-# Example is used below. If $HOME_DIR or $TRAF_USER have been changed
-# then this will need to be changed.
-# On an upgrade, it is recommend to choose a different directory.
-# First time install : /home/trafodion/traf
-# On Upgrade: /home/trafodion/traf_<date>
-# By doing this the previous version will remain and allow for an easier rollback.
-export TRAF_HOME="/home/trafodion/apache-trafodion-1.3.0-incubating-bin"
-
-# Start Trafodion after install completes
-export START="Y"
-
-# initialize trafodion after starting
-export INIT_TRAFODION="Y"
-
-# full path to the sqconfig file
-# Default is to leave as is and this file will be created.
-export SQCONFIG=""
-
-#-----------------  security configuration information -----------------
-#Enter in Kerberos details if Kerberos is enabled on your cluster
-
-#Indicate Kerberos is enabled
-export SECURE_HADOOP="N"
-
-#Location of Kerberos server for admin access
-export KDC_SERVER=""
-
-#Kerberos Admin principal used to create Trafodion principals and keytabs
-#Please include realm, for example: trafadmin/admin@MYREALM.COM
-export ADMIN_PRINCIPAL=""
-
-#Keytab for HBase admin user, used to grant Trafodion user CRWE privilege
-export HBASE_KEYTAB=""
-
-#Keytab for HDFS admin user, used to create data directories for Trafodion 
-export HDFS_KEYTAB=""
-
-#Kerberos ticket defaults for the Trafodion user
-export MAX_LIFETIME="24hours"
-export RENEW_LIFETIME="7days"
-
-#Trafodion keytab information
-export TRAF_KEYTAB=""
-export TRAF_KEYTAB_DIR=""
-
-#Enter in LDAP configuration information
-#Turn on authentication - MUST have existing LDAP configured.
-export LDAP_SECURITY="Y"
-
-#Name of LDAP Config file
-export LDAP_AUTH_FILE="traf_authentication_config_`hostname -s`"
-
-#LDAP name to map to database user DB__ROOT
-DB_ROOT_NAME="trafodion"
-#-----------------      end security configuration     -----------------
-
-export CONFIG_COMPLETE="true"
+[dbconfigs]
+# NOTICE: if you are using CDH/HDP hadoop distro,
+# you only need to specify the management url address for a quick install
+
+##################################
+# Common Settings
+##################################
+
+# trafodion username and password
+traf_user = trafodion
+traf_pwd = traf123
+# trafodion user's home directory
+home_dir = /home
+# the directory location of trafodion binary
+# if not provided, the default value will be {package_name}-{version}
+traf_dirname =
+
+# trafodion used java(JDK) path on trafodion nodes
+# if not provided, installer will auto detect installed JDK
+java_home =
+
+# cloudera/ambari management url(i.e. http://192.168.0.1:7180 or just 192.168.0.1)
+# if 'http' or 'https' prefix is not provided, the default one is 'http'
+# if port is not provided, the default port is cloudera port '7180'
+mgr_url = 192.168.0.1:8080
+# user name for cloudera/ambari management url
+mgr_user = admin
+# password for cloudera/ambari management url
+mgr_pwd = admin
+# set the cluster number if multiple clusters are managed by one Cloudera Manager
+# ignore it if only one cluster is being managed
+cluster_no = 1
+
+# trafodion tar package file location
+# no need to provide it if the package can be found in current installer's directory
+traf_package =
+# the number of dcs servers on each node
+dcs_cnt_per_node = 4
+
+# scratch file location, separated by comma if more than one
+scratch_locs = $TRAF_HOME/tmp
+
+# start trafodion instance after installation completed
+traf_start = Y
+
+##################################
+# DCS HA configuration
+##################################
+
+# set it to 'Y' if enable DCS HA
+dcs_ha = N
+# if HA is enabled, provide floating ip, network interface and the hostname of backup dcs master nodes
+dcs_floating_ip =
+# network interface that dcs used
+dcs_interface =
+# backup dcs master nodes, separated by comma if more than one
+dcs_backup_nodes =
+
+##################################
+# Offline installation setting
+##################################
+
+# set offline mode to Y if no internet connection
+offline_mode = N
+# if offline mode is set, you must provide a local repository directory with all needed RPMs
+local_repo_dir =
+
+##################################
+# LDAP security configuration
+##################################
+
+# set it to 'Y' if enable LDAP security
+ldap_security = N
+# LDAP user name and password to be assigned as DB admin privilege
+db_admin_user = admin
+db_admin_pwd = traf123
+# LDAP user to be assigned DB root privileges (DB__ROOT)
+db_root_user = trafodion
+# if LDAP security is enabled, provide the following items
+ldap_hosts =
+# 389 for no encryption or TLS, 636 for SSL
+ldap_port = 389
+ldap_identifiers =
+ldap_encrypt = 0
+ldap_certpath =
+
+# set to Y if user info is needed
+ldap_userinfo = N
+# provide if ldap_userinfo = Y
+ldap_user =
+ldap_pwd =
+
+##################################
+# Kerberos security configuration
+##################################
+# if kerberos is enabled in your hadoop system, provide below info
+
+# KDC server address
+kdc_server =
+# include realm, i.e. admin/admin@EXAMPLE.COM
+admin_principal =
+# admin password for the admin principal; used to create the trafodion user's principal and keytab
+kdcadmin_pwd =
 ```
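
The `mgr_url` defaulting rules described in the sample configuration above (scheme defaults to `http`, port defaults to the Cloudera Manager port `7180`) can be sketched as follows. The helper name is illustrative only and is not part of the installer:

```python
# Illustrative sketch of the mgr_url defaulting rules documented in the
# sample configuration; not code from the Trafodion installer itself.
from urllib.parse import urlparse

def normalize_mgr_url(mgr_url, default_port=7180):
    """Apply the documented defaults: scheme http, Cloudera port 7180."""
    if not mgr_url.startswith(("http://", "https://")):
        mgr_url = "http://" + mgr_url           # default prefix is http
    parsed = urlparse(mgr_url)
    if parsed.port is None:                     # no port given: use the default
        mgr_url = "%s://%s:%d" % (parsed.scheme, parsed.hostname, default_port)
    return mgr_url

print(normalize_mgr_url("192.168.0.1"))         # http://192.168.0.1:7180
print(normalize_mgr_url("192.168.0.1:8080"))    # http://192.168.0.1:8080
```

An Ambari-managed cluster would typically supply the port explicitly (for example `192.168.0.1:8080`), since only the Cloudera port is defaulted.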
 
-Once completed, run the {project-name} Installer with the `--config_file` option.
+Once completed, run the {project-name} Installer with the `--config-file` option.
 
 Refer to the following sections for examples:
 
@@ -518,9 +407,3 @@ Refer to the following sections for examples:
 {project-name} stores its provisioning information in the following directories on each node in the cluster:
 
 * `/etc/trafodion`: Configuration information.
-* `/usr/lib/trafodion`: Copies of the files required by the installer.
-
-
-
-
-

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/92c80fd3/docs/provisioning_guide/src/asciidoc/_chapters/prepare.adoc
----------------------------------------------------------------------
diff --git a/docs/provisioning_guide/src/asciidoc/_chapters/prepare.adoc b/docs/provisioning_guide/src/asciidoc/_chapters/prepare.adoc
index c14e79c..34ca1a8 100644
--- a/docs/provisioning_guide/src/asciidoc/_chapters/prepare.adoc
+++ b/docs/provisioning_guide/src/asciidoc/_chapters/prepare.adoc
@@ -101,7 +101,7 @@ are met for each node in the cluster where you intend to install {project-name}.
 [[prepare-configure-kerberos]]
 == Configure Kerberos
 
-If your Hadoop installation has enabled Kerberos, then {project-name} needs to have Kerberos enabled.  If not, 
+If your Hadoop installation has enabled Kerberos, then {project-name} needs to have Kerberos enabled.  If not,
 then {project-name} will not run. If you plan to enable Kerberos in {project-name}, then you need to have access to a KDC (Kerberos Key Distribution
 Center) and administration credentials so you can create the necessary {project-name} principals and keytabs.
 
@@ -125,76 +125,48 @@ You need to gather/decide information about your environment to aid installation
 
 [cols="25%l,25%,15%l,35%",options="header"]
 |===
-| ID^1^              | Information                                                    | Default                       | Notes
-| ADMIN              | Administrator user name for Apache Ambari or Cloudera Manager. | admin                         | A user that can change configuration and restart services via the
-distribution manager's REST API.
-| ADMIN_PRINCIPAL^2^ | Kerberos admin principal to manage principals and keytabs      | None                          | Required if Kerberos is enabled.
-| BACKUP_DCS_NODES   | List of nodes where to start the backup DCS Master components. | None                          | Blank separated FQDN list. Not needed if $ENABLE_HA = N.
-| CLOUD_CONFIG       | Whether you're installing {project-name} on a cloud environment.    | N                             | N = bare-metal or VM installation.
-| CLOUD_TYPE         | What type of cloud environment you're installing {project-name} on. | None | { AWS \| OpenStack \| Other }. Not applicable for bare-metal or VM installation.
-| CLUSTER_NAME       | The name of the Hadoop Cluster.                                | None | From Apache Ambari or Cloudera Manager.
-| DB_ROOT_NAME^2^    | LDAP name used to connect as database root user                | trafodion                     | Required when LDAP is enabled.
-| DCS_BUILD          | Tar file containing the DCS component.                         | None | Not needed if using a {project-name} package installation tar file.
-| DCS_PRIMARY_MASTER_NODE | The node where the primary DCS should run.                | None | The DCS Master handles JDBC and ODBC connection requests.
-| DCS_SERVER_PARM    | Number of concurrent client sessions per node.                 | 16 | This number specifies the concurrent sessions per node to be supported. Each session could require up to 1GB of physical memory. The number can be changed post-installation. For more information,
+| ID                 | Information                                                    | Default                       | Notes
+| admin_principal    | Kerberos admin principal to manage principals and keytabs      | None                          | Required if Kerberos is enabled.
+| cluster_no         | Cluster number if multiple clusters are managed by one Cloudera Manager | 1                    | Not required for Hortonworks distributions
+| dcs_cnt_per_node   | Number of concurrent client sessions per node.                 | 4                             | This number specifies the concurrent sessions per node to be supported. Each session could require up to 1GB of physical memory. The number can be changed post-installation. For more information,
 refer to the {docs-url}/client_install/index.html[{project-name} Client Installation Guide].
-| ENABLE_HA          | Whether to run DCS in high-availability (HA) mode.             | N                             | You need the floating IP address, the interface, and the backup nodes for DCS Master if enabling this feature.
-| EPEL_RPM           | Location of EPEL RPM.                                          | None                          | Specify if you don't have access to the Internet.
-Downloaded automatically by the {project-name} Installer.
-| FLOATING_IP        | IP address if running DCS in HA mode.                          | None                          | Not needed if $ENABLE_HA = N. An FQDN name or IP address.
-| HADOOP_TYPE        | The type of Hadoop distribution you're installing {project-name} on. | None                         | Lowercase. cloudera or hadoop.
-| HBASE_GROUP        | Linux group name for the HBASE administrative user.             | hbase                         | Required in order to provide access to select HDFS directories to this user ID. 
-| HBASE_KEYTAB^2^    | HBase credentials used to grant {project-name} CRWE privileges | based on distribution         | Required if Kerberos is enabled.
-| HBASE_USER         | Linux user name for the HBASE administrative user.              | hbase                         | Required in order to provide access to select HDFS directories to this user ID. 
-| HDFS_KEYTAB^2^     | HDFS credentials used to set privileges on HDFS directories. . | based on distribution         | Required if Kerberos is enabled.
-| HDFS_USER          | Linux user name for the HDFS administrative user.               | hdfs                          | The {project-name} Installer uses `sudo su` to make HDFS
-configuration changes under this user.
-| HOME_DIR           | Root directory under which the `trafodion` home directory should be created. | /home           | *Example* +
- +
-If the home directory of the `trafodion` user is
-`/opt/home/trafodion`, then specify the root directory as `/opt/home`. 
-| INIT_TRAFODION     | Whether to automatically initialize the {project-name} database.    | N                             | Applies if $START=Y only.
-| INTERFACE          | Interface type used for $FLOATING_IP.                          | None                          | Not needed if $ENABLE_HA = N. 
-| JAVA_HOME          | Location of Java 1.7.0_65 or higher (JDK).                     | $JAVA_HOME setting            | Fully qualified path of the JDK. For example:
-`/usr/java/jdk1.7.0_67-cloudera`
-| KDC_SERVER^2^      | Location of host where Kerberos server exists                  | None                          | Required if Kerberos enabled.
-| LDAP_CERT^2^       | Full path to TLS certificate.                                  | None                          | Required if $LDAP_LEVEL = 1 or 2.
-| LDAP_HOSTS^2^      | List of nodes where LDAP Identity Store servers are running.   | None                          | Blank separated. FQDN format.
-| LDAP_ID^2^         | List of LDAP unique identifiers.                               | None                          | Blank separated.    
-| LDAP_LEVEL^2^      | LDAP Encryption Level.                                         | 0                             | 0: Encryption not used, 1: SSL, 2: TLS
-| LDAP_PASSWORD^2^   | Password for LDAP_USER.                                        | None                          | If LDAP_USER is required only.
-| LDAP_PORT^2^       | Port used to communicate with LDAP Identity Store.             | None                          | Examples: 389 for no encryption or TLS, 636 for SSL.
-| LDAP_SECURITY^2^   | Whether to enable simple LDAP authentication.                | N                             | If Y, then you need to provide LDAP_HOSTS.
-| LDAP_USER^2^       | LDAP Search user name.                                         | None                          | Required if you need additional LDAP functionally such as LDAPSearch. If so, must provide LDAP_PASSWORD, too.   
-| LOCAL_WORKDIR      | The directory where the {project-name} Installer is located.        | None                          | Full path, no environmental variables.
-| MANAGEMENT_ENABLED | Whether your installation uses separate management nodes.      | N                             | Y if using separate management nodes for Apache Ambari or Cloudera Manager.
-| MANAGEMENT_NODES   | The FQDN names of management nodes, if any.                    | None                          | Provide a blank-separated list of node names.
-| MAX_LIFETIME^2^    | Kerberos ticket lifetime for Trafodion principal               | 24hours                       | Can be specified when Kerberos is enabled.   
-| NODE_LIST          | The FQDN names of the nodes where {project-name} will be installed. | None                          | Provide a blank-separated list of node names. The {project-name}
-Provisioning ID must have passwordless and `sudo` access to these nodes.
-| PASSWORD           | Administrator password for Apache Ambari or Cloudera Manager.  | admin                         | A user that can change configuration and restart services via the
+| dcs_ha             | Whether to run DCS in high-availability (HA) mode.             | N                             | If Y, then you need to provide the DCS HA configuration values below.
+| db_admin_user      | LDAP name used to connect as database admin user               | admin                         | Required when LDAP is enabled.
+| db_root_user       | LDAP name used to connect as database root user                | trafodion                     | Required when LDAP is enabled.
+| dcs_backup_nodes   | List of nodes where to start the backup DCS Master components. | None                          | Required when DCS HA is enabled. Comma-separated FQDN list.
+| dcs_floating_ip    | IP address if running DCS in HA mode.                          | None                          | Required when DCS HA is enabled. An FQDN name or IP address.
+| dcs_interface      | Interface type used for dcs_floating_ip.                       | None                          | Required when DCS HA is enabled. For example, eth0.
+| home_dir           | Root directory under which the `trafodion` home directory should be created.   | /home                         | *Example* +
+If the home directory of the `trafodion` user is `/opt/home/trafodion`, then specify the root directory as `/opt/home`.
+| java_home          | Location of Java 1.7.0_65 or higher (JDK).                     | auto detected                 | Fully qualified path of the JDK. For example: `/usr/java/jdk1.7.0_67-cloudera`
+| kdcadmin_pwd^1^    | Password for the Kerberos admin principal                      | None                          | Should be removed from the configuration file or secured after installation.
+| kdc_server^1^      | Location of host where Kerberos server exists                  | None                          | Required if Kerberos enabled.
+| ldap_security^1^   | Whether to enable simple LDAP authentication.                  | N                             | If Y, then you need to provide the LDAP configuration values below.
+| ldap_encrypt^1^    | LDAP Encryption Level.                                         | 0                             | 0: Encryption not used, 1: SSL, 2: TLS
+| ldap_certpath^1^   | Full path to TLS certificate.                                  | None                          | Required if ldap_encrypt = 1 or 2.
+| ldap_hosts^1^      | List of nodes where LDAP Identity Store servers are running.   | None                          | Comma separated. FQDN format.
+| ldap_identifiers^1^| List of LDAP unique identifiers.                               | None                          | Comma separated.
+| ldap_port^1^       | Port used to communicate with LDAP Identity Store.             | None                          | Examples: 389 for no encryption or TLS, 636 for SSL.
+| ldap_userinfo      | Whether to use an LDAP search user.                            | N                             | If Y, then you need to provide ldap_user and ldap_pwd.
+| ldap_user^1^       | LDAP search user name.                                         | None                          | Required if you need additional LDAP functionality such as LDAPSearch. If so, you must provide ldap_pwd, too.
+| ldap_pwd^1^        | Password for ldap_user.                                        | None                          | Required if ldap_userinfo = Y.
+| local_repo_dir     | Folder location of the {project-name} local repository         | None                          | Required if offline_mode = Y. A local folder containing all {project-name} RPM dependencies and repodata. For example: `/opt/trafodion_repo`
+| mgr_url            | FQDN and port for the Distribution Manager's REST API.         | None                          | Include `http://` or `https://` as applicable. If no prefix, default is `http://`.
+Specify in the form: `<IP-address>:<port>` or `<node name>:<port>` Example: `https://vm-1.yourcompany.local:8080`
+| mgr_user           | Administrator user name for Apache Ambari or Cloudera Manager. | admin                         | A user that can change configuration and restart services via the
+distribution manager's REST API.
+| mgr_pwd            | Administrator password for Apache Ambari or Cloudera Manager.  | admin                         | A user that can change configuration and restart services via the
 distribution manager's REST API.
-| RENEW_LIFETIME^2^  | Number times Kerberos ticket is for the Trafodion principal    | 7days                         | Can be specified when Kerberos is enabled.   
-| REST_BUILD         | Tar file containing the REST component.                        | None | Not needed if using a {project-name} package installation tar file.
-| SECURE_HADOOP^2^   | Indicates whether Hadoop has enabled Kerberos                   | Y only if Kerberos enabled | Based on whether Kerberos is enabled for your Hadoop installation
-| TRAF_HOME            | Target directory for the {project-name} software.                   | $HOME_DIR/trafodion           | {project-name} is installed in this directory on all nodes in `$NODE_LIST`.
-| START              | Whether to start {project-name} after install/upgrade.              | N                             | 
-| SUSE_LINUX         | Whether your installing {project-name} on SUSE Linux.               | false                         | Auto-detected by the {project-name} Installer.
-| TRAF_KEYTAB^2^     | Name to use when specifying {project-name} keytab              | based on distribution         |  Required if Kerberos is enabled.
-| TRAF_KEYTAB_DIR^2^ | Location  of {project-name} keytab                             | based on distribution         |  Required if Kerberos is enabled.
-| TRAF_PACKAGE       | The location of the {project-name} installation package tar file or core installation tar file. | None | The package file contains the {project-name} server,
-DCS, and REST software while the core installation file contains the {project-name} server software only. If you're using a core installation file, then you need to
-record the location of the DCS and REST installation tar files, too. Normally, you perform {project-name} provisioning using a {project-name} package installation tar file.
-| TRAF_USER          | The {project-name} runtime user ID.                                  | trafodion                     | Must be `trafodion` in this release.
-| TRAF_USER_PASSWORD | The password used for the `trafodion:trafodion` user ID.       | traf123                       | Must be 6-8 characters long.
-| URL                | FQDN and port for the Distribution Manager's REST API.         | None                          | Include `http://` or `https://` as applicable. Specify in the form:
-`<IP-address>:<port>` or `<node name>:<port>` Example: `https://susevm-1.yourcompany.local:8080`
+| offline_mode       | Whether to install {project-name} without an internet connection. | N                          | If Y, then you must provide a local repository directory in local_repo_dir.
+| scratch_locs       | Overflow scratch file locations for large queries that cannot fit in memory.    | $TRAF_HOME/tmp                | Comma-separated if more than one folder. Should be located on a disk with ample space.
+| traf_dirname       | Target folder name for the {project-name} software.            | apache-trafodion-{version}    | {project-name} is installed in this directory under `$HOME` on all nodes in the cluster.
+| traf_package       | The location of the {project-name} server package tar file.    | auto detected in installer folder | The package file contains the {project-name} server, DCS, and REST software
+| traf_pwd           | The password used for the {project-name} runtime user ID.      | traf123                      | Must be 6-8 characters long.
+| traf_start         | Whether to start {project-name} after install/upgrade.         | Y                            |
+| traf_user          | The {project-name} runtime user ID.                            | trafodion                    | Must be `trafodion` in this release.
 |===
 
-1. The ID matches the environmental variables used in the {project-name} Installation configuration file. Refer to <<install-trafodion-installer,{project-name} Installer>>
-for more information.
-2. Refer to <<enable-security,Enable Security>> for more information about these security settings.
-
+1. Refer to <<enable-security,Enable Security>> for more information about these security settings.
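
The conditional requirements in the table above (LDAP, DCS HA, and offline mode each pull in further required items) can be sketched as a validation pass. The key names mirror the table, but the helper itself is illustrative and not part of the installer:

```python
# Illustrative validation of the conditional requirements listed in the
# table above; not code from the Trafodion installer itself.
def missing_required(cfg):
    """Return config keys the table marks as required but left empty."""
    required = ["traf_user", "traf_pwd", "mgr_url", "mgr_user", "mgr_pwd"]
    if cfg.get("ldap_security") == "Y":          # LDAP pulls in identity-store settings
        required += ["ldap_hosts", "ldap_port", "ldap_identifiers"]
        if cfg.get("ldap_encrypt") in ("1", "2"):
            required.append("ldap_certpath")     # SSL/TLS needs a certificate
    if cfg.get("dcs_ha") == "Y":                 # DCS HA needs floating-IP details
        required += ["dcs_floating_ip", "dcs_interface", "dcs_backup_nodes"]
    if cfg.get("offline_mode") == "Y":           # offline install needs a local repo
        required.append("local_repo_dir")
    return [key for key in required if not cfg.get(key)]

cfg = {"traf_user": "trafodion", "traf_pwd": "traf123",
       "mgr_url": "192.168.0.1:8080", "mgr_user": "admin", "mgr_pwd": "admin",
       "dcs_ha": "Y", "dcs_floating_ip": "192.168.0.100"}
print(missing_required(cfg))   # ['dcs_interface', 'dcs_backup_nodes']
```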
 
 <<<
 [[prepare-install-required-software-packages]]
@@ -212,13 +184,13 @@ If none of these situations exist, then we highly recommend that you use the {pr
 
 You perform this step as a user with `root` or `sudo` access.
 
-Install the packages listed in <<requirements-software-packages,Software Packages>> above on all nodes in the cluster. 
+Install the packages listed in <<requirements-software-packages,Software Packages>> above on all nodes in the cluster.
 
 <<<
 [[prepare-download-trafodion-binaries]]
 == Download {project-name} Binaries
 
-You download the {project-name} binaries from the {project-name} {download-url}[Download] page. 
+You download the {project-name} binaries from the {project-name} {download-url}[Download] page.
 Download the following packages:
 
 Command-line Installation

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/92c80fd3/docs/provisioning_guide/src/asciidoc/_chapters/quickstart.adoc
----------------------------------------------------------------------
diff --git a/docs/provisioning_guide/src/asciidoc/_chapters/quickstart.adoc b/docs/provisioning_guide/src/asciidoc/_chapters/quickstart.adoc
index 8abb145..fbdac4f 100644
--- a/docs/provisioning_guide/src/asciidoc/_chapters/quickstart.adoc
+++ b/docs/provisioning_guide/src/asciidoc/_chapters/quickstart.adoc
@@ -48,60 +48,55 @@ NOTE: You can download and install the {project-name} Clients once you've instal
 
 *Example*
 
+Download the Trafodion Installer and Server binaries:
 ```
 $ mkdir $HOME/trafodion-download
 $ cd $HOME/trafodion-download
 $ # Download the Trafodion Installer binaries
-$ wget http://apache.cs.utah.edu/incubator/trafodion/trafodion-1.3.0.incubating/apache-trafodion-installer-1.3.0-incubating-bin.tar.gz
+$ wget http://apache.cs.utah.edu/incubator/trafodion/trafodion-2.1.0.incubating/apache-trafodion-pyinstaller-2.1.0-incubating.tar.gz
 Resolving http://apache.cs.utah.edu... 192.168.1.56
 Connecting to http://apache.cs.utah.edu|192.168.1.56|:80... connected.
 HTTP request sent, awaiting response... 200 OK
 Length: 68813 (67K) [application/x-gzip]
-Saving to: "apache-trafodion-installer-1.3.0-incubating-bin.tar.gz"
+Saving to: "apache-trafodion-pyinstaller-2.1.0-incubating.tar.gz"
 
 100%[=====================================================================================================================>] 68,813       124K/s   in 0.5s
 
-2016-02-14 04:19:42 (124 KB/s) - "apache-trafodion-installer-1.3.0-incubating-bin.tar.gz" saved [68813/68813]
-```
+2016-02-14 04:19:42 (124 KB/s) - "apache-trafodion-pyinstaller-2.1.0-incubating.tar.gz" saved [68813/68813]
 
-<<<
-```
-$ # Download the Trafodion Server binaries
-$ wget http://apache.cs.utah.edu/incubator/trafodion/trafodion-1.3.0.incubating/apache-trafodion-1.3.0-incubating-bin.tar.gz
+$ wget http://apache.cs.utah.edu/incubator/trafodion/trafodion-2.1.0.incubating/apache-trafodion_server-2.1.0-RH-x86_64-incubating.tar.gz
 Resolving http://apache.cs.utah.edu... 192.168.1.56
 Connecting to http://apache.cs.utah.edu|192.168.1.56|:80... connected.
 HTTP request sent, awaiting response... 200 OK
 Length: 214508243 (205M) [application/x-gzip]
-Saving to: "apache-trafodion-1.3.0-incubating-bin.tar.gz"
+Saving to: "apache-trafodion_server-2.1.0-RH-x86_64-incubating.tar.gz"
 
 100%[=====================================================================================================================>] 214,508,243 3.90M/s   in 55s
 
-2016-02-14 04:22:14 (3.72 MB/s) - "apache-trafodion-1.3.0-incubating-bin.tar.gz" saved [214508243/214508243]
+2016-02-14 04:22:14 (3.72 MB/s) - "apache-trafodion_server-2.1.0-RH-x86_64-incubating.tar.gz" saved [214508243/214508243]
 
 $ ls -l
-total 209552
--rw-rw-r-- 1 centos centos 214508243 Jan 12 20:10 apache-trafodion-1.3.0-incubating-bin.tar.gz
--rw-rw-r-- 1 centos centos     68813 Jan 12 20:10 apache-trafodion-installer-1.3.0-incubating-bin.tar.gz
+-rw-rw-r--. 1 centos centos     74237 Feb 13 14:53 apache-trafodion_pyinstaller-2.1.0-incubating.tar.gz
+-rw-rw-r--. 1 centos centos 183114066 Feb 10 22:34 apache-trafodion_server-2.1.0-RH-x86_64-incubating.tar.gz
 $
 ```
 
 [[quickstart-unpack-installer]]
-== Unpack Installer
+== Unpack Installer and Server package
 
 The first step in the installation process is to unpack the {project-name} Installer tar file.
+The {project-name} server package tar file is detected automatically by the installer if you place it in the installer's folder.
 
 *Example*
 
 ```
 $ mkdir $HOME/trafodion-installer
 $ cd $HOME/trafodion-downloads
-$ tar -zxf apache-trafodion-installer-1.3.0-incubating-bin.tar.gz -C $HOME/trafodion-installer
-$ ls $HOME/trafodion-installer/installer
-bashrc_default           tools                             traf_config_check           trafodion_apache_hadoop_install  traf_package_setup
-build-version-1.3.0.txt  traf_add_user                     traf_config_setup           trafodion_config_default         traf_setup
-dcs_installer            traf_apache_hadoop_config_setup   traf_create_systemdefaults  trafodion_install                traf_sqconfig
-rest_installer           traf_authentication_conf_default  traf_getHadoopNodes         trafodion_license                traf_start
-setup_known_hosts.exp    traf_cloudera_mods98              traf_hortonworks_mods98     trafodion_uninstaller
+$ tar -zxf apache-trafodion-pyinstaller-2.1.0-incubating.tar.gz -C $HOME/trafodion-installer
+$ cp -f apache-trafodion_server-2.1.0-RH-x86_64-incubating.tar.gz $HOME/trafodion-installer
+$ ls $HOME/trafodion-installer/python-installer
+apache-trafodion_server-2.1.0-RH-x86_64-incubating.tar.gz  db_install.py    DISCLAIMER    LICENSE  prettytable.py  scripts
+configs                                                    db_uninstall.py  discovery.py  NOTICE   README.md
 $
 ```
 
@@ -110,19 +105,15 @@ $
 
 Collect/decide the following information:
 
-=== Location of {project-name} Server-Side Binary
 
-You need the fully-qualified name of the {project-name} server-side binary. 
+=== Java Location
 
-*Example*
+The Java location can be detected automatically by the installer. You need to provide the Java location only if the installer cannot detect it.
 
-```
-/home/trafodion-downloads/apache-trafodion-installer-1.3.0-incubating-bin.tar.gz
-```
-
-=== Java Location
+To determine the Java location manually:
 
-You need to record the location of the Java. For example, use `ps -ef | grep java | grep hadoop | grep hbase` to determine what version HBase is running.
+1. Log in to a {project-name} node.
+2. Use `ps -ef | grep java | grep hadoop | grep hbase` to determine which Java installation HBase is using.
 
 *Example*
 
@@ -136,7 +127,9 @@ The Java location is: `/usr/jdk64/jdk1.7.0_67`
 <<<
 === Data Nodes
 
-{project-name} is installed on all data nodes in your Hadoop cluster. You need to record the fully-qualified domain name node for each node.
+{project-name} is installed on all data nodes in your Hadoop cluster. The data nodes are detected automatically by the installer when installing on an HDP/CDH cluster.
+
+You need to record the host name of each node when you install {project-name} on Apache Hadoop.
 For example, refer to `/etc/hosts`.
 
 *Example*
@@ -146,29 +139,16 @@ $ cat /etc/hosts
 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
 ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
 
-172.31.56.238	      ip-172-31-56-238.ec2.internal node01
-172.31.61.110	      ip-172-31-61-110.ec2.internal node02
-172.31.57.143	      ip-172-31-57-143.ec2.internal node03
-```
-
-Record the node names in a space-separated list.
-
-*Example*
-
-```
-ip-172-31-56-238.ec2.internal ip-172-31-61-110.ec2.internal ip-172-31-57-143.ec2.internal
+172.31.56.238	      node-1.internal node-1
+172.31.61.110	      node-2.internal node-2
 ```
 
-=== {project-name} Runtime User Home Directory
-
-The Installer creates the `trafodion` user ID. You need to decide the home directory for this user. 
-
-The default is: `/home`
+Record the node names as a comma-separated list, for example `node-1, node-2`, or as a regular expression, for example `node-[1-2]`.
 
 === Distribution Manager URL
 
 The Installer interacts with the Distribution Manager (for example, Apache Ambari or Cloudera Manager) to modify the
-Hadoop configuration. 
+Hadoop configuration.
 
 *Example*
 
@@ -182,412 +162,7 @@ http://myhost.com:8080
 [[quickstart-run-installer]]
 == Run Installer
 
-You run the Installer once you've collected the base information as described in 
+You run the Installer once you've collected the base information as described in
 <<quickstart-collect-information, Collect Information>> above.
 
-The following example shows a guided install of {project-name} on a three-node Hortonworks Hadoop cluster.
-
-NOTE: By default, the {project-name} Installer invokes `sqlci` so that you can enter the `initialize trafodion;` command.
-This is shown in the example below.
-
-*Example*
-
-1. Run the {project-name} Installer in guided mode.
-+
-```
-$ cd $HOME/trafodion-installer/installer
-$ ./trafodion_install 2>&1 | tee install.log
-******************************
- TRAFODION INSTALLATION START
-******************************
-
-***INFO: testing sudo access
-***INFO: Log file located at /var/log/trafodion/trafodion_install_2016-06-30-21-02-38.log
-***INFO: Config directory: /etc/trafodion
-***INFO: Working directory: /usr/lib/trafodion
-
-************************************
- Trafodion Configuration File Setup
-************************************
-
-***INFO: Please press [Enter] to select defaults.
-
-Is this a cloud environment (Y/N), default is [N]: N
-Enter trafodion password, default is [traf123]: 
-Enter list of data nodes (blank separated), default []: ip-172-31-56-238.ec2.internal ip-172-31-61-110.ec2.internal ip-172-31-57-143.ec2.internal
-Do you h ave a set of management nodes (Y/N), default is N: N
-Enter Trafodion userid's home directory prefix, default is [/home]: /opt
-Specify  location of Java 1.7.0_65 or higher (JDK), default is []: /usr/jdk64/jdk1.7.0_67
-Enter full path (including .tar or .tar.gz) of trafodion tar file []: /home/trafodion-downloads/apache-trafodion_server-2.0.1-incubating.tar.gz
-Enter Backup/Restore username (can be Trafodion), default is [trafodion]: 
-Specify the Hadoop distribut ion installed (1: Cloudera, 2: Hortonworks, 3: Other): 2
-Enter Hadoop admin username, default is [admin]: Enter Hadoop admin pas sword, default is [admin]: 
-Enter full Hadoop external network URL:port (include 'http://' or 'https://), default is []: http://ip-172-31-56-238.ec2.internal:8080
-Enter  HDFS username or username running HDFS, default is [hdfs]: 
-Enter HBase username or username running HBase, default is [hbase]:
-Enter HBase group, default is [hbase]: 
-Enter Zookeeper username or username running Zookeeper, default is [zookeeper]: 
-Enter  directory to install trafodion to, default is [/opt/trafodion/apache-trafodion_server-2.0.1-incubating]: 
-Start Trafodion after install (Y/N), default is Y: 
-Total number of client connections per cluster, default [24]: 96
-Enter the node of primary DcsMaste r, default [ip-172-31-56-238.ec2.internal]: 
-Enable High Availability (Y/N), default is N: 
-Enable simple LDAP security (Y/N), d efault is N: 
-***INFO: Trafodion configuration setup complete
-***INFO: Trafodion Configuration File Check
-***INFO: Testing sudo access on node ip-172-31-56-238
-***INFO: Testing sudo access on node ip-172-31-61-110
-***INFO: Testing sudo access on node ip-172-31-57-143
-***INFO: Testing ssh on ip-172-31-56-238
-***INFO: Testing ssh on ip-172-31-61-110
-***INFO: Testing ssh on ip-172-31-57-143
-#!/bin/bash
-#
-# @@@ START COPYRIGHT @@@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-.
-.
-.
-9. Accepting Warranty or Additional Liability. While redistributing
-the Work or Derivative Works thereof, You may choose to offer, and
-charge a fee for, acceptance of support, warranty, indemnity, or
-other liability obligations and/or rights consistent with this
-License. However, in accepting such obligations, You may act only
-on Your own behalf and on Your sole responsibility, not on behalf
-of any other Contributor, and only if You agree to indemnify, defend,
-and hold each Contributor harmless for any liability incurred by,
-or claims asserted against, such Contributor by reason of your
-accepting any such warranty or additional liability.
-
-END OF TERMS AND CONDITIONS
-
-BY TYPING "ACCEPT" YOU AGREE TO THE TERMS OF THIS AGREEMENT: ***INFO: testing sudo access
-***INFO: Starting Trafodion Package Setup (2016-06-30-21-06-40)
-***INFO: Installing required packages
-***INFO: Log file located in /var/log/trafodion
-***INFO: ... pdsh on node ip-172-31-56-238
-***INFO: ... pdsh on node ip-172-31-61-110
-***INFO: ... pdsh on node ip-172-31-57-143
-***INFO: Checking if apr is installed ...
-***INFO: Checking if apr-util is installed ...
-***INFO: Checking if sqlite is installed ...
-***INFO: Checking if expect is installed ...
-***INFO: Checking if perl-DBD-SQLite* is installed ...
-***INFO: Checking if protobuf is installed ...
-***INFO: Checking if xerces-c is installed ...
-***INFO: Checking if perl-Params-Validate is installed ...
-***INFO: Checking if perl-Time-HiRes is installed ...
-***INFO: Checking if gzip is installed ...
-***INFO: Checking if lzo is installed ...
-***INFO: Checking if lzop is installed ...
-***INFO: Checking if unzip is installed ...
-***INFO: modifying limits in /usr/lib/trafodion/trafodion.conf on all nodes
-***INFO: create Trafodion userid "trafodion" 
-***INFO: Trafodion userid's (trafodion) home directory: /opt/trafodion
-***INFO: testing sudo access
-Generating public/private rsa key pair.
-Created directory '/opt/trafodion/.ssh'.
-Your identification has been saved in /opt/trafodion/.ssh/id_rsa.
-Your public key has been saved in /opt/trafodion/.ssh/id_rsa.pub.
-The key fingerprint is:
-12:59:ab:d7:59:a2:0e:e8:38:1c:e9:e1:86:f6:18:23 trafodion@ip-172-31-56-238
-The key's randomart image is:
-+--[ RSA 2048]----+
-|        .        |
-|       o .       |
-|      o . . .    |
-|   . . o o +     |
-|  + . + S o      |
-| = =   =         |
-|E+B .   .        |
-|o.=.             |
-| . .             |
-+-----------------+
-***INFO: creating .bashrc file
-***INFO: Setting up userid trafodion on all other nodes in cluster
-***INFO: Creating known_hosts file for all nodes
-ip-172-31-56-238
-ip-172-31-56-238 ip-172-31-61-110 ip-172-31-57-143
-ip-172-31-61-110
-ip-172-31-56-238 ip-172-31-61-110 ip-172-31-57-143
-ip-172-31-57-143
-ip-172-31-56-238 ip-172-31-61-110 ip-172-31-57-143
-***INFO: trafodion user added successfully
-***INFO: Trafodion environment setup completed
-***INFO: creating sqconfig file
-***INFO: Reserving DCS ports
-
-***INFO: Creating trafodion sudo access file
-
-
-******************************
- TRAFODION MODS
-******************************
-
-***INFO: Hortonworks installed will run traf_hortonworks_mods
-***INFO: copying hbase-trx-hdp2_3-*.jar to all nodes
-***INFO: hbase-trx-hdp2_3-*.jar copied correctly! Huzzah.
-USERID=admin
-PASSWORD=admin
-PORT=:8080
-{
-  "resources" : [
-    {
-      "href" : "http://ip-172-31-56-238.ec2.internal:8080/api/v1/clusters/trafodion/configurations/service_config_versions?ser
-vice_name=HBASE&service_config_version=2",
-.
-.
-.
-    {
-      "href" : "http://ip-172-31-56-238.ec2.internal:8080/api/v1/clusters/trafodion/requests/12/tasks/128",
-      "Tasks" : {
-        "cluster_name" : "trafodion",
-        "id" : 128,
-        "request_id" : 12,
-        "stage_id" : 2
-      }
-    },
-    {
-      "href" : "http://ip-172-31-56-238.ec2.internal:8080/api/v1/clusters/trafodion/requests/12/tasks/129",
-      "Tasks" : {
-        "cluster_name" : "trafodion",
-        "id" : 129,
-        "request_id" : 12,
-        "stage_id" : 2
-      }
-    },
-    {
-      "href" : "http://ip-172-31-56-238.ec2.internal:8080/api/v1/clusters/trafodion/requests/12/tasks/130",
-      "Tasks" : {
-        "cluster_name" : "trafodion",
-        "id" : 130,
-        "request_id" : 12,
-        "stage_id" : 2
-      }
-    }
-  ],
-  "stages" : [
-    {
-      "href" : "http://ip-172-31-56-238.ec2.internal:8080/api/v1/clusters/trafodion/requests/12/stages/0",
-      "Stage" : {
-        "cluster_name" : "trafodion",
-        "request_id" : 12,
-        "stage_id" : 0
-      }
-    },
-    {
-      "href" : "http://ip-172-31-56-238.ec2.internal:8080/api/v1/clusters/trafodion/requests/12/stages/1",
-      "Stage" : {
-        "cluster_name" : "trafodion",
-        "request_id" : 12,
-        "stage_id" : 1
-      }
-    },
-    {
-      "href" : "http://ip-172-31-56-238.ec2.internal:8080/api/v1/clusters/trafodion/requests/12/stages/2",
-      "Stage" : {
-        "cluster_name" : "trafodion",
-        "request_id" : 12,
-        "stage_id" : 2
-      }
-    }
-  ]
-}***INFO: ...polling every 30 seconds until HBase start is completed.
-***INFO: HBase restart completed
-***INFO: Setting HDFS ACLs for snapshot scan support
-cp: `trafodion_config' and `/home/trafinstall/trafodion-2.0.1/installer/trafodion_config' are the same file
-***INFO: Trafodion Mods ran successfully.
-
-******************************
- TRAFODION CONFIGURATION
-******************************
-
-/usr/lib/trafodion/installer/..
-/opt/trafodion/apache-trafodion_server-2.0.1-incubating
-***INFO: untarring file  to /opt/trafodion/apache-trafodion_server-2.0.1-incubating
-***INFO: modifying .bashrc to set Trafodion environment variables
-***INFO: copying .bashrc file to all nodes
-***INFO: copying sqconfig file (/opt/trafodion/sqconfig) to /opt/trafodion/apache-trafodion_server-2.0.1-incubating/sql/script
-s/sqconfig
-***INFO: Creating /opt/trafodion/apache-trafodion_server-2.0.1-incubating directory on all nodes
-***INFO: Start of DCS install
-***INFO: DCS Install Directory: /opt/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1
-***INFO: modifying /opt/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/conf/dcs-env.sh
-***INFO: modifying /opt/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/conf/dcs-site.xml
-***INFO: creating /opt/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/conf/servers file
-***INFO: End of DCS install.
-***INFO: Start of REST Server install
-***INFO: Rest Install Directory: /opt/trafodion/apache-trafodion_server-2.0.1-incubating/rest-2.0.1
-***INFO: modifying /opt/trafodion/apache-trafodion_server-2.0.1-incubating/rest-2.0.1/conf/rest-site.xml
-***INFO: End of REST Server install.
-***INFO: starting sqgen
-ip-172-31-56-238,ip-172-31-57-143,ip-172-31-61-110
-
-Creating directories on cluster nodes
-/usr/bin/pdsh -R exec -w ip-172-31-56-238,ip-172-31-57-143,ip-172-31-61-110 -x ip-172-31-56-238 ssh -q -n %h mkdir -p /opt/tra
-fodion/apache-trafodion_server-2.0.1-incubating/etc 
-/usr/bin/pdsh -R exec -w ip-172-31-56-238,ip-172-31-57-143,ip-172-31-61-110 -x ip-172-31-56-238 ssh -q -n %h mkdir -p /opt/tra
-fodion/apache-trafodion_server-2.0.1-incubating/logs 
-/usr/bin/pdsh -R exec -w ip-172-31-56-238,ip-172-31-57-143,ip-172-31-61-110 -x ip-172-31-56-238 ssh -q -n %h mkdir -p /opt/tra
-fodion/apache-trafodion_server-2.0.1-incubating/tmp 
-/usr/bin/pdsh -R exec -w ip-172-31-56-238,ip-172-31-57-143,ip-172-31-61-110 -x ip-172-31-56-238 ssh -q -n %h mkdir -p /opt/tra
-fodion/apache-trafodion_server-2.0.1-incubating/sql/scripts 
-
-Generating SQ environment variable file: /opt/trafodion/apache-trafodion_server-2.0.1-incubating/etc/ms.env
-
-Note: Using cluster.conf format type 2.
-
-Generating SeaMonster environment variable file: /opt/trafodion/apache-trafodion_server-2.0.1-incubating/etc/seamonster.env
-
-
-Generated SQ startup script file: ./gomon.cold
-Generated SQ startup script file: ./gomon.warm
-Generated SQ cluster config file: /opt/trafodion/apache-trafodion_server-2.0.1-incubating/tmp/cluster.conf
-Generated SQ Shell          file: sqshell
-Generated RMS Startup       file: rmsstart
-Generated RMS Stop          file: rmsstop
-Generated RMS Check         file: rmscheck.sql
-Generated SSMP Startup      file: ssmpstart
-Generated SSMP Stop         file: ssmpstop
-Generated SSCP Startup      file: sscpstart
-Generated SSCP Stop         file: sscpstop
-
-
-Copying the generated files to all the nodes in the cluster
-.
-.
-.
-SQ Startup script (/opt/trafodion/apache-trafodion_server-2.0.1-incubating/sql/scripts/gomon.cold) ran successfully. Performin
-g further checks...
-Checking if processes are up.
-Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.
-
-The SQ environment is up!
-
-
-Process		Configured	Actual	    Down
--------		----------	------	    ----
-DTM		3		3	    
-RMS		6		6	    
-DcsMaster	1		0	    1
-DcsServer	3		0	    3
-mxosrvr		96		0	    96
-
-Thu Jun 30 21:15:29 UTC 2016
-Checking if processes are up.
-Checking attempt: 1; user specified max: 1. Execution time in seconds: 0.
-
-The SQ environment is up!
-
-
-Process		Configured	Actual	    Down
--------		----------	------	    ----
-DTM		3		3	    
-RMS		6		6	    
-DcsMaster	1		0	    1
-DcsServer	3		0	    3
-mxosrvr		96		0	    96
-
-Starting the DCS environment now
-starting master, logging to /opt/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/bin/../logs/dcs-trafodion-1-mast
-er-ip-172-31-56-238.out
-ip-172-31-56-238: starting server, logging to /opt/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/bin/../logs/dc
-s-trafodion-1-server-ip-172-31-56-238.out
-ip-172-31-57-143: starting server, logging to /opt/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/bin/../logs/dc
-s-trafodion-3-server-ip-172-31-57-143.out
-ip-172-31-61-110: starting server, logging to /opt/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/bin/../logs/dc
-s-trafodion-2-server-ip-172-31-61-110.out
-Checking if processes are up.
-Checking attempt: 1; user specified max: 2. Execution time in seconds: 1.
-
-The SQ environment is up!
-
-
-Process		Configured	Actual	    Down
--------		----------	------	    ----
-DTM		3		3	    
-RMS		6		6	    
-DcsMaster	1		1	    
-DcsServer	3		3	    
-mxosrvr		96		7	    89
-
-Starting the REST environment now
-starting rest, logging to /opt/trafodion/apache-trafodion_server-2.0.1-incubating/rest-2.0.1/bin/../logs/rest-trafodion-1-rest
--ip-172-31-56-238.out
-
-
-
-Zookeeper listen port: 2181
-DcsMaster listen port: 23400
-
-Configured Primary DcsMaster: "ip-172-31-56-238.ec2.internal"
-Active DcsMaster            : "ip-172-31-56-238"
-
-Process		Configured	Actual		Down
----------	----------	------		----
-DcsMaster	1		1		
-DcsServer	3		3		
-mxosrvr		96		94		2
-
-
-You can monitor the SQ shell log file : /opt/trafodion/apache-trafodion_server-2.0.1-incubating/logs/sqmon.log
-
-
-Startup time  0 hour(s) 2 minute(s) 19 second(s)
-Apache Trafodion Conversational Interface 2.0.1
-Copyright (c) 2015-2016 Apache Software Foundation
->>
---- SQL operation complete.
->>
-
-End of MXCI Session
-
-***INFO: Installation setup completed successfully.
-
-******************************
- TRAFODION INSTALLATION END
-******************************
-```
-
-2. Switch to the {project-name} Runtime User and check the status of {project-name}.
-+
-```
-$ sudo su - trafodion
-$ sqcheck
-Checking if processes are up.
-Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.
-
-The SQ environment is up!
-
-
-Process		Configured	Actual	    Down
--------		----------	------	    ----
-DTM		3		3	    
-RMS		6		6	    
-DcsMaster	1		1	    
-DcsServer	3		3	    
-mxosrvr		96		96	    
-$
-```
-
-{project-name} is now running on your Hadoop cluster. Please refer to the <<activate,Activate>> chapter for
-basic instructions on how to verify the {project-name} management and how to perform basic management
-operations.
-
+Refer to <<install-guided-install, Guided Install>> for an example of installing {project-name} on a two-node Cloudera Hadoop cluster.

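As a side note on the node-list syntax introduced in the quickstart hunk above (`node-1, node-2` or the bracket-range form `node-[1-2]`), the expansion it implies can be sketched as follows. This is illustrative only and is not the installer's actual parsing code; the function name and the exact range semantics are assumptions:

```python
import re

def expand_nodes(spec):
    """Expand a comma-separated node list, supporting a simple
    [start-end] numeric range form such as node-[1-2].
    Illustrative sketch only; not the installer's real parser."""
    nodes = []
    for part in (p.strip() for p in spec.split(",")):
        # Match an optional prefix, a [lo-hi] numeric range, and an optional suffix.
        m = re.fullmatch(r"(.*)\[(\d+)-(\d+)\](.*)", part)
        if m:
            prefix, lo, hi, suffix = m.groups()
            nodes.extend(f"{prefix}{i}{suffix}"
                         for i in range(int(lo), int(hi) + 1))
        else:
            nodes.append(part)
    return nodes

print(expand_nodes("node-[1-2]"))     # ['node-1', 'node-2']
print(expand_nodes("node-1, node-2")) # ['node-1', 'node-2']
```

Both spellings above describe the same two-node list; the bracket form is just shorthand for a numeric range.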
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/92c80fd3/docs/provisioning_guide/src/asciidoc/_chapters/requirements.adoc
----------------------------------------------------------------------
diff --git a/docs/provisioning_guide/src/asciidoc/_chapters/requirements.adoc b/docs/provisioning_guide/src/asciidoc/_chapters/requirements.adoc
index e33a15e..0d038dc 100644
--- a/docs/provisioning_guide/src/asciidoc/_chapters/requirements.adoc
+++ b/docs/provisioning_guide/src/asciidoc/_chapters/requirements.adoc
@@ -86,8 +86,8 @@ Please verify these requirements on each node you will install {project-name} on
 [cols="20%a,40%a,40%a",options="header"]
 |===
 | Function | Requirement                                                                                  | Verification Guidance
-| Linux    | 64-bit version of Red Hat 6.5 or later, or SUSE SLES 11.3 or later.                          |
-| sshd     | The `ssh` daemon is running on each node in the cluster.                                     | 
+| Linux    | 64-bit version of Red Hat (RHEL) or CentOS 6.5 - 6.8                                         | `cat /etc/redhat-release`
+| sshd     | The `ssh` daemon is running on each node in the cluster.                                     |
 &#8226; `ps aux  \| grep sshd` +
 &#8226; `sudo netstat -plant \| grep :22`
 | ntpd     | The `ntp` daemon is running and synchronizing time on each node in the cluster.              |
@@ -107,16 +107,12 @@ the port is *not* open.
 | passwordless ssh | The user name used to provision {project-name} must have passwordless ssh access to all nodes. | ssh to the nodes, ensure that no password prompt appears.
 | sudo privileges  | The user name used to provision {project-name} must sudo access to a number of root functions . | `sudo echo "test"` on each node.
 | bash     | Available for shell-script execution.                                                        | `bash --version`
-| java     | Available to run the {project-name} software. Same version as HBase is using.                     | `java --version`
+| java     | Available to run the {project-name} software. Same version as HBase is using.                | `java --version`
 | perl     | Available for script execution.                                                              | `perl --version`
 | python   | Available for script execution.                                                              | `python --version`
 | yum      | Available for installs, updates, and removal of software packages.                           | `yum --version`
 | rpm      | Available for installs, updates, and removal of software packages.                           | `rpm --version`
 | scp      | Available to copy files among nodes in the cluster.                                          | `scp --help`
-| curl     | Available to transfer data with URL syntax.                                                  | `curl --version`
-| wget     | Available to download files from the Web.                                                    | `wget --version`
-| pdsh     | Available to run shell commands in parallel.                                                 | `pdsh -V`
-| pdcp     | Available to copy files among nodes in parallel. part of the `pdsh` package.                 | `pdcp -V`                                         
 |===
 
 
@@ -214,19 +210,17 @@ http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch[Fedo
 | protobuf             | Data serialization.                                                               | yum install protobuf
 | xerces-c             | C++ XML parsing.                                                                  | yum install xerces-c
 | gzip                 | Data compress/decompress.                                                         | yum install gzip
-| rpm-build^2^         | Build binary and source software packages.                                        | yum install rpm-build 
-| apr-devel^2^         | Support files used to build applications using the APR library.                   | yum install apr-devel
-| apr-util-devel^2^    | Support files used to build applications using the APR utility library.           | yum install apr-util-devel
-| doxygen^2^           | Generate documentation from annotated C++ sources.                                | yum install doxygen
-| gcc^2^               | GNU Compiler Collection                                                           | yum install gcc
-| gcc_c++^2^           | GNU C++ compiler.                                                                 | yum install gcc_c++
+| apr-devel            | Support files used to build applications using the APR library.                   | yum install apr-devel
+| apr-util-devel       | Support files used to build applications using the APR utility library.           | yum install apr-util-devel
 |===
 
-1. `log4c&#43;&#43;` was recently withdrawn from public repositories. Therefore, you will need to build the `log4c&#43;&#43;` RPM
-on your system and then install the RPM using the procedure described in <<log4cplusplus_installation,log4c++ Installation>>.
-2. Software package required to build `log4c&#43;&#43;`. Not required otherwise. These packages are *not* installed by the {project-name} Installer in this release.
-
-The {project-name} Installer requires Internet access to install the required software packages.
+The {project-name} Installer can install the required software packages either over the Internet or in offline mode.
+Specify `db_install.py --offline` to use the offline install feature. Before using offline mode, you need to prepare a local
+repository folder containing all of the above dependencies.
+To create a local repository, make sure the `createrepo` package is installed, then run the `createrepo` command in your RPM folder:
+```
+$ createrepo -d .
+```
 
 [[requirements-trafodion-user-ids-and-their-privileges]]
 == {project-name} User IDs and Their Privileges
@@ -265,17 +259,14 @@ The user ID that performs the {project-name} installation steps. Typically, this
 ** Run Java version command on each node in the cluster.
 ** Run Hadoop version command on each node in the cluster.
 ** Run HBase version command on each node in the cluster.
-** Create directories and files in:
-*** `/etc`
-*** `/usr/lib`
-*** `/var/log`
+** Create directories and files in `/etc/trafodion`.
 ** Invoke `su` to execute commands as other users; for example, `trafodion`.
 ** Edit `sysctl.conf` and activate changes using `sysctl -p`:
 *** Modify kernel limits.
 *** Reserve IP ports.
 
 ^1^ `sudo` is *required* in the current release of {project-name}. This restriction may be relaxed in later releases.
-Alternative mechanisms for privileged access (such as running as `root` or `sudo` alternative commands) are not supported.
+Alternative mechanisms for privileged access (such as `sudo` alternative commands) are not supported.
 
 [[requirements-distribution-manager-user]]
 ==== Distribution Manager User

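One note on the offline-install change in the requirements hunk above: after `createrepo -d .` has been run in the local RPM folder, yum still needs a repo definition pointing at that folder before the dependencies can be installed from it. A minimal sketch of such a definition follows; the file name, repo id, and `baseurl` path are illustrative assumptions, not part of the patch:

```
# Hypothetical /etc/yum.repos.d/traflocal.repo -- id and path are examples only
[traflocal]
name=Local Trafodion dependency repository
baseurl=file:///opt/trafodion-local-repo
enabled=1
gpgcheck=0
```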

[3/3] incubator-trafodion git commit: Python documentation TRAFODION-2482 to 2.1

Posted by sa...@apache.org.
Python documentation TRAFODION-2482 to 2.1


Project: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/commit/b02f1973
Tree: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/tree/b02f1973
Diff: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/diff/b02f1973

Branch: refs/heads/release2.1
Commit: b02f1973dfeb1a022a149795bda94196eb64d617
Parents: 58caf9f 92c80fd
Author: Sandhya Sundaresan <sa...@apache.org>
Authored: Fri Mar 3 07:48:02 2017 +0000
Committer: Sandhya Sundaresan <sa...@apache.org>
Committed: Fri Mar 3 07:48:02 2017 +0000

----------------------------------------------------------------------
 .../src/asciidoc/_chapters/enable_security.adoc |  19 +-
 .../src/asciidoc/_chapters/introduction.adoc    | 435 +++++---------
 .../src/asciidoc/_chapters/prepare.adoc         | 110 ++--
 .../src/asciidoc/_chapters/quickstart.adoc      | 485 +--------------
 .../src/asciidoc/_chapters/requirements.adoc    |  37 +-
 .../src/asciidoc/_chapters/script_install.adoc  | 599 ++++++++-----------
 .../src/asciidoc/_chapters/script_remove.adoc   |  50 +-
 .../src/asciidoc/_chapters/script_upgrade.adoc  | 286 +--------
 8 files changed, 520 insertions(+), 1501 deletions(-)
----------------------------------------------------------------------