Posted to commits@sqoop.apache.org by ka...@apache.org on 2015/11/19 00:01:18 UTC

[5/8] sqoop git commit: SQOOP-2694: Sqoop2: Doc: Register structure in sphinx for our docs (Jarek Jarcec Cecho via Kate Ting)

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/SecurityGuideOnSqoop2.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/SecurityGuideOnSqoop2.rst b/docs/src/site/sphinx/SecurityGuideOnSqoop2.rst
deleted file mode 100644
index 7194d3b..0000000
--- a/docs/src/site/sphinx/SecurityGuideOnSqoop2.rst
+++ /dev/null
@@ -1,239 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-=========================
-Security Guide On Sqoop 2
-=========================
-
-Most Hadoop components, such as HDFS, Yarn, Hive, etc., have security frameworks that support Simple, Kerberos and LDAP authentication. Currently Sqoop 2 provides two types of authentication: simple and Kerberos. The authentication module is pluggable, so more authentication types can be added. Additionally, a new role-based access control was introduced in Sqoop 1.99.6. We recommend using this capability in multi-tenant environments, so that malicious users can't easily abuse your created link and job objects.
-
-Simple Authentication
-=====================
-
-Configuration
--------------
-Modify Sqoop configuration file, normally in <Sqoop Folder>/conf/sqoop.properties.
-
-::
-
-  org.apache.sqoop.authentication.type=SIMPLE
-  org.apache.sqoop.authentication.handler=org.apache.sqoop.security.authentication.SimpleAuthenticationHandler
-  org.apache.sqoop.anonymous=true
-
-- Simple authentication is used by default. Commenting out the authentication configuration will result in the use of simple authentication.
-
-Run command
------------
-Start Sqoop server as usual.
-
-::
-
-  <Sqoop Folder>/bin/sqoop.sh server start
-
-Start Sqoop client as usual.
-
-::
-
-  <Sqoop Folder>/bin/sqoop.sh client
-
-Kerberos Authentication
-=======================
-
-Kerberos is a computer network authentication protocol which works on the basis of 'tickets' to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner. Its designers aimed it primarily at a client–server model and it provides mutual authentication—both the user and the server verify each other's identity. Kerberos protocol messages are protected against eavesdropping and replay attacks.
-
-Dependency
-----------
-Set up a KDC server. Skip this step if a KDC server already exists. It's difficult to cover every way Kerberos can be set up (i.e., there are cross-realm setups and multi-trust environments). This section describes how to set up the Sqoop principals with a local deployment of MIT Kerberos.
-
-- All components which are Kerberos authenticated need one KDC server. If the current Hadoop cluster already uses Kerberos authentication, a KDC server should already exist.
-- If there is no KDC server, follow http://web.mit.edu/kerberos/krb5-devel/doc/admin/install_kdc.html to set one up.
-
-Configure Hadoop cluster to use Kerberos authentication.
-
-- The authentication type should be set at the cluster level: all components must use the same authentication type, Kerberos or not. In other words, Sqoop with Kerberos authentication cannot communicate with other Hadoop components, such as HDFS, Yarn, Hive, etc., that run without Kerberos authentication, and vice versa.
-- How to set up a Hadoop cluster with Kerberos authentication is out of the scope of this document. Follow related links such as https://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/SecureMode.html
-
-Create the keytab and principals for Sqoop 2 via kadmin on the command line.
-
-::
-
-  addprinc -randkey HTTP/<FQDN>@<REALM>
-  addprinc -randkey sqoop/<FQDN>@<REALM>
-  xst -k /home/kerberos/sqoop.keytab HTTP/<FQDN>@<REALM>
-  xst -k /home/kerberos/sqoop.keytab sqoop/<FQDN>@<REALM>
-
-- <FQDN> should be replaced by the fully qualified domain name of the server, which can be found by running ``hostname -f`` on the command line.
-- <REALM> should be replaced by the realm name in the krb5.conf file generated when installing the KDC server in the former step.
-- The principal HTTP/<FQDN>@<REALM> is used in communication between the Sqoop client and the Sqoop server. Since the Sqoop server is an HTTP server, the HTTP principal is required during the SPNEGO process, and it is case sensitive.
-- HTTP requests can also be sent from other clients with SPNEGO support, such as a browser, wget or curl.
-- The principal sqoop/<FQDN>@<REALM> is used in communication between the Sqoop server and HDFS/Yarn as the credential of the Sqoop server.
-
-Configuration
--------------
-Modify Sqoop configuration file, normally in <Sqoop Folder>/conf/sqoop.properties.
-
-::
-
-  org.apache.sqoop.authentication.type=KERBEROS
-  org.apache.sqoop.authentication.handler=org.apache.sqoop.security.authentication.KerberosAuthenticationHandler
-  org.apache.sqoop.authentication.kerberos.principal=sqoop/_HOST@<REALM>
-  org.apache.sqoop.authentication.kerberos.keytab=/home/kerberos/sqoop.keytab
-  org.apache.sqoop.authentication.kerberos.http.principal=HTTP/_HOST@<REALM>
-  org.apache.sqoop.authentication.kerberos.http.keytab=/home/kerberos/sqoop.keytab
-  org.apache.sqoop.authentication.kerberos.proxyuser=true
-
-- When _HOST is used as the FQDN in a principal, it will be replaced by the real FQDN at runtime (see https://issues.apache.org/jira/browse/HADOOP-6632).
-- If the proxyuser parameter is set to true, the Sqoop server will use proxy user mode (sqoop delegates to the real client user) to run the Yarn job, as sketched below. If false, the Sqoop server will run the Yarn job as the sqoop user.
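-
-The sketch below illustrates what proxy user mode means in Hadoop terms, using Hadoop's ``UserGroupInformation`` API. It is illustrative only (the user name "alice" is made up), not Sqoop's actual implementation; note that impersonation also requires matching ``hadoop.proxyuser.*`` settings on the Hadoop side.
-
-::
-
-  import java.security.PrivilegedExceptionAction;
-  import org.apache.hadoop.security.UserGroupInformation;
-
-  public class ProxyUserSketch {
-    public static void main(String[] args) throws Exception {
-      // The server's own (Kerberos) identity, e.g. sqoop/<FQDN>@<REALM>.
-      UserGroupInformation realUser = UserGroupInformation.getLoginUser();
-
-      // Impersonate the connecting client user ("alice" is illustrative).
-      UserGroupInformation proxy =
-          UserGroupInformation.createProxyUser("alice", realUser);
-
-      proxy.doAs(new PrivilegedExceptionAction<Void>() {
-        public Void run() throws Exception {
-          // Work submitted here (e.g. the Yarn job) runs as "alice",
-          // not as the sqoop user.
-          return null;
-        }
-      });
-    }
-  }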
-
-Run command
------------
-Set SQOOP2_HOST to the FQDN of the server.
-
-::
-
-  export SQOOP2_HOST=$(hostname -f)
-
-- The FQDN of the server can be found by running ``hostname -f`` on the command line.
-
-Start the Sqoop server as the sqoop user.
-
-::
-
-  sudo -u sqoop <Sqoop Folder>/bin/sqoop.sh server start
-
-Run kinit to generate ticket cache.
-
-::
-
-  kinit HTTP/<FQDN>@<REALM> -kt /home/kerberos/sqoop.keytab
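-
-Optionally, confirm that the ticket was acquired by running ``klist``, which lists the credentials in the ticket cache::
-
-  klist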
-
-Start Sqoop client.
-
-::
-
-  <Sqoop Folder>/bin/sqoop.sh client
-
-Verify
-------
-If the Sqoop server has started successfully with Kerberos authentication, the following line will be in <@LOGDIR>/sqoop.log:
-
-::
-
-  2014-12-04 15:02:58,038 INFO  security.KerberosAuthenticationHandler [org.apache.sqoop.security.authentication.KerberosAuthenticationHandler.secureLogin(KerberosAuthenticationHandler.java:84)] Using Kerberos authentication, principal [sqoop/_HOST@HADOOP.COM] keytab [/home/kerberos/sqoop.keytab]
-
-If the Sqoop client was able to communicate with the Sqoop server, the following will be in <@LOGDIR>/sqoop.log:
-
-::
-
-  Refreshing Kerberos configuration
-  Acquire TGT from Cache
-  Principal is HTTP/<FQDN>@HADOOP.COM
-  null credentials from Ticket Cache
-  principal is HTTP/<FQDN>@HADOOP.COM
-  Will use keytab
-  Commit Succeeded
-
-Customized Authentication
-=========================
-
-Users can create their own authentication modules by performing the following steps:
-
-- Create a customized authentication handler that extends the abstract class AuthenticationHandler.
-- Implement the abstract functions doInitialize and secureLogin in AuthenticationHandler, for example:
-
-::
-
-  // Both imports are illustrative; the exact package of the abstract
-  // AuthenticationHandler class may differ between Sqoop versions.
-  import org.apache.log4j.Logger;
-  import org.apache.sqoop.security.AuthenticationHandler;
-
-  public class MyAuthenticationHandler extends AuthenticationHandler {
-
-    private static final Logger LOG = Logger.getLogger(MyAuthenticationHandler.class);
-
-    // Called during handler initialization; flag that security is enabled.
-    public void doInitialize() {
-      securityEnabled = true;
-    }
-
-    // Called on server start-up to perform the actual login.
-    public void secureLogin() {
-      LOG.info("Using customized authentication.");
-    }
-  }
-
--	Modify configuration org.apache.sqoop.authentication.handler in <Sqoop Folder>/conf/sqoop.properties and set it to the customized authentication handler class name.
--	Restart the Sqoop server.
-
-Authorization
-=============
-
-Users, Groups, and Roles
-------------------------
-
-At the core of Sqoop's authorization system are users, groups, and roles. Roles allow administrators to give a name to a set of grants which can be easily reused. A role may be assigned to users, groups, and other roles. For example, consider a system with the following users and groups.
-
-::
-
-  <User>: <Groups>
-  user_all: group1, group2
-  user1: group1
-  user2: group2
-
-Unlike users and groups, Sqoop roles must be created manually before being used. Users and groups are managed by the login system (Linux, LDAP or Kerberos). When a user wants to access a resource (connector, link, job), the Sqoop2 server will determine the user's username and the associated groups. That information is then used to decide whether the user should have access to the requested resource, by comparing the required privileges of the Sqoop operation to the user's privileges using the following rules (a code sketch follows the list).
-
-- User privileges (Has the privilege been granted to the user?)
-- Group privileges (Does the user belong to any groups that the privilege has been granted to?)
-- Role privileges (Does the user or any of the groups that the user belongs to have a role that grants the privilege?)
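-
-A hypothetical sketch of this rule order, using simple string-based stand-ins (none of the class or method names below are Sqoop's actual internals)::
-
-  import java.util.Map;
-  import java.util.Set;
-
-  public class PrivilegeCheck {
-
-    // Returns true as soon as any of the three rules grants the privilege.
-    static boolean hasPrivilege(Set<String> userGrants,
-                                Set<String> userGroups,
-                                Map<String, Set<String>> groupGrants,
-                                Set<String> userRoles,
-                                Map<String, Set<String>> roleGrants,
-                                String required) {
-      // Rule 1: has the privilege been granted to the user directly?
-      if (userGrants.contains(required)) {
-        return true;
-      }
-      // Rule 2: does any group the user belongs to hold the privilege?
-      for (String group : userGroups) {
-        Set<String> grants = groupGrants.get(group);
-        if (grants != null && grants.contains(required)) {
-          return true;
-        }
-      }
-      // Rule 3: does any role of the user (or of the user's groups) hold it?
-      for (String role : userRoles) {
-        Set<String> grants = roleGrants.get(role);
-        if (grants != null && grants.contains(required)) {
-          return true;
-        }
-      }
-      return false;
-    }
-  }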
-
-Administrator
--------------
-
-There is a special user, the administrator, which cannot be created or deleted by command. The only way to set the administrator is to modify the configuration file. The administrator can run management commands to create and delete roles. However, the administrator does not implicitly have all privileges: the administrator has to grant privileges to himself/herself before requesting a resource.
-
-Role management commands
-------------------------
-
-::
-
-  CREATE ROLE --role role_name
-  DROP ROLE --role role_name
-  SHOW ROLE
-
-- Only the administrator has the privilege to run these commands.
-
-Principal management commands
------------------------------
-
-::
-
-  GRANT ROLE --principal-type principal_type --principal principal_name --role role_name
-  REVOKE ROLE --principal-type principal_type --principal principal_name --role role_name
-  SHOW ROLE --principal-type principal_type --principal principal_name
-  SHOW PRINCIPAL --role role_name
-
-- principal_type: USER | GROUP | ROLE
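-
-For example, to grant the role ``role1`` to the user ``user1`` (both names are illustrative)::
-
-  GRANT ROLE --principal-type USER --principal user1 --role role1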
-
-Privilege management commands
------------------------------
-
-::
-
-  GRANT PRIVILEGE --principal-type principal_type --principal principal_name --resource-type resource_type --resource resource_name --action action_name [--with-grant]
-  REVOKE PRIVILEGE --principal-type principal_type --principal principal_name [--resource-type resource_type --resource resource_name --action action_name] [--with-grant]
-  SHOW PRIVILEGE --principal-type principal_type --principal principal_name [--resource-type resource_type --resource resource_name --action action_name]
-
-- principal_type: USER | GROUP | ROLE
-- resource_type: CONNECTOR | LINK | JOB
-- action_name: ALL | READ | WRITE
-- With --with-grant in the GRANT PRIVILEGE command, the principal can also grant his/her privilege to other principals.
-- Without a resource in the REVOKE PRIVILEGE command, all privileges of this principal will be revoked.
-- With --with-grant in the REVOKE PRIVILEGE command, only the grant option of this principal will be removed: the principal can still access the resource, but he/she can no longer grant his/her privilege to others.
-- Without a resource in the SHOW PRIVILEGE command, all privileges of this principal will be listed.
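-
-For example, to grant ``user1`` read access to the job named ``job1``, including the right to pass that privilege on (all names are illustrative)::
-
-  GRANT PRIVILEGE --principal-type USER --principal user1 --resource-type JOB --resource job1 --action READ --with-grant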

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/Sqoop5MinutesDemo.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/Sqoop5MinutesDemo.rst b/docs/src/site/sphinx/Sqoop5MinutesDemo.rst
deleted file mode 100644
index 19115a2..0000000
--- a/docs/src/site/sphinx/Sqoop5MinutesDemo.rst
+++ /dev/null
@@ -1,242 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-====================
-Sqoop 5 Minutes Demo
-====================
-
-This page will walk you through the basic usage of Sqoop. You need to have installed and configured the Sqoop server and client in order to follow this guide. The installation procedure is described on the `Installation page <Installation.html>`_. Please note that the exact output shown in this page might differ from yours as Sqoop evolves; all major information should however remain the same.
-
-Sqoop uses unique names or persistent ids to identify connectors, links, jobs and configs. Querying an entity by its unique name or by its persistent database id is supported.
-
-Starting Client
-===============
-
-Start the client in interactive mode using the following command: ::
-
-  sqoop2-shell
-
-Configure the client to use your Sqoop server: ::
-
-  sqoop:000> set server --host your.host.com --port 12000 --webapp sqoop
-
-Verify that the connection is working with a simple version check: ::
-
-  sqoop:000> show version --all
-  client version:
-    Sqoop 2.0.0-SNAPSHOT source revision 418c5f637c3f09b94ea7fc3b0a4610831373a25f
-    Compiled by vbasavaraj on Mon Nov  3 08:18:21 PST 2014
-  server version:
-    Sqoop 2.0.0-SNAPSHOT source revision 418c5f637c3f09b94ea7fc3b0a4610831373a25f
-    Compiled by vbasavaraj on Mon Nov  3 08:18:21 PST 2014
-  API versions:
-    [v1]
-
-You should receive output similar to the above, describing the Sqoop client build version, the server build version and the supported versions of the REST API.
-
-You can use the help command to check all the commands supported by the Sqoop shell.
-::
-
-  sqoop:000> help
-  For information about Sqoop, visit: http://sqoop.apache.org/
-
-  Available commands:
-    exit    (\x  ) Exit the shell
-    history (\H  ) Display, manage and recall edit-line history
-    help    (\h  ) Display this help message
-    set     (\st ) Configure various client options and settings
-    show    (\sh ) Display various objects and configuration options
-    create  (\cr ) Create new object in Sqoop repository
-    delete  (\d  ) Delete existing object in Sqoop repository
-    update  (\up ) Update objects in Sqoop repository
-    clone   (\cl ) Create new object based on existing one
-    start   (\sta) Start job
-    stop    (\stp) Stop job
-    status  (\stu) Display status of a job
-    enable  (\en ) Enable object in Sqoop repository
-    disable (\di ) Disable object in Sqoop repository
-
-
-Creating Link Object
-====================
-
-Check for the registered connectors on your Sqoop server: ::
-
-  sqoop:000> show connector
-  +----+------------------------+----------------+------------------------------------------------------+----------------------+
-  | Id |          Name          |    Version     |                        Class                         | Supported Directions |
-  +----+------------------------+----------------+------------------------------------------------------+----------------------+
-  | 1  | hdfs-connector         | 2.0.0-SNAPSHOT | org.apache.sqoop.connector.hdfs.HdfsConnector        | FROM/TO              |
-  | 2  | generic-jdbc-connector | 2.0.0-SNAPSHOT | org.apache.sqoop.connector.jdbc.GenericJdbcConnector | FROM/TO              |
-  +----+------------------------+----------------+------------------------------------------------------+----------------------+
-
-Our example contains two connectors. The one with connector id 2 is called the ``generic-jdbc-connector``. This is a basic connector relying on the Java JDBC interface for communicating with data sources. It should work with the most common databases that provide JDBC drivers. Please note that you must install JDBC drivers separately; they are not bundled in Sqoop due to incompatible licenses.
-
-The generic JDBC connector in our example has persistent id 2, and we will use this value to create a new link object for this connector. Note that the link name should be unique.
-::
-
-  sqoop:000> create link -c 2
-  Creating link for connector with id 2
-  Please fill following values to create new link object
-  Name: First Link
-
-  Link configuration
-  JDBC Driver Class: com.mysql.jdbc.Driver
-  JDBC Connection String: jdbc:mysql://mysql.server/database
-  Username: sqoop
-  Password: *****
-  JDBC Connection Properties:
-  There are currently 0 values in the map:
-  entry#protocol=tcp
-  New link was successfully created with validation status OK and persistent id 1
-
-Our new link object was created with assigned id 1.
-
-In the ``show connector --all`` output we see that there is an hdfs-connector registered in Sqoop with persistent id 1. Let us create another link object, this time for the hdfs-connector instead.
-
-::
-
-  sqoop:000> create link -c 1
-  Creating link for connector with id 1
-  Please fill following values to create new link object
-  Name: Second Link
-
-  Link configuration
-  HDFS URI: hdfs://nameservice1:8020/
-  New link was successfully created with validation status OK and persistent id 2
-
-Creating Job Object
-===================
-
-Connectors implement ``From`` for reading data from a data source and/or ``To`` for writing data to a data source. The generic JDBC connector supports both of them. The list of supported directions for each connector can be seen in the output of the ``show connector --all`` command above. In order to create a job we need to specify the ``From`` and ``To`` parts of the job, uniquely identified by their link ids. We already have two links created in the system; you can verify this with the following command:
-
-::
-
-  sqoop:000> show link --all
-  2 link(s) to show:
-  link with id 1 and name First Link (Enabled: true, Created by root at 11/4/14 4:27 PM, Updated by root at 11/4/14 4:27 PM)
-  Using Connector id 2
-    Link configuration
-      JDBC Driver Class: com.mysql.jdbc.Driver
-      JDBC Connection String: jdbc:mysql://mysql.ent.cloudera.com/sqoop
-      Username: sqoop
-      Password:
-      JDBC Connection Properties:
-        protocol = tcp
-  link with id 2 and name Second Link (Enabled: true, Created by root at 11/4/14 4:38 PM, Updated by root at 11/4/14 4:38 PM)
-  Using Connector id 1
-    Link configuration
-      HDFS URI: hdfs://nameservice1:8020/
-
-Next, we can use the two link Ids to associate the ``From`` and ``To`` for the job.
-::
-
-   sqoop:000> create job -f 1 -t 2
-   Creating job for links with from id 1 and to id 2
-   Please fill following values to create new job object
-   Name: Sqoopy
-
-   FromJob configuration
-
-    Schema name:(Required)sqoop
-    Table name:(Required)sqoop
-    Table SQL statement:(Optional)
-    Table column names:(Optional)
-    Partition column name:(Optional) id
-    Null value allowed for the partition column:(Optional)
-    Boundary query:(Optional)
-
-  ToJob configuration
-
-    Output format:
-     0 : TEXT_FILE
-     1 : SEQUENCE_FILE
-    Choose: 0
-    Compression format:
-     0 : NONE
-     1 : DEFAULT
-     2 : DEFLATE
-     3 : GZIP
-     4 : BZIP2
-     5 : LZO
-     6 : LZ4
-     7 : SNAPPY
-     8 : CUSTOM
-    Choose: 0
-    Custom compression format:(Optional)
-    Output directory:(Required)/root/projects/sqoop
-
-    Driver Config
-    Extractors:(Optional) 2
-    Loaders:(Optional) 2
-    New job was successfully created with validation status OK  and persistent id 1
-
-Our new job object was created with assigned id 1. Note that if null values are allowed for the partition column,
-at least 2 extractors are needed for Sqoop to carry out the data transfer. If 1 extractor is specified in this
-scenario, Sqoop will ignore the setting and continue with 2 extractors.
-
-Start Job (a.k.a. Data Transfer)
-=================================
-
-You can start a Sqoop job with the following command:
-::
-
-  sqoop:000> start job -j 1
-  Submission details
-  Job ID: 1
-  Server URL: http://localhost:12000/sqoop/
-  Created by: root
-  Creation date: 2014-11-04 19:43:29 PST
-  Lastly updated by: root
-  External ID: job_1412137947693_0001
-    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
-  2014-11-04 19:43:29 PST: BOOTING  - Progress is not available
-
-You can iteratively check your running job status with the ``status job`` command:
-
-::
-
-  sqoop:000> status job -j 1
-  Submission details
-  Job ID: 1
-  Server URL: http://localhost:12000/sqoop/
-  Created by: root
-  Creation date: 2014-11-04 19:43:29 PST
-  Lastly updated by: root
-  External ID: job_1412137947693_0001
-    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
-  2014-11-04 20:09:16 PST: RUNNING  - 0.00 % 
-
-Alternatively you can start a Sqoop job and observe its running status with the following command:
-
-::
-
-  sqoop:000> start job -j 1 -s
-  Submission details
-  Job ID: 1
-  Server URL: http://localhost:12000/sqoop/
-  Created by: root
-  Creation date: 2014-11-04 19:43:29 PST
-  Lastly updated by: root
-  External ID: job_1412137947693_0001
-    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
-  2014-11-04 19:43:29 PST: BOOTING  - Progress is not available
-  2014-11-04 19:43:39 PST: RUNNING  - 0.00 %
-  2014-11-04 19:43:49 PST: RUNNING  - 10.00 %
-
-And finally you can stop a running job at any time using the ``stop job`` command: ::
-
-  sqoop:000> stop job -j 1
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/Tools.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/Tools.rst b/docs/src/site/sphinx/Tools.rst
deleted file mode 100644
index fb0187a..0000000
--- a/docs/src/site/sphinx/Tools.rst
+++ /dev/null
@@ -1,129 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-=====
-Tools
-=====
-
-Tools are server commands that administrators can execute on the Sqoop server machine in order to perform various maintenance tasks. The tool execution will always perform a given task and finish. There are no long running services implemented as tools.
-
-In order to perform the maintenance task each tool is supposed to do, it needs to be executed in exactly the same environment as the main Sqoop server. The tool binary will take care of setting up the ``CLASSPATH`` and other environment variables that might be required. However it's up to the administrator to run the tool as the same user that is used for the server. This is usually configured automatically for various Hadoop distributions (such as Apache Bigtop).
-
-
-.. note:: Running tools while the Sqoop Server is also running is not recommended as it might lead to a data corruption and service disruption.
-
-List of available tools:
-
-* verify
-* upgrade
-
-To run the desired tool, execute the binary ``sqoop2-tool`` with the desired tool name. For example, to run the ``verify`` tool::
-
-  sqoop2-tool verify
-
-.. note:: Stop the Sqoop Server before running Sqoop tools. Running tools while Sqoop Server is running can lead to a data corruption and service disruption.
-
-Verify
-======
-
-The verify tool will verify Sqoop server configuration by starting all subsystems with the exception of servlets and tearing them down.
-
-To run the ``verify`` tool::
-
-  sqoop2-tool verify
-
-If the verification process succeeds, you should see messages like::
-
-  Verification was successful.
-  Tool class org.apache.sqoop.tools.tool.VerifyTool has finished correctly
-
-If the verification process finds any inconsistencies, it will print out the following message instead::
-
-  Verification has failed, please check Server logs for further details.
-  Tool class org.apache.sqoop.tools.tool.VerifyTool has failed.
-
-Further details on why the verification failed will be available in the Sqoop server log - the same file that the Sqoop server logs into.
-
-Upgrade
-=======
-
-Upgrades all versionable components inside Sqoop2. This includes structural changes inside the repository and stored metadata.
-Running this tool on a Sqoop deployment that has already been upgraded will have no effect.
-
-To run the ``upgrade`` tool::
-
-  sqoop2-tool upgrade
-
-Upon successful upgrade you should see the following message::
-
-  Tool class org.apache.sqoop.tools.tool.UpgradeTool has finished correctly.
-
-Execution failure will show the following message instead::
-
-  Tool class org.apache.sqoop.tools.tool.UpgradeTool has failed.
-
-Further details on why the upgrade process failed will be available in the Sqoop server log - the same file that the Sqoop server logs into.
-
-RepositoryDump
-==============
-
-Writes the user-created contents of the Sqoop repository to a file in JSON format. This includes connections, jobs and submissions.
-
-To run the ``repositorydump`` tool::
-
-  sqoop2-tool repositorydump -o repository.json
-
-As an option, the administrator can choose to include sensitive information such as database connection passwords in the file::
-
-  sqoop2-tool repositorydump -o repository.json --include-sensitive
-
-Upon successful execution, you should see the following message::
-
-  Tool class org.apache.sqoop.tools.tool.RepositoryDumpTool has finished correctly.
-
-If repository dump has failed, you will see the following message instead::
-
-  Tool class org.apache.sqoop.tools.tool.RepositoryDumpTool has failed.
-
-Further details on why the repository dump failed will be available in the Sqoop server log - the same file that the Sqoop server logs into.
-
-RepositoryLoad
-==============
-
-Reads a JSON-formatted file created by RepositoryDump and loads it into the current Sqoop repository.
-
-To run the ``repositoryload`` tool::
-
-  sqoop2-tool repositoryload -i repository.json
-
-Upon successful execution, you should see the following message::
-
-  Tool class org.apache.sqoop.tools.tool.RepositoryLoadTool has finished correctly.
-
-If the repository load failed, you will see the following message (or an exception) instead::
-
-  Tool class org.apache.sqoop.tools.tool.RepositoryLoadTool has failed.
-
-Further details on why the load process failed will be available in the Sqoop server log - the same file that the Sqoop server logs into.
-
-.. note:: If the repository dump was created without passwords (default), the connections will not contain a password and the jobs will fail to execute. In that case you'll need to manually update the connections and set the passwords.
-.. note:: The RepositoryLoad tool will always generate new connections, jobs and submissions from the file, even when identical objects already exist in the repository.
-
-
-
-
-
-

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/Upgrade.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/Upgrade.rst b/docs/src/site/sphinx/Upgrade.rst
deleted file mode 100644
index 385c5ae..0000000
--- a/docs/src/site/sphinx/Upgrade.rst
+++ /dev/null
@@ -1,84 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-=======
-Upgrade
-=======
-
-This page describes the procedure you need to follow in order to upgrade Sqoop from one release to a higher release. Upgrading the client and server components will be discussed separately.
-
-.. note:: Only updates from one Sqoop 2 release to another are covered, starting with upgrades from version 1.99.2. This guide does not contain general information on how to upgrade from Sqoop 1 to Sqoop 2.
-
-Upgrading Server
-================
-
-As the Sqoop server uses a database repository for persisting Sqoop entities such as connectors, drivers, links and jobs, the repository schema might need to be updated as part of the server upgrade. In addition, the configs and inputs described by the various connectors and the driver may also change with a new server version and might need a data upgrade.
-
-There are two ways to upgrade Sqoop entities in the repository: you can either execute the upgrade tool or configure the Sqoop server to perform all necessary upgrades on start-up.
-
-It's strongly advised to back up the repository before moving on to the next steps. Backup instructions will vary depending on the repository implementation. For example, using MySQL as a repository will require a different backup procedure than Apache Derby. Please follow your repository's backup procedure.
-
-Upgrading Server using upgrade tool
------------------------------------
-
-The preferred upgrade path is to explicitly run the `Upgrade Tool <Tools.html#upgrade>`_. The first step, however, is to shut down the server, as having both the server and the upgrade utility access the same repository might corrupt it::
-
-  sqoop2-server stop
-
-When the server has been successfully stopped, you can update the server bits and simply run the upgrade tool::
-
-  sqoop2-tool upgrade
-
-You should see that the upgrade process has been successful::
-
-  Tool class org.apache.sqoop.tools.tool.UpgradeTool has finished correctly.
-
-In case of any failure, please take a look at the `Upgrade Tool <Tools.html#upgrade>`_ documentation page.
-
-Upgrading Server on start-up
-----------------------------
-
-The capability of performing the upgrade is built into the server; however, it is disabled by default to avoid any unintentional changes to the repository. You can start the repository schema upgrade procedure by stopping the server: ::
-
-  sqoop2-server stop
-
-Before starting the server again you will need to enable the auto-upgrade feature that will perform all necessary changes during Sqoop Server start up.
-
-You need to set the following property in the configuration file ``sqoop.properties`` for the repository schema upgrade.
-::
-
-   org.apache.sqoop.repository.schema.immutable=false
-
-You need to set the following property in the configuration file ``sqoop.properties`` for the connector config data upgrade.
-::
-
-   org.apache.sqoop.connector.autoupgrade=true
-
-You need to set the following property in the configuration file ``sqoop.properties`` for the driver config data upgrade.
-::
-
-   org.apache.sqoop.driver.autoupgrade=true
-
-When all properties are set, start the sqoop server using the following command::
-
-  sqoop2-server start
-
-All required actions will be performed automatically during the server bootstrap. It's strongly advised to set all three properties back to their original values once the server has been successfully started and the upgrade has completed.
-
-Upgrading Client
-================
-
-The client does not require any manual steps during upgrade. Replacing the binaries with the updated version is sufficient.

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/admin.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/admin.rst b/docs/src/site/sphinx/admin.rst
new file mode 100644
index 0000000..d149dfd
--- /dev/null
+++ b/docs/src/site/sphinx/admin.rst
@@ -0,0 +1,24 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+===========
+Admin Guide
+===========
+
+.. toctree::
+   :glob:
+
+   admin/*

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/admin/Installation.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/admin/Installation.rst b/docs/src/site/sphinx/admin/Installation.rst
new file mode 100644
index 0000000..9d56875
--- /dev/null
+++ b/docs/src/site/sphinx/admin/Installation.rst
@@ -0,0 +1,103 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+============
+Installation
+============
+
+Sqoop ships as one binary package, however it is composed of two separate parts - client and server. You need to install the server on a single node in your cluster. This node will then serve as an entry point for all connecting Sqoop clients. The server acts as a MapReduce client and therefore Hadoop must be installed and configured on the machine hosting the Sqoop server. Clients can be installed on any number of machines. The client does not act as a MapReduce client and thus you do not need to install Hadoop on nodes that will act only as a Sqoop client.
+
+Server installation
+===================
+
+Copy the Sqoop artifact to the machine where you want to run the Sqoop server. This machine must have Hadoop installed and configured. You don't need to run any Hadoop related services there, however the machine must be able to act as a Hadoop client. You should be able to list the contents of HDFS, for example: ::
+
+  hadoop dfs -ls
+
+Sqoop server supports multiple Hadoop versions. However, as Hadoop major versions are not compatible with each other, Sqoop has multiple binary artifacts - one for each supported major version of Hadoop. You need to make sure that you're using the appropriate binary artifact for your specific Hadoop version. To install the Sqoop server, decompress the appropriate distribution artifact in a location of your convenience and change your working directory to this folder. ::
+
+  # Decompress Sqoop distribution tarball
+  tar -xvf sqoop-<version>-bin-hadoop<hadoop-version>.tar.gz
+
+  # Move decompressed content to any location
+  mv sqoop-<version>-bin-hadoop<hadoop-version> /usr/lib/sqoop
+
+  # Change working directory
+  cd /usr/lib/sqoop
+
+
+Installing Dependencies
+-----------------------
+
+Hadoop libraries must be available on the node where you are planning to run the Sqoop server, with proper configuration for the major services - ``NameNode`` and either ``JobTracker`` or ``ResourceManager``, depending on whether you are running Hadoop 1 or 2. There is no need to run any Hadoop service on the same node as the Sqoop server; just the libraries and configuration files must be available.
+
+Paths to the Hadoop libraries are taken from the environment variables ``HADOOP_COMMON_HOME``, ``HADOOP_HDFS_HOME``, ``HADOOP_MAPRED_HOME`` and ``HADOOP_YARN_HOME``. You need to set these variables to point at your Hadoop libraries. If the environment variable ``HADOOP_HOME`` is set, the default expected locations are ``$HADOOP_HOME/share/hadoop/common``, ``$HADOOP_HOME/share/hadoop/hdfs``, ``$HADOOP_HOME/share/hadoop/mapreduce`` and ``$HADOOP_HOME/share/hadoop/yarn``.
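+
+A minimal example, assuming Hadoop is unpacked under ``/usr/lib/hadoop`` (the path is illustrative)::
+
+  export HADOOP_HOME=/usr/lib/hadoop
+  export HADOOP_COMMON_HOME=$HADOOP_HOME/share/hadoop/common
+  export HADOOP_HDFS_HOME=$HADOOP_HOME/share/hadoop/hdfs
+  export HADOOP_MAPRED_HOME=$HADOOP_HOME/share/hadoop/mapreduce
+  export HADOOP_YARN_HOME=$HADOOP_HOME/share/hadoop/yarn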
+
+Lastly you might need to install JDBC drivers that are not bundled with Sqoop because of incompatible licenses. You can add any arbitrary Java jar file to the Sqoop server by copying it into the ``lib/`` directory. You can create this directory if it does not exist already.
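+
+For example, to add a MySQL JDBC driver (the jar file name and version below are illustrative)::
+
+  mkdir -p /usr/lib/sqoop/lib
+  cp mysql-connector-java-5.1.33-bin.jar /usr/lib/sqoop/lib/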
+
+Configuring PATH
+----------------
+
+All user and administrator facing shell commands are stored in the ``bin/`` directory. It's recommended to add this directory to your ``$PATH`` for easier execution, for example::
+
+  PATH=$PATH:`pwd`/bin/
+
+Further documentation pages will assume that you have the binaries on your ``$PATH``. You will need to call them by their full path if you decide to skip this step.
+
+Configuring Server
+------------------
+
+Before starting the server you should review the configuration to match your specific environment. Server configuration files are stored in the ``conf`` directory.
+
+The file ``sqoop_bootstrap.properties`` specifies which configuration provider should be used for loading the configuration for the rest of the Sqoop server. The default value ``PropertiesConfigurationProvider`` should be sufficient.
+
+The second configuration file, ``sqoop.properties``, contains the remaining configuration properties that can affect the Sqoop server. The file is very well documented, so check whether all configuration properties fit your environment. The defaults, or very little tweaking, should be sufficient for the most common cases.
+
+You can verify the Sqoop server configuration using the `Verify Tool <Tools.html#verify>`__, for example::
+
+  sqoop2-tool verify
+
+Upon running the ``verify`` tool, you should see messages similar to the following::
+
+  Verification was successful.
+  Tool class org.apache.sqoop.tools.tool.VerifyTool has finished correctly
+
+Consult the `Verify Tool <Tools.html#verify>`__ documentation page in case of any failure.
+
+Server Life Cycle
+-----------------
+
+After installation and configuration you can start the Sqoop server with the following command: ::
+
+  sqoop2-server start
+
+Similarly you can stop the server using the following command: ::
+
+  sqoop2-server stop
+
+By default the Sqoop server daemon uses port 12000. You can set ``org.apache.sqoop.jetty.port`` in the configuration file ``conf/sqoop.properties`` to use a different port.
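+
+For example (the port value below is illustrative)::
+
+  # any free port can be used
+  org.apache.sqoop.jetty.port=12001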
+
+Client installation
+===================
+
+The client does not need extra installation and configuration steps. Just copy the Sqoop distribution artifact to the target machine and unzip it in the desired location. You can start the client with the following command: ::
+
+  sqoop2-shell
+
+You can find more documentation on the Sqoop client in the `Command Line Client <CommandLineClient.html>`_ section.
+
+

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/admin/Tools.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/admin/Tools.rst b/docs/src/site/sphinx/admin/Tools.rst
new file mode 100644
index 0000000..fb0187a
--- /dev/null
+++ b/docs/src/site/sphinx/admin/Tools.rst
@@ -0,0 +1,129 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+=====
+Tools
+=====
+
+Tools are server commands that administrators can execute on the Sqoop server machine in order to perform various maintenance tasks. The tool execution will always perform a given task and finish. There are no long running services implemented as tools.
+
+In order to perform the maintenance task each tool is supposed to do, it needs to be executed in exactly the same environment as the main Sqoop server. The tool binary will take care of setting up the ``CLASSPATH`` and other environment variables that might be required. However it's up to the administrator to run the tool as the same user that is used for the server. This is usually configured automatically for various Hadoop distributions (such as Apache Bigtop).
+
+
+.. note:: Running tools while the Sqoop Server is also running is not recommended as it might lead to a data corruption and service disruption.
+
+List of available tools:
+
+* verify
+* upgrade
+
+To run the desired tool, execute the binary ``sqoop2-tool`` with the desired tool name. For example, to run the ``verify`` tool::
+
+  sqoop2-tool verify
+
+.. note:: Stop the Sqoop Server before running Sqoop tools. Running tools while Sqoop Server is running can lead to a data corruption and service disruption.
+
+Verify
+======
+
+The verify tool will verify Sqoop server configuration by starting all subsystems with the exception of servlets and tearing them down.
+
+To run the ``verify`` tool::
+
+  sqoop2-tool verify
+
+If the verification process succeeds, you should see messages like::
+
+  Verification was successful.
+  Tool class org.apache.sqoop.tools.tool.VerifyTool has finished correctly
+
+If the verification process finds any inconsistencies, it will print out the following message instead::
+
+  Verification has failed, please check Server logs for further details.
+  Tool class org.apache.sqoop.tools.tool.VerifyTool has failed.
+
+Further details on why the verification failed will be available in the Sqoop server log - the same file that the Sqoop server logs into.
+
+Upgrade
+=======
+
+Upgrades all versionable components inside Sqoop2. This includes structural changes inside the repository and stored metadata.
+Running this tool on a Sqoop deployment that has already been upgraded will have no effect.
+
+To run the ``upgrade`` tool::
+
+  sqoop2-tool upgrade
+
+Upon successful upgrade you should see the following message::
+
+  Tool class org.apache.sqoop.tools.tool.UpgradeTool has finished correctly.
+
+Execution failure will show the following message instead::
+
+  Tool class org.apache.sqoop.tools.tool.UpgradeTool has failed.
+
+Further details on why the upgrade process failed will be available in the Sqoop server log - the same file that the Sqoop server logs into.
+
+RepositoryDump
+==============
+
+Writes the user-created contents of the Sqoop repository to a file in JSON format. This includes connections, jobs and submissions.
+
+To run the ``repositorydump`` tool::
+
+  sqoop2-tool repositorydump -o repository.json
+
+As an option, the administrator can choose to include sensitive information such as database connection passwords in the file::
+
+  sqoop2-tool repositorydump -o repository.json --include-sensitive
+
+Upon successful execution, you should see the following message::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryDumpTool has finished correctly.
+
+If repository dump has failed, you will see the following message instead::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryDumpTool has failed.
+
+Further details on why the repository dump failed will be available in the Sqoop server log - the same file that the Sqoop server logs into.
+
+RepositoryLoad
+==============
+
+Reads a JSON-formatted file created by RepositoryDump and loads it into the current Sqoop repository.
+
+To run the ``repositoryload`` tool::
+
+  sqoop2-tool repositoryload -i repository.json
+
+Upon successful execution, you should see the following message::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryLoadTool has finished correctly.
+
+If the repository load failed, you will see the following message (or an exception) instead::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryLoadTool has failed.
+
+Further details on why the load process failed will be available in the Sqoop server log - the same file that the Sqoop server logs into.
+
+.. note:: If the repository dump was created without passwords (default), the connections will not contain a password and the jobs will fail to execute. In that case you'll need to manually update the connections and set the passwords.
+.. note:: The RepositoryLoad tool will always generate new connections, jobs and submissions from the file, even when identical objects already exist in the repository.
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/admin/Upgrade.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/admin/Upgrade.rst b/docs/src/site/sphinx/admin/Upgrade.rst
new file mode 100644
index 0000000..385c5ae
--- /dev/null
+++ b/docs/src/site/sphinx/admin/Upgrade.rst
@@ -0,0 +1,84 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+=======
+Upgrade
+=======
+
+This page describes the procedure you need to follow in order to upgrade Sqoop from one release to a higher release. Upgrading the client and server components will be discussed separately.
+
+.. note:: Only updates from one Sqoop 2 release to another are covered, starting with upgrades from version 1.99.2. This guide does not contain general information on how to upgrade from Sqoop 1 to Sqoop 2.
+
+Upgrading Server
+================
+
+As the Sqoop server uses a database repository for persisting Sqoop entities such as connectors, drivers, links and jobs, the repository schema might need to be updated as part of the server upgrade. In addition, the configs and inputs described by the various connectors and the driver may also change with a new server version and might need a data upgrade.
+
+There are two ways to upgrade Sqoop entities in the repository: you can either execute the upgrade tool or configure the Sqoop server to perform all necessary upgrades on start-up.
+
+It's strongly advised to back up the repository before moving on to the next steps. Backup instructions will vary depending on the repository implementation. For example, using MySQL as a repository will require a different backup procedure than Apache Derby. Please follow your repository's backup procedure.
+
+Upgrading Server using upgrade tool
+-----------------------------------
+
+The preferred upgrade path is to explicitly run the `Upgrade Tool <Tools.html#upgrade>`_. The first step, however, is to shut down the server, as having both the server and the upgrade utility access the same repository might corrupt it::
+
+  sqoop2-server stop
+
+When the server has been successfully stopped, you can update the server bits and simply run the upgrade tool::
+
+  sqoop2-tool upgrade
+
+You should see that the upgrade process has been successful::
+
+  Tool class org.apache.sqoop.tools.tool.UpgradeTool has finished correctly.
+
+In case of any failure, please take a look at the `Upgrade Tool <Tools.html#upgrade>`_ documentation page.
+
+Upgrading Server on start-up
+----------------------------
+
+The capability of performing the upgrade is built into the server; however, it is disabled by default to avoid any unintentional changes to the repository. You can start the repository schema upgrade procedure by stopping the server: ::
+
+  sqoop2-server stop
+
+Before starting the server again you will need to enable the auto-upgrade feature that will perform all necessary changes during Sqoop Server start up.
+
+You need to set the following property in the configuration file ``sqoop.properties`` for the repository schema upgrade.
+::
+
+   org.apache.sqoop.repository.schema.immutable=false
+
+You need to set the following property in the configuration file ``sqoop.properties`` for the connector config data upgrade.
+::
+
+   org.apache.sqoop.connector.autoupgrade=true
+
+You need to set the following property in the configuration file ``sqoop.properties`` for the driver config data upgrade.
+::
+
+   org.apache.sqoop.driver.autoupgrade=true
+
+When all properties are set, start the sqoop server using the following command::
+
+  sqoop2-server start
+
+All required actions will be performed automatically during the server bootstrap. It's strongly advised to set all three properties back to their original values once the server has been successfully started and the upgrade has completed.
+
+Upgrading Client
+================
+
+The client does not require any manual steps during upgrade. Replacing the binaries with the updated version is sufficient.

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/conf.py
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/conf.py b/docs/src/site/sphinx/conf.py
index 6a9bf31..7b620f7 100644
--- a/docs/src/site/sphinx/conf.py
+++ b/docs/src/site/sphinx/conf.py
@@ -103,12 +103,12 @@ html_use_index = True
 #html_theme = 'default'
 
 html_sidebars = {
-  '**': ['localtoc.html', 'relations.html', 'sourcelink.html'],
+  '**': ['globaltoc.html'],
 }
 
 # The theme to use for HTML and HTML Help pages.  See the documentation for
 # a list of builtin themes.
-html_theme = 'haiku'
+html_theme = 'sphinxdoc'
 
 # Theme options are theme-specific and customize the look and feel of a theme
 # further.  For a list of options available for each theme, see the

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/dev.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/dev.rst b/docs/src/site/sphinx/dev.rst
new file mode 100644
index 0000000..16f237b
--- /dev/null
+++ b/docs/src/site/sphinx/dev.rst
@@ -0,0 +1,24 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+===============
+Developer Guide
+===============
+
+.. toctree::
+   :glob:
+
+   dev/*

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/dev/BuildingSqoop2.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/dev/BuildingSqoop2.rst b/docs/src/site/sphinx/dev/BuildingSqoop2.rst
new file mode 100644
index 0000000..7fbbb6b
--- /dev/null
+++ b/docs/src/site/sphinx/dev/BuildingSqoop2.rst
@@ -0,0 +1,76 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+================================
+Building Sqoop2 from source code
+================================
+
+This guide will show you how to build Sqoop2 from source code. Sqoop uses `maven <http://maven.apache.org/>`_ as its build system. You will need to use at least version 3.0, as older versions will not work correctly. All other dependencies will be downloaded by maven automatically, with the exception of special JDBC drivers that are needed only for advanced integration tests.
+
+Downloading source code
+-----------------------
+
+The Sqoop project uses git as its revision control system, hosted at the Apache Software Foundation. You can clone the entire repository using the following command:
+
+::
+
+  git clone https://git-wip-us.apache.org/repos/asf/sqoop.git sqoop2
+
+Sqoop2 is currently developed in the special branch ``sqoop2``, which you need to check out after cloning:
+
+::
+
+  cd sqoop2
+  git checkout sqoop2
+
+Building project
+----------------
+
+You can use the usual maven targets, such as ``compile`` or ``package``, to build the project. Sqoop supports one major Hadoop revision at the moment - 2.x. As compiled code for one Hadoop major version can't be used with another, you must compile Sqoop against the appropriate Hadoop version.
+
+::
+
+  mvn compile
+
+The maven target ``package`` can be used to create Sqoop packages similar to the ones that are officially available for download. Sqoop will build only the source tarball by default. You need to specify ``-Pbinary`` to build the binary distribution.
+
+::
+
+  mvn package -Pbinary
+
+Running tests
+-------------
+
+Sqoop supports two different sets of tests. The first, smaller and much faster set, is called **unit tests** and is executed with the maven target ``test``. The second, larger set of **integration tests**, is executed with the maven target ``integration-test``. Please note that the integration tests might require manual steps for installing various JDBC drivers into your local maven cache.
+
+Example for running unit tests:
+
+::
+
+  mvn test
+
+Example for running integration tests:
+
+::
+
+  mvn integration-test
+
+For the **unit tests**, there are two helpful profiles: **fast** and **slow**. The **fast** unit tests do not start or use any services. The **slow** unit tests may start services or use an external service (e.g. MySQL).
+
+::
+
+  mvn test -Pfast,hadoop200
+  mvn test -Pslow,hadoop200
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/dev/ClientAPI.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/dev/ClientAPI.rst b/docs/src/site/sphinx/dev/ClientAPI.rst
new file mode 100644
index 0000000..9626878
--- /dev/null
+++ b/docs/src/site/sphinx/dev/ClientAPI.rst
@@ -0,0 +1,304 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+===========================
+Sqoop Java Client API Guide
+===========================
+
+This document explains how to use the Sqoop Java Client API from an external application. The Client API allows you to execute the functions of sqoop commands programmatically. It requires the Sqoop Client JAR and its dependencies.
+
+The main class that provides wrapper methods for all the supported operations is ``SqoopClient``:
+::
+
+  public class SqoopClient {
+    ...
+  }
+
+The Java Client API is explained using a Generic JDBC Connector example. Before executing an application that uses the sqoop client API, check whether the sqoop server is running.
+
+Workflow
+========
+
+The following workflow must be followed for executing a sqoop job on the Sqoop server; a compact sketch of all three steps follows the list.
+
+  1. Create a LINK object for a given connectorId    - creates a Link object and returns a linkId (lid)
+  2. Create a JOB for a given "from" and "to" linkId - creates a Job object and returns a jobId (jid)
+  3. Start the JOB for a given jobId                 - starts the Job on the server and creates a submission record
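+
+For orientation, here is a minimal sketch of the whole workflow; the connector ids are placeholders and the required config values (covered in detail in the sections below) are omitted.
+
+::
+
+  SqoopClient client = new SqoopClient("http://localhost:12000/sqoop/");
+  // 1. create and save a link per data source (fill in link configs before saving)
+  MLink fromLink = client.createLink(1);   // placeholder connector id
+  client.saveLink(fromLink);
+  MLink toLink = client.createLink(2);     // placeholder connector id
+  client.saveLink(toLink);
+  // 2. create and save a job between the two links (fill in job configs before saving)
+  MJob job = client.createJob(fromLink.getPersistenceId(), toLink.getPersistenceId());
+  client.saveJob(job);
+  // 3. start the job; the server creates a submission record
+  MSubmission submission = client.startJob(job.getPersistenceId());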
+
+Project Dependencies
+====================
+The required maven dependency is given below:
+
+::
+
+  <dependency>
+    <groupId>org.apache.sqoop</groupId>
+    <artifactId>sqoop-client</artifactId>
+    <version>${requestedVersion}</version>
+  </dependency>
+
+Initialization
+==============
+
+First, initialize the ``SqoopClient`` class with the server URL as its argument.
+
+::
+
+  String url = "http://localhost:12000/sqoop/";
+  SqoopClient client = new SqoopClient(url);
+
+The server URL can be modified later by passing a new value to the ``setServerUrl(String)`` method:
+
+::
+
+  client.setServerUrl(newUrl);
+
+
+Link
+====
+Connectors provide the facility to interact with many data sources and thus can be used as a means to transfer data between them in Sqoop. The registered connector implementation provides the logic to read from and/or write to the data source that it represents. A connector can have one or more links associated with it. The Java client API allows you to create, update and delete a link for any registered connector. Creating or updating a link requires you to populate the Link Config for that particular connector. Hence the first thing to do is to get the list of registered connectors and select the connector for which you would like to create a link. Then
+you can get the list of all the config/inputs using `Display Config and Input Names For Connector`_ for that connector.
+
+
+Save Link
+---------
+
+First, create a new link by invoking the ``createLink(cid)`` method with the connector Id; it returns an MLink object with a dummy id and the unfilled link config inputs for that connector. Then fill the config inputs with relevant values and invoke ``saveLink``, passing it the filled MLink object.
+
+::
+
+  // create a placeholder for link
+  long connectorId = 1;
+  MLink link = client.createLink(connectorId);
+  link.setName("Vampire");
+  link.setCreationUser("Buffy");
+  MLinkConfig linkConfig = link.getConnectorLinkConfig();
+  // fill in the link config values
+  linkConfig.getStringInput("linkConfig.connectionString").setValue("jdbc:mysql://localhost/my");
+  linkConfig.getStringInput("linkConfig.jdbcDriver").setValue("com.mysql.jdbc.Driver");
+  linkConfig.getStringInput("linkConfig.username").setValue("root");
+  linkConfig.getStringInput("linkConfig.password").setValue("root");
+  // save the link object that was filled
+  Status status = client.saveLink(link);
+  if(status.canProceed()) {
+   System.out.println("Created Link with Link Id : " + link.getPersistenceId());
+  } else {
+   System.out.println("Something went wrong creating the link");
+  }
+
+``status.canProceed()`` returns true if the status is OK or WARNING. Before the status is returned, the link config values are validated using the corresponding validators associated with the link config inputs.
+
+On successful execution of the ``saveLink`` method, a new link Id is assigned to the link object; otherwise an exception is thrown. The ``link.getPersistenceId()`` method returns the unique Id for this object persisted in the sqoop repository.
+
+A user can retrieve a link using the following methods:
+
++----------------------------+--------------------------------------+
+|   Method                   | Description                          |
++============================+======================================+
+| ``getLink(lid)``           | Returns a link by id                 |
++----------------------------+--------------------------------------+
+| ``getLinks()``             | Returns all links in the repository  |
++----------------------------+--------------------------------------+
+
+Job
+===
+
+A sqoop job holds the ``From`` and ``To`` parts for transferring data from the ``From`` data source to the ``To`` data source. Both the ``From`` and the ``To`` are uniquely identified by their corresponding connector Link Ids, i.e., when creating a job we have to specify the ``FromLinkId`` and the ``ToLinkId``. Thus the pre-requisite for creating a job is to first create the links as described above.
+
+Once the linkIds for the ``From`` and ``To`` are given, the job configs for the connector associated with each link object have to be filled. You can get the list of all the from and to job config/inputs using `Display Config and Input Names For Connector`_ for that connector. A connector can have one or more links. We then use the links in the ``From`` and ``To`` directions to populate the corresponding ``MFromConfig`` and ``MToConfig`` respectively.
+
+In addition to filling the job configs for the ``From`` and the ``To`` links, we also need to fill the driver configs that control the job execution engine environment. For example, if the job execution engine happens to be MapReduce, we will specify the number of mappers to be used in reading data from the ``From`` data source.
+
+Save Job
+---------
+Here is the code to create and then save a job
+::
+
+  String url = "http://localhost:12000/sqoop/";
+  SqoopClient client = new SqoopClient(url);
+  //Creating dummy job object
+  long fromLinkId = 1;// for jdbc connector
+  long toLinkId = 2; // for HDFS connector
+  MJob job = client.createJob(fromLinkId, toLinkId);
+  job.setName("Vampire");
+  job.setCreationUser("Buffy");
+  // set the "FROM" link job config values
+  MFromConfig fromJobConfig = job.getFromJobConfig();
+  fromJobConfig.getStringInput("fromJobConfig.schemaName").setValue("sqoop");
+  fromJobConfig.getStringInput("fromJobConfig.tableName").setValue("sqoop");
+  fromJobConfig.getStringInput("fromJobConfig.partitionColumn").setValue("id");
+  // set the "TO" link job config values
+  MToConfig toJobConfig = job.getToJobConfig();
+  toJobConfig.getStringInput("toJobConfig.outputDirectory").setValue("/usr/tmp");
+  // set the driver config values
+  MDriverConfig driverConfig = job.getDriverConfig();
+  driverConfig.getStringInput("throttlingConfig.numExtractors").setValue("3");
+
+  Status status = client.saveJob(job);
+  if(status.canProceed()) {
+   System.out.println("Created Job with Job Id: "+ job.getPersistenceId());
+  } else {
+   System.out.println("Something went wrong creating the job");
+  }
+
+A user can retrieve a job using the following methods:
+
++----------------------------+--------------------------------------+
+|   Method                   | Description                          |
++============================+======================================+
+| ``getJob(jid)``            | Returns a job by id                  |
++----------------------------+--------------------------------------+
+| ``getJobs()``              | Returns all jobs in the repository   |
++----------------------------+--------------------------------------+
+
+
+List of status codes
+--------------------
+
++------------------+------------------------------------------------------------------------------------------------------------+
+| Status           | Description                                                                                                |
++==================+============================================================================================================+
+| ``OK``           | There are no issues, no warnings.                                                                          |
++------------------+------------------------------------------------------------------------------------------------------------+
+| ``WARNING``      | The validated entity is correct enough to proceed; not a fatal error.                                     |
++------------------+------------------------------------------------------------------------------------------------------------+
+| ``ERROR``        | There are serious issues with the validated entity. We can't proceed until reported issues are resolved.  |
++------------------+------------------------------------------------------------------------------------------------------------+
+
+View Error or Warning validation messages
+------------------------------------------
+
+In case of a WARNING or ERROR status, the user has to iterate over the list of validation messages.
+
+::
+
+ printMessage(link.getConnectorLinkConfig().getConfigs());
+
+ private static void printMessage(List<MConfig> configs) {
+   for (MConfig config : configs) {
+     // print every config level validation message
+     if (config.getValidationMessages() != null) {
+       for (Message message : config.getValidationMessages()) {
+         System.out.println("Config validation message: " + message.getMessage());
+       }
+     }
+     // print every input level warning or error message
+     for (MInput<?> minput : config.getInputs()) {
+       if (minput.getValidationStatus() == Status.WARNING) {
+         for (Message message : minput.getValidationMessages()) {
+           System.out.println("Config Input Validation Warning: " + message.getMessage());
+         }
+       } else if (minput.getValidationStatus() == Status.ERROR) {
+         for (Message message : minput.getValidationMessages()) {
+           System.out.println("Config Input Validation Error: " + message.getMessage());
+         }
+       }
+     }
+   }
+ }
+
+Updating link and job
+---------------------
+After creating a link or a job in the repository, you can update or delete it using the following methods:
+
++----------------------------------+------------------------------------------------------------------------------------+
+|   Method                         | Description                                                                        |
++==================================+====================================================================================+
+| ``updateLink(link)``             | Invoke update with link and check status for any errors or warnings                |
++----------------------------------+------------------------------------------------------------------------------------+
+| ``deleteLink(lid)``              | Delete link. Deletes only if the specified link is not used by any job             |
++----------------------------------+------------------------------------------------------------------------------------+
+| ``updateJob(job)``               | Invoke update with job and check status for any errors or warnings                 |
++----------------------------------+------------------------------------------------------------------------------------+
+| ``deleteJob(jid)``               | Delete job                                                                         |
++----------------------------------+------------------------------------------------------------------------------------+
+
+Job Start
+==============
+
+Starting a job requires a job id. On successful start, the ``getStatus()`` method returns "BOOTING" or "RUNNING".
+
+::
+
+  //Job start
+  long jobId = 1;
+  MSubmission submission = client.startJob(jobId);
+  System.out.println("Job Submission Status : " + submission.getStatus());
+  if(submission.getStatus().isRunning() && submission.getProgress() != -1) {
+    System.out.println("Progress : " + String.format("%.2f %%", submission.getProgress() * 100));
+  }
+  System.out.println("Hadoop job id :" + submission.getExternalId());
+  System.out.println("Job link : " + submission.getExternalLink());
+  Counters counters = submission.getCounters();
+  if(counters != null) {
+    System.out.println("Counters:");
+    for(CounterGroup group : counters) {
+      System.out.print("\t");
+      System.out.println(group.getName());
+      for(Counter counter : group) {
+        System.out.print("\t\t");
+        System.out.print(counter.getName());
+        System.out.print(": ");
+        System.out.println(counter.getValue());
+      }
+    }
+  }
+  if(submission.getExceptionInfo() != null) {
+    System.out.println("Exception info : " +submission.getExceptionInfo());
+  }
+
+
+  //Check the status of a running job
+  submission = client.getJobStatus(jobId);
+  if(submission.getStatus().isRunning() && submission.getProgress() != -1) {
+    System.out.println("Progress : " + String.format("%.2f %%", submission.getProgress() * 100));
+  }
+
+  //Stop a running job
+  client.stopJob(jobId);
+
+In the above code block, the job start is asynchronous. For a synchronous job start, use the ``startJob(jid, callback, pollTime)`` method. If you are not interested in getting the job status, invoke the same method with "null" as the value for the callback parameter and it returns the final job status. ``pollTime`` is the request interval for getting the job status from the sqoop server and its value should be greater than zero; a low value for ``pollTime`` means the sqoop server will be hit frequently. When a synchronous job is started with a non-null callback, it first invokes the callback's ``submitted(MSubmission)`` method on successful start, then invokes the ``updated(MSubmission)`` method after every poll time interval, and finally invokes the ``finished(MSubmission)`` method when the job execution finishes.
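+
+A synchronous start with a callback might look like the sketch below. The ``SubmissionCallback`` interface name is an assumption here; its three methods are the ones described above, and the call may throw ``InterruptedException`` since it blocks between polls.
+
+::
+
+  long jobId = 1;
+  // poll the server every 5000 ms until the job finishes
+  client.startJob(jobId, new SubmissionCallback() {
+    @Override
+    public void submitted(MSubmission submission) {
+      System.out.println("Submitted with status: " + submission.getStatus());
+    }
+    @Override
+    public void updated(MSubmission submission) {
+      System.out.println("Progress: " + submission.getProgress());
+    }
+    @Override
+    public void finished(MSubmission submission) {
+      System.out.println("Finished with status: " + submission.getStatus());
+    }
+  }, 5000);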
+
+Display Config and Input Names For Connector
+============================================
+
+You can view the config/input names for the link and job config types per connector:
+
+::
+
+  String url = "http://localhost:12000/sqoop/";
+  SqoopClient client = new SqoopClient(url);
+  long connectorId = 1;
+  // link config for connector
+  describe(client.getConnector(connectorId).getLinkConfig().getConfigs(), client.getConnectorConfigBundle(connectorId));
+  // from job config for connector
+  describe(client.getConnector(connectorId).getFromConfig().getConfigs(), client.getConnectorConfigBundle(connectorId));
+  // to job config for the connector
+  describe(client.getConnector(connectorId).getToConfig().getConfigs(), client.getConnectorConfigBundle(connectorId));
+
+  void describe(List<MConfig> configs, ResourceBundle resource) {
+    for (MConfig config : configs) {
+      System.out.println(resource.getString(config.getLabelKey())+":");
+      List<MInput<?>> inputs = config.getInputs();
+      for (MInput input : inputs) {
+        System.out.println(resource.getString(input.getLabelKey()) + " : " + input.getValue());
+      }
+      System.out.println();
+    }
+  }
+
+
+The above Sqoop 2 Client API tutorial explained how to create a link, create a job and then start the job.