Posted to commits@sqoop.apache.org by ka...@apache.org on 2015/11/19 00:01:14 UTC

[1/8] sqoop git commit: SQOOP-2694: Sqoop2: Doc: Register structure in sphinx for our docs (Jarek Jarcec Cecho via Kate Ting)

Repository: sqoop
Updated Branches:
  refs/heads/sqoop2 25b0df5c8 -> 3613843a7


http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/user/connectors/Connector-Kite.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/user/connectors/Connector-Kite.rst b/docs/src/site/sphinx/user/connectors/Connector-Kite.rst
new file mode 100644
index 0000000..414ad8a
--- /dev/null
+++ b/docs/src/site/sphinx/user/connectors/Connector-Kite.rst
@@ -0,0 +1,110 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+==============
+Kite Connector
+==============
+
+.. contents::
+   :depth: 3
+
+-----
+Usage
+-----
+
+To use the Kite Connector, create a link for the connector and a job that uses the link. For more information on Kite, check out the Kite documentation: http://kitesdk.org/docs/1.0.0/Kite-SDK-Guide.html.
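+
+A minimal sketch of this flow in the Sqoop shell is shown below. All ids are placeholders: use ``show connector`` to look up the actual id of the Kite connector in your installation, and substitute the ids of your own FROM and TO links::
+
+    show connector
+    create link --cid 3
+    create job --from 1 --to 2
+    start job --jid 1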
+
+**Link Configuration**
+++++++++++++++++++++++
+
+Inputs associated with the link configuration include:
+
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| Input                       | Type    | Description                                                           | Example                    |
++=============================+=========+=======================================================================+============================+
+| authority                   | String  | The authority of the kite dataset.                                    | hdfs://example.com:8020/   |
+|                             |         | *Optional*. See note below.                                           |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+
+**Notes**
+=========
+
+1. The authority is useful for specifying a Hive metastore or HDFS URI.
+
+**FROM Job Configuration**
+++++++++++++++++++++++++++
+
+Inputs associated with the Job configuration for the FROM direction include:
+
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| Input                       | Type    | Description                                                           | Example                    |
++=============================+=========+=======================================================================+============================+
+| URI                         | String  | The Kite dataset URI to use.                                          | dataset:hdfs:/tmp/ns/ds    |
+|                             |         | *Required*. See notes below.                                          |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+
+**Notes**
+=========
+
+1. The URI and the authority from the link configuration will be merged to create a complete dataset URI internally. If the given dataset URI already contains an authority, the authority from the link configuration will be ignored (see the example below).
+2. Only *hdfs* and *hive* are supported currently.
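+
+For example, assuming a link authority of ``hdfs://example.com:8020/`` and a job URI of ``dataset:hdfs:/tmp/ns/ds``, the merged dataset URI would look roughly like::
+
+    dataset:hdfs://example.com:8020/tmp/ns/ds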
+
+**TO Job Configuration**
+++++++++++++++++++++++++
+
+Inputs associated with the Job configuration for the TO direction include:
+
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| Input                       | Type    | Description                                                           | Example                    |
++=============================+=========+=======================================================================+============================+
+| URI                         | String  | The Kite dataset URI to use.                                          | dataset:hdfs:/tmp/ns/ds    |
+|                             |         | *Required*. See note below.                                           |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| File format                 | Enum    | The format of the data the kite dataset should write out.             | PARQUET                    |
+|                             |         | *Optional*. See note below.                                           |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+
+**Notes**
+=========
+
+1. The URI and the authority from the link configuration will be merged to create a complete dataset URI internally. If the given dataset URI already contains an authority, the authority from the link configuration will be ignored.
+2. Only *hdfs* and *hive* are supported currently.
+
+-----------
+Partitioner
+-----------
+
+The Kite connector currently creates only one partition.
+
+---------
+Extractor
+---------
+
+During the *extraction* phase, Kite is used to query a dataset. Since there is only one dataset to query, only a single reader is created to read the dataset.
+
+**NOTE**: The Avro schema Kite generates will be slightly different from the original schema, because Avro identifiers have strict naming requirements.
+
+------
+Loader
+------
+
+During the *loading* phase, Kite is used to write several temporary datasets. The number of temporary datasets equals the number of *loaders* in use.
+
+----------
+Destroyers
+----------
+
+The Kite connector TO destroyer merges all the temporary datasets into a single dataset.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/user/connectors/Connector-SFTP.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/user/connectors/Connector-SFTP.rst b/docs/src/site/sphinx/user/connectors/Connector-SFTP.rst
new file mode 100644
index 0000000..d25ea3f
--- /dev/null
+++ b/docs/src/site/sphinx/user/connectors/Connector-SFTP.rst
@@ -0,0 +1,91 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+==============
+SFTP Connector
+==============
+
+The SFTP connector supports moving data between a Secure File Transfer Protocol (SFTP) server and other supported Sqoop2 connectors.
+
+Currently, only the TO direction is supported, i.e. writing records to an SFTP server. A FROM connector is pending (SQOOP-2218).
+
+.. contents::
+   :depth: 3
+
+-----
+Usage
+-----
+
+Before executing a Sqoop2 job with the SFTP connector, set **mapreduce.task.classpath.user.precedence** to true in the Hadoop cluster config, for example::
+
+    <property>
+      <name>mapreduce.task.classpath.user.precedence</name>
+      <value>true</value>
+    </property>
+
+This is required since the SFTP connector uses the JSch library (http://www.jcraft.com/jsch/) to provide SFTP functionality. Unfortunately, Hadoop currently ships with an earlier version of this library, which causes an issue with some SFTP servers. Setting this property ensures that the current version of the library packaged with this connector will appear first in the classpath.
+
+To use the SFTP Connector, create a link for the connector and a job that uses the link.
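+
+A minimal sketch of this flow in the Sqoop shell is shown below. All ids are placeholders: use ``show connector`` to look up the actual id of the SFTP connector in your installation, and fill in the hostname, port, username and password when prompted::
+
+    show connector
+    create link --cid 4
+    create job --from 1 --to 2
+    start job --jid 1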
+
+**Link Configuration**
+++++++++++++++++++++++
+
+Inputs associated with the link configuration include:
+
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| Input                       | Type    | Description                                                           | Example                    |
++=============================+=========+=======================================================================+============================+
+| SFTP server hostname        | String  | Hostname for the SFTP server.                                         | sftp.example.com           |
+|                             |         | *Required*.                                                           |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| SFTP server port            | Integer | Port number for the SFTP server. Defaults to 22.                      | 2220                       |
+|                             |         | *Optional*.                                                           |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| Username                    | String  | The username to provide when connecting to the SFTP server.           | sqoop                      |
+|                             |         | *Required*.                                                           |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| Password                    | String  | The password to provide when connecting to the SFTP server.           | sqoop                      |
+|                             |         | *Required*                                                            |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+
+**Notes**
+=========
+
+1. The SFTP connector will attempt to connect to the SFTP server as part of the link validation process. If for some reason a connection cannot be established, you'll see a corresponding error message.
+2. Note that during connection, the SFTP connector explicitly disables *StrictHostKeyChecking* to avoid "UnknownHostKey" errors.
+
+**TO Job Configuration**
+++++++++++++++++++++++++
+
+Inputs associated with the Job configuration for the TO direction include:
+
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+| Input                       | Type    | Description                                                             | Example                           |
++=============================+=========+=========================================================================+===================================+
+| Output directory            | String  | The location on the SFTP server that the connector will write files to. | uploads                           |
+|                             |         | *Required*                                                              |                                   |
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+
+**Notes**
+=========
+
+1. The *output directory* value needs to be an existing directory on the SFTP server.
+
+------
+Loader
+------
+
+During the *loading* phase, the connector will create uniquely named files in the *output directory* for each partition of data received from the **FROM** connector.
\ No newline at end of file


[8/8] sqoop git commit: SQOOP-2694: Sqoop2: Doc: Register structure in sphinx for our docs (Jarek Jarcec Cecho via Kate Ting)

Posted by ka...@apache.org.
SQOOP-2694: Sqoop2: Doc: Register structure in sphinx for our docs
(Jarek Jarcec Cecho via Kate Ting)


Project: http://git-wip-us.apache.org/repos/asf/sqoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/sqoop/commit/3613843a
Tree: http://git-wip-us.apache.org/repos/asf/sqoop/tree/3613843a
Diff: http://git-wip-us.apache.org/repos/asf/sqoop/diff/3613843a

Branch: refs/heads/sqoop2
Commit: 3613843a7c52fb872f7365fae58b2410a1047b4b
Parents: 25b0df5
Author: Kate Ting <ka...@apache.org>
Authored: Wed Nov 18 14:58:57 2015 -0800
Committer: Kate Ting <ka...@apache.org>
Committed: Wed Nov 18 14:58:57 2015 -0800

----------------------------------------------------------------------
 docs/pom.xml                                    |    5 +-
 docs/src/site/sphinx/BuildingSqoop2.rst         |   76 -
 docs/src/site/sphinx/ClientAPI.rst              |  304 ----
 docs/src/site/sphinx/CommandLineClient.rst      |  533 ------
 docs/src/site/sphinx/Connector-FTP.rst          |   81 -
 docs/src/site/sphinx/Connector-GenericJDBC.rst  |  194 ---
 docs/src/site/sphinx/Connector-HDFS.rst         |  159 --
 docs/src/site/sphinx/Connector-Kafka.rst        |   64 -
 docs/src/site/sphinx/Connector-Kite.rst         |  110 --
 docs/src/site/sphinx/Connector-SFTP.rst         |   91 -
 docs/src/site/sphinx/ConnectorDevelopment.rst   |  595 -------
 docs/src/site/sphinx/DevEnv.rst                 |   57 -
 docs/src/site/sphinx/Installation.rst           |  103 --
 docs/src/site/sphinx/RESTAPI.rst                | 1601 ------------------
 docs/src/site/sphinx/Repository.rst             |  335 ----
 docs/src/site/sphinx/SecurityGuideOnSqoop2.rst  |  239 ---
 docs/src/site/sphinx/Sqoop5MinutesDemo.rst      |  242 ---
 docs/src/site/sphinx/Tools.rst                  |  129 --
 docs/src/site/sphinx/Upgrade.rst                |   84 -
 docs/src/site/sphinx/admin.rst                  |   24 +
 docs/src/site/sphinx/admin/Installation.rst     |  103 ++
 docs/src/site/sphinx/admin/Tools.rst            |  129 ++
 docs/src/site/sphinx/admin/Upgrade.rst          |   84 +
 docs/src/site/sphinx/conf.py                    |    4 +-
 docs/src/site/sphinx/dev.rst                    |   24 +
 docs/src/site/sphinx/dev/BuildingSqoop2.rst     |   76 +
 docs/src/site/sphinx/dev/ClientAPI.rst          |  304 ++++
 .../site/sphinx/dev/ConnectorDevelopment.rst    |  595 +++++++
 docs/src/site/sphinx/dev/DevEnv.rst             |   57 +
 docs/src/site/sphinx/dev/RESTAPI.rst            | 1601 ++++++++++++++++++
 docs/src/site/sphinx/dev/Repository.rst         |  335 ++++
 docs/src/site/sphinx/index.rst                  |   76 +-
 docs/src/site/sphinx/security.rst               |   24 +
 .../sphinx/security/SecurityGuideOnSqoop2.rst   |  239 +++
 docs/src/site/sphinx/user.rst                   |   24 +
 docs/src/site/sphinx/user/CommandLineClient.rst |  533 ++++++
 docs/src/site/sphinx/user/Connectors.rst        |   24 +
 docs/src/site/sphinx/user/Sqoop5MinutesDemo.rst |  242 +++
 .../sphinx/user/connectors/Connector-FTP.rst    |   81 +
 .../user/connectors/Connector-GenericJDBC.rst   |  194 +++
 .../sphinx/user/connectors/Connector-HDFS.rst   |  159 ++
 .../sphinx/user/connectors/Connector-Kafka.rst  |   64 +
 .../sphinx/user/connectors/Connector-Kite.rst   |  110 ++
 .../sphinx/user/connectors/Connector-SFTP.rst   |   91 +
 44 files changed, 5152 insertions(+), 5047 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/pom.xml
----------------------------------------------------------------------
diff --git a/docs/pom.xml b/docs/pom.xml
index 079e896..c96a582 100644
--- a/docs/pom.xml
+++ b/docs/pom.xml
@@ -70,7 +70,10 @@ limitations under the License.
           <plugin>
             <groupId>org.tomdz.maven</groupId>
             <artifactId>sphinx-maven-plugin</artifactId>
-            <version>1.0.2</version>
+            <version>1.0.3</version>
+            <configuration>
+              <warningsAsErrors>true</warningsAsErrors>
+            </configuration>
           </plugin>
           <!-- Turning off standard reports as they collide with sphinx -->
           <plugin>

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/BuildingSqoop2.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/BuildingSqoop2.rst b/docs/src/site/sphinx/BuildingSqoop2.rst
deleted file mode 100644
index 7fbbb6b..0000000
--- a/docs/src/site/sphinx/BuildingSqoop2.rst
+++ /dev/null
@@ -1,76 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-================================
-Building Sqoop2 from source code
-================================
-
-This guide will show you how to build Sqoop2 from source code. Sqoop uses `maven <http://maven.apache.org/>`_ as its build system. You will need to use at least version 3.0, as older versions will not work correctly. All other dependencies will be downloaded by Maven automatically, with the exception of special JDBC drivers that are needed only for advanced integration tests.
-
-Downloading source code
------------------------
-
-The Sqoop project uses git as its revision control system, hosted at the Apache Software Foundation. You can clone the entire repository using the following command:
-
-::
-
-  git clone https://git-wip-us.apache.org/repos/asf/sqoop.git sqoop2
-
-Sqoop2 is currently developed in a special branch, ``sqoop2``, that you need to check out after cloning:
-
-::
-
-  cd sqoop2
-  git checkout sqoop2
-
-Building project
-----------------
-
-You can use the usual Maven targets like ``compile`` or ``package`` to build the project. Sqoop supports one major Hadoop revision at the moment - 2.x. As compiled code for one Hadoop major version can't be used on another, you must compile Sqoop against the appropriate Hadoop version.
-
-::
-
-  mvn compile
-
-The Maven target ``package`` can be used to create Sqoop packages similar to the ones that are officially available for download. Sqoop will build only the source tarball by default. You need to specify ``-Pbinary`` to build the binary distribution.
-
-::
-
-  mvn package -Pbinary
-
-Running tests
--------------
-
-Sqoop supports two different sets of tests. The first, smaller and much faster, set is called **unit tests** and will be executed by the Maven target ``test``. The second, larger, set of **integration tests** will be executed by the Maven target ``integration-test``. Please note that integration tests might require manual steps for installing various JDBC drivers into your local Maven cache.
-
-Example for running unit tests:
-
-::
-
-  mvn test
-
-Example for running integration tests:
-
-::
-
-  mvn integration-test
-
-For the **unit tests**, there are two helpful profiles: **fast** and **slow**. The **fast** unit tests do not start or use any services. The **slow** unit tests may start services or use an external service (e.g. MySQL).
-
-::
-
-  mvn test -Pfast,hadoop200
-  mvn test -Pslow,hadoop200
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/ClientAPI.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/ClientAPI.rst b/docs/src/site/sphinx/ClientAPI.rst
deleted file mode 100644
index 9626878..0000000
--- a/docs/src/site/sphinx/ClientAPI.rst
+++ /dev/null
@@ -1,304 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-===========================
-Sqoop Java Client API Guide
-===========================
-
-This document will explain how to use the Sqoop Java Client API with an external application. The Client API allows you to execute the functions of sqoop commands. It requires the Sqoop Client JAR and its dependencies.
-
-The main class that provides wrapper methods for all the supported operations is the following:
-::
-
-  public class SqoopClient {
-    ...
-  }
-
-The Java Client API is explained using the Generic JDBC Connector as an example. Before executing the application using the Sqoop Client API, check whether the Sqoop server is running.
-
-Workflow
-========
-
-The following workflow has to be followed for executing a Sqoop job on the Sqoop server.
-
-  1. Create a LINK object for a given connectorId           - Creates a Link object and returns the linkId (lid)
-  2. Create a JOB for a given "from" and "to" linkId        - Creates a Job object and returns the jobId (jid)
-  3. Start the JOB for a given jobId                        - Starts the Job on the server and creates a submission record
-
-Project Dependencies
-====================
-The required Maven dependency is given below:
-
-::
-
-  <dependency>
-    <groupId>org.apache.sqoop</groupId>
-    <artifactId>sqoop-client</artifactId>
-    <version>${requestedVersion}</version>
-  </dependency>
-
-Initialization
-==============
-
-First, initialize the SqoopClient class with the server URL as its argument.
-
-::
-
-  String url = "http://localhost:12000/sqoop/";
-  SqoopClient client = new SqoopClient(url);
-
-The server URL value can be modified by passing a new value to the ``setServerUrl(String)`` method
-
-::
-
-  client.setServerUrl(newUrl);
-
-
-Link
-====
-Connectors provide the facility to interact with many data sources and thus can be used as a means to transfer data between them in Sqoop. The registered connector implementation will provide logic to read from and/or write to a data source that it represents. A connector can have one or more links associated with it. The java client API allows you to create, update and delete a link for any registered connector. Creating or updating a link requires you to populate the Link Config for that particular connector. Hence the first thing to do is get the list of registered connectors and select the connector for which you would like to create a link. Then
-you can get the list of all the config/inputs using `Display Config and Input Names For Connector`_ for that connector.
-
-
-Save Link
----------
-
-First, create a new link by invoking the ``createLink(cid)`` method with the connector Id; it returns an MLink object with a dummy id and the unfilled link config inputs for that connector. Then fill the config inputs with relevant values and invoke ``saveLink``, passing it the filled MLink object.
-
-::
-
-  // create a placeholder for link
-  long connectorId = 1;
-  MLink link = client.createLink(connectorId);
-  link.setName("Vampire");
-  link.setCreationUser("Buffy");
-  MLinkConfig linkConfig = link.getConnectorLinkConfig();
-  // fill in the link config values
-  linkConfig.getStringInput("linkConfig.connectionString").setValue("jdbc:mysql://localhost/my");
-  linkConfig.getStringInput("linkConfig.jdbcDriver").setValue("com.mysql.jdbc.Driver");
-  linkConfig.getStringInput("linkConfig.username").setValue("root");
-  linkConfig.getStringInput("linkConfig.password").setValue("root");
-  // save the link object that was filled
-  Status status = client.saveLink(link);
-  if(status.canProceed()) {
-   System.out.println("Created Link with Link Id : " + link.getPersistenceId());
-  } else {
-   System.out.println("Something went wrong creating the link");
-  }
-
-``status.canProceed()`` returns true if the status is OK or WARNING. Before sending the status, the link config values are validated using the corresponding validator associated with the link config inputs.
-
-On successful execution of the saveLink method, a new link Id is assigned to the link object; otherwise an exception is thrown. The ``link.getPersistenceId()`` method returns the unique Id for this object persisted in the sqoop repository.
-
-A user can retrieve a link using the following methods:
-
-+----------------------------+--------------------------------------+
-|   Method                   | Description                          |
-+============================+======================================+
-| ``getLink(lid)``           | Returns a link by id                 |
-+----------------------------+--------------------------------------+
-| ``getLinks()``             | Returns list of links in the sqoop   |
-+----------------------------+--------------------------------------+
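-
-For example, a minimal sketch of retrieving links (assuming ``getLinks()`` returns a list of ``MLink`` objects and that a link with the given id exists)::
-
-  // fetch a single link by its id
-  MLink existingLink = client.getLink(linkId);
-  // fetch all links stored in the repository
-  List<MLink> allLinks = client.getLinks();
-  for(MLink l : allLinks) {
-    System.out.println("Link: " + l.getName());
-  }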
-
-Job
-===
-
-A sqoop job holds the ``From`` and ``To`` parts for transferring data from the ``From`` data source to the ``To`` data source. Both the ``From`` and the ``To`` are uniquely identified by their corresponding connector Link Ids, i.e. when creating a job we have to specify the ``FromLinkId`` and the ``ToLinkId``. Thus the prerequisite for creating a job is to first create the links as described above.
-
-Once the linkIds for the ``From`` and ``To`` are given, the job configs for the associated connector for the link object have to be filled. You can get the list of all the from and to job config/inputs using `Display Config and Input Names For Connector`_ for that connector. A connector can have one or more links. We then use the links in the ``From`` and ``To`` directions to populate the corresponding ``MFromConfig`` and ``MToConfig``, respectively.
-
-In addition to filling the job configs for the ``From`` and the ``To`` representing the link, we also need to fill the driver configs that control the job execution engine environment. For example, if the job execution engine happens to be MapReduce, we will specify the number of mappers to be used in reading data from the ``From`` data source.
-
-Save Job
----------
-Here is the code to create and then save a job
-::
-
-  String url = "http://localhost:12000/sqoop/";
-  SqoopClient client = new SqoopClient(url);
-  //Creating dummy job object
-  long fromLinkId = 1;// for jdbc connector
-  long toLinkId = 2; // for HDFS connector
-  MJob job = client.createJob(fromLinkId, toLinkId);
-  job.setName("Vampire");
-  job.setCreationUser("Buffy");
-  // set the "FROM" link job config values
-  MFromConfig fromJobConfig = job.getFromJobConfig();
-  fromJobConfig.getStringInput("fromJobConfig.schemaName").setValue("sqoop");
-  fromJobConfig.getStringInput("fromJobConfig.tableName").setValue("sqoop");
-  fromJobConfig.getStringInput("fromJobConfig.partitionColumn").setValue("id");
-  // set the "TO" link job config values
-  MToConfig toJobConfig = job.getToJobConfig();
-  toJobConfig.getStringInput("toJobConfig.outputDirectory").setValue("/usr/tmp");
-  // set the driver config values
-  MDriverConfig driverConfig = job.getDriverConfig();
-  driverConfig.getStringInput("throttlingConfig.numExtractors").setValue("3");
-
-  Status status = client.saveJob(job);
-  if(status.canProceed()) {
-   System.out.println("Created Job with Job Id: "+ job.getPersistenceId());
-  } else {
-   System.out.println("Something went wrong creating the job");
-  }
-
-A user can retrieve a job using the following methods:
-
-+----------------------------+--------------------------------------+
-|   Method                   | Description                          |
-+============================+======================================+
-| ``getJob(jid)``            | Returns a job by id                  |
-+----------------------------+--------------------------------------+
-| ``getJobs()``              | Returns list of jobs in the sqoop    |
-+----------------------------+--------------------------------------+
-
-
-List of status codes
---------------------
-
-+------------------+------------------------------------------------------------------------------------------------------------+
-| Function         | Description                                                                                                |
-+==================+============================================================================================================+
-| ``OK``           | There are no issues, no warnings.                                                                          |
-+------------------+------------------------------------------------------------------------------------------------------------+
-| ``WARNING``      | Validated entity is correct enough to be proceed. Not a fatal error                                        |
-+------------------+------------------------------------------------------------------------------------------------------------+
-| ``ERROR``        | There are serious issues with validated entity. We can't proceed until reported issues will be resolved.   |
-+------------------+------------------------------------------------------------------------------------------------------------+
-
-View Error or Warning validation message
-----------------------------------------
-
-In case of any WARNING or ERROR status, the user has to iterate over the list of validation messages.
-
-::
-
- printMessage(link.getConnectorLinkConfig().getConfigs());
-
- private static void printMessage(List<MConfig> configs) {
-   for(MConfig config : configs) {
-     List<MInput<?>> inputlist = config.getInputs();
-     if (config.getValidationMessages() != null) {
-       // print every validation message
-       for(Message message : config.getValidationMessages()) {
-         System.out.println("Config validation message: " + message.getMessage());
-       }
-     }
-     for (MInput minput : inputlist) {
-       if (minput.getValidationStatus() == Status.WARNING) {
-         for(Message message : minput.getValidationMessages()) {
-           System.out.println("Config Input Validation Warning: " + message.getMessage());
-         }
-       }
-       else if (minput.getValidationStatus() == Status.ERROR) {
-         for(Message message : minput.getValidationMessages()) {
-           System.out.println("Config Input Validation Error: " + message.getMessage());
-         }
-       }
-     }
-   }
- }
-
-Updating link and job
----------------------
-After creating a link or job in the repository, you can update or delete it using the following functions:
-
-+----------------------------------+------------------------------------------------------------------------------------+
-|   Method                         | Description                                                                        |
-+==================================+====================================================================================+
-| ``updateLink(link)``             | Invoke update with link and check status for any errors or warnings                |
-+----------------------------------+------------------------------------------------------------------------------------+
-| ``deleteLink(lid)``              | Delete link. Deletes only if specified link is not used by any job                 |
-+----------------------------------+------------------------------------------------------------------------------------+
-| ``updateJob(job)``               | Invoke update with job and check status for any errors or warnings                 |
-+----------------------------------+------------------------------------------------------------------------------------+
-| ``deleteJob(jid)``               | Delete job                                                                         |
-+----------------------------------+------------------------------------------------------------------------------------+
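-
-For example, a minimal sketch of updating and then deleting objects (it assumes the link and job ids exist and are no longer needed; a link can only be deleted once no job uses it)::
-
-  MLink link = client.getLink(linkId);
-  link.setName("Vampire-updated");
-  Status updateStatus = client.updateLink(link);
-  if(updateStatus.canProceed()) {
-    // delete the job first, then the link it references
-    client.deleteJob(jobId);
-    client.deleteLink(linkId);
-  }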
-
-Job Start
-==============
-
-Starting a job requires a job id. On successful start, the getStatus() method returns "BOOTING" or "RUNNING".
-
-::
-
-  //Job start
-  long jobId = 1;
-  MSubmission submission = client.startJob(jobId);
-  System.out.println("Job Submission Status : " + submission.getStatus());
-  if(submission.getStatus().isRunning() && submission.getProgress() != -1) {
-    System.out.println("Progress : " + String.format("%.2f %%", submission.getProgress() * 100));
-  }
-  System.out.println("Hadoop job id :" + submission.getExternalId());
-  System.out.println("Job link : " + submission.getExternalLink());
-  Counters counters = submission.getCounters();
-  if(counters != null) {
-    System.out.println("Counters:");
-    for(CounterGroup group : counters) {
-      System.out.print("\t");
-      System.out.println(group.getName());
-      for(Counter counter : group) {
-        System.out.print("\t\t");
-        System.out.print(counter.getName());
-        System.out.print(": ");
-        System.out.println(counter.getValue());
-      }
-    }
-  }
-  if(submission.getExceptionInfo() != null) {
-    System.out.println("Exception info : " +submission.getExceptionInfo());
-  }
-
-
-  //Check job status for a running job
-  submission = client.getJobStatus(jobId);
-  if(submission.getStatus().isRunning() && submission.getProgress() != -1) {
-    System.out.println("Progress : " + String.format("%.2f %%", submission.getProgress() * 100));
-  }
-
-  //Stop a running job
-  client.stopJob(jobId);
-
-In the above code block, the job start is asynchronous. For a synchronous job start, use the ``startJob(jid, callback, pollTime)`` method. If you are not interested in getting the job status, then invoke the same method with "null" as the value for the callback parameter and it returns the final job status. ``pollTime`` is the request interval for getting the job status from the sqoop server and the value should be greater than zero; we will hit the sqoop server frequently if a low value is given for the ``pollTime``. When a synchronous job is started with a non-null callback, it first invokes the callback's ``submitted(MSubmission)`` method on successful start, then after every poll time interval it invokes the ``updated(MSubmission)`` method on the callback API, and finally on finishing the job execution it invokes the ``finished(MSubmission)`` method on the callback API.
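-
-For example, a minimal sketch of a synchronous start with no callback (the poll time value below is an arbitrary placeholder greater than zero)::
-
-  long pollTime = 1000;
-  MSubmission finalSubmission = client.startJob(jobId, null, pollTime);
-  System.out.println("Final status: " + finalSubmission.getStatus());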
-
-Display Config and Input Names For Connector
-============================================
-
-You can view the config/input names for the link and job config types for each connector
-
-::
-
-  String url = "http://localhost:12000/sqoop/";
-  SqoopClient client = new SqoopClient(url);
-  long connectorId = 1;
-  // link config for connector
-  describe(client.getConnector(connectorId).getLinkConfig().getConfigs(), client.getConnectorConfigBundle(connectorId));
-  // from job config for connector
-  describe(client.getConnector(connectorId).getFromConfig().getConfigs(), client.getConnectorConfigBundle(connectorId));
-  // to job config for the connector
-  describe(client.getConnector(connectorId).getToConfig().getConfigs(), client.getConnectorConfigBundle(connectorId));
-
-  void describe(List<MConfig> configs, ResourceBundle resource) {
-    for (MConfig config : configs) {
-      System.out.println(resource.getString(config.getLabelKey())+":");
-      List<MInput<?>> inputs = config.getInputs();
-      for (MInput input : inputs) {
-        System.out.println(resource.getString(input.getLabelKey()) + " : " + input.getValue());
-      }
-      System.out.println();
-    }
-  }
-
-
-The above Sqoop 2 Client API tutorial explained how to create a link, create a job, and then start the job.

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/CommandLineClient.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/CommandLineClient.rst b/docs/src/site/sphinx/CommandLineClient.rst
deleted file mode 100644
index 8c4c592..0000000
--- a/docs/src/site/sphinx/CommandLineClient.rst
+++ /dev/null
@@ -1,533 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-===================
-Command Line Shell
-===================
-
-Sqoop 2 provides a command line shell that is capable of communicating with the Sqoop 2 server using the REST interface. The client is able to run in two modes - interactive and batch mode. The commands ``create``, ``update`` and ``clone`` are not currently supported in batch mode. Interactive mode supports all available commands.
-
-You can start the Sqoop 2 client in interactive mode using the command ``sqoop2-shell``::
-
-  sqoop2-shell
-
-Batch mode can be started by adding an additional argument representing the path to your Sqoop client script: ::
-
-  sqoop2-shell /path/to/your/script.sqoop
-
-The Sqoop client script is expected to contain valid Sqoop client commands, empty lines, and lines starting with ``#`` that denote comments. Comments and empty lines are ignored; all other lines are interpreted. Example script: ::
-
-  # Specify company server
-  set server --host sqoop2.company.net
-
-  # Executing given job
-  start job  --jid 1
-
-
-.. contents:: Table of Contents
-
-Resource file
-=============
-
-The Sqoop 2 client has the ability to load resource files, similarly to other command line tools. At the beginning of execution, the Sqoop client will check for the existence of the file ``.sqoop2rc`` in the home directory of the currently logged-in user. If such a file exists, it will be interpreted before any additional actions. This file is loaded in both interactive and batch mode. It can be used to execute any batch compatible commands.
-
-Example resource file: ::
-
-  # Configure our Sqoop 2 server automatically
-  set server --host sqoop2.company.net
-
-  # Run in verbose mode by default
-  set option --name verbose --value true
-
-Commands
-========
-
-Sqoop 2 contains several commands that will be documented in this section. Each command has one or more functions that accept various arguments. Not all commands are supported in both interactive and batch mode.
-
-Auxiliary Commands
-------------------
-
-Auxiliary commands are commands that improve the user experience and run purely on the client side. Thus they do not need a working connection to the server.
-
-* ``exit`` Exit the client immediately. This command can also be executed by sending the EOT (end of transmission) character. It's CTRL+D on most common Linux shells like Bash or Zsh.
-* ``history`` Print out command history. Please note that the Sqoop client saves history from previous executions, so you might see commands that you've executed in previous runs.
-* ``help`` Show all available commands with short in-shell documentation.
-
-::
-
- sqoop:000> help
- For information about Sqoop, visit: http://sqoop.apache.org/
-
- Available commands:
-   exit    (\x  ) Exit the shell
-   history (\H  ) Display, manage and recall edit-line history
-   help    (\h  ) Display this help message
-   set     (\st ) Configure various client options and settings
-   show    (\sh ) Display various objects and configuration options
-   create  (\cr ) Create new object in Sqoop repository
-   delete  (\d  ) Delete existing object in Sqoop repository
-   update  (\up ) Update objects in Sqoop repository
-   clone   (\cl ) Create new object based on existing one
-   start   (\sta) Start job
-   stop    (\stp) Stop job
-   status  (\stu) Display status of a job
-   enable  (\en ) Enable object in Sqoop repository
-   disable (\di ) Disable object in Sqoop repository
-
-Set Command
------------
-
-The set command allows you to set various properties of the client. Similarly to the auxiliary commands, set does not require a connection to the Sqoop server. The set command is not used to reconfigure the Sqoop server.
-
-Available functions:
-
-+---------------+------------------------------------------+
-| Function      | Description                              |
-+===============+==========================================+
-| ``server``    | Set connection configuration for server  |
-+---------------+------------------------------------------+
-| ``option``    | Set various client side options          |
-+---------------+------------------------------------------+
-
-Set Server Function
-~~~~~~~~~~~~~~~~~~~
-
-Configure the connection to the Sqoop server - host, port and web application name. Available arguments:
-
-+-----------------------+---------------+--------------------------------------------------+
-| Argument              | Default value | Description                                      |
-+=======================+===============+==================================================+
-| ``-h``, ``--host``    | localhost     | Server name (FQDN) where Sqoop server is running |
-+-----------------------+---------------+--------------------------------------------------+
-| ``-p``, ``--port``    | 12000         | TCP Port                                         |
-+-----------------------+---------------+--------------------------------------------------+
-| ``-w``, ``--webapp``  | sqoop         | Jetty's web application name                    |
-+-----------------------+---------------+--------------------------------------------------+
-| ``-u``, ``--url``     |               | Sqoop Server in url format                       |
-+-----------------------+---------------+--------------------------------------------------+
-
-Example: ::
-
-  set server --host sqoop2.company.net --port 80 --webapp sqoop
-
-or ::
-
-  set server --url http://sqoop2.company.net:80/sqoop
-
-Note: When the ``--url`` option is given, the ``--host``, ``--port`` and ``--webapp`` options will be ignored.
-
-Set Option Function
-~~~~~~~~~~~~~~~~~~~
-
-Configure Sqoop client related options. This function has two required arguments, ``name`` and ``value``. Name represents the internal property name and value holds the new value that should be set. A list of available option names follows:
-
-+-------------------+---------------+---------------------------------------------------------------------+
-| Option name       | Default value | Description                                                         |
-+===================+===============+=====================================================================+
-| ``verbose``       | false         | Client will print additional information if verbose mode is enabled |
-+-------------------+---------------+---------------------------------------------------------------------+
-| ``poll-timeout``  | 10000         | Server poll timeout in milliseconds                                 |
-+-------------------+---------------+---------------------------------------------------------------------+
-
-Example: ::
-
-  set option --name verbose --value true
-  set option --name poll-timeout --value 20000
-
-Show Command
-------------
-
-The show command displays various information as described below.
-
-Available functions:
-
-+----------------+--------------------------------------------------------------------------------------------------------+
-| Function       | Description                                                                                            |
-+================+========================================================================================================+
-| ``server``     | Display connection information to the sqoop server (host, port, webapp)                                |
-+----------------+--------------------------------------------------------------------------------------------------------+
-| ``option``     | Display various client side options                                                                    |
-+----------------+--------------------------------------------------------------------------------------------------------+
-| ``version``    | Show client build version, with an option -all it shows server build version and supported api versions|
-+----------------+--------------------------------------------------------------------------------------------------------+
-| ``connector``  | Show connector configurable and its related configs                                                    |
-+----------------+--------------------------------------------------------------------------------------------------------+
-| ``driver``     | Show driver configurable and its related configs                                                       |
-+----------------+--------------------------------------------------------------------------------------------------------+
-| ``link``       | Show links in sqoop                                                                                    |
-+----------------+--------------------------------------------------------------------------------------------------------+
-| ``job``        | Show jobs in sqoop                                                                                     |
-+----------------+--------------------------------------------------------------------------------------------------------+
-
-Show Server Function
-~~~~~~~~~~~~~~~~~~~~
-
-Show details about connection to Sqoop server.
-
-+-----------------------+--------------------------------------------------------------+
-| Argument              |  Description                                                 |
-+=======================+==============================================================+
-| ``-a``, ``--all``     | Show all connection related information (host, port, webapp) |
-+-----------------------+--------------------------------------------------------------+
-| ``-h``, ``--host``    | Show host                                                    |
-+-----------------------+--------------------------------------------------------------+
-| ``-p``, ``--port``    | Show port                                                    |
-+-----------------------+--------------------------------------------------------------+
-| ``-w``, ``--webapp``  | Show web application name                                    |
-+-----------------------+--------------------------------------------------------------+
-
-Example: ::
-
-  show server --all
-
-Show Option Function
-~~~~~~~~~~~~~~~~~~~~
-
-Show values of various client side options. This function will show all client options when called without arguments.
-
-+-----------------------+--------------------------------------------------------------+
-| Argument              |  Description                                                 |
-+=======================+==============================================================+
-| ``-n``, ``--name``    | Show client option value with given name                     |
-+-----------------------+--------------------------------------------------------------+
-
-Please check table in `Set Option Function`_ section to get a list of all supported option names.
-
-Example: ::
-
-  show option --name verbose
-
-Show Version Function
-~~~~~~~~~~~~~~~~~~~~~
-
-Show the build versions of both the client and the server, as well as the supported REST API versions.
-
-+------------------------+-----------------------------------------------+
-| Argument               |  Description                                  |
-+========================+===============================================+
-| ``-a``, ``--all``      | Show all versions (server, client, api)       |
-+------------------------+-----------------------------------------------+
-| ``-c``, ``--client``   | Show client build version                     |
-+------------------------+-----------------------------------------------+
-| ``-s``, ``--server``   | Show server build version                     |
-+------------------------+-----------------------------------------------+
-| ``-p``, ``--api``      | Show supported api versions                   |
-+------------------------+-----------------------------------------------+
-
-Example: ::
-
-  show version --all
-
-Show Connector Function
-~~~~~~~~~~~~~~~~~~~~~~~
-
-Show persisted connector configurable and its related configs used in creating associated link and job objects
-
-+-----------------------+------------------------------------------------+
-| Argument              |  Description                                   |
-+=======================+================================================+
-| ``-a``, ``--all``     | Show information for all connectors            |
-+-----------------------+------------------------------------------------+
-| ``-c``, ``--cid <x>`` | Show information for connector with id ``<x>`` |
-+-----------------------+------------------------------------------------+
-
-Example: ::
-
-  show connector --all or show connector
-
-Show Driver Function
-~~~~~~~~~~~~~~~~~~~~
-
-Show persisted driver configurable and its related configs used in creating job objects
-
-This function does not have any extra arguments. There is only one registered driver in sqoop.
-
-Example: ::
-
-  show driver
-
-Show Link Function
-~~~~~~~~~~~~~~~~~~
-
-Show persisted link objects.
-
-+-----------------------+------------------------------------------------------+
-| Argument              |  Description                                         |
-+=======================+======================================================+
-| ``-a``, ``--all``     | Show all available links                             |
-+-----------------------+------------------------------------------------------+
-| ``-x``, ``--lid <x>`` | Show link with id ``<x>``                            |
-+-----------------------+------------------------------------------------------+
-
-Example: ::
-
-  show link --all or show link
-
-Show Job Function
-~~~~~~~~~~~~~~~~~
-
-Show persisted job objects.
-
-+-----------------------+----------------------------------------------+
-| Argument              |  Description                                 |
-+=======================+==============================================+
-| ``-a``, ``--all``     | Show all available jobs                      |
-+-----------------------+----------------------------------------------+
-| ``-j``, ``--jid <x>`` | Show job with id ``<x>``                     |
-+-----------------------+----------------------------------------------+
-
-Example: ::
-
-  show job --all or show job
-
-Show Submission Function
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-Show persisted job submission objects.
-
-+-----------------------+---------------------------------------------+
-| Argument              |  Description                                |
-+=======================+=============================================+
-| ``-j``, ``--jid <x>`` | Show available submissions for given job    |
-+-----------------------+---------------------------------------------+
-| ``-d``, ``--detail``  | Show job submissions in full details        |
-+-----------------------+---------------------------------------------+
-
-Example: ::
-
-  show submission
-  show submission --jid 1
-  show submission --jid 1 --detail
-
-Create Command
---------------
-
-Creates new link and job objects. This command is supported only in interactive mode. It will ask the user to enter the link config when creating a link object, and the from/to and driver job configs when creating a job object.
-
-Available functions:
-
-+----------------+-------------------------------------------------+
-| Function       | Description                                     |
-+================+=================================================+
-| ``link``       | Create new link object                          |
-+----------------+-------------------------------------------------+
-| ``job``        | Create new job object                           |
-+----------------+-------------------------------------------------+
-
-Create Link Function
-~~~~~~~~~~~~~~~~~~~~
-
-Create new link object.
-
-+------------------------+-------------------------------------------------------------+
-| Argument               |  Description                                                |
-+========================+=============================================================+
-| ``-c``, ``--cid <x>``  |  Create new link object for connector with id ``<x>``       |
-+------------------------+-------------------------------------------------------------+
-
-
-Example: ::
-
-  create link --cid 1 or create link -c 1
-
-Create Job Function
-~~~~~~~~~~~~~~~~~~~
-
-Create new job object.
-
-+------------------------+------------------------------------------------------------------+
-| Argument               |  Description                                                     |
-+========================+==================================================================+
-| ``-f``, ``--from <x>`` | Create new job object with a FROM link with id ``<x>``           |
-+------------------------+------------------------------------------------------------------+
-| ``-t``, ``--to <x>``   | Create new job object with a TO link with id ``<x>``             |
-+------------------------+------------------------------------------------------------------+
-
-Example: ::
-
-  create job --from 1 --to 2 or create job -f 1 -t 2
-
-Update Command
---------------
-
-The update command allows you to edit link and job objects. This command is supported only in interactive mode.
-
-Update Link Function
-~~~~~~~~~~~~~~~~~~~~
-
-Update existing link object.
-
-+-----------------------+---------------------------------------------+
-| Argument              |  Description                                |
-+=======================+=============================================+
-| ``-x``, ``--lid <x>`` |  Update existing link with id ``<x>``       |
-+-----------------------+---------------------------------------------+
-
-Example: ::
-
-  update link --lid 1
-
-Update Job Function
-~~~~~~~~~~~~~~~~~~~
-
-Update existing job object.
-
-+-----------------------+--------------------------------------------+
-| Argument              |  Description                               |
-+=======================+============================================+
-| ``-j``, ``--jid <x>`` | Update existing job object with id ``<x>`` |
-+-----------------------+--------------------------------------------+
-
-Example: ::
-
-  update job --jid 1
-
-
-Delete Command
---------------
-
-Deletes link and job objects from the Sqoop server.
-
-Delete Link Function
-~~~~~~~~~~~~~~~~~~~~
-
-Delete existing link object.
-
-+-----------------------+-------------------------------------------+
-| Argument              |  Description                              |
-+=======================+===========================================+
-| ``-x``, ``--lid <x>`` |  Delete link object with id ``<x>``       |
-+-----------------------+-------------------------------------------+
-
-Example: ::
-
-  delete link --lid 1
-
-
-Delete Job Function
-~~~~~~~~~~~~~~~~~~~
-
-Delete existing job object.
-
-+-----------------------+------------------------------------------+
-| Argument              |  Description                             |
-+=======================+==========================================+
-| ``-j``, ``--jid <x>`` | Delete job object with id ``<x>``        |
-+-----------------------+------------------------------------------+
-
-Example: ::
-
-  delete job --jid 1
-
-
-Clone Command
--------------
-
-The clone command loads an existing link or job object from the Sqoop server and lets the user update it in place, resulting in the creation of a new link or job object. This command is not supported in batch mode.
-
-Clone Link Function
-~~~~~~~~~~~~~~~~~~~
-
-Clone existing link object.
-
-+-----------------------+------------------------------------------+
-| Argument              |  Description                             |
-+=======================+==========================================+
-| ``-x``, ``--lid <x>`` |  Clone link object with id ``<x>``       |
-+-----------------------+------------------------------------------+
-
-Example: ::
-
-  clone link --lid 1
-
-
-Clone Job Function
-~~~~~~~~~~~~~~~~~~
-
-Clone existing job object.
-
-+-----------------------+------------------------------------------+
-| Argument              |  Description                             |
-+=======================+==========================================+
-| ``-j``, ``--jid <x>`` | Clone job object with id ``<x>``         |
-+-----------------------+------------------------------------------+
-
-Example: ::
-
-  clone job --jid 1
-
-Start Command
--------------
-
-The start command begins execution of an existing Sqoop job.
-
-Start Job Function
-~~~~~~~~~~~~~~~~~~
-
-Start a job (submit a new submission). Starting an already running job is considered an invalid operation.
-
-+----------------------------+----------------------------+
-| Argument                   |  Description               |
-+============================+============================+
-| ``-j``, ``--jid <x>``      | Start job with id ``<x>``  |
-+----------------------------+----------------------------+
-| ``-s``, ``--synchronous``  | Synchronous job execution  |
-+----------------------------+----------------------------+
-
-Example: ::
-
-  start job --jid 1
-  start job --jid 1 --synchronous
-
-Stop Command
-------------
-
-The stop command interrupts a job execution.
-
-Stop Job Function
-~~~~~~~~~~~~~~~~~
-
-Interrupt running job.
-
-+-----------------------+------------------------------------------+
-| Argument              |  Description                             |
-+=======================+==========================================+
-| ``-j``, ``--jid <x>`` | Interrupt running job with id ``<x>``    |
-+-----------------------+------------------------------------------+
-
-Example: ::
-
-  stop job --jid 1
-
-Status Command
---------------
-
-The status command retrieves the last status of a job.
-
-Status Job Function
-~~~~~~~~~~~~~~~~~~~
-
-Retrieve last status for given job.
-
-+-----------------------+------------------------------------------+
-| Argument              |  Description                             |
-+=======================+==========================================+
-| ``-j``, ``--jid <x>`` | Retrieve status for job with id ``<x>``  |
-+-----------------------+------------------------------------------+
-
-Example: ::
-
-  status job --jid 1
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/Connector-FTP.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/Connector-FTP.rst b/docs/src/site/sphinx/Connector-FTP.rst
deleted file mode 100644
index cc10d68..0000000
--- a/docs/src/site/sphinx/Connector-FTP.rst
+++ /dev/null
@@ -1,81 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-==================
-FTP Connector
-==================
-
-The FTP connector supports moving data between an FTP server and other supported Sqoop2 connectors.
-
-Currently only the TO direction is supported to write records to an FTP server. A FROM connector is pending (SQOOP-2127).
-
-.. contents::
-   :depth: 3
-
------
-Usage
------
-
-To use the FTP Connector, create a link for the connector and a job that uses the link.
-
-**Link Configuration**
-++++++++++++++++++++++
-
-Inputs associated with the link configuration include:
-
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-| Input                       | Type    | Description                                                           | Example                    |
-+=============================+=========+=======================================================================+============================+
-| FTP server hostname         | String  | Hostname for the FTP server.                                          | ftp.example.com            |
-|                             |         | *Required*.                                                           |                            |
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-| FTP server port             | Integer | Port number for the FTP server. Defaults to 21.                       | 2100                       |
-|                             |         | *Optional*.                                                           |                            |
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-| Username                    | String  | The username to provide when connecting to the FTP server.            | sqoop                      |
-|                             |         | *Required*.                                                           |                            |
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-| Password                    | String  | The password to provide when connecting to the FTP server.            | sqoop                      |
-|                             |         | *Required*                                                            |                            |
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-
-**Notes**
-=========
-
-1. The FTP connector will attempt to connect to the FTP server as part of the link validation process. If for some reason a connection cannot be established, you'll see a corresponding warning message.
-
-**TO Job Configuration**
-++++++++++++++++++++++++
-
-Inputs associated with the Job configuration for the TO direction include:
-
-+-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
-| Input                       | Type    | Description                                                             | Example                           |
-+=============================+=========+=========================================================================+===================================+
-| Output directory            | String  | The location on the FTP server that the connector will write files to.  | uploads                           |
-|                             |         | *Required*                                                              |                                   |
-+-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
-
-**Notes**
-=========
-
-1. The *output directory* value needs to be an existing directory on the FTP server.
-
-------
-Loader
-------
-
-During the *loading* phase, the connector will create uniquely named files in the *output directory* for each partition of data received from the **FROM** connector.

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/Connector-GenericJDBC.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/Connector-GenericJDBC.rst b/docs/src/site/sphinx/Connector-GenericJDBC.rst
deleted file mode 100644
index 347547d..0000000
--- a/docs/src/site/sphinx/Connector-GenericJDBC.rst
+++ /dev/null
@@ -1,194 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-======================
-Generic JDBC Connector
-======================
-
-The Generic JDBC Connector can connect to any data source that adheres to the **JDBC 4** specification.
-
-.. contents::
-   :depth: 3
-
------
-Usage
------
-
-To use the Generic JDBC Connector, create a link for the connector and a job that uses the link.
-
-**Link Configuration**
-++++++++++++++++++++++
-
-Inputs associated with the link configuration include:
-
-+-----------------------------+---------+-----------------------------------------------------------------------+------------------------------------------+
-| Input                       | Type    | Description                                                           | Example                                  |
-+=============================+=========+=======================================================================+==========================================+
-| JDBC Driver Class           | String  | The full class name of the JDBC driver.                               | com.mysql.jdbc.Driver                    |
-|                             |         | *Required* and accessible by the Sqoop server.                        |                                          |
-+-----------------------------+---------+-----------------------------------------------------------------------+------------------------------------------+
-| JDBC Connection String      | String  | The JDBC connection string to use when connecting to the data source. | jdbc:mysql://localhost/test              |
-|                             |         | *Required*. Connectivity upon creation is optional.                   |                                          |
-+-----------------------------+---------+-----------------------------------------------------------------------+------------------------------------------+
-| Username                    | String  | The username to provide when connecting to the data source.           | sqoop                                    |
-|                             |         | *Optional*. Connectivity upon creation is optional.                   |                                          |
-+-----------------------------+---------+-----------------------------------------------------------------------+------------------------------------------+
-| Password                    | String  | The password to provide when connecting to the data source.           | sqoop                                    |
-|                             |         | *Optional*. Connectivity upon creation is optional.                   |                                          |
-+-----------------------------+---------+-----------------------------------------------------------------------+------------------------------------------+
-| JDBC Connection Properties  | Map     | A map of JDBC connection properties to pass to the JDBC driver.       | profileSQL=true&useFastDateParsing=false |
-|                             |         | *Optional*.                                                           |                                          |
-+-----------------------------+---------+-----------------------------------------------------------------------+------------------------------------------+
-
-**FROM Job Configuration**
-++++++++++++++++++++++++++
-
-Inputs associated with the Job configuration for the FROM direction include:
-
-+-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
-| Input                       | Type    | Description                                                             | Example                                     |
-+=============================+=========+=========================================================================+=============================================+
-| Schema name                 | String  | The schema name the table is part of.                                   | sqoop                                       |
-|                             |         | *Optional*                                                              |                                             |
-+-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
-| Table name                  | String  | The table name to import data from.                                     | test                                        |
-|                             |         | *Optional*. See note below.                                             |                                             |
-+-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
-| Table SQL statement         | String  | The SQL statement used to perform a **free form query**.                | ``SELECT COUNT(*) FROM test ${CONDITIONS}`` |
-|                             |         | *Optional*. See notes below.                                            |                                             |
-+-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
-| Table column names          | String  | Columns to extract from the JDBC data source.                           | col1,col2                                   |
-|                             |         | *Optional* Comma separated list of columns.                             |                                             |
-+-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
-| Partition column name       | String  | The column name used to partition the data transfer process.            | col1                                        |
-|                             |         | *Optional*. Defaults to the first column of the table's primary key.    |                                             |
-+-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
-| Null value allowed for      | Boolean | True or false depending on whether NULL values are allowed in data      | true                                        |
-| the partition column        |         | of the Partition column. *Optional*.                                    |                                             |
-+-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
-| Boundary query              | String  | The query used to define an upper and lower boundary when partitioning. |                                             |
-|                             |         | *Optional*.                                                             |                                             |
-+-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
-
-**Notes**
-=========
-
-1. *Table name* and *Table SQL statement* are mutually exclusive. If *Table name* is provided, the *Table SQL statement* should not be provided. If *Table SQL statement* is provided then *Table name* should not be provided.
-2. *Table column names* should be provided only if *Table name* is provided.
-3. If there are columns with similar names, column aliases are required. For example: ``SELECT table1.id as "i", table2.id as "j" FROM table1 INNER JOIN table2 ON table1.id = table2.id``.
-
-**TO Job Configuration**
-++++++++++++++++++++++++
-
-Inputs associated with the Job configuration for the TO direction include:
-
-+-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
-| Input                       | Type    | Description                                                             | Example                                         |
-+=============================+=========+=========================================================================+=================================================+
-| Schema name                 | String  | The schema name the table is part of.                                   | sqoop                                           |
-|                             |         | *Optional*                                                              |                                                 |
-+-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
-| Table name                  | String  | The table name to export data into.                                     | test                                            |
-|                             |         | *Optional*. See note below.                                             |                                                 |
-+-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
-| Table SQL statement         | String  | The SQL statement used to perform a **free form query**.                | ``INSERT INTO test (col1, col2) VALUES (?, ?)`` |
-|                             |         | *Optional*. See note below.                                             |                                                 |
-+-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
-| Table column names          | String  | Columns to insert into the JDBC data source.                            | col1,col2                                       |
-|                             |         | *Optional* Comma separated list of columns.                             |                                                 |
-+-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
-| Stage table name            | String  | The name of the table used as a *staging table*.                        | staging                                         |
-|                             |         | *Optional*.                                                             |                                                 |
-+-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
-| Should clear stage table    | Boolean | True or false depending on whether the staging table should be cleared  | true                                            |
-|                             |         | after the data transfer has finished. *Optional*.                       |                                                 |
-+-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
-
-**Notes**
-=========
-
-1. *Table name* and *Table SQL statement* are mutually exclusive. If *Table name* is provided, the *Table SQL statement* should not be provided. If *Table SQL statement* is provided then *Table name* should not be provided.
-2. *Table column names* should be provided only if *Table name* is provided.
-
------------
-Partitioner
------------
-
-The Generic JDBC Connector partitioner generates conditions to be used by the extractor.
-How it partitions the data transfer depends on the partition column's data type.
-However, each strategy roughly takes the following form:
-::
-
-  (upper boundary - lower boundary) / (max partitions)
-
-By default, the *primary key* will be used to partition the data unless otherwise specified.
-
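-For example, assuming an integer partition column ``col1`` ranging from 0 to 1000 and a maximum of 10 partitions, each partition covers roughly 100 values and the generated conditions would look similar to the following (a simplified illustration, not the exact SQL emitted by the connector)::
-
-  col1 >= 0   AND col1 < 100
-  col1 >= 100 AND col1 < 200
-  ...
-  col1 >= 900 AND col1 <= 1000
-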
-The following data types are currently supported:
-
-1. TINYINT
-2. SMALLINT
-3. INTEGER
-4. BIGINT
-5. REAL
-6. FLOAT
-7. DOUBLE
-8. NUMERIC
-9. DECIMAL
-10. BIT
-11. BOOLEAN
-12. DATE
-13. TIME
-14. TIMESTAMP
-15. CHAR
-16. VARCHAR
-17. LONGVARCHAR
-
----------
-Extractor
----------
-
-During the *extraction* phase, the JDBC data source is queried using SQL. This SQL will vary based on your configuration.
-
-- If *Table name* is provided, then the SQL statement generated will take on the form ``SELECT * FROM <table name>``.
-- If *Table name* and *Columns* are provided, then the SQL statement generated will take on the form ``SELECT <columns> FROM <table name>``.
-- If *Table SQL statement* is provided, then the provided SQL statement will be used.
-
-The conditions generated by the *partitioner* are appended to the end of the SQL query to query a section of data.
-
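-For instance, with *Table name* set to ``test`` and a partitioner condition of ``col1 >= 0 AND col1 < 100``, one extractor might issue a query along the lines of the following (illustrative only)::
-
-  SELECT * FROM test WHERE col1 >= 0 AND col1 < 100
-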
-The Generic JDBC connector extracts CSV data usable by the *CSV Intermediate Data Format*.
-
-------
-Loader
-------
-
-During the *loading* phase, the JDBC data source is queried using SQL. This SQL will vary based on your configuration.
-
-- If *Table name* is provided, then the SQL statement generated will take on the form ``INSERT INTO <table name> (col1, col2, ...) VALUES (?,?,..)``.
-- If *Table name* and *Columns* are provided, then the SQL statement generated will take on the form ``INSERT INTO <table name> (<columns>) VALUES (?,?,..)``.
-- If *Table SQL statement* is provided, then the provided SQL statement will be used.
-
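-For example, with *Table name* set to ``test`` and *Table column names* set to ``col1,col2``, each loader would repeatedly execute a prepared statement along the following lines, binding one received record per execution (illustrative only)::
-
-  INSERT INTO test (col1, col2) VALUES (?, ?)
-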
-This connector expects to receive CSV data consumable by the *CSV Intermediate Data Format*.
-
-----------
-Destroyers
-----------
-
-The Generic JDBC Connector performs two operations in the destroyer in the TO direction:
-
-1. Copy the contents of the staging table to the desired table.
-2. Clear the staging table.
-
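-Assuming a *stage table* named ``staging`` and a target table named ``test``, these two steps roughly correspond to SQL such as the following (illustrative only; the exact statements depend on the database)::
-
-  INSERT INTO test SELECT * FROM staging
-  DELETE FROM staging
-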
-No operations are performed in the FROM direction.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/Connector-HDFS.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/Connector-HDFS.rst b/docs/src/site/sphinx/Connector-HDFS.rst
deleted file mode 100644
index c44b1b6..0000000
--- a/docs/src/site/sphinx/Connector-HDFS.rst
+++ /dev/null
@@ -1,159 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-==============
-HDFS Connector
-==============
-
-.. contents::
-   :depth: 3
-
------
-Usage
------
-
-To use the HDFS Connector, create a link for the connector and a job that uses the link.
-
-**Link Configuration**
-++++++++++++++++++++++
-
-Inputs associated with the link configuration include:
-
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-| Input                       | Type    | Description                                                           | Example                    |
-+=============================+=========+=======================================================================+============================+
-| URI                         | String  | The URI of the HDFS File System.                                      | hdfs://example.com:8020/   |
-|                             |         | *Optional*. See note below.                                           |                            |
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-| Configuration directory     | String  | Path to the cluster's configuration directory.                        | /etc/conf/hadoop           |
-|                             |         | *Optional*.                                                           |                            |
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-
-**Notes**
-=========
-
-1. The specified URI will override the declared URI in your configuration.
-
-**FROM Job Configuration**
-++++++++++++++++++++++++++
-
-Inputs associated with the Job configuration for the FROM direction include:
-
-+-----------------------------+---------+-------------------------------------------------------------------------+------------------+
-| Input                       | Type    | Description                                                             | Example          |
-+=============================+=========+=========================================================================+==================+
-| Input directory             | String  | The location in HDFS that the connector should look for files in.       | /tmp/sqoop2/hdfs |
-|                             |         | *Required*. See note below.                                             |                  |
-+-----------------------------+---------+-------------------------------------------------------------------------+------------------+
-| Null value                  | String  | The value of NULL in the contents of each file extracted.               | \N               |
-|                             |         | *Optional*. See note below.                                             |                  |
-+-----------------------------+---------+-------------------------------------------------------------------------+------------------+
-| Override null value         | Boolean | Tells the connector to replace the specified NULL value.                | true             |
-|                             |         | *Optional*. See note below.                                             |                  |
-+-----------------------------+---------+-------------------------------------------------------------------------+------------------+
-
-**Notes**
-=========
-
-1. All files in *Input directory* will be extracted.
-2. *Null value* and *override null value* should be used in conjunction. If *override null value* is not set to true, then *null value* will not be used when extracting data.
-
-**TO Job Configuration**
-++++++++++++++++++++++++
-
-Inputs associated with the Job configuration for the TO direction include:
-
-+-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
-| Input                       | Type    | Description                                                             | Example                           |
-+=============================+=========+=========================================================================+===================================+
-| Output directory            | String  | The location in HDFS that the connector will load files to.             | /tmp/sqoop2/hdfs                  |
-|                             |         | *Optional*                                                              |                                   |
-+-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
-| Output format               | Enum    | The format to output data to.                                           | CSV                               |
-|                             |         | *Optional*. See note below.                                             |                                   |
-+-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
-| Compression                 | Enum    | Compression class.                                                      | GZIP                              |
-|                             |         | *Optional*. See note below.                                             |                                   |
-+-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
-| Custom compression          | String  | Custom compression class.                                               | org.apache.sqoop.SqoopCompression |
-|                             |         | *Optional*.                                                             |                                   |
-+-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
-| Null value                  | String  | The value of NULL in the contents of each file loaded.                  | \N                                |
-|                             |         | *Optional*. See note below.                                             |                                   |
-+-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
-| Override null value         | Boolean | Tells the connector to replace the specified NULL value.                | true                              |
-|                             |         | *Optional*. See note below.                                             |                                   |
-+-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
-| Append mode                 | Boolean | Append to an existing output directory.                                 | true                              |
-|                             |         | *Optional*.                                                             |                                   |
-+-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
-
-**Notes**
-=========
-
-1. *Output format* only supports CSV at the moment.
-2. *Compression* supports all Hadoop compression classes.
-3. *Null value* and *override null value* should be used in conjunction. If *override null value* is not set to true, then *null value* will not be used when loading data.
-
------------
-Partitioner
------------
-
-The HDFS Connector partitioner partitions based on the total number of blocks in all files in the specified input directory.
-The partitioner attempts to place blocks in splits according to the *node* and *rack* on which they reside.
-
----------
-Extractor
----------
-
-During the *extraction* phase, the FileSystem API is used to query files from HDFS. The HDFS cluster used is the one defined by:
-
-1. The HDFS URI in the link configuration
-2. The Hadoop configuration in the link configuration
-3. The Hadoop configuration used by the execution framework
-
-The format of the data must be CSV. The NULL value in the CSV can be chosen via *null value*. For example::
-
-    1,\N
-    2,null
-    3,NULL
-
-In the above example, if *null value* is set to \N, then only the first row's NULL value will be inferred.
-
-------
-Loader
-------
-
-During the *loading* phase, HDFS is written to via the FileSystem API. The number of files created is equal to the number of loads that run. The format of the data currently can only be CSV. The NULL value in the CSV can be chosen via *null value*. For example:
-
-+--------------+-------+
-| Id           | Value |
-+==============+=======+
-| 1            | NULL  |
-+--------------+-------+
-| 2            | value |
-+--------------+-------+
-
-If *null value* is set to \N, then here is what the data will look like in HDFS::
-
-    1,\N
-    2,value
-
-----------
-Destroyers
-----------
-
-The HDFS TO destroyer moves all created files to the proper output directory.

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/Connector-Kafka.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/Connector-Kafka.rst b/docs/src/site/sphinx/Connector-Kafka.rst
deleted file mode 100644
index b6bca14..0000000
--- a/docs/src/site/sphinx/Connector-Kafka.rst
+++ /dev/null
@@ -1,64 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-===============
-Kafka Connector
-===============
-
-Currently, only the TO direction is supported.
-
-.. contents::
-   :depth: 3
-
------
-Usage
------
-
-To use the Kafka Connector, create a link for the connector and a job that uses the link.
-
-**Link Configuration**
-++++++++++++++++++++++
-
-Inputs associated with the link configuration include:
-
-+----------------------+---------+-----------------------------------------------------------+-------------------------------------+
-| Input                | Type    | Description                                               | Example                             |
-+======================+=========+===========================================================+=====================================+
-| Broker list          | String  | Comma-separated list of Kafka brokers.                    | example.com:10000,example.com:11000 |
-|                      |         | *Required*.                                               |                                     |
-+----------------------+---------+-----------------------------------------------------------+-------------------------------------+
-| Zookeeper connection | String  | Comma-separated list of ZooKeeper servers in your quorum. | example.com:2181,example.com:2182   |
-|                      |         | *Required*.                                               |                                     |
-+----------------------+---------+-----------------------------------------------------------+-------------------------------------+
-
-**TO Job Configuration**
-++++++++++++++++++++++++
-
-Inputs associated with the Job configuration for the TO direction include:
-
-+-------+---------+---------------------------------+----------+
-| Input | Type    | Description                     | Example  |
-+=======+=========+=================================+==========+
-| topic | String  | The Kafka topic to transfer to. | mytopic  |
-|       |         | *Required*.                     |          |
-+-------+---------+---------------------------------+----------+
-
-------
-Loader
-------
-
-During the *loading* phase, Kafka is written to directly from each loader. The order in which data is loaded into Kafka is not guaranteed.
-

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/Connector-Kite.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/Connector-Kite.rst b/docs/src/site/sphinx/Connector-Kite.rst
deleted file mode 100644
index 414ad8a..0000000
--- a/docs/src/site/sphinx/Connector-Kite.rst
+++ /dev/null
@@ -1,110 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-==============
-Kite Connector
-==============
-
-.. contents::
-   :depth: 3
-
------
-Usage
------
-
-To use the Kite Connector, create a link for the connector and a job that uses the link. For more information on Kite, check out the Kite documentation: http://kitesdk.org/docs/1.0.0/Kite-SDK-Guide.html.
-
-**Link Configuration**
-++++++++++++++++++++++
-
-Inputs associated with the link configuration include:
-
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-| Input                       | Type    | Description                                                           | Example                    |
-+=============================+=========+=======================================================================+============================+
-| authority                   | String  | The authority of the kite dataset.                                    | hdfs://example.com:8020/   |
-|                             |         | *Optional*. See note below.                                           |                            |
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-
-**Notes**
-=========
-
-1. The authority is useful for specifying the Hive metastore or HDFS URI.
-
-**FROM Job Configuration**
-++++++++++++++++++++++++++
-
-Inputs associated with the Job configuration for the FROM direction include:
-
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-| Input                       | Type    | Description                                                           | Example                    |
-+=============================+=========+=======================================================================+============================+
-| URI                         | String  | The Kite dataset URI to use.                                          | dataset:hdfs:/tmp/ns/ds    |
-|                             |         | *Required*. See notes below.                                          |                            |
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-
-**Notes**
-=========
-
-1. The URI and the authority from the link configuration will be merged to create a complete dataset URI internally; see the example below. If the given dataset URI contains an authority, the authority from the link configuration will be ignored.
-2. Only *hdfs* and *hive* are supported currently.
-
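-For instance, with the link *authority* set to ``example.com:8020`` and the job *URI* set to ``dataset:hdfs:/tmp/ns/ds``, the merged dataset URI would look roughly like ``dataset:hdfs://example.com:8020/tmp/ns/ds`` (an illustration of the merge, not output taken from the connector).
-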
-**TO Job Configuration**
-++++++++++++++++++++++++
-
-Inputs associated with the Job configuration for the TO direction include:
-
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-| Input                       | Type    | Description                                                           | Example                    |
-+=============================+=========+=======================================================================+============================+
-| URI                         | String  | The Kite dataset URI to use.                                          | dataset:hdfs:/tmp/ns/ds    |
-|                             |         | *Required*. See note below.                                           |                            |
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-| File format                 | Enum    | The format of the data the kite dataset should write out.             | PARQUET                    |
-|                             |         | *Optional*. See note below.                                           |                            |
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-
-**Notes**
-=========
-
-1. The URI and the authority from the link configuration will be merged to create a complete dataset URI internally. If the given dataset URI contains an authority, the authority from the link configuration will be ignored.
-2. Only *hdfs* and *hive* are supported currently.
-
------------
-Partitioner
------------
-
-The Kite connector currently creates only one partition.
-
----------
-Extractor
----------
-
-During the *extraction* phase, Kite is used to query a dataset. Since there is only one dataset to query, only a single reader is created to read the dataset.
-
-**NOTE**: The Avro schema Kite generates will be slightly different from the original schema, because Avro identifiers have strict naming requirements.
-
-------
-Loader
-------
-
-During the *loading* phase, Kite is used to write several temporary datasets. The number of temporary datasets is equivalent to the number of *loaders* that are being used.
-
-----------
-Destroyers
-----------
-
-The Kite connector TO destroyer merges all the temporary datasets into a single dataset.
\ No newline at end of file


[7/8] sqoop git commit: SQOOP-2694: Sqoop2: Doc: Register structure in sphinx for our docs (Jarek Jarcec Cecho via Kate Ting)

Posted by ka...@apache.org.
http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/Connector-SFTP.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/Connector-SFTP.rst b/docs/src/site/sphinx/Connector-SFTP.rst
deleted file mode 100644
index d25ea3f..0000000
--- a/docs/src/site/sphinx/Connector-SFTP.rst
+++ /dev/null
@@ -1,91 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-==============
-SFTP Connector
-==============
-
-The SFTP connector supports moving data between a Secure File Transfer Protocol (SFTP) server and other supported Sqoop2 connectors.
-
-Currently only the TO direction is supported to write records to an SFTP server. A FROM connector is pending (SQOOP-2218).
-
-.. contents::
-   :depth: 3
-
------
-Usage
------
-
-Before executing a Sqoop2 job with the SFTP connector, set **mapreduce.task.classpath.user.precedence** to true in the Hadoop cluster config, for example::
-
-    <property>
-      <name>mapreduce.task.classpath.user.precedence</name>
-      <value>true</value>
-    </property>
-
-This is required since the SFTP connector uses the JSch library (http://www.jcraft.com/jsch/) to provide SFTP functionality. Unfortunately Hadoop currently ships with an earlier version of this library which causes an issue with some SFTP servers. Setting this property ensures that the current version of the library packaged with this connector will appear first in the classpath.
-
-To use the SFTP Connector, create a link for the connector and a job that uses the link.
-
-**Link Configuration**
-++++++++++++++++++++++
-
-Inputs associated with the link configuration include:
-
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-| Input                       | Type    | Description                                                           | Example                    |
-+=============================+=========+=======================================================================+============================+
-| SFTP server hostname        | String  | Hostname for the SFTP server.                                         | sftp.example.com           |
-|                             |         | *Required*.                                                           |                            |
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-| SFTP server port            | Integer | Port number for the SFTP server. Defaults to 22.                      | 2220                       |
-|                             |         | *Optional*.                                                           |                            |
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-| Username                    | String  | The username to provide when connecting to the SFTP server.           | sqoop                      |
-|                             |         | *Required*.                                                           |                            |
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-| Password                    | String  | The password to provide when connecting to the SFTP server.           | sqoop                      |
-|                             |         | *Required*                                                            |                            |
-+-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
-
-**Notes**
-=========
-
-1. The SFTP connector will attempt to connect to the SFTP server as part of the link validation process. If for some reason a connection cannot be established, you'll see a corresponding error message.
-2. Note that during connection, the SFTP connector explicitly disables *StrictHostKeyChecking* to avoid "UnknownHostKey" errors.
-
-**TO Job Configuration**
-++++++++++++++++++++++++
-
-Inputs associated with the Job configuration for the TO direction include:
-
-+-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
-| Input                       | Type    | Description                                                             | Example                           |
-+=============================+=========+=========================================================================+===================================+
-| Output directory            | String  | The location on the SFTP server that the connector will write files to. | uploads                           |
-|                             |         | *Required*                                                              |                                   |
-+-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
-
-**Notes**
-=========
-
-1. The *output directory* value needs to be an existing directory on the SFTP server.
-
-------
-Loader
-------
-
-During the *loading* phase, the connector will create uniquely named files in the *output directory* for each partition of data received from the **FROM** connector.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/ConnectorDevelopment.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/ConnectorDevelopment.rst b/docs/src/site/sphinx/ConnectorDevelopment.rst
deleted file mode 100644
index 0e8ea92..0000000
--- a/docs/src/site/sphinx/ConnectorDevelopment.rst
+++ /dev/null
@@ -1,595 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-=============================
-Sqoop 2 Connector Development
-=============================
-
-This document describes how to implement a connector in Sqoop 2 using the code sample from one of the built-in connectors ( ``GenericJdbcConnector`` ) as a reference. Sqoop 2 jobs support extraction from and/or loading to different data sources. Sqoop 2 connectors encapsulate the job lifecycle operations for extracting and/or loading data from and/or to
-different data sources. Each connector will primarily focus on a particular data source and its custom implementation for optimally reading and/or writing data in a distributed environment.
-
-.. contents::
-
-What is a Sqoop Connector?
-++++++++++++++++++++++++++
-
-Connectors provide the facility to interact with many data sources and thus can be used as a means to transfer data between them in Sqoop. The connector implementation will provide logic to read from and/or write to a data source that it represents. For instance, the ``GenericJdbcConnector`` encapsulates the logic to read from and/or write to JDBC-enabled relational data sources. The connector part that enables reading from a data source and transferring this data to the internal Sqoop format is called FROM, and the part that enables writing data to a data source by transferring data from the Sqoop format is called TO. In order to interact with these data sources, the connector will provide one or many config classes and input fields within it.
-
-Broadly we support two main config types for connectors, the link type represented by the enum ``ConfigType.LINK`` and the job type represented by the enum ``ConfigType.JOB``. Link config represents the properties to physically connect to the data source. Job config represents the properties that are required to invoke reading from and/or writing to a particular dataset in the data source it connects to. If a connector supports both reading from and writing to, it will provide the ``FromJobConfig`` and ``ToJobConfig`` objects. Each of these config objects is custom to each connector and can have one or more inputs associated with each of the Link, FromJob and ToJob config types. Hence we call the connectors configurables, i.e. entities that can provide configs for interacting with the data sources they represent. As the connectors evolve over time to support new features in their data sources, the configs and inputs will change as well. Thus the connector API also provides methods for upgrading the config and input names and data related to these data sources across different versions.
-
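-As a rough sketch, a job config class is a plain class annotated with the config annotations from the ``org.apache.sqoop.model`` package, exposing its inputs as public fields. The class and input below (a ``FromJobConfig`` with a single ``tableName`` input) are purely illustrative:
-::
-
-  // Illustrative only: a minimal FROM job config with a single String input.
-  @ConfigClass
-  public class FromJobConfig {
-
-    @Input(size = 255)
-    public String tableName;
-  }
-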
-The connectors implement logic for various stages of the extract/load process using the connector API described below. While extracting/reading data from the data source the main stages are ``Initializer``, ``Partitioner``, ``Extractor`` and ``Destroyer``. While loading/writing data to the data source the main stages currently supported are ``Initializer``, ``Loader`` and ``Destroyer``. Each stage has its unique set of responsibilities that are explained in detail below. Since connectors understand the internals of the data source they represent, they work in tandem with the Sqoop supported execution engines such as MapReduce or Spark (in the future) to accomplish this process in the most optimal way.
-
-When do we add a new connector?
-===============================
-You add a new connector when you need to extract/read data from a new data source, or load/write
-data into a new data source that is not supported yet in Sqoop 2.
-In addition to the connector API, Sqoop 2 also has a submission and execution engine interface.
-At the moment the only supported engine is MapReduce, but we may support additional engines in the future such as Spark. Since many parallel execution engines are capable of reading/writing data, there may be a question of whether adding support for a new data source should be done through the connector or the execution engine API.
-
-**Our guidelines are as follows:** Connectors should manage all data extraction (reading) from and/or loading (writing) into a data source. The submission and execution engines together manage the job submission and execution life cycle to read/write data from/to data sources in the most optimal way possible. If you need to support a new data store and the details of linking to it, and you do not care how the reading/writing process itself happens, then you are looking to add a connector, and you should continue reading the Connector API details below to contribute new connectors to Sqoop 2.
-
-
-Connector Implementation
-++++++++++++++++++++++++
-
-The ``SqoopConnector`` class defines an API for the connectors that must be implemented by the connector developers. Each Connector must extend ``SqoopConnector`` and override the methods shown below.
-::
-
-  public abstract String getVersion();
-  public abstract ResourceBundle getBundle(Locale locale);
-  public abstract Class getLinkConfigurationClass();
-  public abstract Class getJobConfigurationClass(Direction direction);
-  public abstract From getFrom();
-  public abstract To getTo();
-  public abstract ConnectorConfigurableUpgrader getConfigurableUpgrader(String oldConnectorVersion)
-
-Connectors can optionally override the following methods:
-::
-
-  public List<Direction> getSupportedDirections();
-  public Class<? extends IntermediateDataFormat<?>> getIntermediateDataFormat()
-
-The ``getVersion`` method returns the current version of the connector.
-It is important to provide a unique identifier every time a connector jar is released externally.
-In case of the Sqoop built-in connectors, the version refers to the Sqoop build/release version. External
-connectors can also use the same or a similar mechanism to set this version. The version number is critical for
-the connector upgrade logic used in Sqoop.
-
-::
-
-   @Override
-    public String getVersion() {
-     return VersionInfo.getBuildVersion();
-    }
-
-
-The ``getFrom`` method returns a From_ instance,
-which is a ``Transferable`` entity that encapsulates the operations
-needed to read from the data source that the connector represents.
-
-The ``getTo`` method returns a To_ instance,
-which is a ``Transferable`` entity that encapsulates the operations
-needed to write to the data source that the connector represents.
-
-Methods such as ``getBundle`` , ``getLinkConfigurationClass`` , ``getJobConfigurationClass``
-are related to `Configurations`_
-
-Since a connector represents a data source and can support one or both directions (reading FROM its data source and/or writing TO its data source), the ``getSupportedDirections`` method returns a list of directions that a connector will implement. This should be a subset of the values in the ``Direction`` enum we provide:
-::
-
-  public List<Direction> getSupportedDirections() {
-      return Arrays.asList(new Direction[]{
-          Direction.FROM,
-          Direction.TO
-      });
-  }
-
-
-From
-====
-
-The ``getFrom`` method returns From_ instance which is a ``Transferable`` entity that encapsulates the operations needed to read from the data source the connector represents. The built-in ``GenericJdbcConnector`` defines ``From`` like this.
-::
-
-  private static final From FROM = new From(
-        GenericJdbcFromInitializer.class,
-        GenericJdbcPartitioner.class,
-        GenericJdbcExtractor.class,
-        GenericJdbcFromDestroyer.class);
-  ...
-
-  @Override
-  public From getFrom() {
-    return FROM;
-  }
-
-Initializer and Destroyer
--------------------------
-.. _Initializer:
-.. _Destroyer:
-
-The ``Initializer`` is instantiated before the sqoop job is submitted to the execution engine and performs preparations such as connecting to the data source, creating temporary tables or adding dependent jar files. Initializers are executed as the first step in the sqoop job lifecycle. All interactions within an initializer are assumed to occur within a single thread, so state can be maintained between method calls (such as database connections). Here is the ``Initializer`` API.
-::
-
-  public abstract void initialize(InitializerContext context, LinkConfiguration linkConfiguration,
-      JobConfiguration jobConfiguration);
-
-  public List<String> getJars(InitializerContext context, LinkConfiguration linkConfiguration,
-      JobConfiguration jobConfiguration){
-       return new LinkedList<String>();
-      }
-
-  public Schema getSchema(InitializerContext context, LinkConfiguration linkConfiguration,
-      JobConfiguration jobConfiguration) {
-    return new NullSchema();
-  }
-
-In addition to the initialize() method where the job execution preparation activities occur, the ``Initializer`` can also implement the getSchema() method for the directions ``FROM`` and ``TO`` that it supports.
-
-The getSchema() method is used by the sqoop system to match the data extracted/read by the ``From`` instance of the connector data source with the data loaded/written to the ``To`` instance of the connector data source. In case of a relational database or columnar database, the returned Schema object will include a collection of columns with their data types. If the data source is schema-less, such as a file, a default ``NullSchema`` will be used (i.e., a Schema object without any columns).
-
-NOTE: Sqoop 2 currently does not support extract and load between two connectors that represent schema-less data sources. We expect that at least the ``From`` instance of the connector or the ``To`` instance of the connector in the sqoop job will have a schema. If both ``From`` and ``To`` have an associated non-empty schema, Sqoop 2 will load data by column name, i.e., data in column "A" in the ``From`` instance of the connector for the job will be loaded to column "A" in the ``To`` instance of the connector for that job.
-
-
-``Destroyer`` is instantiated after the execution engine finishes its processing. It is the last step in the sqoop job lifecycle, so pending clean-up tasks such as dropping temporary tables and closing connections should happen here. The term destroyer is a little misleading: for the ``TO`` instance of the connector, this is also the phase where the final output can be committed to the data source.
-
-Partitioner
------------
-
-The ``Partitioner`` creates ``Partition`` instances numbered 1..N, where N is driven by configuration; the default number of partitions created is set to 10 in the sqoop code.
-
-``Partitioner`` must implement the ``getPartitions`` method in the ``Partitioner`` API.
-
-::
-
-  public abstract List<Partition> getPartitions(PartitionerContext context,
-      LinkConfiguration linkConfiguration, FromJobConfiguration jobConfiguration);
-
-``Partition`` instances are passed to Extractor_ as the argument of ``extract`` method.
-Extractor_ determines which portion of the data to extract by a given partition.
-
-There is no actual convention for Partition classes other than being ``Writable`` and ``toString()``-able. Here is the ``Partition`` API.
-::
-
-  public abstract class Partition {
-    public abstract void readFields(DataInput in) throws IOException;
-    public abstract void write(DataOutput out) throws IOException;
-    public abstract String toString();
-  }
-
-Connectors can implement custom ``Partition`` classes. ``GenericJdbcPartitioner`` is one such example. It returns the ``GenericJdbcPartition`` objects.
-
-Extractor
----------
-
-The ``Extractor`` (the E in ETL) extracts data from a given data source.
-``Extractor`` must implement the ``extract`` method in the ``Extractor`` API.
-::
-
-  public abstract void extract(ExtractorContext context,
-                               LinkConfiguration linkConfiguration,
-                               JobConfiguration jobConfiguration,
-                               SqoopPartition partition);
-
-The ``extract`` method extracts data from the data source using the link and job configuration properties and writes it to the ``SqoopMapDataWriter`` (provided in the extractor context given to the extract method).
-The ``SqoopMapDataWriter`` has the ``SqoopWritable`` that holds the data read from the data source in the `Intermediate Data Format representation`_.
-
-Extractors use the data writer provided by the ``ExtractorContext`` to send a record through the sqoop system.
-::
-
-  context.getDataWriter().writeArrayRecord(array);
-
-The extractor must iterate through the given partition in the ``extract`` method.
-::
-
-  while (resultSet.next()) {
-    ...
-    context.getDataWriter().writeArrayRecord(array);
-    ...
-  }
-
-
-To
-==
-
-The ``getTo`` method returns a ``To`` instance, which is a ``Transferable`` entity that encapsulates the operations needed to write data to the data source the connector represents. The built-in ``GenericJdbcConnector`` defines ``To`` like this.
-::
-
-  private static final To TO = new To(
-        GenericJdbcToInitializer.class,
-        GenericJdbcLoader.class,
-        GenericJdbcToDestroyer.class);
-  ...
-
-  @Override
-  public To getTo() {
-    return TO;
-  }
-
-
-Initializer and Destroyer
--------------------------
-
-Initializer_ and Destroyer_ of a ``To`` instance are used in a similar way to those of a ``From`` instance.
-Refer to the previous section for more details.
-
-
-Loader
-------
-
-A loader (the L in ETL) receives data from the ``From`` instance of the sqoop connector associated with the sqoop job and then loads it into the ``To`` instance of the connector associated with the same sqoop job.
-
-``Loader`` must implement the ``load`` method of the ``Loader`` API.
-::
-
-  public abstract void load(LoaderContext context,
-                            ConnectionConfiguration connectionConfiguration,
-                            JobConfiguration jobConfiguration) throws Exception;
-
-The ``load`` method reads data from ``SqoopOutputFormatDataReader`` (provided in the loader context of the load methods). It reads the data in the `Intermediate Data Format representation`_ and loads it to the data source.
-
-Loader must iterate in the ``load`` method until the data from ``DataReader`` is exhausted.
-::
-
-  while ((array = context.getDataReader().readArrayRecord()) != null) {
-    ...
-  }
-
-NOTE: We do not yet support a stage for connector developers to control how to balance the loading/writing of data across the multiple loaders. In the future we may add this to the connector API to allow custom logic to balance the loading across multiple reducers.
-
-Sqoop Connector Identifier : sqoopconnector.properties
-======================================================
-
-Every Sqoop 2 connector needs to have a ``sqoopconnector.properties`` file in the packaged jar to be identified by Sqoop.
-A typical ``sqoopconnector.properties`` for a sqoop2 connector looks like the following.
-
-::
-
- # Sqoop Foo Connector Properties
- org.apache.sqoop.connector.class = org.apache.sqoop.connector.foo.FooConnector
- org.apache.sqoop.connector.name = sqoop-foo-connector
-
-If the above file does not exist, Sqoop will not load the jar, and thus the connector cannot be registered into the Sqoop repository for creating Sqoop jobs.
-
-
-Sqoop Connector Build-time Dependencies
-=======================================
-
-Sqoop provides the connector-sdk module, identified by the package ``org.apache.sqoop.connector``. It provides the public-facing APIs for external connectors
-to extend from. It also provides common utilities that the connectors can utilize for converting data to and from the sqoop intermediate data format.
-
-The common-test module, identified by the package ``org.apache.sqoop.common.test``, provides utilities related to the built-in connectors such as the JDBC, HDFS,
-and Kafka connectors that can be used by the external connectors for creating end-to-end integration tests for sqoop jobs.
-
-The test module, identified by the package ``org.apache.sqoop.test``, provides various minicluster utilities the integration tests can extend from to run
-a sqoop job with the given sqoop connector used as either the ``FROM`` or ``TO`` data source.
-
-Hence the pom.xml for the sqoop kite connector, built using the kite-sdk, might look something like the following.
-
-::
-
-   <dependencies>
-    <!-- Sqoop modules -->
-    <dependency>
-      <groupId>org.apache.sqoop</groupId>
-      <artifactId>connector-sdk</artifactId>
-    </dependency>
-
-    <!-- Testing specified modules -->
-    <dependency>
-      <groupId>org.testng</groupId>
-      <artifactId>testng</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.mockito</groupId>
-      <artifactId>mockito-all</artifactId>
-      <scope>test</scope>
-    </dependency>
-     <dependency>
-       <groupId>org.apache.sqoop</groupId>
-       <artifactId>sqoop-common-test</artifactId>
-     </dependency>
-
-     <dependency>
-       <groupId>org.apache.sqoop</groupId>
-       <artifactId>test</artifactId>
-     </dependency>
-    <!-- Connector required modules -->
-    <dependency>
-      <groupId>org.kitesdk</groupId>
-      <artifactId>kite-data-core</artifactId>
-    </dependency>
-    ....
-  </dependencies>
-
-Configurables
-+++++++++++++
-
-Configurable registration
-=========================
-Connectors are one of the currently supported configurables in Sqoop. Sqoop 2 registers definitions of connectors from the file named ``sqoopconnector.properties``, which each connector implementation should provide to become available in Sqoop.
-::
-
-  # Generic JDBC Connector Properties
-  org.apache.sqoop.connector.class = org.apache.sqoop.connector.jdbc.GenericJdbcConnector
-  org.apache.sqoop.connector.name = generic-jdbc-connector
-
-
-Configurations
-==============
-
-Implementations of ``SqoopConnector`` override methods such as ``getLinkConfigurationClass`` and ``getJobConfigurationClass``, returning the corresponding configuration classes.
-::
-
-  @Override
-  public Class getLinkConfigurationClass() {
-    return LinkConfiguration.class;
-  }
-
-  @Override
-  public Class getJobConfigurationClass(Direction direction) {
-    switch (direction) {
-      case FROM:
-        return FromJobConfiguration.class;
-      case TO:
-        return ToJobConfiguration.class;
-      default:
-        return null;
-    }
-  }
-
-Configurations are represented by annotations defined in ``org.apache.sqoop.model`` package.
-Annotations such as ``ConfigurationClass`` , ``ConfigClass`` , ``Config`` and ``Input``
-are provided for defining configuration objects for each connector.
-
-``@ConfigurationClass`` is a marker annotation for ``ConfigurationClasses`` that hold a group or list of ``ConfigClasses`` annotated with the marker ``@ConfigClass``.
-::
-
-  @ConfigurationClass
-  public class LinkConfiguration {
-
-    @Config public LinkConfig linkConfig;
-
-    public LinkConfiguration() {
-      linkConfig = new LinkConfig();
-    }
-  }
-
-Each ``ConfigClass`` defines the different inputs it exposes for the link and job configs. These inputs are annotated with ``@Input`` and the user will be asked to fill them in when creating a sqoop job that uses this instance of the connector for either the ``From`` or ``To`` part of the job.
-
-::
-
-    @ConfigClass(validators = {@Validator(LinkConfig.ConfigValidator.class)})
-    public class LinkConfig {
-      @Input(size = 128, validators = {@Validator(NotEmpty.class), @Validator(ClassAvailable.class)}) public String jdbcDriver;
-      @Input(size = 128) public String connectionString;
-      @Input(size = 40)  public String username;
-      @Input(size = 40, sensitive = true) public String password;
-      @Input public Map<String, String> jdbcProperties;
-    }
-
-Each ``ConfigClass``, and the inputs within it annotated with ``@Input``, can specify validators via the ``@Validator`` annotation described below.
-
-
-Configs and Inputs
-==================================
-As discussed above, ``Input`` provides a way to express the type of config parameter exposed. In addition, it allows the connector developer to add attributes
-that describe how the input will be used in the sqoop job.
-
-
-The attributes supported on ``@Input`` include:
-
-+-----------------------------+---------+-----------------------------------------------------------------------+-------------------------------------------------+
-| Attribute                   | Type    | Description                                                           | Example                                         |
-+=============================+=========+=======================================================================+=================================================+
-| size                        | Integer |Describes the maximum size of the attribute value.                     |@Input(size = 128) public String driver          |
-+-----------------------------+---------+-----------------------------------------------------------------------+-------------------------------------------------+
-| sensitive                   | Boolean |Describes if the input value should be hidden from display             |@Input(sensitive = true) public String password  |
-+-----------------------------+---------+-----------------------------------------------------------------------+-------------------------------------------------+
-| sensitiveKeyPattern         | String  |If the config parameter is a map, this java regular expression         |@Input(sensitiveKeyPattern = ".*sensitive")      |
-|                             |         |(http://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html)|public Map<String, String> sensitiveMap          |
-|                             |         |will be used to decide which keys are hidden from display.             |                                                 |
-|                             |         |                                                                       |                                                 |
-+-----------------------------+---------+-----------------------------------------------------------------------+-------------------------------------------------+
-| editable                    | Enum    |Describes the roles that can edit the value of this input              |@Input(editable = ANY) public String value       |
-+-----------------------------+---------+-----------------------------------------------------------------------+-------------------------------------------------+
-| overrides                   | String  |Describes a list of other inputs this input can override in this config|@Input(overrides ="value") public String lvalue  |
-+-----------------------------+---------+-----------------------------------------------------------------------+-------------------------------------------------+
-
-
-``Editable`` Attribute: Possible values for the enum ``InputEditable`` are USER_ONLY, CONNECTOR_ONLY and ANY. If an input is editable by USER_ONLY, then the connector code during the
-job run or upgrade cannot update the config input value. Similarly, for CONNECTOR_ONLY, the user cannot update its value via the REST API or the shell command line.
-
-``Overrides`` Attribute: USER_ONLY input attribute values cannot be overridden by other inputs.
-
-Empty Configuration
--------------------
-If a connector does not have any configuration inputs to specify for the ``ConfigType.LINK`` or ``ConfigType.JOB`` it is recommended to return the ``EmptyConfiguration`` class in the ``getLinkConfigurationClass()`` or ``getJobConfigurationClass(..)`` methods.
-::
-
-   @ConfigurationClass
-   public class EmptyConfiguration { }
-
-
-Configuration ResourceBundle
-============================
-
-The config and input names, and the input field descriptions, are represented in the config resource bundle defined per connector.
-::
-
-  # jdbc driver
-  connection.jdbcDriver.label = JDBC Driver Class
-  connection.jdbcDriver.help = Enter the fully qualified class name of the JDBC \
-                     driver that will be used for establishing this connection.
-
-  # connect string
-  connection.connectionString.label = JDBC Connection String
-  connection.connectionString.help = Enter the value of JDBC connection string to be \
-                     used by this connector for creating connections.
-
-  ...
-
-Those resources are loaded by the ``getBundle`` method of the ``SqoopConnector``.
-::
-
-  @Override
-  public ResourceBundle getBundle(Locale locale) {
-    return ResourceBundle.getBundle(
-    GenericJdbcConnectorConstants.RESOURCE_BUNDLE_NAME, locale);
-  }
-
-
-Validations for Configs and Inputs
-==================================
-
-Validators validate the config objects and the inputs associated with the config objects. For the config objects themselves, we encourage developers to write custom validators for both the link and job config types.
-
-::
-
-   @Input(size = 128, validators = {@Validator(value = StartsWith.class, strArg = "jdbc:")} )
-
-   @Input(size = 255, validators = { @Validator(NotEmpty.class) })
-
-Sqoop 2 provides a list of standard input validators that can be used by different connectors for the link and job type configuration inputs.
-
-::
-
-    public class NotEmpty extends AbstractValidator<String> {
-    @Override
-    public void validate(String instance) {
-      if (instance == null || instance.isEmpty()) {
-       addMessage(Status.ERROR, "Can't be null nor empty");
-      }
-     }
-    }
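-
-For a config-level validator, such as the ``LinkConfig.ConfigValidator`` referenced above, a sketch might look like the following; the cross-field rule shown is purely illustrative and not part of the built-in connector.
-::
-
-  public static class ConfigValidator extends AbstractValidator<LinkConfig> {
-    @Override
-    public void validate(LinkConfig config) {
-      // Illustrative cross-field check: a password only makes sense with a username
-      if (config.password != null && config.username == null) {
-        addMessage(Status.ERROR, "Password specified without a username");
-      }
-    }
-  }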
-
-The validation logic is executed when users enter input values for the link and job configs associated with the ``From`` and ``To`` instances of the connectors used in the sqoop job.
-
-
-Loading External Connectors
-+++++++++++++++++++++++++++
-
-To load a new connector, say sqoop-foo-connector, into Sqoop 2, here are the steps to follow:
-
-1. Create a ``sqoop-foo-connector.jar``. Make sure the jar contains the ``sqoopconnector.properties`` for it to be picked up by Sqoop
-
-2. Add this jar to the ``org.apache.sqoop.classpath.extra`` property in the sqoop.properties located under the ``conf`` directory.
-
-::
-
- # Sqoop application classpath
- # ":" separated list of jars to be included in sqoop.
- #
- org.apache.sqoop.classpath.extra=/path/to/connector.jar
-
-3. Start the Sqoop 2 server. While initializing, the server should load this jar into the Sqoop 2 classpath and register the connector into the Sqoop 2 repository.
-
-
-
-Sqoop 2 MapReduce Job Execution Lifecycle with Connector API
-++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-
-Sqoop 2 provides MapReduce utilities such as ``SqoopMapper`` and ``SqoopReducer`` that aid sqoop job execution.
-
-Note: Any class prefixed with Sqoop is an internal sqoop class provided for MapReduce and is not part of the connector API. These internal classes work with the custom implementations of ``Extractor`` and ``Partitioner`` in the ``From`` instance and ``Loader`` in the ``To`` instance of the connector.
-
-When reading from a data source, the ``Extractor`` provided by the ``From`` instance of the connector extracts data from the data source it represents, and the ``Loader``, provided by the ``TO`` instance of the connector, loads data into the data source it represents.
-
-The diagram below describes the initialization phase of a job.
-``SqoopInputFormat`` creates splits using ``Partitioner``.
-::
-
-      ,----------------.          ,-----------.
-      |SqoopInputFormat|          |Partitioner|
-      `-------+--------'          `-----+-----'
-   getSplits  |                         |
-  ----------->|                         |
-              |      getPartitions      |
-              |------------------------>|
-              |                         |         ,---------.
-              |                         |-------> |Partition|
-              |                         |         `----+----'
-              |<- - - - - - - - - - - - |              |
-              |                         |              |          ,----------.
-              |-------------------------------------------------->|SqoopSplit|
-              |                         |              |          `----+-----'
-
-The diagram below describes the map phase of a job.
-``SqoopMapper`` invokes ``From`` connector's extractor's ``extract`` method.
-::
-
-      ,-----------.
-      |SqoopMapper|
-      `-----+-----'
-     run    |
-  --------->|                                   ,------------------.
-            |---------------------------------->|SqoopMapDataWriter|
-            |                                   `------+-----------'
-            |                ,---------.               |
-            |--------------> |Extractor|               |
-            |                `----+----'               |
-            |      extract        |                    |
-            |-------------------->|                    |
-            |                     |                    |
-           read from Data Source  |                    |
-  <-------------------------------|      write*        |
-            |                     |------------------->|
-            |                     |                    |           ,-------------.
-            |                     |                    |---------->|SqoopWritable|
-            |                     |                    |           `----+--------'
-            |                     |                    |                |
-            |                     |                    |                |  context.write(writable, ..)
-            |                     |                    |                |---------------------------->
-
-The diagram below describes the reduce phase of a job.
-``OutputFormat`` invokes ``To`` connector's loader's ``load`` method (via ``SqoopOutputFormatLoadExecutor`` ).
-::
-
-    ,------------.  ,---------------------.
-    |SqoopReducer|  |SqoopNullOutputFormat|
-    `---+--------'  `----------+----------'
-        |                 |   ,-----------------------------.
-        |                 |-> |SqoopOutputFormatLoadExecutor|
-        |                 |   `--------------+--------------'              |
-        |                 |                  |                             |
-        |                 |                  |   ,-----------------.   ,-------------.
-        |                 |                  |-> |SqoopRecordWriter|-->|SqoopWritable|
-      getRecordWriter     |                  |   `--------+--------'   `---+---------'
-  ----------------------->| getRecordWriter  |            |                |
-        |                 |----------------->|            |                |     ,--------------.
-        |                 |                  |---------------------------------->|ConsumerThread|
-        |                 |                  |            |                |     `------+-------'
-        |                 |<- - - - - - - - -|            |                |            |    ,------.
-  <- - - - - - - - - - - -|                  |            |                |            |--->|Loader|
-        |                 |                  |            |                |            |    `--+---'
-        |                 |                  |            |                |            |       |
-        |                 |                  |            |                |            | load  |
-   run  |                 |                  |            |                |            |------>|
-  ----->|                 |     write        |            |                |            |       |
-        |------------------------------------------------>| setContent     |            | read* |
-        |                 |                  |            |--------------->| getContent |<------|
-        |                 |                  |            |                |<-----------|       |
-        |                 |                  |            |                |            | - - ->|
-        |                 |                  |            |                |            |       | write into Data Source
-        |                 |                  |            |                |            |       |----------------------->
-
-More details can be found in `Sqoop MR Execution Engine`_
-
-.. _`Sqoop MR Execution Engine`: https://cwiki.apache.org/confluence/display/SQOOP/Sqoop+MR+Execution+Engine
-
-.. _`Intermediate Data Format representation`: https://cwiki.apache.org/confluence/display/SQOOP/Sqoop2+Intermediate+representation

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/DevEnv.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/DevEnv.rst b/docs/src/site/sphinx/DevEnv.rst
deleted file mode 100644
index 3b72e06..0000000
--- a/docs/src/site/sphinx/DevEnv.rst
+++ /dev/null
@@ -1,57 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-=====================================
-Sqoop 2 Development Environment Setup
-=====================================
-
-This document describes how to set up a development environment for Sqoop 2.
-
-System Requirement
-==================
-
-Java
-----
-
-Sqoop has been developed and tested only with the JDK from `Oracle <http://www.oracle.com/technetwork/java/javase/downloads/index.html>`_ and we require at least version 7 (we do not support JDK 1.6 and older releases).
-
-Maven
------
-
-Sqoop uses Maven 3 for building the project. Download `Maven <http://maven.apache.org/download.cgi>`_ and follow the installation instructions given in this `link <http://maven.apache.org/download.cgi#Maven_Documentation>`_.
-
-Eclipse Setup
-=============
-
-Steps for downloading the source code are given in `Building Sqoop2 <BuildingSqoop2.html>`_.
-
-The Sqoop 2 project has multiple modules where one module depends on another; for example, the sqoop 2 client module has a dependency on the sqoop 2 common module. Follow the steps below for creating Eclipse projects and classpaths for each module.
-
-::
-
-  //Install all package into local maven repository
-  mvn clean install -DskipTests
-
-  //Adding M2_REPO variable to eclipse workspace
-  mvn eclipse:configure-workspace -Declipse.workspace=<path-to-eclipse-workspace-dir-for-sqoop-2>
-
-  //Eclipse project creation with optional parameters
-  mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true
-
-Alternatively, to add the M2_REPO classpath variable manually as the maven repository path: in Eclipse go to Window -> Java -> Classpath Variables -> click "New" -> in the new dialog box, enter Name as M2_REPO and Path as $HOME/.m2/repository -> click OK.
-
-After the above maven commands execute successfully, import the sqoop project modules into Eclipse: File -> Import -> General -> Existing Projects into Workspace -> click Next -> browse to the Sqoop 2 directory ($HOME/git/sqoop2) -> click OK -> the Import dialog shows multiple projects (sqoop-client, sqoop-common, etc.) -> select all modules -> click Finish.
-

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/Installation.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/Installation.rst b/docs/src/site/sphinx/Installation.rst
deleted file mode 100644
index 9d56875..0000000
--- a/docs/src/site/sphinx/Installation.rst
+++ /dev/null
@@ -1,103 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-============
-Installation
-============
-
-Sqoop ships as one binary package; however, it is composed of two separate parts - client and server. You need to install the server on a single node in your cluster. This node will then serve as an entry point for all connecting Sqoop clients. The server acts as a MapReduce client and therefore Hadoop must be installed and configured on the machine hosting the Sqoop server. Clients can be installed on any number of machines. The client does not act as a MapReduce client and thus you do not need to install Hadoop on nodes that will act only as Sqoop clients.
-
-Server installation
-===================
-
-Copy the Sqoop artifact to the machine where you want to run the Sqoop server. This machine must have Hadoop installed and configured. You do not need to run any Hadoop related services there; however, the machine must be able to act as a Hadoop client. You should be able to list HDFS content, for example: ::
-
-  hadoop dfs -ls
-
-The Sqoop server supports multiple Hadoop versions. However, as Hadoop major versions are not compatible with each other, Sqoop has multiple binary artifacts - one for each supported major version of Hadoop. You need to make sure that you are using the appropriate binary artifact for your specific Hadoop version. To install the Sqoop server, decompress the appropriate distribution artifact in a location of your convenience and change your working directory to this folder. ::
-
-  # Decompress Sqoop distribution tarball
-  tar -xvf sqoop-<version>-bin-hadoop<hadoop-version>.tar.gz
-
-  # Move decompressed content to any location
-  mv sqoop-<version>-bin-hadoop<hadoop-version> /usr/lib/sqoop
-
-  # Change working directory
-  cd /usr/lib/sqoop
-
-
-Installing Dependencies
------------------------
-
-Hadoop libraries must be available on the node where you are planning to run the Sqoop server, with proper configuration for the major services - ``NameNode`` and either ``JobTracker`` or ``ResourceManager``, depending on whether you are running Hadoop 1 or 2. There is no need to run any Hadoop service on the same node as the Sqoop server; just the libraries and configuration files must be available.
-
-Paths to the Hadoop libraries are taken from the environment variables ``HADOOP_COMMON_HOME``, ``HADOOP_HDFS_HOME``, ``HADOOP_MAPRED_HOME`` and ``HADOOP_YARN_HOME``. You need to set these environment variables to point to your Hadoop libraries. If the environment variable ``HADOOP_HOME`` is set, the default expected locations are ``$HADOOP_HOME/share/hadoop/common``, ``$HADOOP_HOME/share/hadoop/hdfs``, ``$HADOOP_HOME/share/hadoop/mapreduce`` and ``$HADOOP_HOME/share/hadoop/yarn``.
-
-Lastly, you might need to install JDBC drivers that are not bundled with Sqoop because of incompatible licenses. You can add any arbitrary Java jar file to the Sqoop server by copying it into the ``lib/`` directory. You can create this directory if it does not exist already.
-
-Configuring PATH
-----------------
-
-All user and administrator facing shell commands are stored in the ``bin/`` directory. It is recommended to add this directory to your ``$PATH`` for easier execution, for example::
-
-  PATH=$PATH:`pwd`/bin/
-
-Further documentation pages will assume that you have the binaries on your ``$PATH``. You will need to call them with their full path if you decide to skip this step.
-
-Configuring Server
-------------------
-
-Before starting the server, you should revise the configuration to match your specific environment. Server configuration files are stored in the ``conf`` directory.
-
-The file ``sqoop_bootstrap.properties`` specifies which configuration provider should be used for loading the configuration for the rest of the Sqoop server. The default value ``PropertiesConfigurationProvider`` should be sufficient.
-
-
-The second configuration file, ``sqoop.properties``, contains the remaining configuration properties that can affect the Sqoop server. The file is very well documented, so check whether all configuration properties fit your environment. The defaults, or very little tweaking, should be sufficient for most common cases.
-
-You can verify the Sqoop server configuration using `Verify Tool <Tools.html#verify>`__, for example::
-
-  sqoop2-tool verify
-
-Upon running the ``verify`` tool, you should see messages similar to the following::
-
-  Verification was successful.
-  Tool class org.apache.sqoop.tools.tool.VerifyTool has finished correctly
-
-Consult `Verify Tool <Tools.html#upgrade>`__ documentation page in case of any failure.
-
-Server Life Cycle
------------------
-
-After installation and configuration you can start the Sqoop server with the following command: ::
-
-  sqoop2-server start
-
-Similarly, you can stop the server using the following command: ::
-
-  sqoop2-server stop
-
-By default, the Sqoop server daemon uses port 12000. You can set ``org.apache.sqoop.jetty.port`` in the configuration file ``conf/sqoop.properties`` to use a different port.
-
-Client installation
-===================
-
-The client does not need extra installation and configuration steps. Just copy the Sqoop distribution artifact to the target machine and unzip it in the desired location. You can start the client with the following command: ::
-
-  sqoop2-shell
-
-You can find more documentation for the Sqoop client in the `Command Line Client <CommandLineClient.html>`_ section.
-
-


[4/8] sqoop git commit: SQOOP-2694: Sqoop2: Doc: Register structure in sphinx for our docs (Jarek Jarcec Cecho via Kate Ting)

Posted by ka...@apache.org.
http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/dev/ConnectorDevelopment.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/dev/ConnectorDevelopment.rst b/docs/src/site/sphinx/dev/ConnectorDevelopment.rst
new file mode 100644
index 0000000..0e8ea92
--- /dev/null
+++ b/docs/src/site/sphinx/dev/ConnectorDevelopment.rst
@@ -0,0 +1,595 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+=============================
+Sqoop 2 Connector Development
+=============================
+
+This document describes how to implement a connector for Sqoop 2 using the code sample from one of the built-in connectors ( ``GenericJdbcConnector`` ) as a reference. Sqoop 2 jobs support extraction from and/or loading to different data sources. Sqoop 2 connectors encapsulate the job lifecycle operations for extracting and/or loading data from and/or to
+different data sources. Each connector primarily focuses on a particular data source and its custom implementation for optimally reading and/or writing data in a distributed environment.
+
+.. contents::
+
+What is a Sqoop Connector?
+++++++++++++++++++++++++++
+
+Connectors provide the facility to interact with many data sources and thus can be used as a means to transfer data between them in Sqoop. The connector implementation provides the logic to read from and/or write to the data source that it represents. For instance, the ``GenericJdbcConnector`` encapsulates the logic to read from and/or write to JDBC-enabled relational data sources. The connector part that enables reading from a data source and transferring this data to the internal Sqoop format is called FROM, and the part that enables writing data to a data source by transferring data from the Sqoop format is called TO. In order to interact with these data sources, the connector will provide one or many config classes and input fields within them.
+
+Broadly we support two main config types for connectors: the link type, represented by the enum ``ConfigType.LINK``, and the job type, represented by the enum ``ConfigType.JOB``. The link config represents the properties needed to physically connect to the data source. The job config represents the properties that are required to invoke reading from and/or writing to a particular dataset in the data source it connects to. If a connector supports both reading and writing, it will provide the ``FromJobConfig`` and ``ToJobConfig`` objects. Each of these config objects is custom to each connector and can have one or more inputs associated with each of the Link, FromJob and ToJob config types. Hence we call the connectors configurables, i.e., entities that can provide configs for interacting with the data source they represent. As the connectors evolve over time to support new features in their data sources, the configs and inputs will change as well. Thus the connector API also provides methods for upgrading the config and input names and data related to these data sources across different versions.
+
+The connectors implement logic for various stages of the extract/load process using the connector API described below. While extracting/reading data from the data source, the main stages are ``Initializer``, ``Partitioner``, ``Extractor`` and ``Destroyer``. While loading/writing data to the data source, the main stages currently supported are ``Initializer``, ``Loader`` and ``Destroyer``. Each stage has its unique set of responsibilities that are explained in detail below. Since connectors understand the internals of the data source they represent, they work in tandem with the sqoop supported execution engines such as MapReduce or Spark (in the future) to accomplish this process in the most optimal way.
+
+When do we add a new connector?
+===============================
+You add a new connector when you need to extract/read data from a new data source, or load/write
+data into a new data source that is not supported yet in Sqoop 2.
+In addition to the connector API, Sqoop 2 also has a submission and execution engine interface.
+At the moment the only supported engine is MapReduce, but we may support additional engines in the future such as Spark. Since many parallel execution engines are capable of reading/writing data, there may be a question of whether adding support for a new data source should be done through the connector or the execution engine API.
+
+**Our guidelines are as follows:** Connectors should manage all data extraction (reading) from and/or loading (writing) into a data source. The submission and execution engines together manage the job submission and execution life cycle to read/write data from/to data sources in the most optimal way possible. If you need to support a new data store and the details of linking to it, and you do not care how the reading/writing process itself happens, then you are looking to add a connector, and you should continue reading the Connector API details below to contribute new connectors to Sqoop 2.
+
+
+Connector Implementation
+++++++++++++++++++++++++
+
+The ``SqoopConnector`` class defines an API for the connectors that must be implemented by the connector developers. Each Connector must extend ``SqoopConnector`` and override the methods shown below.
+::
+
+  public abstract String getVersion();
+  public abstract ResourceBundle getBundle(Locale locale);
+  public abstract Class getLinkConfigurationClass();
+  public abstract Class getJobConfigurationClass(Direction direction);
+  public abstract From getFrom();
+  public abstract To getTo();
+  public abstract ConnectorConfigurableUpgrader getConfigurableUpgrader(String oldConnectorVersion)
+
+Connectors can optionally override the following methods:
+::
+
+  public List<Direction> getSupportedDirections();
+  public Class<? extends IntermediateDataFormat<?>> getIntermediateDataFormat()
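+
+For example, a connector that wants to explicitly declare the CSV based intermediate representation could override ``getIntermediateDataFormat`` along these lines. This is only a sketch: ``CSVIntermediateDataFormat`` is assumed here to be the built-in implementation shipped with your Sqoop version, so verify the class name before relying on it.
+::
+
+  @Override
+  public Class<? extends IntermediateDataFormat<?>> getIntermediateDataFormat() {
+    // Assumed built-in CSV implementation of IntermediateDataFormat
+    return CSVIntermediateDataFormat.class;
+  }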
+
+The ``getVersion`` method returns the current version of the connector.
+It is important to provide a unique identifier every time a connector jar is released externally.
+In case of the Sqoop built-in connectors, the version refers to the Sqoop build/release version. External
+connectors can also use the same or a similar mechanism to set this version. The version number is critical for
+the connector upgrade logic used in Sqoop.
+
+::
+
+   @Override
+    public String getVersion() {
+     return VersionInfo.getBuildVersion();
+    }
+
+
+The ``getFrom`` method returns a From_ instance,
+which is a ``Transferable`` entity that encapsulates the operations
+needed to read from the data source that the connector represents.
+
+The ``getTo`` method returns a To_ instance,
+which is a ``Transferable`` entity that encapsulates the operations
+needed to write to the data source that the connector represents.
+
+Methods such as ``getBundle`` , ``getLinkConfigurationClass`` , ``getJobConfigurationClass``
+are related to `Configurations`_
+
+Since a connector represents a data source and can support one or both directions (reading FROM its data source and/or writing TO its data source), the ``getSupportedDirections`` method returns a list of directions that a connector will implement. This should be a subset of the values in the ``Direction`` enum we provide:
+::
+
+  public List<Direction> getSupportedDirections() {
+      return Arrays.asList(new Direction[]{
+          Direction.FROM,
+          Direction.TO
+      });
+  }
+
+
+From
+====
+
+The ``getFrom`` method returns From_ instance which is a ``Transferable`` entity that encapsulates the operations needed to read from the data source the connector represents. The built-in ``GenericJdbcConnector`` defines ``From`` like this.
+::
+
+  private static final From FROM = new From(
+        GenericJdbcFromInitializer.class,
+        GenericJdbcPartitioner.class,
+        GenericJdbcExtractor.class,
+        GenericJdbcFromDestroyer.class);
+  ...
+
+  @Override
+  public From getFrom() {
+    return FROM;
+  }
+
+Initializer and Destroyer
+-------------------------
+.. _Initializer:
+.. _Destroyer:
+
+The ``Initializer`` is instantiated before the sqoop job is submitted to the execution engine and performs preparations such as connecting to the data source, creating temporary tables or adding dependent jar files. Initializers are executed as the first step in the sqoop job lifecycle. All interactions within an initializer are assumed to occur within a single thread, so state can be maintained between method calls (such as database connections). Here is the ``Initializer`` API.
+::
+
+  public abstract void initialize(InitializerContext context, LinkConfiguration linkConfiguration,
+      JobConfiguration jobConfiguration);
+
+  public List<String> getJars(InitializerContext context, LinkConfiguration linkConfiguration,
+      JobConfiguration jobConfiguration){
+       return new LinkedList<String>();
+      }
+
+  public Schema getSchema(InitializerContext context, LinkConfiguration linkConfiguration,
+      JobConfiguration jobConfiguration) {
+    return new NullSchema();
+  }
+
+In addition to the initialize() method where the job execution preparation activities occur, the ``Initializer`` can also implement the getSchema() method for the directions ``FROM`` and ``TO`` that it supports.
+
+The getSchema() method is used by the sqoop system to match the data extracted/read by the ``From`` instance of the connector data source with the data loaded/written to the ``To`` instance of the connector data source. In case of a relational database or columnar database, the returned Schema object will include a collection of columns with their data types. If the data source is schema-less, such as a file, a default ``NullSchema`` will be used (i.e., a Schema object without any columns).
+
+NOTE: Sqoop 2 currently does not support extract and load between two connectors that represent schema-less data sources. We expect that at least the ``From`` instance of the connector or the ``To`` instance of the connector in the sqoop job will have a schema. If both ``From`` and ``To`` have an associated non-empty schema, Sqoop 2 will load data by column name, i.e., data in column "A" in the ``From`` instance of the connector for the job will be loaded to column "A" in the ``To`` instance of the connector for that job.
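+
+As an illustration, a FROM-side initializer for a source exposing a single text field might build its schema roughly as follows. This is only a sketch: the dataset name and column are made up, and the ``Schema`` and ``Text`` classes are assumed to come from the ``org.apache.sqoop.schema`` packages of the Sqoop version you build against.
+::
+
+  @Override
+  public Schema getSchema(InitializerContext context, LinkConfiguration linkConfiguration,
+      FromJobConfiguration fromJobConfiguration) {
+    // Hypothetical schema: one text column named "value" in a dataset called "foo"
+    return new Schema("foo").addColumn(new Text("value"));
+  }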
+
+
+``Destroyer`` is instantiated after the execution engine finishes its processing. It is the last step in the sqoop job lifecycle, so pending clean-up tasks such as dropping temporary tables and closing connections should happen here. The term destroyer is a little misleading: for the ``TO`` instance of the connector, this is also the phase where the final output can be committed to the data source.
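+
+For reference, a minimal ``Destroyer`` might look like the sketch below; the hypothetical ``FooFromDestroyer`` is made up, and the exact ``destroy`` signature should be verified against the ``Destroyer`` class of the Sqoop version you build against.
+::
+
+  public class FooFromDestroyer extends Destroyer<LinkConfiguration, FromJobConfiguration> {
+    @Override
+    public void destroy(DestroyerContext context, LinkConfiguration linkConfiguration,
+        FromJobConfiguration fromJobConfiguration) {
+      // Drop temporary tables, close connections, or commit final output here.
+    }
+  }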
+
+Partitioner
+-----------
+
+The ``Partitioner`` creates ``Partition`` instances numbered 1..N, where N is driven by configuration; the default number of partitions created is set to 10 in the sqoop code.
+
+``Partitioner`` must implement the ``getPartitions`` method in the ``Partitioner`` API.
+
+::
+
+  public abstract List<Partition> getPartitions(PartitionerContext context,
+      LinkConfiguration linkConfiguration, FromJobConfiguration jobConfiguration);
+
+``Partition`` instances are passed to Extractor_ as the argument of ``extract`` method.
+Extractor_ determines which portion of the data to extract by a given partition.
+
+There is no actual convention for Partition classes other than being ``Writable`` and ``toString()``-able. Here is the ``Partition`` API.
+::
+
+  public abstract class Partition {
+    public abstract void readFields(DataInput in) throws IOException;
+    public abstract void write(DataOutput out) throws IOException;
+    public abstract String toString();
+  }
+
+Connectors can implement custom ``Partition`` classes. ``GenericJdbcPartitioner`` is one such example. It returns the ``GenericJdbcPartition`` objects.
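+
+A minimal custom ``Partition`` carrying, say, a numeric lower and upper bound could look like the sketch below (illustrative only; the hypothetical ``FooPartition`` and its fields are made up).
+::
+
+  public class FooPartition extends Partition {
+    private long lowerBound;
+    private long upperBound;
+
+    @Override
+    public void readFields(DataInput in) throws IOException {
+      // Restore the bounds in the same order they were written
+      lowerBound = in.readLong();
+      upperBound = in.readLong();
+    }
+
+    @Override
+    public void write(DataOutput out) throws IOException {
+      out.writeLong(lowerBound);
+      out.writeLong(upperBound);
+    }
+
+    @Override
+    public String toString() {
+      return lowerBound + " <= value < " + upperBound;
+    }
+  }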
+
+Extractor
+---------
+
+The ``Extractor`` (the E in ETL) extracts data from a given data source.
+``Extractor`` must implement the ``extract`` method in the ``Extractor`` API.
+::
+
+  public abstract void extract(ExtractorContext context,
+                               LinkConfiguration linkConfiguration,
+                               JobConfiguration jobConfiguration,
+                               SqoopPartition partition);
+
+The ``extract`` method extracts data from the data source using the link and job configuration properties and writes it to the ``SqoopMapDataWriter`` (provided in the extractor context given to the extract method).
+The ``SqoopMapDataWriter`` has the ``SqoopWritable`` that holds the data read from the data source in the `Intermediate Data Format representation`_.
+
+Extractors use the data writer provided by the ``ExtractorContext`` to send a record through the sqoop system.
+::
+
+  context.getDataWriter().writeArrayRecord(array);
+
+The extractor must iterate through the given partition in the ``extract`` method.
+::
+
+  while (resultSet.next()) {
+    ...
+    context.getDataWriter().writeArrayRecord(array);
+    ...
+  }
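+
+Putting the pieces together, the body of a JDBC-style ``extract`` method typically runs a query bounded by the partition and streams each row out as an object array. The sketch below is only illustrative: ``FooPartition``, ``openConnection`` and ``buildQuery`` are hypothetical helpers standing in for whatever connection and query handling your connector provides.
+::
+
+  @Override
+  public void extract(ExtractorContext context, LinkConfiguration linkConfiguration,
+      FromJobConfiguration jobConfiguration, FooPartition partition) {
+    // openConnection() and buildQuery() are hypothetical helpers of this connector
+    try (Connection connection = openConnection(linkConfiguration);
+         Statement statement = connection.createStatement();
+         ResultSet resultSet = statement.executeQuery(buildQuery(jobConfiguration, partition))) {
+      int columns = resultSet.getMetaData().getColumnCount();
+      while (resultSet.next()) {
+        Object[] array = new Object[columns];
+        for (int i = 0; i < columns; i++) {
+          array[i] = resultSet.getObject(i + 1);
+        }
+        // Hand the record over to the sqoop system
+        context.getDataWriter().writeArrayRecord(array);
+      }
+    } catch (SQLException e) {
+      throw new RuntimeException("Extraction failed", e);
+    }
+  }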
+
+
+To
+==
+
+The ``getTo`` method returns a ``To`` instance, which is a ``Transferable`` entity that encapsulates the operations needed to write data to the data source the connector represents. The built-in ``GenericJdbcConnector`` defines ``To`` like this.
+::
+
+  private static final To TO = new To(
+        GenericJdbcToInitializer.class,
+        GenericJdbcLoader.class,
+        GenericJdbcToDestroyer.class);
+  ...
+
+  @Override
+  public To getTo() {
+    return TO;
+  }
+
+
+Initializer and Destroyer
+-------------------------
+
+Initializer_ and Destroyer_ of a ``To`` instance are used in a similar way to those of a ``From`` instance.
+Refer to the previous section for more details.
+
+
+Loader
+------
+
+A loader (the L in ETL) receives data from the ``From`` instance of the sqoop connector associated with the sqoop job and then loads it into the ``To`` instance of the connector associated with the same sqoop job.
+
+``Loader`` must implement the ``load`` method of the ``Loader`` API.
+::
+
+  public abstract void load(LoaderContext context,
+                            ConnectionConfiguration connectionConfiguration,
+                            JobConfiguration jobConfiguration) throws Exception;
+
+The ``load`` method reads data from ``SqoopOutputFormatDataReader`` (provided in the loader context of the load methods). It reads the data in the `Intermediate Data Format representation`_ and loads it to the data source.
+
+Loader must iterate in the ``load`` method until the data from ``DataReader`` is exhausted.
+::
+
+  while ((array = context.getDataReader().readArrayRecord()) != null) {
+    ...
+  }
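+
+A complete ``load`` implementation for a hypothetical target usually follows the same pattern: open the target, drain the reader, write each record, then close. In the sketch below, ``FooClient`` and its methods are invented placeholders for whatever client API your data source offers.
+::
+
+  @Override
+  public void load(LoaderContext context, ConnectionConfiguration connectionConfiguration,
+      JobConfiguration jobConfiguration) throws Exception {
+    // Hypothetical client for the target data source
+    FooClient client = FooClient.connect(connectionConfiguration);
+    try {
+      Object[] array;
+      // Keep reading until the intermediate data is exhausted
+      while ((array = context.getDataReader().readArrayRecord()) != null) {
+        client.insert(array);
+      }
+    } finally {
+      client.close();
+    }
+  }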
+
+NOTE: We do not yet support a stage for connector developers to control how to balance the loading/writing of data across the multiple loaders. In the future we may add this to the connector API to allow custom logic to balance the loading across multiple reducers.
+
+Sqoop Connector Identifier : sqoopconnector.properties
+======================================================
+
+Every Sqoop 2 connector needs to have a ``sqoopconnector.properties`` file in the packaged jar to be identified by Sqoop.
+A typical ``sqoopconnector.properties`` for a sqoop2 connector looks like the following.
+
+::
+
+ # Sqoop Foo Connector Properties
+ org.apache.sqoop.connector.class = org.apache.sqoop.connector.foo.FooConnector
+ org.apache.sqoop.connector.name = sqoop-foo-connector
+
+If the above file does not exist, Sqoop will not load the jar, and thus the connector cannot be registered into the Sqoop repository for creating Sqoop jobs.
+
+
+Sqoop Connector Build-time Dependencies
+=======================================
+
+Sqoop provides the connector-sdk module, identified by the package ``org.apache.sqoop.connector``. It provides the public-facing APIs for external connectors
+to extend from. It also provides common utilities that the connectors can utilize for converting data to and from the sqoop intermediate data format.
+
+The common-test module, identified by the package ``org.apache.sqoop.common.test``, provides utilities related to the built-in connectors such as the JDBC, HDFS,
+and Kafka connectors that can be used by the external connectors for creating end-to-end integration tests for sqoop jobs.
+
+The test module, identified by the package ``org.apache.sqoop.test``, provides various minicluster utilities the integration tests can extend from to run
+a sqoop job with the given sqoop connector used as either the ``FROM`` or ``TO`` data source.
+
+Hence the pom.xml for the sqoop kite connector, built using the kite-sdk, might look something like the following.
+
+::
+
+   <dependencies>
+    <!-- Sqoop modules -->
+    <dependency>
+      <groupId>org.apache.sqoop</groupId>
+      <artifactId>connector-sdk</artifactId>
+    </dependency>
+
+    <!-- Testing specified modules -->
+    <dependency>
+      <groupId>org.testng</groupId>
+      <artifactId>testng</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.mockito</groupId>
+      <artifactId>mockito-all</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.sqoop</groupId>
+      <artifactId>sqoop-common-test</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.sqoop</groupId>
+      <artifactId>test</artifactId>
+    </dependency>
+    <!-- Connector required modules -->
+    <dependency>
+      <groupId>org.kitesdk</groupId>
+      <artifactId>kite-data-core</artifactId>
+    </dependency>
+    ....
+  </dependencies>
+
+Configurables
++++++++++++++
+
+Configurable registration
+=========================
+One of the currently supported configurables in Sqoop is the connector. Sqoop 2 registers connector definitions from the file named ``sqoopconnector.properties``, which each connector implementation should provide to become available in Sqoop.
+::
+
+  # Generic JDBC Connector Properties
+  org.apache.sqoop.connector.class = org.apache.sqoop.connector.jdbc.GenericJdbcConnector
+  org.apache.sqoop.connector.name = generic-jdbc-connector
+
+
+Configurations
+==============
+
+Implementations of ``SqoopConnector`` override methods such as ``getLinkConfigurationClass`` and ``getJobConfigurationClass``, each returning the corresponding configuration class.
+::
+
+  @Override
+  public Class getLinkConfigurationClass() {
+    return LinkConfiguration.class;
+  }
+
+  @Override
+  public Class getJobConfigurationClass(Direction direction) {
+    switch (direction) {
+      case FROM:
+        return FromJobConfiguration.class;
+      case TO:
+        return ToJobConfiguration.class;
+      default:
+        return null;
+    }
+  }
+
+Configurations are represented by annotations defined in ``org.apache.sqoop.model`` package.
+Annotations such as ``ConfigurationClass`` , ``ConfigClass`` , ``Config`` and ``Input``
+are provided for defining configuration objects for each connector.
+
+``@ConfigurationClass`` is a marker annotation for ``ConfigurationClasses`` that hold a group or list of ``ConfigClasses`` annotated with the marker ``@ConfigClass``.
+::
+
+  @ConfigurationClass
+  public class LinkConfiguration {
+
+    @Config public LinkConfig linkConfig;
+
+    public LinkConfiguration() {
+      linkConfig = new LinkConfig();
+    }
+  }
+
+Each ``ConfigClass`` defines the different inputs it exposes for the link and job configs. These inputs are annotated with ``@Input`` and the user will be asked to fill in when they create a sqoop job and choose to use this instance of the connector for either the ``From`` or ``To`` part of the job.
+
+::
+
+    @ConfigClass(validators = {@Validator(LinkConfig.ConfigValidator.class)})
+    public class LinkConfig {
+      @Input(size = 128, validators = {@Validator(NotEmpty.class), @Validator(ClassAvailable.class)} )
+      public String jdbcDriver;
+      @Input(size = 128) public String connectionString;
+      @Input(size = 40)  public String username;
+      @Input(size = 40, sensitive = true) public String password;
+      @Input public Map<String, String> jdbcProperties;
+    }
+
+Each ``ConfigClass`` and the inputs within the configs annotated with ``Input`` can specify validators via the ``@Validator`` annotation described below.
+
+
+Configs and Inputs
+==================================
+As discussed above, ``Input`` provides a way to express the type of config parameter exposed. In addition it allows the connector developer to add attributes
+that describe how the input will be used in the sqoop job.
+
+
+The supported attributes of the ``@Input`` annotation are:
+
++-----------------------------+---------+-----------------------------------------------------------------------+-------------------------------------------------+
+| Attribute                   | Type    | Description                                                           | Example                                         |
++=============================+=========+=======================================================================+=================================================+
+| size                        | Integer |Describes the maximum size of the attribute value.                     |@Input(size = 128) public String driver          |
++-----------------------------+---------+-----------------------------------------------------------------------+-------------------------------------------------+
+| sensitive                   | Boolean |Describes if the input value should be hidden from display             |@Input(sensitive = true) public String password  |
++-----------------------------+---------+-----------------------------------------------------------------------+-------------------------------------------------+
+| sensitiveKeyPattern         | String  |If the config parameter is a map, this java regular expression         |@Input(sensitiveKeyPattern = ".*sensitive")      |
+|                             |         |(http://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html)|public Map<String, String> sensitiveMap          |
+|                             |         |will be used to decide which keys are hidden from display.             |                                                 |
+|                             |         |                                                                       |                                                 |
++-----------------------------+---------+-----------------------------------------------------------------------+-------------------------------------------------+
+| editable                    | Enum    |Describes the roles that can edit the value of this input              |@Input(editable = ANY) public String value       |
++-----------------------------+---------+-----------------------------------------------------------------------+-------------------------------------------------+
+| overrides                   | String  |Describes a list of other inputs this input can override in this config|@Input(overrides ="value") public String lvalue  |
++-----------------------------+---------+-----------------------------------------------------------------------+-------------------------------------------------+
+
+
+``Editable`` Attribute: Possible values for the Enum InputEditable are USER_ONLY, CONNECTOR_ONLY and ANY. If an input is editable by USER_ONLY, then the connector code cannot update the config input value during the
+job run or upgrade. Similarly, for CONNECTOR_ONLY, the user cannot update its value via the REST API or the shell command line.
+
+``Overrides`` Attribute: USER_ONLY input attribute values cannot be overridden by other inputs.
+
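+For illustration, a hypothetical config class could combine these attributes as follows (``FooJobConfig`` and its fields are made-up names and not part of any shipped connector).
+::
+
+  @ConfigClass
+  public class FooJobConfig {
+
+    // Any role (user or connector) may edit this input.
+    @Input(size = 255, editable = InputEditable.ANY)
+    public String tableName;
+
+    // Only connector code may update this input during job run or upgrade;
+    // when the connector sets it, it overrides the value of tableName.
+    @Input(size = 2000, editable = InputEditable.CONNECTOR_ONLY, overrides = "tableName")
+    public String sqlQuery;
+  }
+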
+Empty Configuration
+-------------------
+If a connector does not have any configuration inputs to specify for the ``ConfigType.LINK`` or ``ConfigType.JOB`` it is recommended to return the ``EmptyConfiguration`` class in the ``getLinkConfigurationClass()`` or ``getJobConfigurationClass(..)`` methods.
+::
+
+   @ConfigurationClass
+   public class EmptyConfiguration { }
+
+
+Configuration ResourceBundle
+============================
+
+The config names, their corresponding input names and the input field descriptions are represented in the config resource bundle defined per connector.
+::
+
+  # jdbc driver
+  connection.jdbcDriver.label = JDBC Driver Class
+  connection.jdbcDriver.help = Enter the fully qualified class name of the JDBC \
+                     driver that will be used for establishing this connection.
+
+  # connect string
+  connection.connectionString.label = JDBC Connection String
+  connection.connectionString.help = Enter the value of JDBC connection string to be \
+                     used by this connector for creating connections.
+
+  ...
+
+Those resources are loaded by the ``getBundle`` method of the ``SqoopConnector``.
+::
+
+  @Override
+  public ResourceBundle getBundle(Locale locale) {
+    return ResourceBundle.getBundle(
+    GenericJdbcConnectorConstants.RESOURCE_BUNDLE_NAME, locale);
+  }
+
+
+Validations for Configs and Inputs
+==================================
+
+Validators validate the config objects and the inputs associated with the config objects. For config objects themselves we encourage developers to write custom validators for both the link and job config types.
+
+::
+
+   @Input(size = 128, validators = {@Validator(value = StartsWith.class, strArg = "jdbc:")} )
+
+   @Input(size = 255, validators = { @Validator(NotEmpty.class) })
+
+Sqoop 2 provides a list of standard input validators that can be used by different connectors for the link and job type configuration inputs.
+
+::
+
+    public class NotEmpty extends AbstractValidator<String> {
+      @Override
+      public void validate(String instance) {
+        if (instance == null || instance.isEmpty()) {
+          addMessage(Status.ERROR, "Can't be null nor empty");
+        }
+      }
+    }
+
+The validation logic is executed when a user creating a sqoop job enters input values for the link and job configs associated with the ``From`` and ``To`` instances of the connectors used by the job.
+
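+As an illustration, a config-level validator such as the ``LinkConfig.ConfigValidator`` referenced earlier can check relationships between several inputs at once. The class below is only a sketch following the same ``AbstractValidator`` pattern; the exact check is made up.
+::
+
+  // Nested inside LinkConfig, referenced as LinkConfig.ConfigValidator.
+  public static class ConfigValidator extends AbstractValidator<LinkConfig> {
+    @Override
+    public void validate(LinkConfig config) {
+      // Cross-field check: the connection string should look like a JDBC URL.
+      if (config.connectionString == null || !config.connectionString.startsWith("jdbc:")) {
+        addMessage(Status.ERROR, "The connection string must start with 'jdbc:'");
+      }
+    }
+  }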
+
+Loading External Connectors
++++++++++++++++++++++++++++
+
+To load a new connector, say sqoop-foo-connector, into Sqoop 2, here are the steps to follow:
+
+1. Create a ``sqoop-foo-connector.jar``. Make sure the jar contains the ``sqoopconnector.properties`` file for it to be picked up by Sqoop.
+
+2. Add this jar to the ``org.apache.sqoop.classpath.extra`` property in the sqoop.properties located under the ``conf`` directory.
+
+::
+
+ # Sqoop application classpath
+ # ":" separated list of jars to be included in sqoop.
+ #
+ org.apache.sqoop.classpath.extra=/path/to/connector.jar
+
+3. Start the Sqoop 2 server. While initializing, the server should load this jar into the Sqoop 2 class path and register the connector into the Sqoop 2 repository.
+
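+Once the server is up, you can verify that the new connector was registered, for example from the Sqoop shell (the connector listed would be the one named in ``sqoopconnector.properties``):
+
+::
+
+ sqoop:000> show connector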
+
+
+Sqoop 2 MapReduce Job Execution Lifecycle with Connector API
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+Sqoop 2 provides MapReduce utilities such as ``SqoopMapper`` and ``SqoopReducer`` that aid sqoop job execution.
+
+Note: Any class prefixed with Sqoop is an internal sqoop class provided for MapReduce and is not part of the connector API. These internal classes work with the custom implementations of ``Extractor`` and ``Partitioner`` in the ``From`` instance and ``Loader`` in the ``To`` instance of the connector.
+
+When reading from a data source, the ``Extractor`` provided by the ``From`` instance of the connector extracts data from a corresponding data source it represents and the ``Loader``, provided by the TO instance of the connector, loads data into the data source it represents.
+
+The diagram below describes the initialization phase of a job.
+``SqoopInputFormat`` creates splits using ``Partitioner``.
+::
+
+      ,----------------.          ,-----------.
+      |SqoopInputFormat|          |Partitioner|
+      `-------+--------'          `-----+-----'
+   getSplits  |                         |
+  ----------->|                         |
+              |      getPartitions      |
+              |------------------------>|
+              |                         |         ,---------.
+              |                         |-------> |Partition|
+              |                         |         `----+----'
+              |<- - - - - - - - - - - - |              |
+              |                         |              |          ,----------.
+              |-------------------------------------------------->|SqoopSplit|
+              |                         |              |          `----+-----'
+
+The diagram below describes the map phase of a job.
+``SqoopMapper`` invokes ``From`` connector's extractor's ``extract`` method.
+::
+
+      ,-----------.
+      |SqoopMapper|
+      `-----+-----'
+     run    |
+  --------->|                                   ,------------------.
+            |---------------------------------->|SqoopMapDataWriter|
+            |                                   `------+-----------'
+            |                ,---------.               |
+            |--------------> |Extractor|               |
+            |                `----+----'               |
+            |      extract        |                    |
+            |-------------------->|                    |
+            |                     |                    |
+           read from Data Source  |                    |
+  <-------------------------------|      write*        |
+            |                     |------------------->|
+            |                     |                    |           ,-------------.
+            |                     |                    |---------->|SqoopWritable|
+            |                     |                    |           `----+--------'
+            |                     |                    |                |
+            |                     |                    |                |  context.write(writable, ..)
+            |                     |                    |                |---------------------------->
+
+The diagram below describes the reduce phase of a job.
+``OutputFormat`` invokes ``To`` connector's loader's ``load`` method (via ``SqoopOutputFormatLoadExecutor`` ).
+::
+
+    ,------------.  ,---------------------.
+    |SqoopReducer|  |SqoopNullOutputFormat|
+    `---+--------'  `----------+----------'
+        |                 |   ,-----------------------------.
+        |                 |-> |SqoopOutputFormatLoadExecutor|
+        |                 |   `--------------+--------------'              |
+        |                 |                  |                             |
+        |                 |                  |   ,-----------------.   ,-------------.
+        |                 |                  |-> |SqoopRecordWriter|-->|SqoopWritable|
+      getRecordWriter     |                  |   `--------+--------'   `---+---------'
+  ----------------------->| getRecordWriter  |            |                |
+        |                 |----------------->|            |                |     ,--------------.
+        |                 |                  |---------------------------------->|ConsumerThread|
+        |                 |                  |            |                |     `------+-------'
+        |                 |<- - - - - - - - -|            |                |            |    ,------.
+  <- - - - - - - - - - - -|                  |            |                |            |--->|Loader|
+        |                 |                  |            |                |            |    `--+---'
+        |                 |                  |            |                |            |       |
+        |                 |                  |            |                |            | load  |
+   run  |                 |                  |            |                |            |------>|
+  ----->|                 |     write        |            |                |            |       |
+        |------------------------------------------------>| setContent     |            | read* |
+        |                 |                  |            |--------------->| getContent |<------|
+        |                 |                  |            |                |<-----------|       |
+        |                 |                  |            |                |            | - - ->|
+        |                 |                  |            |                |            |       | write into Data Source
+        |                 |                  |            |                |            |       |----------------------->
+
+More details can be found in `Sqoop MR Execution Engine`_
+
+.. _`Sqoop MR Execution Engine`: https://cwiki.apache.org/confluence/display/SQOOP/Sqoop+MR+Execution+Engine
+
+.. _`Intermediate Data Format representation`: https://cwiki.apache.org/confluence/display/SQOOP/Sqoop2+Intermediate+representation

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/dev/DevEnv.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/dev/DevEnv.rst b/docs/src/site/sphinx/dev/DevEnv.rst
new file mode 100644
index 0000000..3b72e06
--- /dev/null
+++ b/docs/src/site/sphinx/dev/DevEnv.rst
@@ -0,0 +1,57 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+=====================================
+Sqoop 2 Development Environment Setup
+=====================================
+
+This document describes how to set up a development environment for Sqoop 2.
+
+System Requirements
+===================
+
+Java
+----
+
+Sqoop has been developed and tested only with the JDK from `Oracle <http://www.oracle.com/technetwork/java/javase/downloads/index.html>`_ and we require at least version 7 (we're not supporting JDK 1.6 and older releases).
+
+Maven
+-----
+
+Sqoop uses Maven 3 for building the project. Download `Maven <http://maven.apache.org/download.cgi>`_ and follow the installation instructions given in `link <http://maven.apache.org/download.cgi#Maven_Documentation>`_.
+
+Eclipse Setup
+=============
+
+Steps for downloading the source code are given in `Building Sqoop2 <BuildingSqoop2.html>`_.
+
+The Sqoop 2 project has multiple modules where one module may depend on another module; for example, the sqoop 2 client module depends on the sqoop 2 common module. Follow the steps below for creating an eclipse project and classpath for each module.
+
+::
+
+  //Install all package into local maven repository
+  mvn clean install -DskipTests
+
+  //Adding M2_REPO variable to eclipse workspace
+  mvn eclipse:configure-workspace -Declipse.workspace=<path-to-eclipse-workspace-dir-for-sqoop-2>
+
+  //Eclipse project creation with optional parameters
+  mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true
+
+Alternatively, the M2_REPO classpath variable pointing at the maven repository can be added manually in eclipse: Window -> Java -> Classpath Variables -> click "New" -> in the new dialog box, enter Name as M2_REPO and Path as $HOME/.m2/repository -> click Ok.
+
+On successful execution of the above maven commands, import the sqoop project modules into eclipse: File -> Import -> General -> Existing Projects into Workspace -> click Next -> browse to the Sqoop 2 directory ($HOME/git/sqoop2) -> click Ok -> the import dialog shows multiple projects (sqoop-client, sqoop-common, etc.) -> select all modules -> click Finish.
+


[2/8] sqoop git commit: SQOOP-2694: Sqoop2: Doc: Register structure in sphinx for our docs (Jarek Jarcec Cecho via Kate Ting)

Posted by ka...@apache.org.
http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/security/SecurityGuideOnSqoop2.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/security/SecurityGuideOnSqoop2.rst b/docs/src/site/sphinx/security/SecurityGuideOnSqoop2.rst
new file mode 100644
index 0000000..7194d3b
--- /dev/null
+++ b/docs/src/site/sphinx/security/SecurityGuideOnSqoop2.rst
@@ -0,0 +1,239 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+=========================
+Security Guide On Sqoop 2
+=========================
+
+Most Hadoop components, such as HDFS, Yarn, Hive, etc., have security frameworks which support Simple, Kerberos and LDAP authentication. Currently Sqoop 2 provides two types of authentication: simple and kerberos. The authentication module is pluggable, so more authentication types can be added. Additionally, a new role based access control was introduced in Sqoop 1.99.6. We recommend using this capability in multi tenant environments, so that malicious users can't easily abuse your created link and job objects.
+
+Simple Authentication
+=====================
+
+Configuration
+-------------
+Modify Sqoop configuration file, normally in <Sqoop Folder>/conf/sqoop.properties.
+
+::
+
+  org.apache.sqoop.authentication.type=SIMPLE
+  org.apache.sqoop.authentication.handler=org.apache.sqoop.security.authentication.SimpleAuthenticationHandler
+  org.apache.sqoop.anonymous=true
+
+-	Simple authentication is used by default. Commenting out authentication configuration will yield the use of simple authentication.
+
+Run command
+-----------
+Start Sqoop server as usual.
+
+::
+
+  <Sqoop Folder>/bin/sqoop.sh server start
+
+Start Sqoop client as usual.
+
+::
+
+  <Sqoop Folder>/bin/sqoop.sh client
+
+Kerberos Authentication
+=======================
+
+Kerberos is a computer network authentication protocol which works on the basis of 'tickets' to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner. Its designers aimed it primarily at a client–server model and it provides mutual authentication—both the user and the server verify each other's identity. Kerberos protocol messages are protected against eavesdropping and replay attacks.
+
+Dependency
+----------
+Set up a KDC server. Skip this step if a KDC server already exists. It's difficult to cover every way Kerberos can be set up (i.e., there are cross-realm setups and multi-trust environments). This section will describe how to set up the sqoop principals with a local deployment of MIT Kerberos.
+
+-	All components which are Kerberos authenticated need one KDC server. If current Hadoop cluster uses Kerberos authentication, there should be a KDC server.
+-	If there is no KDC server, follow http://web.mit.edu/kerberos/krb5-devel/doc/admin/install_kdc.html to set up one.
+
+Configure Hadoop cluster to use Kerberos authentication.
+
+-	Authentication type should be cluster level. All components must have the same authentication type: use Kerberos or not. In other words, Sqoop with Kerberos authentication could not communicate with other Hadoop components, such as HDFS, Yarn, Hive, etc., without Kerberos authentication, and vice versa.
+-	How to set up a Hadoop cluster with Kerberos authentication is out of the scope of this document. Follow the related links like https://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/SecureMode.html
+
+Create keytab and principal for Sqoop 2 via kadmin in command line.
+
+::
+
+  addprinc -randkey HTTP/<FQDN>@<REALM>
+  addprinc -randkey sqoop/<FQDN>@<REALM>
+  xst -k /home/kerberos/sqoop.keytab HTTP/<FQDN>@<REALM>
+  xst -k /home/kerberos/sqoop.keytab sqoop/<FQDN>@<REALM>
+
+-	The <FQDN> should be replaced by the FQDN of the server, which could be found via “hostname -f” in command line.
+-	The <REALM> should be replaced by the realm name in krb5.conf file generated when installing the KDC server in the former step.
+- The principal HTTP/<FQDN>@<REALM> is used in communication between the Sqoop client and the Sqoop server. Since the Sqoop server is an HTTP server, the HTTP principal is required during the SPNEGO process, and it is case sensitive.
+- HTTP requests can also be sent from other clients, such as a browser, wget or curl, provided they have SPNEGO support.
+-	The principal sqoop/<FQDN>@<REALM> is used in communication between Sqoop server and Hdfs/Yarn as the credential of Sqoop server.
+
+Configuration
+-------------
+Modify Sqoop configuration file, normally in <Sqoop Folder>/conf/sqoop.properties.
+
+::
+
+  org.apache.sqoop.authentication.type=KERBEROS
+  org.apache.sqoop.authentication.handler=org.apache.sqoop.security.authentication.KerberosAuthenticationHandler
+  org.apache.sqoop.authentication.kerberos.principal=sqoop/_HOST@<REALM>
+  org.apache.sqoop.authentication.kerberos.keytab=/home/kerberos/sqoop.keytab
+  org.apache.sqoop.authentication.kerberos.http.principal=HTTP/_HOST@<REALM>
+  org.apache.sqoop.authentication.kerberos.http.keytab=/home/kerberos/sqoop.keytab
+  org.apache.sqoop.authentication.kerberos.proxyuser=true
+
+-	When _HOST is used as FQDN in principal, it will be replaced by the real FQDN. https://issues.apache.org/jira/browse/HADOOP-6632
+- If the proxyuser parameter is set to true, the Sqoop server will use proxy user mode (sqoop delegates the real client user) to run the Yarn job. If false, the Sqoop server will run the Yarn job as the sqoop user.
+
+Run command
+-----------
+Set SQOOP2_HOST to FQDN.
+
+::
+
+  export SQOOP2_HOST=$(hostname -f)
+
+- SQOOP2_HOST should be set to the FQDN of the server, which can be found via “hostname -f” on the command line.
+
+Start Sqoop server using sqoop user.
+
+::
+
+  sudo -u sqoop <Sqoop Folder>/bin/sqoop.sh server start
+
+Run kinit to generate ticket cache.
+
+::
+
+  kinit HTTP/<FQDN>@<REALM> -kt /home/kerberos/sqoop.keytab
+
+Start Sqoop client.
+
+::
+
+  <Sqoop Folder>/bin/sqoop.sh client
+
+Verify
+------
+If the Sqoop server has started successfully with Kerberos authentication, the following line will be in <@LOGDIR>/sqoop.log:
+
+::
+
+  2014-12-04 15:02:58,038 INFO  security.KerberosAuthenticationHandler [org.apache.sqoop.security.authentication.KerberosAuthenticationHandler.secureLogin(KerberosAuthenticationHandler.java:84)] Using Kerberos authentication, principal [sqoop/_HOST@HADOOP.COM] keytab [/home/kerberos/sqoop.keytab]
+
+If the Sqoop client was able to communicate with the Sqoop server, the following will be in <@LOGDIR>/sqoop.log:
+
+::
+
+  Refreshing Kerberos configuration
+  Acquire TGT from Cache
+  Principal is HTTP/<FQDN>@HADOOP.COM
+  null credentials from Ticket Cache
+  principal is HTTP/<FQDN>@HADOOP.COM
+  Will use keytab
+  Commit Succeeded
+
+Customized Authentication
+=========================
+
+Users can create their own authentication modules by performing the following steps:
+
+- Create a customized authentication handler that extends the abstract class AuthenticationHandler.
+- Implement the abstract functions doInitialize and secureLogin of AuthenticationHandler.
+
+::
+
+  public class MyAuthenticationHandler extends AuthenticationHandler {
+
+    private static final Logger LOG = Logger.getLogger(MyAuthenticationHandler.class);
+
+    public void doInitialize() {
+      securityEnabled = true;
+    }
+
+    public void secureLogin() {
+      LOG.info("Using customized authentication.");
+    }
+  }
+
+-	Modify configuration org.apache.sqoop.authentication.handler in <Sqoop Folder>/conf/sqoop.properties and set it to the customized authentication handler class name.
+-	Restart the Sqoop server.
+
+Authorization
+=============
+
+Users, Groups, and Roles
+------------------------
+
+At the core of Sqoop's authorization system are users, groups, and roles. Roles allow administrators to give a name to a set of grants which can be easily reused. A role may be assigned to users, groups, and other roles. For example, consider a system with the following users and groups.
+
+::
+
+  <User>: <Groups>
+  user_all: group1, group2
+  user1: group1
+  user2: group2
+
+Sqoop roles must be created manually before being used, unlike users and groups. Users and groups are managed by the login system (Linux, LDAP or Kerberos). When a user wants to access a resource (connector, link, job), the Sqoop2 server will determine the username of this user and the groups associated with it. That information is then used to determine whether the user should have access to the resource being requested, by comparing the required privileges of the Sqoop operation to the user's privileges using the following rules.
+
+- User privileges (Has the privilege been granted to the user?)
+- Group privileges (Does the user belong to any groups that the privilege has been granted to?)
+- Role privileges (Does the user or any of the groups that the user belongs to have a role that grants the privilege?)
+
+Administrator
+-------------
+
+There is a special user, the administrator, which can't be created or deleted by command. The only way to set the administrator is to modify the configuration file. The administrator can run management commands to create and delete roles. However, the administrator does not implicitly have all privileges. The administrator has to grant a privilege to him/herself if he/she needs to access a resource.
+
+Role management commands
+------------------------
+
+::
+
+  CREATE ROLE --role role_name
+  DROP ROLE --role role_name
+  SHOW ROLE
+
+- Only the administrator has privilege for this.
+
+Principal management commands
+-----------------------------
+
+::
+
+  GRANT ROLE --principal-type principal_type --principal principal_name --role role_name
+  REVOKE ROLE --principal-type principal_type --principal principal_name --role role_name
+  SHOW ROLE --principal-type principal_type --principal principal_name
+  SHOW PRINCIPAL --role role_name
+
+- principal_type: USER | GROUP | ROLE
+
+Privilege management commands
+-----------------------------
+
+::
+
+  GRANT PRIVILEGE --principal-type principal_type --principal principal_name --resource-type resource_type --resource resource_name --action action_name [--with-grant]
+  REVOKE PRIVILEGE --principal-type principal_type --principal principal_name [--resource-type resource_type --resource resource_name --action action_name] [--with-grant]
+  SHOW PRIVILEGE --principal-type principal_type --principal principal_name [--resource-type resource_type --resource resource_name --action action_name]
+
+- principal_type: USER | GROUP | ROLE
+- resource_type: CONNECTOR | LINK | JOB
+- action_type: ALL | READ | WRITE
+- With --with-grant in the GRANT PRIVILEGE command, this principal can grant his/her privilege to other users.
+- Without a resource in the REVOKE PRIVILEGE command, all privileges on this principal will be revoked.
+- With --with-grant in the REVOKE PRIVILEGE command, only the grant privilege on this principal will be removed. The principal still has the privilege to access this resource, but he/she can no longer grant his/her privilege to others.
+- Without a resource in the SHOW PRIVILEGE command, all privileges on this principal will be listed.
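+
+For example, the following sequence creates a role, assigns it to a user and grants the role read access to a single job (the role, user and job names are hypothetical):
+
+::
+
+  CREATE ROLE --role etl_operators
+  GRANT ROLE --principal-type USER --principal user1 --role etl_operators
+  GRANT PRIVILEGE --principal-type ROLE --principal etl_operators --resource-type JOB --resource job1 --action READ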

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/user.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/user.rst b/docs/src/site/sphinx/user.rst
new file mode 100644
index 0000000..b343615
--- /dev/null
+++ b/docs/src/site/sphinx/user.rst
@@ -0,0 +1,24 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+==========
+User Guide
+==========
+
+.. toctree::
+   :glob:
+
+   user/*
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/user/CommandLineClient.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/user/CommandLineClient.rst b/docs/src/site/sphinx/user/CommandLineClient.rst
new file mode 100644
index 0000000..8d52671
--- /dev/null
+++ b/docs/src/site/sphinx/user/CommandLineClient.rst
@@ -0,0 +1,533 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+===================
+Command Line Shell
+===================
+
+Sqoop 2 provides a command line shell that is capable of communicating with the Sqoop 2 server using the REST interface. The client is able to run in two modes - interactive and batch mode. The commands ``create``, ``update`` and ``clone`` are not currently supported in batch mode. Interactive mode supports all available commands.
+
+You can start Sqoop 2 client in interactive mode using command ``sqoop2-shell``::
+
+  sqoop2-shell
+
+Batch mode can be started by adding an additional argument representing the path to your Sqoop client script: ::
+
+  sqoop2-shell /path/to/your/script.sqoop
+
+The Sqoop client script is expected to contain valid Sqoop client commands, empty lines and lines starting with ``#`` denoting comments. Comments and empty lines are ignored; all other lines are interpreted. Example script: ::
+
+  # Specify company server
+  set server --host sqoop2.company.net
+
+  # Executing given job
+  start job  --jid 1
+
+
+.. contents:: Table of Contents
+
+Resource file
+=============
+
+The Sqoop 2 client has the ability to load resource files, similarly to other command line tools. At the beginning of execution the Sqoop client will check for the existence of the file ``.sqoop2rc`` in the home directory of the currently logged-in user. If such a file exists, it will be interpreted before any additional actions. This file is loaded in both interactive and batch mode. It can be used to execute any batch compatible commands.
+
+Example resource file: ::
+
+  # Configure our Sqoop 2 server automatically
+  set server --host sqoop2.company.net
+
+  # Run in verbose mode by default
+  set option --name verbose --value true
+
+Commands
+========
+
+Sqoop 2 contains several commands that will be documented in this section. Each command has one or more functions that accept various arguments. Not all commands are supported in both interactive and batch mode.
+
+Auxiliary Commands
+------------------
+
+Auxiliary commands are commands that improve the user experience and run purely on the client side. Thus they do not need a working connection to the server.
+
+* ``exit`` Exit client immediately. This command can be also executed by sending EOT (end of transmission) character. It's CTRL+D on most common Linux shells like Bash or Zsh.
+* ``history`` Print out command history. Please note that the Sqoop client saves history from previous executions and thus you might see commands that you've executed in previous runs.
+* ``help`` Show all available commands with short in-shell documentation.
+
+::
+
+ sqoop:000> help
+ For information about Sqoop, visit: http://sqoop.apache.org/
+
+ Available commands:
+   exit    (\x  ) Exit the shell
+   history (\H  ) Display, manage and recall edit-line history
+   help    (\h  ) Display this help message
+   set     (\st ) Configure various client options and settings
+   show    (\sh ) Display various objects and configuration options
+   create  (\cr ) Create new object in Sqoop repository
+   delete  (\d  ) Delete existing object in Sqoop repository
+   update  (\up ) Update objects in Sqoop repository
+   clone   (\cl ) Create new object based on existing one
+   start   (\sta) Start job
+   stop    (\stp) Stop job
+   status  (\stu) Display status of a job
+   enable  (\en ) Enable object in Sqoop repository
+   disable (\di ) Disable object in Sqoop repository
+
+Set Command
+-----------
+
+The set command allows you to set various properties of the client. Similarly to the auxiliary commands, set does not require a connection to the Sqoop server. The set command is not used to reconfigure the Sqoop server.
+
+Available functions:
+
++---------------+------------------------------------------+
+| Function      | Description                              |
++===============+==========================================+
+| ``server``    | Set connection configuration for server  |
++---------------+------------------------------------------+
+| ``option``    | Set various client side options          |
++---------------+------------------------------------------+
+
+Set Server Function
+~~~~~~~~~~~~~~~~~~~
+
+Configure the connection to the Sqoop server - host, port and web application name. Available arguments:
+
++-----------------------+---------------+--------------------------------------------------+
+| Argument              | Default value | Description                                      |
++=======================+===============+==================================================+
+| ``-h``, ``--host``    | localhost     | Server name (FQDN) where Sqoop server is running |
++-----------------------+---------------+--------------------------------------------------+
+| ``-p``, ``--port``    | 12000         | TCP Port                                         |
++-----------------------+---------------+--------------------------------------------------+
+| ``-w``, ``--webapp``  | sqoop         | Jetty's web application name                     |
++-----------------------+---------------+--------------------------------------------------+
+| ``-u``, ``--url``     |               | Sqoop Server in url format                       |
++-----------------------+---------------+--------------------------------------------------+
+
+Example: ::
+
+  set server --host sqoop2.company.net --port 80 --webapp sqoop
+
+or ::
+
+  set server --url http://sqoop2.company.net:80/sqoop
+
+Note: When the ``--url`` option is given, the ``--host``, ``--port`` and ``--webapp`` options will be ignored.
+
+Set Option Function
+~~~~~~~~~~~~~~~~~~~
+
+Configure Sqoop client related options. This function has two required arguments, ``name`` and ``value``. Name represents the internal property name and value holds the new value that should be set. The list of available option names follows:
+
++-------------------+---------------+---------------------------------------------------------------------+
+| Option name       | Default value | Description                                                         |
++===================+===============+=====================================================================+
+| ``verbose``       | false         | Client will print additional information if verbose mode is enabled |
++-------------------+---------------+---------------------------------------------------------------------+
+| ``poll-timeout``  | 10000         | Server poll timeout in milliseconds                                 |
++-------------------+---------------+---------------------------------------------------------------------+
+
+Example: ::
+
+  set option --name verbose --value true
+  set option --name poll-timeout --value 20000
+
+Show Command
+------------
+
+The show command displays various information as described below.
+
+Available functions:
+
++----------------+--------------------------------------------------------------------------------------------------------+
+| Function       | Description                                                                                            |
++================+========================================================================================================+
+| ``server``     | Display connection information to the sqoop server (host, port, webapp)                                |
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``option``     | Display various client side options                                                                    |
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``version``    | Show client build version, with an option -all it shows server build version and supported api versions|
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``connector``  | Show connector configurable and its related configs                                                    |
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``driver``     | Show driver configurable and its related configs                                                       |
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``link``       | Show links in sqoop                                                                                    |
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``job``        | Show jobs in sqoop                                                                                     |
++----------------+--------------------------------------------------------------------------------------------------------+
+
+Show Server Function
+~~~~~~~~~~~~~~~~~~~~
+
+Show details about connection to Sqoop server.
+
++-----------------------+--------------------------------------------------------------+
+| Argument              |  Description                                                 |
++=======================+==============================================================+
+| ``-a``, ``--all``     | Show all connection related information (host, port, webapp) |
++-----------------------+--------------------------------------------------------------+
+| ``-h``, ``--host``    | Show host                                                    |
++-----------------------+--------------------------------------------------------------+
+| ``-p``, ``--port``    | Show port                                                    |
++-----------------------+--------------------------------------------------------------+
+| ``-w``, ``--webapp``  | Show web application name                                    |
++-----------------------+--------------------------------------------------------------+
+
+Example: ::
+
+  show server --all
+
+Show Option Function
+~~~~~~~~~~~~~~~~~~~~
+
+Show values of various client side options. This function will show all client options when called without arguments.
+
++-----------------------+--------------------------------------------------------------+
+| Argument              |  Description                                                 |
++=======================+==============================================================+
+| ``-n``, ``--name``    | Show client option value with given name                     |
++-----------------------+--------------------------------------------------------------+
+
+Please check table in `Set Option Function`_ section to get a list of all supported option names.
+
+Example: ::
+
+  show option --name verbose
+
+Show Version Function
+~~~~~~~~~~~~~~~~~~~~~
+
+Show build versions of both client and server as well as the supported rest api versions.
+
++------------------------+-----------------------------------------------+
+| Argument               |  Description                                  |
++========================+===============================================+
+| ``-a``, ``--all``      | Show all versions (server, client, api)       |
++------------------------+-----------------------------------------------+
+| ``-c``, ``--client``   | Show client build version                     |
++------------------------+-----------------------------------------------+
+| ``-s``, ``--server``   | Show server build version                     |
++------------------------+-----------------------------------------------+
+| ``-p``, ``--api``      | Show supported api versions                   |
++------------------------+-----------------------------------------------+
+
+Example: ::
+
+  show version --all
+
+Show Connector Function
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Show persisted connector configurable and its related configs used in creating associated link and job objects
+
++-----------------------+------------------------------------------------+
+| Argument              |  Description                                   |
++=======================+================================================+
+| ``-a``, ``--all``     | Show information for all connectors            |
++-----------------------+------------------------------------------------+
+| ``-c``, ``--cid <x>`` | Show information for connector with id ``<x>`` |
++-----------------------+------------------------------------------------+
+
+Example: ::
+
+  show connector --all or show connector
+
+Show Driver Function
+~~~~~~~~~~~~~~~~~~~~
+
+Show persisted driver configurable and its related configs used in creating job objects
+
+This function does not have any extra arguments. There is only one registered driver in sqoop.
+
+Example: ::
+
+  show driver
+
+Show Link Function
+~~~~~~~~~~~~~~~~~~
+
+Show persisted link objects.
+
++-----------------------+------------------------------------------------------+
+| Argument              |  Description                                         |
++=======================+======================================================+
+| ``-a``, ``--all``     | Show all available links                             |
++-----------------------+------------------------------------------------------+
+| ``-x``, ``--lid <x>`` | Show link with id ``<x>``                            |
++-----------------------+------------------------------------------------------+
+
+Example: ::
+
+  show link --all or show link
+
+Show Job Function
+~~~~~~~~~~~~~~~~~
+
+Show persisted job objects.
+
++-----------------------+----------------------------------------------+
+| Argument              |  Description                                 |
++=======================+==============================================+
+| ``-a``, ``--all``     | Show all available jobs                      |
++-----------------------+----------------------------------------------+
+| ``-j``, ``--jid <x>`` | Show job with id ``<x>``                     |
++-----------------------+----------------------------------------------+
+
+Example: ::
+
+  show job --all or show job
+
+Show Submission Function
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Show persisted job submission objects.
+
++-----------------------+---------------------------------------------+
+| Argument              |  Description                                |
++=======================+=============================================+
+| ``-j``, ``--jid <x>`` | Show available submissions for given job    |
++-----------------------+---------------------------------------------+
+| ``-d``, ``--detail``  | Show job submissions in full details        |
++-----------------------+---------------------------------------------+
+
+Example: ::
+
+  show submission
+  show submission --jid 1
+  show submission --jid 1 --detail
+
+Create Command
+--------------
+
+Creates new link and job objects. This command is supported only in interactive mode. It will ask the user to enter the link config when creating a link object, and the from/to and driver job configs when creating a job object.
+
+Available functions:
+
++----------------+-------------------------------------------------+
+| Function       | Description                                     |
++================+=================================================+
+| ``link``       | Create new link object                          |
++----------------+-------------------------------------------------+
+| ``job``        | Create new job object                           |
++----------------+-------------------------------------------------+
+
+Create Link Function
+~~~~~~~~~~~~~~~~~~~~
+
+Create new link object.
+
++------------------------+-------------------------------------------------------------+
+| Argument               |  Description                                                |
++========================+=============================================================+
+| ``-c``, ``--cid <x>``  |  Create new link object for connector with id ``<x>``       |
++------------------------+-------------------------------------------------------------+
+
+
+Example: ::
+
+  create link --cid 1 or create link -c 1
+
+Create Job Function
+~~~~~~~~~~~~~~~~~~~
+
+Create new job object.
+
++------------------------+------------------------------------------------------------------+
+| Argument               |  Description                                                     |
++========================+==================================================================+
+| ``-f``, ``--from <x>`` | Create new job object with a FROM link with id ``<x>``           |
++------------------------+------------------------------------------------------------------+
+| ``-t``, ``--to <x>``   | Create new job object with a TO link with id ``<x>``             |
++------------------------+------------------------------------------------------------------+
+
+Example: ::
+
+  create job --from 1 --to 2 or create job -f 1 -t 2
+
+Update Command
+--------------
+
+The update command allows you to edit link and job objects. This command is supported only in interactive mode.
+
+Update Link Function
+~~~~~~~~~~~~~~~~~~~~
+
+Update existing link object.
+
++-----------------------+---------------------------------------------+
+| Argument              |  Description                                |
++=======================+=============================================+
+| ``-x``, ``--lid <x>`` |  Update existing link with id ``<x>``       |
++-----------------------+---------------------------------------------+
+
+Example: ::
+
+  update link --lid 1
+
+Update Job Function
+~~~~~~~~~~~~~~~~~~~
+
+Update existing job object.
+
++-----------------------+--------------------------------------------+
+| Argument              |  Description                               |
++=======================+============================================+
+| ``-j``, ``--jid <x>`` | Update existing job object with id ``<x>`` |
++-----------------------+--------------------------------------------+
+
+Example: ::
+
+  update job --jid 1
+
+
+Delete Command
+--------------
+
+Deletes link and job objects from Sqoop server.
+
+Delete Link Function
+~~~~~~~~~~~~~~~~~~~~
+
+Delete existing link object.
+
++-----------------------+-------------------------------------------+
+| Argument              |  Description                              |
++=======================+===========================================+
+| ``-x``, ``--lid <x>`` |  Delete link object with id ``<x>``       |
++-----------------------+-------------------------------------------+
+
+Example: ::
+
+  delete link --lid 1
+
+
+Delete Job Function
+~~~~~~~~~~~~~~~~~~~
+
+Delete existing job object.
+
++-----------------------+------------------------------------------+
+| Argument              |  Description                             |
++=======================+==========================================+
+| ``-j``, ``--jid <x>`` | Delete job object with id ``<x>``        |
++-----------------------+------------------------------------------+
+
+Example: ::
+
+  delete job --jid 1
+
+
+Clone Command
+-------------
+
+The clone command will load an existing link or job object from the Sqoop server and allow the user to make in-place updates that will result in the creation of a new link or job object. This command is not supported in batch mode.
+
+Clone Link Function
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Clone existing link object.
+
++-----------------------+------------------------------------------+
+| Argument              |  Description                             |
++=======================+==========================================+
+| ``-x``, ``--lid <x>`` |  Clone link object with id ``<x>``       |
++-----------------------+------------------------------------------+
+
+Example: ::
+
+  clone link --lid 1
+
+
+Clone Job Function
+~~~~~~~~~~~~~~~~~~
+
+Clone existing job object.
+
++-----------------------+------------------------------------------+
+| Argument              |  Description                             |
++=======================+==========================================+
+| ``-j``, ``--jid <x>`` | Clone job object with id ``<x>``         |
++-----------------------+------------------------------------------+
+
+Example: ::
+
+  clone job --jid 1
+
+Start Command
+-------------
+
+The start command will begin execution of an existing Sqoop job.
+
+Start Job Function
+~~~~~~~~~~~~~~~~~~
+
+Start a job (submit a new submission). Starting an already running job is considered an invalid operation.
+
++----------------------------+----------------------------+
+| Argument                   |  Description               |
++============================+============================+
+| ``-j``, ``--jid <x>``      | Start job with id ``<x>``  |
++----------------------------+----------------------------+
+| ``-s``, ``--synchronous``  | Synchronous job execution  |
++----------------------------+----------------------------+
+
+Example: ::
+
+  start job --jid 1
+  start job --jid 1 --synchronous
+
+Stop Command
+------------
+
+The stop command will interrupt a job execution.
+
+Stop Job Function
+~~~~~~~~~~~~~~~~~
+
+Interrupt running job.
+
++-----------------------+------------------------------------------+
+| Argument              |  Description                             |
++=======================+==========================================+
+| ``-j``, ``--jid <x>`` | Interrupt running job with id ``<x>``    |
++-----------------------+------------------------------------------+
+
+Example: ::
+
+  stop job --jid 1
+
+Status Command
+--------------
+
+The status command will retrieve the last status of a job.
+
+Status Job Function
+~~~~~~~~~~~~~~~~~~~
+
+Retrieve last status for given job.
+
++-----------------------+------------------------------------------+
+| Argument              |  Description                             |
++=======================+==========================================+
+| ``-j``, ``--jid <x>`` | Retrieve status for job with id ``<x>``  |
++-----------------------+------------------------------------------+
+
+Example: ::
+
+  status job --jid 1
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/user/Connectors.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/user/Connectors.rst b/docs/src/site/sphinx/user/Connectors.rst
new file mode 100644
index 0000000..f44a308
--- /dev/null
+++ b/docs/src/site/sphinx/user/Connectors.rst
@@ -0,0 +1,24 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+==========
+Connectors
+==========
+
+.. toctree::
+   :glob:
+
+   connectors/*

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/user/Sqoop5MinutesDemo.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/user/Sqoop5MinutesDemo.rst b/docs/src/site/sphinx/user/Sqoop5MinutesDemo.rst
new file mode 100644
index 0000000..19115a2
--- /dev/null
+++ b/docs/src/site/sphinx/user/Sqoop5MinutesDemo.rst
@@ -0,0 +1,242 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+====================
+Sqoop 5 Minutes Demo
+====================
+
+This page will walk you through the basic usage of Sqoop. You need to have the Sqoop server and client installed and configured in order to follow this guide. The installation procedure is described on the `Installation page <Installation.html>`_. Please note that the exact output shown on this page might differ from yours as Sqoop evolves; all major information should, however, remain the same.
+
+Sqoop uses unique names or persistent ids to identify connectors, links, jobs and configs. We support querying an entity by its unique name or by its persistent database id.
+
+Starting Client
+===============
+
+Start the client in interactive mode using the following command: ::
+
+  sqoop2-shell
+
+Configure client to use your Sqoop server: ::
+
+  sqoop:000> set server --host your.host.com --port 12000 --webapp sqoop
+
+Verify that the connection is working with a simple version check: ::
+
+  sqoop:000> show version --all
+  client version:
+    Sqoop 2.0.0-SNAPSHOT source revision 418c5f637c3f09b94ea7fc3b0a4610831373a25f
+    Compiled by vbasavaraj on Mon Nov  3 08:18:21 PST 2014
+  server version:
+    Sqoop 2.0.0-SNAPSHOT source revision 418c5f637c3f09b94ea7fc3b0a4610831373a25f
+    Compiled by vbasavaraj on Mon Nov  3 08:18:21 PST 2014
+  API versions:
+    [v1]
+
+You should receive output similar to that shown above, describing the Sqoop client build version, the server build version, and the supported versions of the REST API.
+
+You can use the ``help`` command to list all the commands supported by the Sqoop shell.
+::
+
+  sqoop:000> help
+  For information about Sqoop, visit: http://sqoop.apache.org/
+
+  Available commands:
+    exit    (\x  ) Exit the shell
+    history (\H  ) Display, manage and recall edit-line history
+    help    (\h  ) Display this help message
+    set     (\st ) Configure various client options and settings
+    show    (\sh ) Display various objects and configuration options
+    create  (\cr ) Create new object in Sqoop repository
+    delete  (\d  ) Delete existing object in Sqoop repository
+    update  (\up ) Update objects in Sqoop repository
+    clone   (\cl ) Create new object based on existing one
+    start   (\sta) Start job
+    stop    (\stp) Stop job
+    status  (\stu) Display status of a job
+    enable  (\en ) Enable object in Sqoop repository
+    disable (\di ) Disable object in Sqoop repository
+
+
+Creating Link Object
+==========================
+
+Check for the registered connectors on your Sqoop server: ::
+
+  sqoop:000> show connector
+  +----+------------------------+----------------+------------------------------------------------------+----------------------+
+  | Id |          Name          |    Version     |                        Class                         | Supported Directions |
+  +----+------------------------+----------------+------------------------------------------------------+----------------------+
+  | 1  | hdfs-connector         | 2.0.0-SNAPSHOT | org.apache.sqoop.connector.hdfs.HdfsConnector        | FROM/TO              |
+  | 2  | generic-jdbc-connector | 2.0.0-SNAPSHOT | org.apache.sqoop.connector.jdbc.GenericJdbcConnector | FROM/TO              |
+  +----+------------------------+----------------+------------------------------------------------------+----------------------+
+
+Our example contains two connectors. The one with connector Id 2 is called the ``generic-jdbc-connector``. This is a basic connector relying on the Java JDBC interface for communicating with data sources. It should work with most common databases that provide JDBC drivers. Please note that you must install JDBC drivers separately; they are not bundled with Sqoop due to incompatible licenses.
+
+The Generic JDBC Connector in our example has persistent id 2, and we will use this value to create a new link object for this connector. Note that the link name should be unique.
+::
+
+  sqoop:000> create link -c 2
+  Creating link for connector with id 2
+  Please fill following values to create new link object
+  Name: First Link
+
+  Link configuration
+  JDBC Driver Class: com.mysql.jdbc.Driver
+  JDBC Connection String: jdbc:mysql://mysql.server/database
+  Username: sqoop
+  Password: *****
+  JDBC Connection Properties:
+  There are currently 0 values in the map:
+  entry#protocol=tcp
+  New link was successfully created with validation status OK and persistent id 1
+
+Our new link object was created with assigned id 1.
+
+In the ``show connector -all`` output we see that there is an hdfs-connector registered in Sqoop with persistent id 1. Let us create another link object, but this time for the hdfs-connector instead.
+
+::
+
+  sqoop:000> create link -c 1
+  Creating link for connector with id 1
+  Please fill following values to create new link object
+  Name: Second Link
+
+  Link configuration
+  HDFS URI: hdfs://nameservice1:8020/
+  New link was successfully created with validation status OK and persistent id 2
+
+Creating Job Object
+===================
+
+Connectors implement the ``From`` direction for reading data from a data source and/or the ``To`` direction for writing data to it. The Generic JDBC Connector supports both of them. The list of supported directions for each connector can be seen in the output of the ``show connector -all`` command above. In order to create a job we need to specify the ``From`` and ``To`` parts of the job, uniquely identified by their link ids. We already have 2 links created in the system; you can verify the same with the following command
+
+::
+
+  sqoop:000> show link --all
+  2 link(s) to show:
+  link with id 1 and name First Link (Enabled: true, Created by root at 11/4/14 4:27 PM, Updated by root at 11/4/14 4:27 PM)
+  Using Connector id 2
+    Link configuration
+      JDBC Driver Class: com.mysql.jdbc.Driver
+      JDBC Connection String: jdbc:mysql://mysql.ent.cloudera.com/sqoop
+      Username: sqoop
+      Password:
+      JDBC Connection Properties:
+        protocol = tcp
+  link with id 2 and name Second Link (Enabled: true, Created by root at 11/4/14 4:38 PM, Updated by root at 11/4/14 4:38 PM)
+  Using Connector id 1
+    Link configuration
+      HDFS URI: hdfs://nameservice1:8020/
+
+Next, we can use the two link Ids to associate the ``From`` and ``To`` for the job.
+::
+
+   sqoop:000> create job -f 1 -t 2
+   Creating job for links with from id 1 and to id 2
+   Please fill following values to create new job object
+   Name: Sqoopy
+
+   FromJob configuration
+
+    Schema name:(Required)sqoop
+    Table name:(Required)sqoop
+    Table SQL statement:(Optional)
+    Table column names:(Optional)
+    Partition column name:(Optional) id
+    Null value allowed for the partition column:(Optional)
+    Boundary query:(Optional)
+
+  ToJob configuration
+
+    Output format:
+     0 : TEXT_FILE
+     1 : SEQUENCE_FILE
+    Choose: 0
+    Compression format:
+     0 : NONE
+     1 : DEFAULT
+     2 : DEFLATE
+     3 : GZIP
+     4 : BZIP2
+     5 : LZO
+     6 : LZ4
+     7 : SNAPPY
+     8 : CUSTOM
+    Choose: 0
+    Custom compression format:(Optional)
+    Output directory:(Required)/root/projects/sqoop
+
+    Driver Config
+    Extractors:(Optional) 2
+    Loaders:(Optional) 2
+    New job was successfully created with validation status OK  and persistent id 1
+
+Our new job object was created with assigned id 1. Note that if a null value is allowed for the partition column,
+at least 2 extractors are needed for Sqoop to carry out the data transfer. If only 1 extractor is specified in this
+scenario, Sqoop will ignore the setting and continue with 2 extractors.
+
+Start Job (a.k.a. Data Transfer)
+=================================
+
+You can start a sqoop job with the following command:
+::
+
+  sqoop:000> start job -j 1
+  Submission details
+  Job ID: 1
+  Server URL: http://localhost:12000/sqoop/
+  Created by: root
+  Creation date: 2014-11-04 19:43:29 PST
+  Lastly updated by: root
+  External ID: job_1412137947693_0001
+    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
+  2014-11-04 19:43:29 PST: BOOTING  - Progress is not available
+
+You can iteratively check your running job status with the ``status job`` command:
+
+::
+
+  sqoop:000> status job -j 1
+  Submission details
+  Job ID: 1
+  Server URL: http://localhost:12000/sqoop/
+  Created by: root
+  Creation date: 2014-11-04 19:43:29 PST
+  Lastly updated by: root
+  External ID: job_1412137947693_0001
+    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
+  2014-11-04 20:09:16 PST: RUNNING  - 0.00 % 
+
+Alternatively, you can start a Sqoop job and observe its running status with the following command:
+
+::
+
+  sqoop:000> start job -j 1 -s
+  Submission details
+  Job ID: 1
+  Server URL: http://localhost:12000/sqoop/
+  Created by: root
+  Creation date: 2014-11-04 19:43:29 PST
+  Lastly updated by: root
+  External ID: job_1412137947693_0001
+    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
+  2014-11-04 19:43:29 PST: BOOTING  - Progress is not available
+  2014-11-04 19:43:39 PST: RUNNING  - 0.00 %
+  2014-11-04 19:43:49 PST: RUNNING  - 10.00 %
+
+Finally, you can stop the running job at any time using the ``stop job`` command: ::
+
+  sqoop:000> stop job -j 1
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/user/connectors/Connector-FTP.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/user/connectors/Connector-FTP.rst b/docs/src/site/sphinx/user/connectors/Connector-FTP.rst
new file mode 100644
index 0000000..cc10d68
--- /dev/null
+++ b/docs/src/site/sphinx/user/connectors/Connector-FTP.rst
@@ -0,0 +1,81 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+==================
+FTP Connector
+==================
+
+The FTP connector supports moving data between an FTP server and other supported Sqoop2 connectors.
+
+Currently only the TO direction is supported to write records to an FTP server. A FROM connector is pending (SQOOP-2127).
+
+.. contents::
+   :depth: 3
+
+-----
+Usage
+-----
+
+To use the FTP Connector, create a link for the connector and a job that uses the link.
+
+**Link Configuration**
+++++++++++++++++++++++
+
+Inputs associated with the link configuration include:
+
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| Input                       | Type    | Description                                                           | Example                    |
++=============================+=========+=======================================================================+============================+
+| FTP server hostname         | String  | Hostname for the FTP server.                                          | ftp.example.com            |
+|                             |         | *Required*.                                                           |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| FTP server port             | Integer | Port number for the FTP server. Defaults to 21.                       | 2100                       |
+|                             |         | *Optional*.                                                           |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| Username                    | String  | The username to provide when connecting to the FTP server.            | sqoop                      |
+|                             |         | *Required*.                                                           |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| Password                    | String  | The password to provide when connecting to the FTP server.            | sqoop                      |
+|                             |         | *Required*                                                            |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+
+**Notes**
+=========
+
+1. The FTP connector will attempt to connect to the FTP server as part of the link validation process. If for some reason a connection cannot be established, you'll see a corresponding warning message.
+
+**TO Job Configuration**
+++++++++++++++++++++++++
+
+Inputs associated with the Job configuration for the TO direction include:
+
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+| Input                       | Type    | Description                                                             | Example                           |
++=============================+=========+=========================================================================+===================================+
+| Output directory            | String  | The location on the FTP server that the connector will write files to.  | uploads                           |
+|                             |         | *Required*                                                              |                                   |
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+
+**Notes**
+=========
+
+1. The *output directory* value needs to be an existing directory on the FTP server.
+
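+As an illustration, creating an FTP link and a job that writes to it from the shell might look roughly like the sketch below. The connector and link ids, the job's FROM side, and the prompt wording are assumptions for this example rather than captured output: ::
+
+  sqoop:000> create link -c <ftp connector id>
+  Name: ftp-link
+  FTP server hostname: ftp.example.com
+  FTP server port: 21
+  Username: sqoop
+  Password: *****
+
+  sqoop:000> create job -f <from link id> -t <ftp link id>
+  Name: to-ftp
+  Output directory: uploads
+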
+------
+Loader
+------
+
+During the *loading* phase, the connector will create uniquely named files in the *output directory* for each partition of data received from the **FROM** connector.

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/user/connectors/Connector-GenericJDBC.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/user/connectors/Connector-GenericJDBC.rst b/docs/src/site/sphinx/user/connectors/Connector-GenericJDBC.rst
new file mode 100644
index 0000000..347547d
--- /dev/null
+++ b/docs/src/site/sphinx/user/connectors/Connector-GenericJDBC.rst
@@ -0,0 +1,194 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+======================
+Generic JDBC Connector
+======================
+
+The Generic JDBC Connector can connect to any data source that adheres to the **JDBC 4** specification.
+
+.. contents::
+   :depth: 3
+
+-----
+Usage
+-----
+
+To use the Generic JDBC Connector, create a link for the connector and a job that uses the link.
+
+**Link Configuration**
+++++++++++++++++++++++
+
+Inputs associated with the link configuration include:
+
++-----------------------------+---------+-----------------------------------------------------------------------+------------------------------------------+
+| Input                       | Type    | Description                                                           | Example                                  |
++=============================+=========+=======================================================================+==========================================+
+| JDBC Driver Class           | String  | The full class name of the JDBC driver.                               | com.mysql.jdbc.Driver                    |
+|                             |         | *Required* and accessible by the Sqoop server.                        |                                          |
++-----------------------------+---------+-----------------------------------------------------------------------+------------------------------------------+
+| JDBC Connection String      | String  | The JDBC connection string to use when connecting to the data source. | jdbc:mysql://localhost/test              |
+|                             |         | *Required*. Connectivity upon creation is optional.                   |                                          |
++-----------------------------+---------+-----------------------------------------------------------------------+------------------------------------------+
+| Username                    | String  | The username to provide when connecting to the data source.           | sqoop                                    |
+|                             |         | *Optional*. Connectivity upon creation is optional.                   |                                          |
++-----------------------------+---------+-----------------------------------------------------------------------+------------------------------------------+
+| Password                    | String  | The password to provide when connecting to the data source.           | sqoop                                    |
+|                             |         | *Optional*. Connectivity upon creation is optional.                   |                                          |
++-----------------------------+---------+-----------------------------------------------------------------------+------------------------------------------+
+| JDBC Connection Properties  | Map     | A map of JDBC connection properties to pass to the JDBC driver        | profileSQL=true&useFastDateParsing=false |
+|                             |         | *Optional*.                                                           |                                          |
++-----------------------------+---------+-----------------------------------------------------------------------+------------------------------------------+
+
+**FROM Job Configuration**
+++++++++++++++++++++++++++
+
+Inputs associated with the Job configuration for the FROM direction include:
+
++-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
+| Input                       | Type    | Description                                                             | Example                                     |
++=============================+=========+=========================================================================+=============================================+
+| Schema name                 | String  | The schema name the table is part of.                                   | sqoop                                       |
+|                             |         | *Optional*                                                              |                                             |
++-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
+| Table name                  | String  | The table name to import data from.                                     | test                                        |
+|                             |         | *Optional*. See note below.                                             |                                             |
++-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
+| Table SQL statement         | String  | The SQL statement used to perform a **free form query**.                | ``SELECT COUNT(*) FROM test ${CONDITIONS}`` |
+|                             |         | *Optional*. See notes below.                                            |                                             |
++-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
+| Table column names          | String  | Columns to extract from the JDBC data source.                           | col1,col2                                   |
+|                             |         | *Optional* Comma separated list of columns.                             |                                             |
++-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
+| Partition column name       | String  | The column name used to partition the data transfer process.            | col1                                        |
+|                             |         | *Optional*.  Defaults to table's first column of primary key.           |                                             |
++-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
+| Null value allowed for      | Boolean | True or false depending on whether NULL values are allowed in data      | true                                        |
+| the partition column        |         | of the Partition column. *Optional*.                                    |                                             |
++-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
+| Boundary query              | String  | The query used to define an upper and lower boundary when partitioning. |                                             |
+|                             |         | *Optional*.                                                             |                                             |
++-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
+
+**Notes**
+=========
+
+1. *Table name* and *Table SQL statement* are mutually exclusive. If *Table name* is provided, the *Table SQL statement* should not be provided. If *Table SQL statement* is provided then *Table name* should not be provided.
+2. *Table column names* should be provided only if *Table name* is provided.
+3. If there are columns with similar names, column aliases are required. For example: ``SELECT table1.id as "i", table2.id as "j" FROM table1 INNER JOIN table2 ON table1.id = table2.id``.
+
+**TO Job Configuration**
+++++++++++++++++++++++++
+
+Inputs associated with the Job configuration for the TO direction include:
+
++-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
+| Input                       | Type    | Description                                                             | Example                                         |
++=============================+=========+=========================================================================+=================================================+
+| Schema name                 | String  | The schema name the table is part of.                                   | sqoop                                           |
+|                             |         | *Optional*                                                              |                                                 |
++-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
+| Table name                  | String  | The table name to export data into.                                     | test                                            |
+|                             |         | *Optional*. See note below.                                             |                                                 |
++-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
+| Table SQL statement         | String  | The SQL statement used to perform a **free form query**.                | ``INSERT INTO test (col1, col2) VALUES (?, ?)`` |
+|                             |         | *Optional*. See note below.                                             |                                                 |
++-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
+| Table column names          | String  | Columns to insert into the JDBC data source.                            | col1,col2                                       |
+|                             |         | *Optional* Comma separated list of columns.                             |                                                 |
++-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
+| Stage table name            | String  | The name of the table used as a *staging table*.                        | staging                                         |
+|                             |         | *Optional*.                                                             |                                                 |
++-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
+| Should clear stage table    | Boolean | True or false depending on whether the staging table should be cleared  | true                                            |
+|                             |         | after the data transfer has finished. *Optional*.                       |                                                 |
++-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
+
+**Notes**
+=========
+
+1. *Table name* and *Table SQL statement* are mutually exclusive. If *Table name* is provided, the *Table SQL statement* should not be provided. If *Table SQL statement* is provided then *Table name* should not be provided.
+2. *Table column names* should be provided only if *Table name* is provided.
+
+-----------
+Partitioner
+-----------
+
+The Generic JDBC Connector partitioner generates conditions to be used by the extractor.
+How it partitions the data transfer varies based on the partition column data type,
+but each strategy roughly takes the following form:
+::
+
+  (upper boundary - lower boundary) / (max partitions)
+
+By default, the *primary key* will be used to partition the data unless otherwise specified.
+
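+For example, assuming an integer partition column ``id`` with values ranging from 0 to 1000 and a job configured with 4 partitions, each partition covers roughly (1000 - 0) / 4 = 250 values, giving ranges along these lines (a sketch of the idea rather than the exact conditions emitted by the connector): ::
+
+  id >= 0    AND id < 250
+  id >= 250  AND id < 500
+  id >= 500  AND id < 750
+  id >= 750  AND id <= 1000
+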
+The following data types are currently supported:
+
+1. TINYINT
+2. SMALLINT
+3. INTEGER
+4. BIGINT
+5. REAL
+6. FLOAT
+7. DOUBLE
+8. NUMERIC
+9. DECIMAL
+10. BIT
+11. BOOLEAN
+12. DATE
+13. TIME
+14. TIMESTAMP
+15. CHAR
+16. VARCHAR
+17. LONGVARCHAR
+
+---------
+Extractor
+---------
+
+During the *extraction* phase, the JDBC data source is queried using SQL. This SQL will vary based on your configuration.
+
+- If *Table name* is provided, then the SQL statement generated will take on the form ``SELECT * FROM <table name>``.
+- If *Table name* and *Columns* are provided, then the SQL statement generated will take on the form ``SELECT <columns> FROM <table name>``.
+- If *Table SQL statement* is provided, then the provided SQL statement will be used.
+
+The conditions generated by the *partitioner* are appended to the end of the SQL query to query a section of data.
+
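+For instance, combining a generated ``SELECT`` statement with one of the partition conditions sketched above could produce a query along these lines (illustrative only; the exact text is generated by the connector): ::
+
+  SELECT col1, col2 FROM test WHERE id >= 0 AND id < 250
+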
+The Generic JDBC connector extracts CSV data usable by the *CSV Intermediate Data Format*.
+
+------
+Loader
+------
+
+During the *loading* phase, data is written to the JDBC data source using SQL. This SQL will vary based on your configuration.
+
+- If *Table name* is provided, then the SQL statement generated will take on the form ``INSERT INTO <table name> (col1, col2, ...) VALUES (?,?,..)``.
+- If *Table name* and *Columns* are provided, then the SQL statement generated will take on the form ``INSERT INTO <table name> (<columns>) VALUES (?,?,..)``.
+- If *Table SQL statement* is provided, then the provided SQL statement will be used.
+
+This connector expects to receive CSV data consumable by the *CSV Intermediate Data Format*.
+
+----------
+Destroyers
+----------
+
+The Generic JDBC Connector performs two operations in the destroyer in the TO direction:
+
+1. Copy the contents of the staging table to the desired table.
+2. Clear the staging table.
+
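+Conceptually, these two operations resemble the following SQL (an illustrative sketch; the actual statements are generated by the connector from the configured table and stage table names): ::
+
+  INSERT INTO <table name> SELECT * FROM <stage table name>
+  DELETE FROM <stage table name>
+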
+No operations are performed in the FROM direction.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/user/connectors/Connector-HDFS.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/user/connectors/Connector-HDFS.rst b/docs/src/site/sphinx/user/connectors/Connector-HDFS.rst
new file mode 100644
index 0000000..c44b1b6
--- /dev/null
+++ b/docs/src/site/sphinx/user/connectors/Connector-HDFS.rst
@@ -0,0 +1,159 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+==============
+HDFS Connector
+==============
+
+.. contents::
+   :depth: 3
+
+-----
+Usage
+-----
+
+To use the HDFS Connector, create a link for the connector and a job that uses the link.
+
+**Link Configuration**
+++++++++++++++++++++++
+
+Inputs associated with the link configuration include:
+
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| Input                       | Type    | Description                                                           | Example                    |
++=============================+=========+=======================================================================+============================+
+| URI                         | String  | The URI of the HDFS File System.                                      | hdfs://example.com:8020/   |
+|                             |         | *Optional*. See note below.                                           |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| Configuration directory     | String  | Path to the clusters configuration directory.                         | /etc/conf/hadoop           |
+|                             |         | *Optional*.                                                           |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+
+**Notes**
+=========
+
+1. The specified URI will override the declared URI in your configuration.
+
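+As an illustration, a link for this connector might be created from the shell roughly as follows (the connector id and prompt wording are assumptions for this example, not captured output): ::
+
+  sqoop:000> create link -c <hdfs connector id>
+  Name: hdfs-link
+  URI: hdfs://example.com:8020/
+  Configuration directory: /etc/conf/hadoop
+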
+**FROM Job Configuration**
+++++++++++++++++++++++++++
+
+Inputs associated with the Job configuration for the FROM direction include:
+
++-----------------------------+---------+-------------------------------------------------------------------------+------------------+
+| Input                       | Type    | Description                                                             | Example          |
++=============================+=========+=========================================================================+==================+
+| Input directory             | String  | The location in HDFS that the connector should look for files in.       | /tmp/sqoop2/hdfs |
+|                             |         | *Required*. See note below.                                             |                  |
++-----------------------------+---------+-------------------------------------------------------------------------+------------------+
+| Null value                  | String  | The value of NULL in the contents of each file extracted.               | \N               |
+|                             |         | *Optional*. See note below.                                             |                  |
++-----------------------------+---------+-------------------------------------------------------------------------+------------------+
+| Override null value         | Boolean | Tells the connector to replace the specified NULL value.                | true             |
+|                             |         | *Optional*. See note below.                                             |                  |
++-----------------------------+---------+-------------------------------------------------------------------------+------------------+
+
+**Notes**
+=========
+
+1. All files in *Input directory* will be extracted.
+2. *Null value* and *override null value* should be used in conjunction. If *override null value* is not set to true, then *null value* will not be used when extracting data.
+
+**TO Job Configuration**
+++++++++++++++++++++++++
+
+Inputs associated with the Job configuration for the TO direction include:
+
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+| Input                       | Type    | Description                                                             | Example                           |
++=============================+=========+=========================================================================+===================================+
+| Output directory            | String  | The location in HDFS that the connector will load files to.             | /tmp/sqoop2/hdfs                  |
+|                             |         | *Optional*                                                              |                                   |
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+| Output format               | Enum    | The format to output data to.                                           | CSV                               |
+|                             |         | *Optional*. See note below.                                             |                                   |
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+| Compression                 | Enum    | Compression class.                                                      | GZIP                              |
+|                             |         | *Optional*. See note below.                                             |                                   |
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+| Custom compression          | String  | Custom compression class.                                               | org.apache.sqoop.SqoopCompression |
+|                             |         | *Optional* Comma separated list of columns.                             |                                   |
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+| Null value                  | String  | The value of NULL in the contents of each file loaded.                  | \N                                |
+|                             |         | *Optional*. See note below.                                             |                                   |
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+| Override null value         | Boolean | Tells the connector to replace the specified NULL value.                | true                              |
+|                             |         | *Optional*. See note below.                                             |                                   |
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+| Append mode                 | Boolean | Append to an existing output directory.                                 | true                              |
+|                             |         | *Optional*.                                                             |                                   |
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+
+**Notes**
+=========
+
+1. *Output format* only supports CSV at the moment.
+2. *Compression* supports all Hadoop compression classes.
+3. *Null value* and *override null value* should be used in conjunction. If *override null value* is not set to true, then *null value* will not be used when loading data.
+
+-----------
+Partitioner
+-----------
+
+The HDFS Connector partitioner partitions based on the total number of blocks in all files in the specified input directory.
+The partitioner will try to place blocks in splits based on the *node* and *rack* they reside on.
+
+---------
+Extractor
+---------
+
+During the *extraction* phase, the FileSystem API is used to query files from HDFS. The HDFS cluster used is the one defined by:
+
+1. The HDFS URI in the link configuration
+2. The Hadoop configuration in the link configuration
+3. The Hadoop configuration used by the execution framework
+
+The format of the data must be CSV. The NULL value in the CSV can be chosen via *null value*. For example::
+
+    1,\N
+    2,null
+    3,NULL
+
+In the above example, if *null value* is set to \N, then only the first row's value will be interpreted as NULL.
+
+------
+Loader
+------
+
+During the *loading* phase, HDFS is written to via the FileSystem API. The number of files created is equal to the number of loads that run. The format of the data currently can only be CSV. The NULL value in the CSV can be chosen via *null value*. For example:
+
++--------------+-------+
+| Id           | Value |
++==============+=======+
+| 1            | NULL  |
++--------------+-------+
+| 2            | value |
++--------------+-------+
+
+If *null value* is set to \N, then here is how the data will look in HDFS::
+
+    1,\N
+    2,value
+
+----------
+Destroyers
+----------
+
+The HDFS TO destroyer moves all created files to the proper output directory.

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/user/connectors/Connector-Kafka.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/user/connectors/Connector-Kafka.rst b/docs/src/site/sphinx/user/connectors/Connector-Kafka.rst
new file mode 100644
index 0000000..b6bca14
--- /dev/null
+++ b/docs/src/site/sphinx/user/connectors/Connector-Kafka.rst
@@ -0,0 +1,64 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+===============
+Kafka Connector
+===============
+
+Currently, only the TO direction is supported.
+
+.. contents::
+   :depth: 3
+
+-----
+Usage
+-----
+
+To use the Kafka Connector, create a link for the connector and a job that uses the link.
+
+**Link Configuration**
+++++++++++++++++++++++
+
+Inputs associated with the link configuration include:
+
++----------------------+---------+-----------------------------------------------------------+-------------------------------------+
+| Input                | Type    | Description                                               | Example                             |
++======================+=========+===========================================================+=====================================+
+| Broker list          | String  | Comma separated list of kafka brokers.                    | example.com:10000,example.com:11000 |
+|                      |         | *Required*.                                               |                                     |
++----------------------+---------+-----------------------------------------------------------+-------------------------------------+
+| Zookeeper connection | String  | Comma separated list of zookeeper servers in your quorum. | example.com:2181                    |
+|                      |         | *Required*.                                               |                                     |
++----------------------+---------+-----------------------------------------------------------+-------------------------------------+
+
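+As an illustration, a link for this connector might be created from the shell roughly as follows (the connector id and prompt wording are assumptions for this example, not captured output): ::
+
+  sqoop:000> create link -c <kafka connector id>
+  Name: kafka-link
+  Broker list: example.com:10000,example.com:11000
+  Zookeeper connection: example.com:2181
+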
+**TO Job Configuration**
+++++++++++++++++++++++++
+
+Inputs associated with the Job configuration for the TO direction include:
+
++-------+---------+---------------------------------+----------+
+| Input | Type    | Description                     | Example  |
++=======+=========+=================================+==========+
+| topic | String  | The Kafka topic to transfer to. | my_topic |
+|       |         | *Required*.                     |          |
++-------+---------+---------------------------------+----------+
+
+------
+Loader
+------
+
+During the *loading* phase, Kafka is written to directly from each loader. The order in which data is loaded into Kafka is not guaranteed.
+


[5/8] sqoop git commit: SQOOP-2694: Sqoop2: Doc: Register structure in sphinx for our docs (Jarek Jarcec Cecho via Kate Ting)

Posted by ka...@apache.org.
http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/SecurityGuideOnSqoop2.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/SecurityGuideOnSqoop2.rst b/docs/src/site/sphinx/SecurityGuideOnSqoop2.rst
deleted file mode 100644
index 7194d3b..0000000
--- a/docs/src/site/sphinx/SecurityGuideOnSqoop2.rst
+++ /dev/null
@@ -1,239 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-=========================
-Security Guide On Sqoop 2
-=========================
-
-Most Hadoop components, such as HDFS, Yarn, Hive, etc., have security frameworks, which support Simple, Kerberos and LDAP authentication. currently Sqoop 2 provides 2 types of authentication: simple and kerberos. The authentication module is pluggable, so more authentication types can be added. Additionally, a new role based access control is introduced in Sqoop 1.99.6. We recommend to use this capability in multi tenant environments, so that malicious users can’t easily abuse your created link and job objects.
-
-Simple Authentication
-=====================
-
-Configuration
--------------
-Modify Sqoop configuration file, normally in <Sqoop Folder>/conf/sqoop.properties.
-
-::
-
-  org.apache.sqoop.authentication.type=SIMPLE
-  org.apache.sqoop.authentication.handler=org.apache.sqoop.security.authentication.SimpleAuthenticationHandler
-  org.apache.sqoop.anonymous=true
-
--	Simple authentication is used by default. Commenting out authentication configuration will yield the use of simple authentication.
-
-Run command
------------
-Start Sqoop server as usual.
-
-::
-
-  <Sqoop Folder>/bin/sqoop.sh server start
-
-Start Sqoop client as usual.
-
-::
-
-  <Sqoop Folder>/bin/sqoop.sh client
-
-Kerberos Authentication
-=======================
-
-Kerberos is a computer network authentication protocol which works on the basis of 'tickets' to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner. Its designers aimed it primarily at a client–server model and it provides mutual authentication—both the user and the server verify each other's identity. Kerberos protocol messages are protected against eavesdropping and replay attacks.
-
-Dependency
-----------
-Set up a KDC server. Skip this step if KDC server exists. It's difficult to cover every way Kerberos can be setup (ie: there are cross realm setups and multi-trust environments). This section will describe how to setup the sqoop principals with a local deployment of MIT kerberos.
-
--	All components which are Kerberos authenticated need one KDC server. If current Hadoop cluster uses Kerberos authentication, there should be a KDC server.
--	If there is no KDC server, follow http://web.mit.edu/kerberos/krb5-devel/doc/admin/install_kdc.html to set up one.
-
-Configure Hadoop cluster to use Kerberos authentication.
-
--	Authentication type should be cluster level. All components must have the same authentication type: use Kerberos or not. In other words, Sqoop with Kerberos authentication could not communicate with other Hadoop components, such as HDFS, Yarn, Hive, etc., without Kerberos authentication, and vice versa.
--	How to set up a Hadoop cluster with Kerberos authentication is out of the scope of this document. Follow the related links like https://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/SecureMode.html
-
-Create keytab and principal for Sqoop 2 via kadmin in command line.
-
-::
-
-  addprinc -randkey HTTP/<FQDN>@<REALM>
-  addprinc -randkey sqoop/<FQDN>@<REALM>
-  xst -k /home/kerberos/sqoop.keytab HTTP/<FQDN>@<REALM>
-  xst -k /home/kerberos/sqoop.keytab sqoop/<FQDN>@<REALM>
-
--	The <FQDN> should be replaced by the FQDN of the server, which could be found via “hostname -f” in command line.
--	The <REALM> should be replaced by the realm name in krb5.conf file generated when installing the KDC server in the former step.
--	The principal HTTP/<FQDN>@<REALM> is used in communication between Sqoop client and Sqoop server. Since Sqoop server is an http server, so the HTTP principal is a must during SPNEGO process, and it is case sensitive.
--	Http request could be sent from other client like browser, wget or curl with SPNEGO support.
--	The principal sqoop/<FQDN>@<REALM> is used in communication between Sqoop server and Hdfs/Yarn as the credential of Sqoop server.
-
-Configuration
--------------
-Modify Sqoop configuration file, normally in <Sqoop Folder>/conf/sqoop.properties.
-
-::
-
-  org.apache.sqoop.authentication.type=KERBEROS
-  org.apache.sqoop.authentication.handler=org.apache.sqoop.security.authentication.KerberosAuthenticationHandler
-  org.apache.sqoop.authentication.kerberos.principal=sqoop/_HOST@<REALM>
-  org.apache.sqoop.authentication.kerberos.keytab=/home/kerberos/sqoop.keytab
-  org.apache.sqoop.authentication.kerberos.http.principal=HTTP/_HOST@<REALM>
-  org.apache.sqoop.authentication.kerberos.http.keytab=/home/kerberos/sqoop.keytab
-  org.apache.sqoop.authentication.kerberos.proxyuser=true
-
--	When _HOST is used as FQDN in principal, it will be replaced by the real FQDN. https://issues.apache.org/jira/browse/HADOOP-6632
--	If parameter proxyuser is set true, Sqoop server will use proxy user mode (sqoop delegate real client user) to run Yarn job. If false, Sqoop server will use sqoop user to run Yarn job.
-
-Run command
------------
-Set SQOOP2_HOST to FQDN.
-
-::
-
-  export SQOOP2_HOST=$(hostname -f).
-
--	The <FQDN> should be replaced by the FQDN of the server, which could be found via “hostname -f” in command line.
-
-Start Sqoop server using sqoop user.
-
-::
-
-  sudo –u sqoop <Sqoop Folder>/bin/sqoop.sh server start
-
-Run kinit to generate ticket cache.
-
-::
-
-  kinit HTTP/<FQDN>@<REALM> -kt /home/kerberos/sqoop.keytab
-
-Start Sqoop client.
-
-::
-
-  <Sqoop Folder>/bin/sqoop.sh client
-
-Verify
-------
-If the Sqoop server has started successfully with Kerberos authentication, the following line will be in <@LOGDIR>/sqoop.log:
-
-::
-
-  2014-12-04 15:02:58,038 INFO  security.KerberosAuthenticationHandler [org.apache.sqoop.security.authentication.KerberosAuthenticationHandler.secureLogin(KerberosAuthenticationHandler.java:84)] Using Kerberos authentication, principal [sqoop/_HOST@HADOOP.COM] keytab [/home/kerberos/sqoop.keytab]
-
-If the Sqoop client was able to communicate with the Sqoop server, the following will be in <@LOGDIR>/sqoop.log :
-
-::
-
-  Refreshing Kerberos configuration
-  Acquire TGT from Cache
-  Principal is HTTP/<FQDN>@HADOOP.COM
-  null credentials from Ticket Cache
-  principal is HTTP/<FQDN>@HADOOP.COM
-  Will use keytab
-  Commit Succeeded
-
-Customized Authentication
-=========================
-
-Users can create their own authentication modules. By performing the following steps:
-
--	Create customized authentication handler extends abstract class AuthenticationHandler.
--	Implement abstract function doInitialize and secureLogin in AuthenticationHandler.
-
-::
-
-  public class MyAuthenticationHandler extends AuthenticationHandler {
-
-    private static final Logger LOG = Logger.getLogger(MyAuthenticationHandler.class);
-
-    public void doInitialize() {
-      securityEnabled = true;
-    }
-
-    public void secureLogin() {
-      LOG.info("Using customized authentication.");
-    }
-  }
-
--	Modify configuration org.apache.sqoop.authentication.handler in <Sqoop Folder>/conf/sqoop.properties and set it to the customized authentication handler class name.
--	Restart the Sqoop server.
-
-Authorization
-=============
-
-Users, Groups, and Roles
-------------------------
-
-At the core of Sqoop's authorization system are users, groups, and roles. Roles allow administrators to give a name to a set of grants which can be easily reused. A role may be assigned to users, groups, and other roles. For example, consider a system with the following users and groups.
-
-::
-
-  <User>: <Groups>
-  user_all: group1, group2
-  user1: group1
-  user2: group2
-
-Sqoop roles must be created manually before being used, unlike users and groups. Users and groups are managed by the login system (Linux, LDAP or Kerberos). When a user wants to access one resource (connector, link, connector), the Sqoop2 server will determine the username of this user and the groups associated. That information is then used to determine if the user should have access to this resource being requested, by comparing the required privileges of the Sqoop operation to the user privileges using the following rules.
-
-- User privileges (Has the privilege been granted to the user?)
-- Group privileges (Does the user belong to any groups that the privilege has been granted to?)
-- Role privileges (Does the user or any of the groups that the user belongs to have a role that grants the privilege?)
-
-Administrator
--------------
-
-There is a special user: administrator, which can’t be created, deleted by command. The only way to set administrator is to modify the configuration file. Administrator could run management commands to create/delete roles. However, administrator does not implicitly have all privileges. Administrator has to grant privilege to him/her if he/she needs to request the resource.
-
-Role management commands
-------------------------
-
-::
-
-  CREATE ROLE –role role_name
-  DROP ROLE –role role_name
-  SHOW ROLE
-
-- Only the administrator has privilege for this.
-
-Principal management commands
------------------------------
-
-::
-
-  GRANT ROLE --principal-type principal_type --principal principal_name --role role_name
-  REVOKE ROLE --principal-type principal_type --principal principal_name --role role_name
-  SHOW ROLE --principal-type principal_type --principal principal_name
-  SHOW PRINCIPAL --role role_name
-
-- principal_type: USER | GROUP | ROLE
-
-Privilege management commands
------------------------------
-
-::
-
-  GRANT PRIVILEGE --principal-type principal_type --principal principal_name --resource-type resource_type --resource resource_name --action action_name [--with-grant]
-  REVOKE PRIVILEGE --principal-type principal_type --principal principal_name [--resource-type resource_type --resource resource_name --action action_name] [--with-grant]
-  SHOW PRIVILEGE --principal-type principal_type --principal principal_name [--resource-type resource_type --resource resource_name --action action_name]
-
-- principal_type: USER | GROUP | ROLE
-- resource_type: CONNECTOR | LINK | JOB
-- action_name: ALL | READ | WRITE
-- With --with-grant in the GRANT PRIVILEGE command, the principal can also grant its privilege to other principals.
-- Without a resource in the REVOKE PRIVILEGE command, all privileges of this principal will be revoked.
-- With --with-grant in the REVOKE PRIVILEGE command, only the grant option of this principal is removed: the principal can still access the resource, but can no longer grant its privilege to others.
-- Without a resource in the SHOW PRIVILEGE command, all privileges of this principal will be listed.
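-
-Putting it together, an administrator could for example grant the group ``group2`` read access to a job and then list that group's privileges (the resource name below is hypothetical): ::
-
-  sqoop:000> grant privilege --principal-type GROUP --principal group2 --resource-type JOB --resource sync_job --action READ
-  sqoop:000> show privilege --principal-type GROUP --principal group2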

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/Sqoop5MinutesDemo.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/Sqoop5MinutesDemo.rst b/docs/src/site/sphinx/Sqoop5MinutesDemo.rst
deleted file mode 100644
index 19115a2..0000000
--- a/docs/src/site/sphinx/Sqoop5MinutesDemo.rst
+++ /dev/null
@@ -1,242 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-====================
-Sqoop 5 Minutes Demo
-====================
-
-This page will walk you through the basic usage of Sqoop. You need to have the Sqoop server and client installed and configured in order to follow this guide. The installation procedure is described on the `Installation page <Installation.html>`_. Please note that the exact output shown on this page might differ from yours as Sqoop evolves. All major information should however remain the same.
-
-Sqoop uses unique names or persistent ids to identify connectors, links, jobs and configs. We support querying an entity by its unique name or by its persistent database Id.
-
-Starting Client
-===============
-
-Start the client in interactive mode using the following command: ::
-
-  sqoop2-shell
-
-Configure the client to use your Sqoop server: ::
-
-  sqoop:000> set server --host your.host.com --port 12000 --webapp sqoop
-
-Verify that the connection is working with a simple version check: ::
-
-  sqoop:000> show version --all
-  client version:
-    Sqoop 2.0.0-SNAPSHOT source revision 418c5f637c3f09b94ea7fc3b0a4610831373a25f
-    Compiled by vbasavaraj on Mon Nov  3 08:18:21 PST 2014
-  server version:
-    Sqoop 2.0.0-SNAPSHOT source revision 418c5f637c3f09b94ea7fc3b0a4610831373a25f
-    Compiled by vbasavaraj on Mon Nov  3 08:18:21 PST 2014
-  API versions:
-    [v1]
-
-You should receive output similar to the one shown above, describing the Sqoop client build version, the server build version and the supported versions of the REST API.
-
-You can use the help command to check all the supported commands in the sqoop shell.
-::
-
-  sqoop:000> help
-  For information about Sqoop, visit: http://sqoop.apache.org/
-
-  Available commands:
-    exit    (\x  ) Exit the shell
-    history (\H  ) Display, manage and recall edit-line history
-    help    (\h  ) Display this help message
-    set     (\st ) Configure various client options and settings
-    show    (\sh ) Display various objects and configuration options
-    create  (\cr ) Create new object in Sqoop repository
-    delete  (\d  ) Delete existing object in Sqoop repository
-    update  (\up ) Update objects in Sqoop repository
-    clone   (\cl ) Create new object based on existing one
-    start   (\sta) Start job
-    stop    (\stp) Stop job
-    status  (\stu) Display status of a job
-    enable  (\en ) Enable object in Sqoop repository
-    disable (\di ) Disable object in Sqoop repository
-
-
-Creating Link Object
-==========================
-
-Check for the registered connectors on your Sqoop server: ::
-
-  sqoop:000> show connector
-  +----+------------------------+----------------+------------------------------------------------------+----------------------+
-  | Id |          Name          |    Version     |                        Class                         | Supported Directions |
-  +----+------------------------+----------------+------------------------------------------------------+----------------------+
-  | 1  | hdfs-connector         | 2.0.0-SNAPSHOT | org.apache.sqoop.connector.hdfs.HdfsConnector        | FROM/TO              |
-  | 2  | generic-jdbc-connector | 2.0.0-SNAPSHOT | org.apache.sqoop.connector.jdbc.GenericJdbcConnector | FROM/TO              |
-  +----+------------------------+----------------+------------------------------------------------------+----------------------+
-
-Our example contains two connectors. The one with connector Id 2 is called the ``generic-jdbc-connector``. This is a basic connector relying on the Java JDBC interface for communicating with data sources. It should work with most common databases that provide JDBC drivers. Please note that you must install JDBC drivers separately; they are not bundled with Sqoop due to incompatible licenses.
-
-The Generic JDBC Connector in our example has the persistent id 2 and we will use this value to create a new link object for this connector. Note that the link name should be unique.
-::
-
-  sqoop:000> create link -c 2
-  Creating link for connector with id 2
-  Please fill following values to create new link object
-  Name: First Link
-
-  Link configuration
-  JDBC Driver Class: com.mysql.jdbc.Driver
-  JDBC Connection String: jdbc:mysql://mysql.server/database
-  Username: sqoop
-  Password: *****
-  JDBC Connection Properties:
-  There are currently 0 values in the map:
-  entry#protocol=tcp
-  New link was successfully created with validation status OK and persistent id 1
-
-Our new link object was created with assigned id 1.
-
-In the ``show connector --all`` output we see that there is an hdfs-connector registered in Sqoop with the persistent id 1. Let us create another link object, but this time for the hdfs-connector instead.
-
-::
-
-  sqoop:000> create link -c 1
-  Creating link for connector with id 1
-  Please fill following values to create new link object
-  Name: Second Link
-
-  Link configuration
-  HDFS URI: hdfs://nameservice1:8020/
-  New link was successfully created with validation status OK and persistent id 2
-
-Creating Job Object
-===================
-
-Connectors implement ``From`` for reading data from a data source and/or ``To`` for writing data to a data source. The Generic JDBC Connector supports both of them. The list of supported directions for each connector can be seen in the output of the ``show connector --all`` command above. In order to create a job we need to specify the ``From`` and ``To`` parts of the job, uniquely identified by their link Ids. We already have 2 links created in the system; you can verify the same with the following command
-
-::
-
-  sqoop:000> show link --all
-  2 link(s) to show:
-  link with id 1 and name First Link (Enabled: true, Created by root at 11/4/14 4:27 PM, Updated by root at 11/4/14 4:27 PM)
-  Using Connector id 2
-    Link configuration
-      JDBC Driver Class: com.mysql.jdbc.Driver
-      JDBC Connection String: jdbc:mysql://mysql.ent.cloudera.com/sqoop
-      Username: sqoop
-      Password:
-      JDBC Connection Properties:
-        protocol = tcp
-  link with id 2 and name Second Link (Enabled: true, Created by root at 11/4/14 4:38 PM, Updated by root at 11/4/14 4:38 PM)
-  Using Connector id 1
-    Link configuration
-      HDFS URI: hdfs://nameservice1:8020/
-
-Next, we can use the two link Ids to associate the ``From`` and ``To`` for the job.
-::
-
-   sqoop:000> create job -f 1 -t 2
-   Creating job for links with from id 1 and to id 2
-   Please fill following values to create new job object
-   Name: Sqoopy
-
-   FromJob configuration
-
-    Schema name:(Required)sqoop
-    Table name:(Required)sqoop
-    Table SQL statement:(Optional)
-    Table column names:(Optional)
-    Partition column name:(Optional) id
-    Null value allowed for the partition column:(Optional)
-    Boundary query:(Optional)
-
-  ToJob configuration
-
-    Output format:
-     0 : TEXT_FILE
-     1 : SEQUENCE_FILE
-    Choose: 0
-    Compression format:
-     0 : NONE
-     1 : DEFAULT
-     2 : DEFLATE
-     3 : GZIP
-     4 : BZIP2
-     5 : LZO
-     6 : LZ4
-     7 : SNAPPY
-     8 : CUSTOM
-    Choose: 0
-    Custom compression format:(Optional)
-    Output directory:(Required)/root/projects/sqoop
-
-    Driver Config
-    Extractors:(Optional) 2
-    Loaders:(Optional) 2
-    New job was successfully created with validation status OK  and persistent id 1
-
-Our new job object was created with assigned id 1. Note that if a null value is allowed for the partition column,
-at least 2 extractors are needed for Sqoop to carry out the data transfer. If only 1 extractor is specified in this
-scenario, Sqoop will ignore the setting and continue with 2 extractors.
-
-Start Job ( a.k.a Data transfer )
-=================================
-
-You can start a sqoop job with the following command:
-::
-
-  sqoop:000> start job -j 1
-  Submission details
-  Job ID: 1
-  Server URL: http://localhost:12000/sqoop/
-  Created by: root
-  Creation date: 2014-11-04 19:43:29 PST
-  Lastly updated by: root
-  External ID: job_1412137947693_0001
-    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
-  2014-11-04 19:43:29 PST: BOOTING  - Progress is not available
-
-You can iteratively check your running job status with the ``status job`` command:
-
-::
-
-  sqoop:000> status job -j 1
-  Submission details
-  Job ID: 1
-  Server URL: http://localhost:12000/sqoop/
-  Created by: root
-  Creation date: 2014-11-04 19:43:29 PST
-  Lastly updated by: root
-  External ID: job_1412137947693_0001
-    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
-  2014-11-04 20:09:16 PST: RUNNING  - 0.00 % 
-
-Alternatively you can start a Sqoop job and observe its running status with the following command:
-
-::
-
-  sqoop:000> start job -j 1 -s
-  Submission details
-  Job ID: 1
-  Server URL: http://localhost:12000/sqoop/
-  Created by: root
-  Creation date: 2014-11-04 19:43:29 PST
-  Lastly updated by: root
-  External ID: job_1412137947693_0001
-    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
-  2014-11-04 19:43:29 PST: BOOTING  - Progress is not available
-  2014-11-04 19:43:39 PST: RUNNING  - 0.00 %
-  2014-11-04 19:43:49 PST: RUNNING  - 10.00 %
-
-And finally you can stop a running job at any time using the ``stop job`` command: ::
-
-  sqoop:000> stop job -j 1
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/Tools.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/Tools.rst b/docs/src/site/sphinx/Tools.rst
deleted file mode 100644
index fb0187a..0000000
--- a/docs/src/site/sphinx/Tools.rst
+++ /dev/null
@@ -1,129 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-=====
-Tools
-=====
-
-Tools are server commands that administrators can execute on the Sqoop server machine in order to perform various maintenance tasks. The tool execution will always perform a given task and finish. There are no long running services implemented as tools.
-
-In order to perform the maintenance task each tool is supposed to do, they need to be executed in exactly the same environment as the main Sqoop server. The tool binary will take care of setting up the ``CLASSPATH`` and other environment variables that might be required. However, it is up to the administrator to run the tool under the same user as is used for the server. This is usually configured automatically for various Hadoop distributions (such as Apache Bigtop).
-
-
-.. note:: Running tools while the Sqoop Server is also running is not recommended as it might lead to data corruption and service disruption.
-
-List of available tools:
-
-* verify
-* upgrade
-* repositorydump
-* repositoryload
-
-To run the desired tool, execute the binary ``sqoop2-tool`` with the desired tool name. For example, to run the ``verify`` tool::
-
-  sqoop2-tool verify
-
-.. note:: Stop the Sqoop Server before running Sqoop tools. Running tools while the Sqoop Server is running can lead to data corruption and service disruption.
-
-Verify
-======
-
-The verify tool will verify Sqoop server configuration by starting all subsystems with the exception of servlets and tearing them down.
-
-To run the ``verify`` tool::
-
-  sqoop2-tool verify
-
-If the verification process succeeds, you should see messages like::
-
-  Verification was successful.
-  Tool class org.apache.sqoop.tools.tool.VerifyTool has finished correctly
-
-If the verification process finds any inconsistencies, it will print out the following message instead::
-
-  Verification has failed, please check Server logs for further details.
-  Tool class org.apache.sqoop.tools.tool.VerifyTool has failed.
-
-Further details on why the verification failed will be available in the Sqoop server log - the same file that the Sqoop Server logs into.
-
-Upgrade
-=======
-
-Upgrades all versionable components inside Sqoop2. This includes structural changes inside the repository and stored metadata.
-Running this tool on a Sqoop deployment that has already been upgraded will have no effect.
-
-To run the ``upgrade`` tool::
-
-  sqoop2-tool upgrade
-
-Upon successful upgrade you should see the following message::
-
-  Tool class org.apache.sqoop.tools.tool.UpgradeTool has finished correctly.
-
-Execution failure will show the following message instead::
-
-  Tool class org.apache.sqoop.tools.tool.UpgradeTool has failed.
-
-Further details on why the upgrade process failed will be available in the Sqoop server log - the same file that the Sqoop Server logs into.
-
-RepositoryDump
-==============
-
-Writes the user-created contents of the Sqoop repository to a file in JSON format. This includes connections, jobs and submissions.
-
-To run the ``repositorydump`` tool::
-
-  sqoop2-tool repositorydump -o repository.json
-
-As an option, the administrator can choose to include sensitive information such as database connection passwords in the file::
-
-  sqoop2-tool repositorydump -o repository.json --include-sensitive
-
-Upon successful execution, you should see the following message::
-
-  Tool class org.apache.sqoop.tools.tool.RepositoryDumpTool has finished correctly.
-
-If repository dump has failed, you will see the following message instead::
-
-  Tool class org.apache.sqoop.tools.tool.RepositoryDumpTool has failed.
-
-Further details on why the repository dump failed will be available in the Sqoop server log - the same file that the Sqoop Server logs into.
-
-RepositoryLoad
-==============
-
-Reads a JSON formatted file created by RepositoryDump and loads it into the current Sqoop repository.
-
-To run the ``repositoryLoad`` tool::
-
-  sqoop2-tool repositoryload -i repository.json
-
-Upon successful execution, you should see the following message::
-
-  Tool class org.apache.sqoop.tools.tool.RepositoryLoadTool has finished correctly.
-
-If the repository load failed, you will see the following message instead::
-
-  Tool class org.apache.sqoop.tools.tool.RepositoryLoadTool has failed.
-
-Alternatively, an exception may be printed. Further details on why the repository load failed will be available in the Sqoop server log - the same file that the Sqoop Server logs into.
-
-.. note:: If the repository dump was created without passwords (default), the connections will not contain a password and the jobs will fail to execute. In that case you'll need to manually update the connections and set the password.
-.. note:: The RepositoryLoad tool will always create new connections, jobs and submissions from the file, even when identical objects already exist in the repository.
-
-
-
-
-
-

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/Upgrade.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/Upgrade.rst b/docs/src/site/sphinx/Upgrade.rst
deleted file mode 100644
index 385c5ae..0000000
--- a/docs/src/site/sphinx/Upgrade.rst
+++ /dev/null
@@ -1,84 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-=======
-Upgrade
-=======
-
-This page describes the procedure that you need to follow in order to upgrade Sqoop from one release to a higher release. Upgrading the client and the server components will be discussed separately.
-
-.. note:: Only updates from one Sqoop 2 release to another are covered, starting with upgrades from version 1.99.2. This guide does not contain general information on how to upgrade from Sqoop 1 to Sqoop 2.
-
-Upgrading Server
-================
-
-As the Sqoop server uses a database repository for persisting Sqoop entities such as the connectors, driver, links and jobs, the repository schema might need to be updated as part of the server upgrade. In addition, the configs and inputs described by the various connectors and the driver may also change with a new server version and might need a data upgrade.
-
-There are two ways to upgrade Sqoop entities in the repository: you can either execute the upgrade tool or configure the Sqoop server to perform all necessary upgrades on start up.
-
-It's strongly advised to back up the repository before moving on to the next steps. Backup instructions will vary depending on the repository implementation. For example, using MySQL as a repository will require a different backup procedure than Apache Derby. Please follow your repository's backup procedure.
-
-Upgrading Server using upgrade tool
------------------------------------
-
-The preferred upgrade path is to explicitly run the `Upgrade Tool <Tools.html#upgrade>`_. The first step, however, is to shut down the server, as having both the server and the upgrade utility accessing the same repository might corrupt it::
-
-  sqoop2-server stop
-
-When the server has been successfully stopped, you can update the server bits and simply run the upgrade tool::
-
-  sqoop2-tool upgrade
-
-You should see that the upgrade process has been successful::
-
-  Tool class org.apache.sqoop.tools.tool.UpgradeTool has finished correctly.
-
-In case of any failure, please take a look into `Upgrade Tool <Tools.html#upgrade>`_ documentation page.
-
-Upgrading Server on start-up
-----------------------------
-
-The capability of performing the upgrade has been built into the server, however it is disabled by default to avoid any unintentional changes to the repository. You can start the repository schema upgrade procedure by stopping the server: ::
-
-  sqoop2-server stop
-
-Before starting the server again you will need to enable the auto-upgrade feature that will perform all necessary changes during Sqoop Server start up.
-
-You need to set the following property in configuration file ``sqoop.properties`` for the repository schema upgrade.
-::
-
-   org.apache.sqoop.repository.schema.immutable=false
-
-You need to set the following property in configuration file ``sqoop.properties`` for the connector config data upgrade.
-::
-
-   org.apache.sqoop.connector.autoupgrade=true
-
-You need to set the following property in configuration file ``sqoop.properties`` for the driver config data upgrade.
-::
-
-   org.apache.sqoop.driver.autoupgrade=true
-
-When all properties are set, start the sqoop server using the following command::
-
-  sqoop2-server start
-
-All required actions will be performed automatically during the server bootstrap. It's strongly advised to set all three properties back to their original values once the server has been successfully started and the upgrade has completed.
-
-Upgrading Client
-================
-
-The client does not require any manual steps during upgrade. Replacing the binaries with the updated version is sufficient.

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/admin.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/admin.rst b/docs/src/site/sphinx/admin.rst
new file mode 100644
index 0000000..d149dfd
--- /dev/null
+++ b/docs/src/site/sphinx/admin.rst
@@ -0,0 +1,24 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+===========
+Admin Guide
+===========
+
+.. toctree::
+   :glob:
+
+   admin/*

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/admin/Installation.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/admin/Installation.rst b/docs/src/site/sphinx/admin/Installation.rst
new file mode 100644
index 0000000..9d56875
--- /dev/null
+++ b/docs/src/site/sphinx/admin/Installation.rst
@@ -0,0 +1,103 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+============
+Installation
+============
+
+Sqoop ships as one binary package, however it is composed of two separate parts - client and server. You need to install the server on a single node in your cluster. This node will then serve as an entry point for all connecting Sqoop clients. The server acts as a MapReduce client and therefore Hadoop must be installed and configured on the machine hosting the Sqoop server. Clients can be installed on any arbitrary number of machines. The client does not act as a MapReduce client and thus you do not need to install Hadoop on nodes that will act only as a Sqoop client.
+
+Server installation
+===================
+
+Copy the Sqoop artifact to the machine where you want to run the Sqoop server. This machine must have Hadoop installed and configured. You don't need to run any Hadoop related services there, however the machine must be able to act as a Hadoop client. You should be able to list the contents of HDFS, for example: ::
+
+  hadoop dfs -ls
+
+The Sqoop server supports multiple Hadoop versions. However, as Hadoop major versions are not compatible with each other, Sqoop has multiple binary artifacts - one for each supported major version of Hadoop. You need to make sure that you're using the appropriate binary artifact for your specific Hadoop version. To install the Sqoop server, decompress the appropriate distribution artifact in a location of your convenience and change your working directory to this folder. ::
+
+  # Decompress Sqoop distribution tarball
+  tar -xvf sqoop-<version>-bin-hadoop<hadoop-version>.tar.gz
+
+  # Move the decompressed content to any location
+  mv sqoop-<version>-bin-hadoop<hadoop-version> /usr/lib/sqoop
+
+  # Change working directory
+  cd /usr/lib/sqoop
+
+
+Installing Dependencies
+-----------------------
+
+Hadoop libraries must be available on the node where you are planning to run the Sqoop server, with proper configuration for the major services - ``NameNode`` and either ``JobTracker`` or ``ResourceManager``, depending on whether you are running Hadoop 1 or 2. There is no need to run any Hadoop service on the same node as the Sqoop server; just the libraries and configuration files must be available.
+
+The path to the Hadoop libraries is stored in the environment variables ``HADOOP_COMMON_HOME``, ``HADOOP_HDFS_HOME``, ``HADOOP_MAPRED_HOME`` and ``HADOOP_YARN_HOME``. You need to set these environment variables to point to your Hadoop libraries. If the environment variable ``HADOOP_HOME`` is set, the default expected locations are ``$HADOOP_HOME/share/hadoop/common``, ``$HADOOP_HOME/share/hadoop/hdfs``, ``$HADOOP_HOME/share/hadoop/mapreduce`` and ``$HADOOP_HOME/share/hadoop/yarn``.
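+
+For example, with Hadoop libraries living under a hypothetical ``/usr/lib/hadoop``, the variables could be exported as follows (adjust the paths to your actual installation): ::
+
+  # Hypothetical locations - point these at wherever your Hadoop libraries actually live
+  export HADOOP_HOME=/usr/lib/hadoop
+  export HADOOP_COMMON_HOME=$HADOOP_HOME/share/hadoop/common
+  export HADOOP_HDFS_HOME=$HADOOP_HOME/share/hadoop/hdfs
+  export HADOOP_MAPRED_HOME=$HADOOP_HOME/share/hadoop/mapreduce
+  export HADOOP_YARN_HOME=$HADOOP_HOME/share/hadoop/yarn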
+
+Lastly, you might need to install JDBC drivers that are not bundled with Sqoop because of incompatible licenses. You can add any arbitrary Java jar file to the Sqoop server by copying it into the ``lib/`` directory, as shown below. You can create this directory if it does not exist already.
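+
+For example, installing a MySQL JDBC driver could look like the following (the jar name is only illustrative): ::
+
+  # Create the lib directory if it does not exist yet and drop the driver jar into it
+  mkdir -p lib
+  cp mysql-connector-java-<version>.jar lib/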
+
+Configuring PATH
+----------------
+
+All user and administrator facing shell commands are stored in the ``bin/`` directory. It's recommended to add this directory to your ``$PATH`` for easier execution, for example::
+
+  PATH=$PATH:`pwd`/bin/
+
+Further documentation pages will assume that you have the binaries on your ``$PATH``. You will need to call them with their full path if you decide to skip this step.
+
+Configuring Server
+------------------
+
+Before starting the server you should revise the configuration to match your specific environment. Server configuration files are stored in the ``conf`` directory.
+
+The file ``sqoop_bootstrap.properties`` specifies which configuration provider should be used for loading the configuration for the rest of the Sqoop server. The default value ``PropertiesConfigurationProvider`` should be sufficient.
+
+
+The second configuration file, ``sqoop.properties``, contains the remaining configuration properties that can affect the Sqoop server. The file is very well documented, so check that all configuration properties fit your environment. The defaults, with very little tweaking, should be sufficient for most common cases.
+
+You can verify the Sqoop server configuration using `Verify Tool <Tools.html#verify>`__, for example::
+
+  sqoop2-tool verify
+
+Upon running the ``verify`` tool, you should see messages similar to the following::
+
+  Verification was successful.
+  Tool class org.apache.sqoop.tools.tool.VerifyTool has finished correctly
+
+Consult the `Verify Tool <Tools.html#verify>`__ documentation page in case of any failure.
+
+Server Life Cycle
+-----------------
+
+After installation and configuration you can start the Sqoop server with the following command: ::
+
+  sqoop2-server start
+
+Similarly you can stop the server using the following command: ::
+
+  sqoop2-server stop
+
+By default the Sqoop server daemon uses port 12000. You can set ``org.apache.sqoop.jetty.port`` in the configuration file ``conf/sqoop.properties`` to use a different port.
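+
+For example, to move the server to port 12001 (an arbitrary illustrative value), set the following in ``conf/sqoop.properties``: ::
+
+  org.apache.sqoop.jetty.port=12001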
+
+Client installation
+===================
+
+The client does not need any extra installation or configuration steps. Just copy the Sqoop distribution artifact to the target machine and unzip it in the desired location. You can start the client with the following command: ::
+
+  sqoop2-shell
+
+You can find more documentation for the Sqoop client in the `Command Line Client <CommandLineClient.html>`_ section.
+
+

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/admin/Tools.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/admin/Tools.rst b/docs/src/site/sphinx/admin/Tools.rst
new file mode 100644
index 0000000..fb0187a
--- /dev/null
+++ b/docs/src/site/sphinx/admin/Tools.rst
@@ -0,0 +1,129 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+=====
+Tools
+=====
+
+Tools are server commands that administrators can execute on the Sqoop server machine in order to perform various maintenance tasks. The tool execution will always perform a given task and finish. There are no long running services implemented as tools.
+
+In order to perform the maintenance task each tool is supposed to do, they need to be executed in exactly the same environment as the main Sqoop server. The tool binary will take care of setting up the ``CLASSPATH`` and other environment variables that might be required. However, it is up to the administrator to run the tool under the same user as is used for the server. This is usually configured automatically for various Hadoop distributions (such as Apache Bigtop).
+
+
+.. note:: Running tools while the Sqoop Server is also running is not recommended as it might lead to data corruption and service disruption.
+
+List of available tools:
+
+* verify
+* upgrade
+* repositorydump
+* repositoryload
+
+To run the desired tool, execute the binary ``sqoop2-tool`` with the desired tool name. For example, to run the ``verify`` tool::
+
+  sqoop2-tool verify
+
+.. note:: Stop the Sqoop Server before running Sqoop tools. Running tools while the Sqoop Server is running can lead to data corruption and service disruption.
+
+Verify
+======
+
+The verify tool will verify Sqoop server configuration by starting all subsystems with the exception of servlets and tearing them down.
+
+To run the ``verify`` tool::
+
+  sqoop2-tool verify
+
+If the verification process succeeds, you should see messages like::
+
+  Verification was successful.
+  Tool class org.apache.sqoop.tools.tool.VerifyTool has finished correctly
+
+If the verification process finds any inconsistencies, it will print out the following message instead::
+
+  Verification has failed, please check Server logs for further details.
+  Tool class org.apache.sqoop.tools.tool.VerifyTool has failed.
+
+Further details on why the verification failed will be available in the Sqoop server log - the same file that the Sqoop Server logs into.
+
+Upgrade
+=======
+
+Upgrades all versionable components inside Sqoop2. This includes structural changes inside the repository and stored metadata.
+Running this tool on a Sqoop deployment that has already been upgraded will have no effect.
+
+To run the ``upgrade`` tool::
+
+  sqoop2-tool upgrade
+
+Upon successful upgrade you should see the following message::
+
+  Tool class org.apache.sqoop.tools.tool.UpgradeTool has finished correctly.
+
+Execution failure will show the following message instead::
+
+  Tool class org.apache.sqoop.tools.tool.UpgradeTool has failed.
+
+Further details on why the upgrade process failed will be available in the Sqoop server log - the same file that the Sqoop Server logs into.
+
+RepositoryDump
+==============
+
+Writes the user-created contents of the Sqoop repository to a file in JSON format. This includes connections, jobs and submissions.
+
+To run the ``repositorydump`` tool::
+
+  sqoop2-tool repositorydump -o repository.json
+
+As an option, the administrator can choose to include sensitive information such as database connection passwords in the file::
+
+  sqoop2-tool repositorydump -o repository.json --include-sensitive
+
+Upon successful execution, you should see the following message::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryDumpTool has finished correctly.
+
+If repository dump has failed, you will see the following message instead::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryDumpTool has failed.
+
+Further details on why the repository dump failed will be available in the Sqoop server log - the same file that the Sqoop Server logs into.
+
+RepositoryLoad
+==============
+
+Reads a JSON formatted file created by RepositoryDump and loads it into the current Sqoop repository.
+
+To run the ``repositoryLoad`` tool::
+
+  sqoop2-tool repositoryload -i repository.json
+
+Upon successful execution, you should see the following message::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryLoadTool has finished correctly.
+
+If the repository load failed, you will see the following message instead::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryLoadTool has failed.
+
+Alternatively, an exception may be printed. Further details on why the repository load failed will be available in the Sqoop server log - the same file that the Sqoop Server logs into.
+
+.. note:: If the repository dump was created without passwords (default), the connections will not contain a password and the jobs will fail to execute. In that case you'll need to manually update the connections and set the password.
+.. note:: The RepositoryLoad tool will always create new connections, jobs and submissions from the file, even when identical objects already exist in the repository.
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/admin/Upgrade.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/admin/Upgrade.rst b/docs/src/site/sphinx/admin/Upgrade.rst
new file mode 100644
index 0000000..385c5ae
--- /dev/null
+++ b/docs/src/site/sphinx/admin/Upgrade.rst
@@ -0,0 +1,84 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+=======
+Upgrade
+=======
+
+This page describes the procedure that you need to follow in order to upgrade Sqoop from one release to a higher release. Upgrading the client and the server components will be discussed separately.
+
+.. note:: Only updates from one Sqoop 2 release to another are covered, starting with upgrades from version 1.99.2. This guide does not contain general information on how to upgrade from Sqoop 1 to Sqoop 2.
+
+Upgrading Server
+================
+
+As the Sqoop server uses a database repository for persisting Sqoop entities such as the connectors, driver, links and jobs, the repository schema might need to be updated as part of the server upgrade. In addition, the configs and inputs described by the various connectors and the driver may also change with a new server version and might need a data upgrade.
+
+There are two ways to upgrade Sqoop entities in the repository: you can either execute the upgrade tool or configure the Sqoop server to perform all necessary upgrades on start up.
+
+It's strongly advised to back up the repository before moving on to the next steps. Backup instructions will vary depending on the repository implementation. For example, using MySQL as a repository will require a different backup procedure than Apache Derby. Please follow your repository's backup procedure.
+
+Upgrading Server using upgrade tool
+-----------------------------------
+
+The preferred upgrade path is to explicitly run the `Upgrade Tool <Tools.html#upgrade>`_. The first step, however, is to shut down the server, as having both the server and the upgrade utility accessing the same repository might corrupt it::
+
+  sqoop2-server stop
+
+When the server has been successfully stopped, you can update the server bits and simply run the upgrade tool::
+
+  sqoop2-tool upgrade
+
+You should see that the upgrade process has been successful::
+
+  Tool class org.apache.sqoop.tools.tool.UpgradeTool has finished correctly.
+
+In case of any failure, please take a look into `Upgrade Tool <Tools.html#upgrade>`_ documentation page.
+
+Upgrading Server on start-up
+----------------------------
+
+The capability of performing the upgrade has been built into the server, however it is disabled by default to avoid any unintentional changes to the repository. You can start the repository schema upgrade procedure by stopping the server: ::
+
+  sqoop2-server stop
+
+Before starting the server again you will need to enable the auto-upgrade feature that will perform all necessary changes during Sqoop Server start up.
+
+You need to set the following property in configuration file ``sqoop.properties`` for the repository schema upgrade.
+::
+
+   org.apache.sqoop.repository.schema.immutable=false
+
+You need to set the following property in configuration file ``sqoop.properties`` for the connector config data upgrade.
+::
+
+   org.apache.sqoop.connector.autoupgrade=true
+
+You need to set the following property in configuration file ``sqoop.properties`` for the driver config data upgrade.
+::
+
+   org.apache.sqoop.driver.autoupgrade=true
+
+When all properties are set, start the sqoop server using the following command::
+
+  sqoop2-server start
+
+All required actions will be performed automatically during the server bootstrap. It's strongly advised to set all three properties back to their original values once the server has been successfully started and the upgrade has completed.
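+
+Assuming the values shown above were changed from their defaults only for the upgrade, reverting them would presumably look like the following (verify against your own pre-upgrade ``sqoop.properties``): ::
+
+   org.apache.sqoop.repository.schema.immutable=true
+   org.apache.sqoop.connector.autoupgrade=false
+   org.apache.sqoop.driver.autoupgrade=false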
+
+Upgrading Client
+================
+
+The client does not require any manual steps during upgrade. Replacing the binaries with the updated version is sufficient.

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/conf.py
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/conf.py b/docs/src/site/sphinx/conf.py
index 6a9bf31..7b620f7 100644
--- a/docs/src/site/sphinx/conf.py
+++ b/docs/src/site/sphinx/conf.py
@@ -103,12 +103,12 @@ html_use_index = True
 #html_theme = 'default'
 
 html_sidebars = {
-  '**': ['localtoc.html', 'relations.html', 'sourcelink.html'],
+  '**': ['globaltoc.html'],
 }
 
 # The theme to use for HTML and HTML Help pages.  See the documentation for
 # a list of builtin themes.
-html_theme = 'haiku'
+html_theme = 'sphinxdoc'
 
 # Theme options are theme-specific and customize the look and feel of a theme
 # further.  For a list of options available for each theme, see the

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/dev.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/dev.rst b/docs/src/site/sphinx/dev.rst
new file mode 100644
index 0000000..16f237b
--- /dev/null
+++ b/docs/src/site/sphinx/dev.rst
@@ -0,0 +1,24 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+===============
+Developer Guide
+===============
+
+.. toctree::
+   :glob:
+
+   dev/*

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/dev/BuildingSqoop2.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/dev/BuildingSqoop2.rst b/docs/src/site/sphinx/dev/BuildingSqoop2.rst
new file mode 100644
index 0000000..7fbbb6b
--- /dev/null
+++ b/docs/src/site/sphinx/dev/BuildingSqoop2.rst
@@ -0,0 +1,76 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+================================
+Building Sqoop2 from source code
+================================
+
+This guide will show you how to build Sqoop2 from source code. Sqoop uses `maven <http://maven.apache.org/>`_ as its build system. You will need to use at least version 3.0, as older versions will not work correctly. All other dependencies will be downloaded by maven automatically, with the exception of special JDBC drivers that are needed only for advanced integration tests.
+
+Downloading source code
+-----------------------
+
+The Sqoop project uses git as its revision control system, hosted at the Apache Software Foundation. You can clone the entire repository using the following command:
+
+::
+
+  git clone https://git-wip-us.apache.org/repos/asf/sqoop.git sqoop2
+
+Sqoop2 is currently developed in the dedicated branch ``sqoop2`` that you need to check out after cloning:
+
+::
+
+  cd sqoop2
+  git checkout sqoop2
+
+Building project
+----------------
+
+You can use the usual maven targets like ``compile`` or ``package`` to build the project. Sqoop supports one major Hadoop revision at the moment - 2.x. As compiled code for one Hadoop major version can't be used on another, you must compile Sqoop against the appropriate Hadoop version.
+
+::
+
+  mvn compile
+
+The Maven target ``package`` can be used to create Sqoop packages similar to the ones that are officially available for download. Sqoop will build only the source tarball by default. You need to specify ``-Pbinary`` to build the binary distribution.
+
+::
+
+  mvn package -Pbinary
+
+Running tests
+-------------
+
+Sqoop supports two different sets of tests. The first, smaller and much faster set is called **unit tests** and will be executed on the maven target ``test``. The second, larger set of **integration tests** will be executed on the maven target ``integration-test``. Please note that integration tests might require manual steps for installing various JDBC drivers into your local maven cache.
+
+Example for running unit tests:
+
+::
+
+  mvn test
+
+Example for running integration tests:
+
+::
+
+  mvn integration-test
+
+For the **unit tests**, there are two helpful profiles: **fast** and **slow**. The **fast** unit tests do not start or use any services. The **slow** unit tests may start services or use an external service (e.g. MySQL).
+
+::
+
+  mvn test -Pfast,hadoop200
+  mvn test -Pslow,hadoop200
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/dev/ClientAPI.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/dev/ClientAPI.rst b/docs/src/site/sphinx/dev/ClientAPI.rst
new file mode 100644
index 0000000..9626878
--- /dev/null
+++ b/docs/src/site/sphinx/dev/ClientAPI.rst
@@ -0,0 +1,304 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+===========================
+Sqoop Java Client API Guide
+===========================
+
+This document will explain how to use the Sqoop Java Client API with an external application. The Client API allows you to execute the functions of sqoop commands. It requires the Sqoop Client JAR and its dependencies.
+
+The main class that provides wrapper methods for all the supported operations is the
+::
+
+  public class SqoopClient {
+    ...
+  }
+
+The Java Client API is explained using the Generic JDBC Connector as an example. Before executing the application using the Sqoop client API, check whether the Sqoop server is running.
+
+Workflow
+========
+
+The following workflow has to be followed for executing a sqoop job in the Sqoop server.
+
+  1. Create a LINK object for a given connectorId           - creates a Link object and returns a linkId (lid)
+  2. Create a JOB for a given "from" and "to" linkId        - creates a Job object and returns a jobId (jid)
+  3. Start the JOB for a given jobId                        - starts the Job on the server and creates a submission record
+
+Project Dependencies
+====================
+The required Maven dependency is given below:
+
+::
+
+  <dependency>
+    <groupId>org.apache.sqoop</groupId>
+    <artifactId>sqoop-client</artifactId>
+    <version>${requestedVersion}</version>
+  </dependency>
+
+Initialization
+==============
+
+First, initialize the SqoopClient class with the server URL as its argument.
+
+::
+
+  String url = "http://localhost:12000/sqoop/";
+  SqoopClient client = new SqoopClient(url);
+
+The server URL value can be modified by passing the new value to the setServerUrl(String) method.
+
+::
+
+  client.setServerUrl(newUrl);
+
+
+Link
+====
+Connectors provide the facility to interact with many data sources and thus can be used as a means to transfer data between them in Sqoop. The registered connector implementation will provide logic to read from and/or write to a data source that it represents. A connector can have one or more links associated with it. The java client API allows you to create, update and delete a link for any registered connector. Creating or updating a link requires you to populate the Link Config for that particular connector. Hence the first thing to do is get the list of registered connectors and select the connector for which you would like to create a link. Then
+you can get the list of all the config/inputs using `Display Config and Input Names For Connector`_ for that connector.
+
+
+Save Link
+---------
+
+First, create a new link by invoking the ``createLink(cid)`` method with the connector Id; it returns an MLink object with a dummy id and the unfilled link config inputs for that connector. Then fill the config inputs with relevant values and invoke ``saveLink``, passing it the filled MLink object.
+
+::
+
+  // create a placeholder for link
+  long connectorId = 1;
+  MLink link = client.createLink(connectorId);
+  link.setName("Vampire");
+  link.setCreationUser("Buffy");
+  MLinkConfig linkConfig = link.getConnectorLinkConfig();
+  // fill in the link config values
+  linkConfig.getStringInput("linkConfig.connectionString").setValue("jdbc:mysql://localhost/my");
+  linkConfig.getStringInput("linkConfig.jdbcDriver").setValue("com.mysql.jdbc.Driver");
+  linkConfig.getStringInput("linkConfig.username").setValue("root");
+  linkConfig.getStringInput("linkConfig.password").setValue("root");
+  // save the link object that was filled
+  Status status = client.saveLink(link);
+  if(status.canProceed()) {
+   System.out.println("Created Link with Link Id : " + link.getPersistenceId());
+  } else {
+   System.out.println("Something went wrong creating the link");
+  }
+
+``status.canProceed()`` returns true if the status is OK or a WARNING. Before sending the status, the link config values are validated using the corresponding validator associated with the link config inputs.
+
+On successful execution of the saveLink method, a new link Id is assigned to the link object, otherwise an exception is thrown. The ``link.getPersistenceId()`` method returns the unique Id for this object persisted in the sqoop repository.
+
+A user can retrieve a link using the following methods (a short usage example follows the table)
+
++----------------------------+--------------------------------------+
+|   Method                   | Description                          |
++============================+======================================+
+| ``getLink(lid)``           | Returns a link by id                 |
++----------------------------+--------------------------------------+
+| ``getLinks()``             | Returns list of links in the sqoop   |
++----------------------------+--------------------------------------+
+
+Job
+===
+
+A sqoop job holds the ``From`` and ``To`` parts for transferring data from the ``From`` data source to the ``To`` data source. Both the ``From`` and the ``To`` are uniquely identified by their corresponding connector Link Ids, i.e. when creating a job we have to specify the ``FromLinkId`` and the ``ToLinkId``. Thus the prerequisite for creating a job is to first create the links as described above.
+
+Once the linkIds for the ``From`` and ``To`` are given, the job configs for the connector associated with each link object have to be filled. You can get the list of all the from and to job config/inputs using `Display Config and Input Names For Connector`_ for that connector. A connector can have one or more links. We then use the links in the ``From`` and ``To`` directions to populate the corresponding ``MFromConfig`` and ``MToConfig`` respectively.
+
+In addition to filling the job configs for the ``From`` and the ``To`` representing the link, we also need to fill the driver configs that control the job execution engine environment. For example, if the job execution engine happens to be MapReduce, we will specify the number of mappers to be used in reading data from the ``From`` data source.
+
+Save Job
+---------
+Here is the code to create and then save a job
+::
+
+  String url = "http://localhost:12000/sqoop/";
+  SqoopClient client = new SqoopClient(url);
+  //Creating dummy job object
+  long fromLinkId = 1;// for jdbc connector
+  long toLinkId = 2; // for HDFS connector
+  MJob job = client.createJob(fromLinkId, toLinkId);
+  job.setName("Vampire");
+  job.setCreationUser("Buffy");
+  // set the "FROM" link job config values
+  MFromConfig fromJobConfig = job.getFromJobConfig();
+  fromJobConfig.getStringInput("fromJobConfig.schemaName").setValue("sqoop");
+  fromJobConfig.getStringInput("fromJobConfig.tableName").setValue("sqoop");
+  fromJobConfig.getStringInput("fromJobConfig.partitionColumn").setValue("id");
+  // set the "TO" link job config values
+  MToConfig toJobConfig = job.getToJobConfig();
+  toJobConfig.getStringInput("toJobConfig.outputDirectory").setValue("/usr/tmp");
+  // set the driver config values
+  MDriverConfig driverConfig = job.getDriverConfig();
+  driverConfig.getStringInput("throttlingConfig.numExtractors").setValue("3");
+
+  Status status = client.saveJob(job);
+  if(status.canProceed()) {
+   System.out.println("Created Job with Job Id: "+ job.getPersistenceId());
+  } else {
+   System.out.println("Something went wrong creating the job");
+  }
+
+A user can retrieve a job using the following methods
+
++----------------------------+--------------------------------------+
+|   Method                   | Description                          |
++============================+======================================+
+| ``getJob(jid)``            | Returns a job by id                  |
++----------------------------+--------------------------------------+
+| ``getJobs()``              | Returns list of jobs in the sqoop    |
++----------------------------+--------------------------------------+
+
+
+List of status codes
+--------------------
+
++------------------+------------------------------------------------------------------------------------------------------------+
+| Function         | Description                                                                                                |
++==================+============================================================================================================+
+| ``OK``           | There are no issues, no warnings.                                                                          |
++------------------+------------------------------------------------------------------------------------------------------------+
+| ``WARNING``      | Validated entity is correct enough to be proceed. Not a fatal error                                        |
++------------------+------------------------------------------------------------------------------------------------------------+
+| ``ERROR``        | There are serious issues with validated entity. We can't proceed until reported issues will be resolved.   |
++------------------+------------------------------------------------------------------------------------------------------------+
+
+View Error or Warning validation message
+----------------------------------------
+
+In case of any WARNING or ERROR status, the user has to iterate over the list of validation messages.
+
+::
+
+ printMessage(link.getConnectorLinkConfig().getConfigs());
+
+ private static void printMessage(List<MConfig> configs) {
+   for(MConfig config : configs) {
+     List<MInput<?>> inputlist = config.getInputs();
+     if (config.getValidationMessages() != null) {
+       // print every config level validation message
+       for(Message message : config.getValidationMessages()) {
+         System.out.println("Config validation message: " + message.getMessage());
+       }
+     }
+     for (MInput minput : inputlist) {
+       if (minput.getValidationStatus() == Status.WARNING) {
+         for(Message message : minput.getValidationMessages()) {
+           System.out.println("Config Input Validation Warning: " + message.getMessage());
+         }
+       } else if (minput.getValidationStatus() == Status.ERROR) {
+         for(Message message : minput.getValidationMessages()) {
+           System.out.println("Config Input Validation Error: " + message.getMessage());
+         }
+       }
+     }
+   }
+ }
+
+Updating link and job
+---------------------
+After creating a link or job in the repository, you can update or delete it using the following methods:
+
++----------------------------------+------------------------------------------------------------------------------------+
+|   Method                         | Description                                                                        |
++==================================+====================================================================================+
+| ``updateLink(link)``             | Invoke update with link and check status for any errors or warnings                |
++----------------------------------+------------------------------------------------------------------------------------+
+| ``deleteLink(lid)``              | Delete link. Deletes only if specified link is not used by any job                 |
++----------------------------------+------------------------------------------------------------------------------------+
+| ``updateJob(job)``               | Invoke update with job and check status for any errors or warnings                 |
++----------------------------------+------------------------------------------------------------------------------------+
+| ``deleteJob(jid)``               | Delete job                                                                         |
++----------------------------------+------------------------------------------------------------------------------------+
+
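+A minimal sketch of the update and delete flow, assuming the ``job`` object created earlier (the
+job id passed to ``deleteJob`` is only illustrative):
+
+::
+
+  // change a value and push the update back to the repository
+  job.setName("Vampire Import");
+  Status updateStatus = client.updateJob(job);
+  if(!updateStatus.canProceed()) {
+    System.out.println("Job update did not pass validation");
+  }
+  // remove a job that is no longer needed
+  client.deleteJob(1);
+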
+Job Start
+==============
+
+Starting a job requires a job id. On a successful start, the ``getStatus()`` method returns "BOOTING" or "RUNNING".
+
+::
+
+  //Job start
+  long jobId = 1;
+  MSubmission submission = client.startJob(jobId);
+  System.out.println("Job Submission Status : " + submission.getStatus());
+  if(submission.getStatus().isRunning() && submission.getProgress() != -1) {
+    System.out.println("Progress : " + String.format("%.2f %%", submission.getProgress() * 100));
+  }
+  System.out.println("Hadoop job id :" + submission.getExternalId());
+  System.out.println("Job link : " + submission.getExternalLink());
+  Counters counters = submission.getCounters();
+  if(counters != null) {
+    System.out.println("Counters:");
+    for(CounterGroup group : counters) {
+      System.out.print("\t");
+      System.out.println(group.getName());
+      for(Counter counter : group) {
+        System.out.print("\t\t");
+        System.out.print(counter.getName());
+        System.out.print(": ");
+        System.out.println(counter.getValue());
+      }
+    }
+  }
+  if(submission.getExceptionInfo() != null) {
+    System.out.println("Exception info : " +submission.getExceptionInfo());
+  }
+
+
+  //Check job status for a running job 
+  MSubmission submission = client.getJobStatus(jobId);
+  if(submission.getStatus().isRunning() && submission.getProgress() != -1) {
+    System.out.println("Progress : " + String.format("%.2f %%", submission.getProgress() * 100));
+  }
+
+  //Stop a running job
+  client.stopJob(jobId);
+
+In the above code block, the job start is asynchronous. For a synchronous job start, use the ``startJob(jid, callback, pollTime)`` method. If you are not interested in getting the job status, then invoke the same method with "null" as the value for the callback parameter and it returns the final job status. ``pollTime`` is the request interval for getting the job status from the sqoop server and its value should be greater than zero; the lower the ``pollTime``, the more frequently the sqoop server will be hit. When a synchronous job is started with a non-null callback, it first invokes the callback's ``submitted(MSubmission)`` method on successful start, then invokes the ``updated(MSubmission)`` method on the callback API after every poll time interval, and finally, on finishing the job execution, it invokes the ``finished(MSubmission)`` method on the callback API.
+
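+A minimal sketch of the synchronous variant is shown below. The callback interface name
+``SubmissionCallback`` and the 10 second poll time are illustrative assumptions; the three callback
+methods are the ones described above.
+
+::
+
+  SubmissionCallback callback = new SubmissionCallback() {
+    @Override
+    public void submitted(MSubmission submission) {
+      System.out.println("Submitted: " + submission.getStatus());
+    }
+    @Override
+    public void updated(MSubmission submission) {
+      System.out.println("Updated: " + submission.getStatus());
+    }
+    @Override
+    public void finished(MSubmission submission) {
+      System.out.println("Finished: " + submission.getStatus());
+    }
+  };
+  // blocks until the job finishes, polling the server every 10000 ms
+  MSubmission finalSubmission = client.startJob(jobId, callback, 10000);
+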
+Display Config and Input Names For Connector
+============================================
+
+You can view the config and input names for the link and job config types of each connector:
+
+::
+
+  String url = "http://localhost:12000/sqoop/";
+  SqoopClient client = new SqoopClient(url);
+  long connectorId = 1;
+  // link config for connector
+  describe(client.getConnector(connectorId).getLinkConfig().getConfigs(), client.getConnectorConfigBundle(connectorId));
+  // from job config for connector
+  describe(client.getConnector(connectorId).getFromConfig().getConfigs(), client.getConnectorConfigBundle(connectorId));
+  // to job config for the connector
+  describe(client.getConnector(connectorId).getToConfig().getConfigs(), client.getConnectorConfigBundle(connectorId));
+
+  void describe(List<MConfig> configs, ResourceBundle resource) {
+    for (MConfig config : configs) {
+      System.out.println(resource.getString(config.getLabelKey())+":");
+      List<MInput<?>> inputs = config.getInputs();
+      for (MInput input : inputs) {
+        System.out.println(resource.getString(input.getLabelKey()) + " : " + input.getValue());
+      }
+      System.out.println();
+    }
+  }
+
+
+The above Sqoop 2 Client API tutorial explained how to create a link, create a job and then start the job.


[6/8] sqoop git commit: SQOOP-2694: Sqoop2: Doc: Register structure in sphinx for our docs (Jarek Jarcec Cecho via Kate Ting)

Posted by ka...@apache.org.
http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/RESTAPI.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/RESTAPI.rst b/docs/src/site/sphinx/RESTAPI.rst
deleted file mode 100644
index 39aabc0..0000000
--- a/docs/src/site/sphinx/RESTAPI.rst
+++ /dev/null
@@ -1,1601 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-=========================
-Sqoop REST API Guide
-=========================
-
-This document will explain how you can use the Sqoop REST API to build applications interacting with the Sqoop server.
-The REST API covers all aspects of managing Sqoop jobs and allows you to build an app in any programming language using JSON over HTTP.
-
-.. contents:: Table of Contents
-
-Initialization
-=========================
-
-Before continuing further, make sure that the Sqoop server is running.
-
-Then find out the details of the Sqoop server: ``host``, ``port`` and ``webapp``, and keep them in mind. Note that the sqoop server is running on Jetty. To exercise a REST API for Sqoop, you could assemble and send an HTTP request to a url corresponding to that API. Generally, the url contains the ``host`` on which the sqoop server is running, the ``port`` on which the sqoop server is listening and ``webapp``, the context path at which the Sqoop server is registered in the Jetty engine.
-
-Certain requests might need to contain some additional query parameters and post data. These parameters could be given via
-the HTTP headers, request body or both. All the content in the HTTP body is in ``JSON`` format.
-
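-For example, assuming a default-style deployment on ``localhost`` with port ``12000`` and the
-``sqoop`` webapp (adjust these to your own installation), a minimal sketch of a GET request against
-the version API, using only plain JDK classes, could look like this:
-
-::
-
-  java.net.URL url = new java.net.URL("http://localhost:12000/sqoop/version");
-  java.net.HttpURLConnection conn = (java.net.HttpURLConnection) url.openConnection();
-  conn.setRequestMethod("GET");
-  System.out.println("HTTP status: " + conn.getResponseCode());
-  // print the JSON body documented later in this guide
-  try (java.util.Scanner body = new java.util.Scanner(conn.getInputStream(), "UTF-8")) {
-    while (body.hasNextLine()) {
-      System.out.println(body.nextLine());
-    }
-  }
-  conn.disconnect();
-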
-Understand Connector, Driver, Link and Job
-===========================================================
-
-To create and run a Sqoop Job, we need to provide config values for connecting to a data source and then processing the data in that data source. Processing might be either reading from or writing to the data source. Thus we have configurable entities such as the ``From`` and ``To`` parts of the connectors and the driver, each of which exposes configs and one or more inputs within them.
-
-For instance a connector that represents a relational data source such as MySQL will expose config classes for connecting to the database. Some of the relevant inputs are the connection string, driver class, the username and the password to connect to the database. These configs remain the same to read data from any of the tables within that database. Hence they are grouped under ``LinkConfiguration``.
-
-Each connector can support reading from and/or writing to the data source it represents. Reading from and writing to a data source are represented by From and To respectively. Specific configurations are required to perform the job of reading from or writing to the data source. These are grouped in the ``FromJobConfiguration`` and ``ToJobConfiguration`` objects of the connector.
-
-For instance, a connector that represents a relational data source such as MySQL will expose the table name to read from or the SQL query to use while reading data as a FromJobConfiguration. Similarly, a connector that represents a data source such as HDFS will expose the output directory to write to as a ToJobConfiguration.
-
-
-Objects
-==============
-
-This section covers all the objects that might exist in an API request and/or API response.
-
-Configs and Inputs
-------------------
-
-Before creating any link for a connector or a job with associated ``From`` and ``To`` links, the first thing to do is to get familiar with all the configurations that the connector exposes.
-
-Each config consists of the following information:
-
-+------------------+---------------------------------------------------------+
-|   Field          | Description                                             |
-+==================+=========================================================+
-| ``id``           | The id of this config                                   |
-+------------------+---------------------------------------------------------+
-| ``inputs``       | An array of inputs for this config                      |
-+------------------+---------------------------------------------------------+
-| ``name``         | The unique name of this config per connector            |
-+------------------+---------------------------------------------------------+
-| ``type``         | The type of this config (LINK/ JOB)                     |
-+------------------+---------------------------------------------------------+
-
-A typical config object is shown below:
-
-::
-
-   {
-    id:7,
-    inputs:[
-      {
-         id: 25,
-         name: "throttlingConfig.numExtractors",
-         type: "INTEGER",
-         sensitive: false
-      },
-      {
-         id: 26,
-         name: "throttlingConfig.numLoaders",
-         type: "INTEGER",
-         sensitive: false
-       }
-    ],
-    name: "throttlingConfig",
-    type: "JOB"
-  }
-
-Each input object in a config is structured below:
-
-+------------------+---------------------------------------------------------+
-|   Field          | Description                                             |
-+==================+=========================================================+
-| ``id``           | The id of this input                                    |
-+------------------+---------------------------------------------------------+
-| ``name``         | The unique name of this input per config                |
-+------------------+---------------------------------------------------------+
-| ``type``         | The data type of this input field                       |
-+------------------+---------------------------------------------------------+
-| ``size``         | The length of this input field                          |
-+------------------+---------------------------------------------------------+
-| ``sensitive``    | Whether this input contains sensitive information       |
-+------------------+---------------------------------------------------------+
-
-
-To send a filled config in the request, you should always use the config id and input id to map the values to their corresponding names.
-For example, the following request contains an input value ``com.mysql.jdbc.Driver`` with input id ``7`` inside a config with id ``4`` that belongs to a link with id ``3``.
-
-::
-
-      link: {
-            id: 3,
-            enabled: true,
-            link-config-values: [{
-                id: 4,
-                inputs: [{
-                    id: 7,
-                    name: "linkConfig.jdbcDriver",
-                    value: "com.mysql.jdbc.Driver",
-                    type: "STRING",
-                    size: 128,
-                    sensitive: false
-                }, {
-                    id: 8,
-                    name: "linkConfig.connectionString",
-                    value: "jdbc%3Amysql%3A%2F%2Fmysql.ent.cloudera.com%2Fsqoop",
-                    type: "STRING",
-                    size: 128,
-                    sensitive: false
-                },
-                ...
-             }
-           }
-
-Exception Response
-------------------
-
-Each operation on the Sqoop server might return an exception in the HTTP response. Remember to take this into account. The exception code and message could be found in both the header and body of the response.
-
-Please see the "Header Parameters" section to find out how to get the exception information from the header.
-
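-Reusing the ``conn`` object from the sketch in the Initialization section, the error headers
-documented under "Header Parameters" could be surfaced like this (a hedged sketch, not part of any
-Sqoop client API):
-
-::
-
-  // both headers are absent on success, so a null check is sufficient
-  String errorCode = conn.getHeaderField("sqoop-error-code");
-  String errorMessage = conn.getHeaderField("sqoop-error-message");
-  if (errorCode != null) {
-    System.out.println("Server reported " + errorCode + ": " + errorMessage);
-  }
-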
-In the body, the exception is expressed in ``JSON`` format. An example of the exception is:
-
-::
-
-  {
-    "message":"DERBYREPO_0030:Unable to load specific job metadata from repository - Couldn't find job with id 2",
-    "stack-trace":[
-      {
-        "file":"DerbyRepositoryHandler.java",
-        "line":1111,
-        "class":"org.apache.sqoop.repository.derby.DerbyRepositoryHandler",
-        "method":"findJob"
-      },
-      {
-        "file":"JdbcRepository.java",
-        "line":451,
-        "class":"org.apache.sqoop.repository.JdbcRepository$16",
-        "method":"doIt"
-      },
-      {
-        "file":"JdbcRepository.java",
-        "line":90,
-        "class":"org.apache.sqoop.repository.JdbcRepository",
-        "method":"doWithConnection"
-      },
-      {
-        "file":"JdbcRepository.java",
-        "line":61,
-        "class":"org.apache.sqoop.repository.JdbcRepository",
-        "method":"doWithConnection"
-      },
-      {
-        "file":"JdbcRepository.java",
-        "line":448,
-        "class":"org.apache.sqoop.repository.JdbcRepository",
-        "method":"findJob"
-      },
-      {
-        "file":"JobRequestHandler.java",
-        "line":238,
-        "class":"org.apache.sqoop.handler.JobRequestHandler",
-        "method":"getJobs"
-      }
-    ],
-    "class":"org.apache.sqoop.common.SqoopException"
-  }
-
-Config and Input Validation Status Response
---------------------------------------------
-
-The config and the inputs associated with the connectors also provide custom validation rules for the values given to these input fields. Sqoop applies these custom validators and their corresponding validation logic when config values for the LINK and JOB are posted.
-
-
-An example of an OK status with the persisted ID:
-::
-
- {
-    "id": 3,
-    "validation-result": [
-        {}
-    ]
- }
-
-An example of ERROR status:
-::
-
-   {
-     "validation-result": [
-       {
-        "linkConfig": [
-          {
-            "message": "Invalid URI. URI must either be null or a valid URI. Here are a few valid example URIs: hdfs://example.com:8020/, hdfs://example.com/, file:///, file:///tmp, file://localhost/tmp",
-            "status": "ERROR"
-          }
-        ]
-      }
-     ]
-   }
-
-Job Submission Status Response
-------------------------------
-
-After starting a job, you can look up its running status. There are 7 possible statuses:
-
-+-----------------------------+---------------------------------------------------------+
-|   Status                    | Description                                             |
-+=============================+=========================================================+
-| ``BOOTING``                 | In the middle of submitting the job                     |
-+-----------------------------+---------------------------------------------------------+
-| ``FAILURE_ON_SUBMIT``       | Unable to submit this job to remote cluster             |
-+-----------------------------+---------------------------------------------------------+
-| ``RUNNING``                 | The job is running now                                  |
-+-----------------------------+---------------------------------------------------------+
-| ``SUCCEEDED``               | Job finished successfully                               |
-+-----------------------------+---------------------------------------------------------+
-| ``FAILED``                  | Job failed                                              |
-+-----------------------------+---------------------------------------------------------+
-| ``NEVER_EXECUTED``          | The job has never been executed since created           |
-+-----------------------------+---------------------------------------------------------+
-| ``UNKNOWN``                 | The status is unknown                                   |
-+-----------------------------+---------------------------------------------------------+
-
-Header Parameters
-=================
-
-For all the responses, the following parameters in the HTTP message header are available:
-
-+---------------------------+----------+------------------------------------------------------------------------------+
-|   Parameter               | Required | Description                                                                  |
-+===========================+==========+==============================================================================+
-| ``sqoop-error-code``      | false    | The error code when some error happens on the server side for this request   |
-+---------------------------+----------+------------------------------------------------------------------------------+
-| ``sqoop-error-message``   | false    | The explanation for an error code                                            |
-+---------------------------+----------+------------------------------------------------------------------------------+
-
-So far, there are only these 2 parameters in the header of the response message. They only exist when something bad happens in the server,
-and they always come along with an exception message in the response body.
-
-REST APIs
-==========
-
-This section elaborates on all the REST APIs that are supported by the Sqoop server.
-
-For all Sqoop requests, the following request parameters will be added automatically. However, this user name is only used in simple mode. In Kerberos mode, this user name will be ignored by the Sqoop server and the user name in the UGI, which is authenticated by the Kerberos server, will be used instead.
-
-+---------------------------+---------------------------------------------------------+
-|   Parameter               | Description                                             |
-+===========================+=========================================================+
-| ``user.name``             | The name of the user who makes the requests             |
-+---------------------------+---------------------------------------------------------+
-
-
-/version - [GET] - Get Sqoop Version
--------------------------------------
-
-Get all the version metadata of the Sqoop software on the server side.
-
-* Method: ``GET``
-* Format: ``JSON``
-* Request Content: ``None``
-
-* Fields of Response:
-
-+--------------------+---------------------------------------------------------+
-|   Field            | Description                                             |
-+====================+=========================================================+
-| ``source-revision``| The revision number of Sqoop source code                |
-+--------------------+---------------------------------------------------------+
-| ``api-versions``   | The version of network protocol                         |
-+--------------------+---------------------------------------------------------+
-| ``build-date``     | The Sqoop release date                                  |
-+--------------------+---------------------------------------------------------+
-| ``user``           | The user who made the release                           |
-+--------------------+---------------------------------------------------------+
-| ``source-url``     | The url of the source code trunk                        |
-+--------------------+---------------------------------------------------------+
-| ``build-version``  | The version of Sqoop in the server side                 |
-+--------------------+---------------------------------------------------------+
-
-
-* Response Example:
-
-::
-
-   {
-    source-url: "git://vbasavaraj.local/Users/vbasavaraj/Projects/SqoopRefactoring/sqoop2/common",
-    source-revision: "418c5f637c3f09b94ea7fc3b0a4610831373a25f",
-    build-version: "2.0.0-SNAPSHOT",
-    api-versions: [
-       "v1"
-     ],
-    user: "vbasavaraj",
-    build-date: "Mon Nov 3 08:18:21 PST 2014"
-   }
-
-/v1/connectors - [GET]  Get all Connectors
--------------------------------------------
-
-Get all the connectors registered in Sqoop
-
-* Method: ``GET``
-* Format: ``JSON``
-* Request Content: ``None``
-
-* Response Example
-
-::
-
-  {
-    connectors: [{
-        id: 1,
-        link-config: [],
-        job-config: {},
-        name: "hdfs-connector",
-        class: "org.apache.sqoop.connector.hdfs.HdfsConnector",
-        all-config-resources: {},
-        version: "2.0.0-SNAPSHOT"
-    }, {
-        id: 2,
-        link-config: [],
-        job-config: {},
-        name: "generic-jdbc-connector",
-        class: "org.apache.sqoop.connector.jdbc.GenericJdbcConnector",
-        all-config-resources: {},
-        version: "2.0.0-SNAPSHOT"
-    }]
-  }
-
-/v1/connector/[cname] or /v1/connector/[cid] - [GET] - Get Connector
----------------------------------------------------------------------
-
-Provide the id or unique name of the connector in the url ``[cid]`` or ``[cname]`` part.
-
-* Method: ``GET``
-* Format: ``JSON``
-* Request Content: ``None``
-
-* Fields of Response:
-
-+--------------------------+----------------------------------------------------------------------------------------+
-|   Field                  | Description                                                                            |
-+==========================+========================================================================================+
-| ``id``                   | The id for the connector ( registered as a configurable )                              |
-+--------------------------+----------------------------------------------------------------------------------------+
-| ``job-config``           | Connector job config and inputs for both FROM and TO                                   |
-+--------------------------+----------------------------------------------------------------------------------------+
-| ``link-config``          | Connector link config and inputs                                                       |
-+--------------------------+----------------------------------------------------------------------------------------+
-| ``all-config-resources`` | All config inputs labels and description for the given connector                       |
-+--------------------------+----------------------------------------------------------------------------------------+
-| ``version``              | The build version required for config and input data upgrades                          |
-+--------------------------+----------------------------------------------------------------------------------------+
-
-* Response Example:
-
-::
-
-   {
-    connector: {
-        id: 1,
-        job-config: {
-            TO: [{
-                id: 3,
-                inputs: [{
-                    id: 3,
-                    values: "TEXT_FILE,SEQUENCE_FILE",
-                    name: "toJobConfig.outputFormat",
-                    type: "ENUM",
-                    sensitive: false
-                }, {
-                    id: 4,
-                    values: "NONE,DEFAULT,DEFLATE,GZIP,BZIP2,LZO,LZ4,SNAPPY,CUSTOM",
-                    name: "toJobConfig.compression",
-                    type: "ENUM",
-                    sensitive: false
-                }, {
-                    id: 5,
-                    name: "toJobConfig.customCompression",
-                    type: "STRING",
-                    size: 255,
-                    sensitive: false
-                }, {
-                    id: 6,
-                    name: "toJobConfig.outputDirectory",
-                    type: "STRING",
-                    size: 255,
-                    sensitive: false
-                }],
-                name: "toJobConfig",
-                type: "JOB"
-            }],
-            FROM: [{
-                id: 2,
-                inputs: [{
-                    id: 2,
-                    name: "fromJobConfig.inputDirectory",
-                    type: "STRING",
-                    size: 255,
-                    sensitive: false
-                }],
-                name: "fromJobConfig",
-                type: "JOB"
-            }]
-        },
-        link-config: [{
-            id: 1,
-            inputs: [{
-                id: 1,
-                name: "linkConfig.uri",
-                type: "STRING",
-                size: 255,
-                sensitive: false
-            }],
-            name: "linkConfig",
-            type: "LINK"
-        }],
-        name: "hdfs-connector",
-        class: "org.apache.sqoop.connector.hdfs.HdfsConnector",
-        all-config-resources: {
-            fromJobConfig.label: "From Job configuration",
-                toJobConfig.ignored.label: "Ignored",
-                fromJobConfig.help: "Specifies information required to get data from Hadoop ecosystem",
-                toJobConfig.ignored.help: "This value is ignored",
-                toJobConfig.label: "ToJob configuration",
-                toJobConfig.storageType.label: "Storage type",
-                fromJobConfig.inputDirectory.label: "Input directory",
-                toJobConfig.outputFormat.label: "Output format",
-                toJobConfig.outputDirectory.label: "Output directory",
-                toJobConfig.outputDirectory.help: "Output directory for final data",
-                toJobConfig.compression.help: "Compression that should be used for the data",
-                toJobConfig.outputFormat.help: "Format in which data should be serialized",
-                toJobConfig.customCompression.label: "Custom compression format",
-                toJobConfig.compression.label: "Compression format",
-                linkConfig.label: "Link configuration",
-                toJobConfig.customCompression.help: "Full class name of the custom compression",
-                toJobConfig.storageType.help: "Target on Hadoop ecosystem where to store data",
-                linkConfig.help: "Here you supply information necessary to connect to HDFS",
-                linkConfig.uri.help: "HDFS URI used to connect to HDFS",
-                linkConfig.uri.label: "HDFS URI",
-                fromJobConfig.inputDirectory.help: "Directory that should be exported",
-                toJobConfig.help: "You must supply the information requested in order to get information where you want to store your data."
-        },
-        version: "2.0.0-SNAPSHOT"
-     }
-   }
-
-
-/v1/driver - [GET]- Get Sqoop Driver
------------------------------------------------
-
-Driver exposes configurations required for the job execution.
-
-* Method: ``GET``
-* Format: ``JSON``
-* Request Content: ``None``
-
-* Fields of Response:
-
-+--------------------------+----------------------------------------------------------------------------------------------------+
-|   Field                  | Description                                                                                        |
-+==========================+====================================================================================================+
-| ``id``                   | The id for the driver ( registered as a configurable )                                             |
-+--------------------------+----------------------------------------------------------------------------------------------------+
-| ``job-config``           | Driver job config and inputs                                                                       |
-+--------------------------+----------------------------------------------------------------------------------------------------+
-| ``version``              | The build version of the driver                                                                    |
-+--------------------------+----------------------------------------------------------------------------------------------------+
-| ``all-config-resources`` | Driver exposed config and input labels and description                                             |
-+--------------------------+----------------------------------------------------------------------------------------------------+
-
-* Response Example:
-
-::
-
- {
-    id: 3,
-    job-config: [{
-        id: 7,
-        inputs: [{
-            id: 25,
-            name: "throttlingConfig.numExtractors",
-            type: "INTEGER",
-            sensitive: false
-        }, {
-            id: 26,
-            name: "throttlingConfig.numLoaders",
-            type: "INTEGER",
-            sensitive: false
-        }],
-        name: "throttlingConfig",
-        type: "JOB"
-    }],
-    all-config-resources: {
-        throttlingConfig.numExtractors.label: "Extractors",
-            throttlingConfig.numLoaders.help: "Number of loaders that Sqoop will use",
-            throttlingConfig.numLoaders.label: "Loaders",
-            throttlingConfig.label: "Throttling resources",
-            throttlingConfig.numExtractors.help: "Number of extractors that Sqoop will use",
-            throttlingConfig.help: "Set throttling boundaries to not overload your systems"
-    },
-    version: "1"
- }
-
-/v1/links/ - [GET]  Get all links
--------------------------------------------
-
-Get all the links created in Sqoop
-
-* Method: ``GET``
-* Format: ``JSON``
-* Request Content: ``None``
-
-* Response Example
-
-::
-
-  {
-    links: [
-      {
-        id: 1,
-        enabled: true,
-        update-user: "root",
-        link-config-values: [],
-        name: "First Link",
-        creation-date: 1415309361756,
-        connector-id: 1,
-        update-date: 1415309361756,
-        creation-user: "root"
-      },
-      {
-        id: 2,
-        enabled: true,
-        update-user: "root",
-        link-config-values: [],
-        name: "Second Link",
-        creation-date: 1415309390807,
-        connector-id: 2,
-        update-date: 1415309390807,
-        creation-user: "root"
-      }
-    ]
-  }
-
-
-/v1/links?cname=[cname] - [GET]  Get all links by Connector
-------------------------------------------------------------
-Get all the links for a given connector identified by ``[cname]`` part.
-
-
-/v1/link/[lname]  or /v1/link/[lid] - [GET] - Get Link
--------------------------------------------------------------------------------
-
-Provide the id or unique name of the link in the url ``[lid]`` or ``[lname]`` part.
-
-Get all the details of the link including the id, name, type and the corresponding config input values for the link
-
-
-* Method: ``GET``
-* Format: ``JSON``
-* Request Content: ``None``
-
-* Response Example:
-
-::
-
- {
-    link: {
-        id: 1,
-        enabled: true,
-        link-config-values: [{
-            id: 1,
-            inputs: [{
-                id: 1,
-                name: "linkConfig.uri",
-                value: "hdfs%3A%2F%2Fnamenode%3A8090",
-                type: "STRING",
-                size: 255,
-                sensitive: false
-            }],
-            name: "linkConfig",
-            type: "LINK"
-        }],
-        update-user: "root",
-        name: "First Link",
-        creation-date: 1415287846371,
-        connector-id: 1,
-        update-date: 1415287846371,
-        creation-user: "root"
-    }
- }
-
-/v1/link - [POST] - Create Link
----------------------------------------------------------
-
-Create a new link object. Provide values to the link config inputs for the ones that are required.
-
-* Method: ``POST``
-* Format: ``JSON``
-* Fields of Request:
-
-+--------------------------+--------------------------------------------------------------------------------------+
-|   Field                  | Description                                                                          |
-+==========================+======================================================================================+
-| ``link``                 | The root of the post data in JSON                                                    |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``id``                   | The id of the link can be left blank in the post data                                |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``enabled``              | Whether to enable this link (true/false)                                             |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``update-date``          | The last updated time of this link                                                   |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``creation-date``        | The creation time of this link                                                       |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``update-user``          | The user who updated this link                                                       |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``creation-user``        | The user who created this link                                                       |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``name``                 | The name of this link                                                                |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``link-config-values``   | Config input values for link config for the corresponding connector                  |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``connector-id``         | The id of the connector used for this link                                           |
-+--------------------------+--------------------------------------------------------------------------------------+
-
-* Request Example:
-
-::
-
-  {
-    link: {
-        id: -1,
-        enabled: true,
-        link-config-values: [{
-            id: 1,
-            inputs: [{
-                id: 1,
-                name: "linkConfig.uri",
-                value: "hdfs%3A%2F%2Fvbsqoop-1.ent.cloudera.com%3A8020%2Fuser%2Froot%2Fjob1",
-                type: "STRING",
-                size: 255,
-                sensitive: false
-            }],
-            name: "testInput",
-            type: "LINK"
-        }],
-        update-user: "root",
-        name: "testLink",
-        creation-date: 1415202223048,
-        connector-id: 1,
-        update-date: 1415202223048,
-        creation-user: "root"
-    }
-  }
-
-* Fields of Response:
-
-+---------------------------+--------------------------------------------------------------------------------------+
-|   Field                   | Description                                                                          |
-+===========================+======================================================================================+
-| ``id``                    | The id assigned for this new created link                                            |
-+---------------------------+--------------------------------------------------------------------------------------+
-| ``validation-result``     | The validation status for the link config inputs given in the post data             |
-+---------------------------+--------------------------------------------------------------------------------------+
-
-* ERROR Response Example:
-
-::
-
-   {
-     "validation-result": [
-         {
-             "linkConfig": [
-                 {
-                     "message": "Invalid URI. URI must either be null or a valid URI. Here are a few valid example URIs: hdfs://example.com:8020/, hdfs://example.com/, file:///, file:///tmp, file://localhost/tmp",
-                     "status": "ERROR"
-                 }
-             ]
-         }
-     ]
-   }
-
-
-/v1/link/[lname]  or /v1/link/[lid] - [PUT] - Update Link
----------------------------------------------------------
-
-Update an existing link object with name [lname] or id [lid]. To make the procedure of filling inputs easier, the general practice
-is to get the link first and then change some of the values of the inputs.
-
-* Method: ``PUT``
-* Format: ``JSON``
-
-* OK Response Example:
-
-::
-
-  {
-    "validation-result": [
-        {}
-    ]
-  }
-
-/v1/link/[lname]  or /v1/link/[lid]  - [DELETE] - Delete Link
------------------------------------------------------------------
-
-Delete a link with name [lname] or id [lid]
-
-* Method: ``DELETE``
-* Format: ``JSON``
-* Request Content: ``None``
-* Response Content: ``None``
-
-/v1/link/[lid]/enable  or /v1/link/[lname]/enable  - [PUT] - Enable Link
---------------------------------------------------------------------------------
-
-Enable a link with id ``lid`` or name ``lname``
-
-* Method: ``PUT``
-* Format: ``JSON``
-* Request Content: ``None``
-* Response Content: ``None``
-
-/v1/link/[lid]/disable or /v1/link/[lname]/disable - [PUT] - Disable Link
--------------------------------------------------------------------------------
-
-Disable a link with id ``lid`` or name ``lname``
-
-* Method: ``PUT``
-* Format: ``JSON``
-* Request Content: ``None``
-* Response Content: ``None``
-
-/v1/jobs/ - [GET]  Get all jobs
--------------------------------------------
-
-Get all the jobs created in Sqoop
-
-* Method: ``GET``
-* Format: ``JSON``
-* Request Content: ``None``
-
-* Response Example:
-
-::
-
-  {
-     jobs: [{
-        driver-config-values: [],
-            enabled: true,
-            from-connector-id: 1,
-            update-user: "root",
-            to-config-values: [],
-            to-connector-id: 2,
-            creation-date: 1415310157618,
-            update-date: 1415310157618,
-            creation-user: "root",
-            id: 1,
-            to-link-id: 2,
-            from-config-values: [],
-            name: "First Job",
-            from-link-id: 1
-       },{
-        driver-config-values: [],
-            enabled: true,
-            from-connector-id: 2,
-            update-user: "root",
-            to-config-values: [],
-            to-connector-id: 1,
-            creation-date: 1415310650600,
-            update-date: 1415310650600,
-            creation-user: "root",
-            id: 2,
-            to-link-id: 1,
-            from-config-values: [],
-            name: "Second Job",
-            from-link-id: 2
-       }]
-  }
-
-/v1/jobs?cname=[cname] - [GET]  Get all jobs by connector
-------------------------------------------------------------
-Get all the jobs for a given connector identified by ``[cname]`` part.
-
-
-/v1/job/[jname] or /v1/job/[jid] - [GET] - Get Job
------------------------------------------------------
-
-Provide the name or the id of the job in the url [jname]
-part or [jid] part.
-
-* Method: ``GET``
-* Format: ``JSON``
-* Request Content: ``None``
-
-* Response Example:
-
-::
-
-  {
-    job: {
-        driver-config-values: [{
-                id: 7,
-                inputs: [{
-                    id: 25,
-                    name: "throttlingConfig.numExtractors",
-                    value: "3",
-                    type: "INTEGER",
-                    sensitive: false
-                }, {
-                    id: 26,
-                    name: "throttlingConfig.numLoaders",
-                    value: "3",
-                    type: "INTEGER",
-                    sensitive: false
-                }],
-                name: "throttlingConfig",
-                type: "JOB"
-            }],
-            enabled: true,
-            from-connector-id: 1,
-            update-user: "root",
-            to-config-values: [{
-                id: 6,
-                inputs: [{
-                    id: 19,
-                    name: "toJobConfig.schemaName",
-                    type: "STRING",
-                    size: 50,
-                    sensitive: false
-                }, {
-                    id: 20,
-                    name: "toJobConfig.tableName",
-                    value: "text",
-                    type: "STRING",
-                    size: 2000,
-                    sensitive: false
-                }, {
-                    id: 21,
-                    name: "toJobConfig.sql",
-                    type: "STRING",
-                    size: 50,
-                    sensitive: false
-                }, {
-                    id: 22,
-                    name: "toJobConfig.columns",
-                    type: "STRING",
-                    size: 50,
-                    sensitive: false
-                }, {
-                    id: 23,
-                    name: "toJobConfig.stageTableName",
-                    type: "STRING",
-                    size: 2000,
-                    sensitive: false
-                }, {
-                    id: 24,
-                    name: "toJobConfig.shouldClearStageTable",
-                    type: "BOOLEAN",
-                    sensitive: false
-                }],
-                name: "toJobConfig",
-                type: "JOB"
-            }],
-            to-connector-id: 2,
-            creation-date: 1415310157618,
-            update-date: 1415310157618,
-            creation-user: "root",
-            id: 1,
-            to-link-id: 2,
-            from-config-values: [{
-                id: 2,
-                inputs: [{
-                    id: 2,
-                    name: "fromJobConfig.inputDirectory",
-                    value: "hdfs%3A%2F%2Fvbsqoop-1.ent.cloudera.com%3A8020%2Fuser%2Froot%2Fjob1",
-                    type: "STRING",
-                    size: 255,
-                    sensitive: false
-                }],
-                name: "fromJobConfig",
-                type: "JOB"
-            }],
-            name: "First Job",
-            from-link-id: 1
-    }
- }
-
-
-/v1/job - [POST] - Create Job
----------------------------------------------------------
-
-Create a new job object with the corresponding config values.
-
-* Method: ``POST``
-* Format: ``JSON``
-
-* Fields of Request:
-
-
-+--------------------------+--------------------------------------------------------------------------------------+
-|   Field                  | Description                                                                          |
-+==========================+======================================================================================+
-| ``job``                  | The root of the post data in JSON                                                    |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``from-link-id``         | The id of the from link for the job                                                  |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``to-link-id``           | The id of the to link for the job                                                    |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``id``                   | The id of the job can be left blank in the post data                                 |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``enabled``              | Whether to enable this job (true/false)                                              |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``update-date``          | The last updated time of this job                                                    |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``creation-date``        | The creation time of this job                                                        |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``update-user``          | The user who updated this job                                                        |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``creation-user``        | The user who created this job                                                        |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``name``                 | The name of this job                                                                 |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``from-config-values``   | Config input values for FROM part of the job                                         |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``to-config-values``     | Config input values for TO part of the job                                           |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``driver-config-values`` | Config input values for driver                                                       |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``connector-id``         | The id of the connector used for this link                                           |
-+--------------------------+--------------------------------------------------------------------------------------+
-
-
-* Request Example:
-
-::
-
- {
-   job: {
-     driver-config-values: [
-       {
-         id: 7,
-         inputs: [
-           {
-             id: 25,
-             name: "throttlingConfig.numExtractors",
-             value: "3",
-             type: "INTEGER",
-             sensitive: false
-           },
-           {
-             id: 26,
-             name: "throttlingConfig.numLoaders",
-             value: "3",
-             type: "INTEGER",
-             sensitive: false
-           }
-         ],
-         name: "throttlingConfig",
-         type: "JOB"
-       }
-     ],
-     enabled: true,
-     from-connector-id: 1,
-     update-user: "root",
-     to-config-values: [
-       {
-         id: 6,
-         inputs: [
-           {
-             id: 19,
-             name: "toJobConfig.schemaName",
-             type: "STRING",
-             size: 50,
-             sensitive: false
-           },
-           {
-             id: 20,
-             name: "toJobConfig.tableName",
-             value: "text",
-             type: "STRING",
-             size: 2000,
-             sensitive: false
-           },
-           {
-             id: 21,
-             name: "toJobConfig.sql",
-             type: "STRING",
-             size: 50,
-             sensitive: false
-           },
-           {
-             id: 22,
-             name: "toJobConfig.columns",
-             type: "STRING",
-             size: 50,
-             sensitive: false
-           },
-           {
-             id: 23,
-             name: "toJobConfig.stageTableName",
-             type: "STRING",
-             size: 2000,
-             sensitive: false
-           },
-           {
-             id: 24,
-             name: "toJobConfig.shouldClearStageTable",
-             type: "BOOLEAN",
-             sensitive: false
-           }
-         ],
-         name: "toJobConfig",
-         type: "JOB"
-       }
-     ],
-     to-connector-id: 2,
-     creation-date: 1415310157618,
-     update-date: 1415310157618,
-     creation-user: "root",
-     id: -1,
-     to-link-id: 2,
-     from-config-values: [
-       {
-         id: 2,
-         inputs: [
-           {
-             id: 2,
-             name: "fromJobConfig.inputDirectory",
-             value: "hdfs%3A%2F%2Fvbsqoop-1.ent.cloudera.com%3A8020%2Fuser%2Froot%2Fjob1",
-             type: "STRING",
-             size: 255,
-             sensitive: false
-           }
-         ],
-         name: "fromJobConfig",
-         type: "JOB"
-       }
-     ],
-     name: "Test Job",
-     from-link-id: 1
-    }
-  }
-
-* Fields of Response:
-
-+---------------------------+--------------------------------------------------------------------------------------+
-|   Field                   | Description                                                                          |
-+===========================+======================================================================================+
-| ``id``                    | The id assigned for this newly created job                                           |
-+---------------------------+--------------------------------------------------------------------------------------+
-| ``validation-result``     | The validation status for the job config and driver config inputs in the post data   |
-+---------------------------+--------------------------------------------------------------------------------------+
-
-
-* ERROR Response Example:
-
-::
-
-   {
-     "validation-result": [
-         {
-             "linkConfig": [
-                 {
-                     "message": "Invalid URI. URI must either be null or a valid URI. Here are a few valid example URIs: hdfs://example.com:8020/, hdfs://example.com/, file:///, file:///tmp, file://localhost/tmp",
-                     "status": "ERROR"
-                 }
-             ]
-         }
-     ]
-   }
-
-
-/v1/job/[jid] - [PUT] - Update Job
----------------------------------------------------------
-
-Update an existing job object with id [jid]. To make the procedure of filling inputs easier, the general practice
-is to get the existing job object first and then change some of the inputs.
-
-* Method: ``PUT``
-* Format: ``JSON``
-
-The fields of the request are the same as for Create Job.
-
-* OK Response Example:
-
-::
-
-  {
-    "validation-result": [
-        {}
-    ]
-  }
-
-
-/v1/job/[jid] - [DELETE] - Delete Job
----------------------------------------------------------
-
-Delete a job with id ``jid``.
-
-* Method: ``DELETE``
-* Format: ``JSON``
-* Request Content: ``None``
-* Response Content: ``None``
-
-/v1/job/[jid]/enable - [PUT] - Enable Job
----------------------------------------------------------
-
-Enable a job with id ``jid``.
-
-* Method: ``PUT``
-* Format: ``JSON``
-* Request Content: ``None``
-* Response Content: ``None``
-
-/v1/job/[jid]/disable - [PUT] - Disable Job
----------------------------------------------------------
-
-Disable a job with id ``jid``.
-
-* Method: ``PUT``
-* Format: ``JSON``
-* Request Content: ``None``
-* Response Content: ``None``
-
-
-/v1/job/[jid]/start or /v1/job/[jname]/start - [PUT]- Start Job
----------------------------------------------------------------------------------
-
-Start a job with name ``[jname]`` or with id ``[jid]`` to trigger the job execution.
-
-* Method: ``PUT``
-* Format: ``JSON``
-* Request Content: ``None``
-* Response Content: ``Submission Record``
-
-* BOOTING Response Example
-
-::
-
-  {
-    "submission": {
-      "progress": -1,
-      "last-update-date": 1415312531188,
-      "external-id": "job_1412137947693_0004",
-      "status": "BOOTING",
-      "job": 2,
-      "creation-date": 1415312531188,
-      "to-schema": {
-        "created": 1415312531426,
-        "name": "HDFS file",
-        "columns": []
-      },
-      "external-link": "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/",
-      "from-schema": {
-        "created": 1415312531342,
-        "name": "text",
-        "columns": [
-          {
-            "name": "id",
-            "nullable": true,
-            "unsigned": null,
-            "type": "FIXED_POINT",
-            "size": null
-          },
-          {
-            "name": "txt",
-            "nullable": true,
-            "type": "TEXT",
-            "size": null
-          }
-        ]
-      }
-    }
-  }
-
-* SUCCEEDED Response Example
-
-::
-
-   {
-     submission: {
-       progress: -1,
-       last-update-date: 1415312809485,
-       external-id: "job_1412137947693_0004",
-       status: "SUCCEEDED",
-       job: 2,
-       creation-date: 1415312531188,
-       external-link: "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/",
-       counters: {
-         org.apache.hadoop.mapreduce.JobCounter: {
-           SLOTS_MILLIS_MAPS: 373553,
-           MB_MILLIS_MAPS: 382518272,
-           TOTAL_LAUNCHED_MAPS: 10,
-           MILLIS_MAPS: 373553,
-           VCORES_MILLIS_MAPS: 373553,
-           OTHER_LOCAL_MAPS: 10
-         },
-         org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter: {
-           BYTES_WRITTEN: 0
-         },
-         org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter: {
-           BYTES_READ: 0
-         },
-         org.apache.hadoop.mapreduce.TaskCounter: {
-           MAP_INPUT_RECORDS: 0,
-           MERGED_MAP_OUTPUTS: 0,
-           PHYSICAL_MEMORY_BYTES: 4065599488,
-           SPILLED_RECORDS: 0,
-           COMMITTED_HEAP_BYTES: 3439853568,
-           CPU_MILLISECONDS: 236900,
-           FAILED_SHUFFLE: 0,
-           VIRTUAL_MEMORY_BYTES: 15231422464,
-           SPLIT_RAW_BYTES: 1187,
-           MAP_OUTPUT_RECORDS: 1000000,
-           GC_TIME_MILLIS: 7282
-         },
-         org.apache.hadoop.mapreduce.FileSystemCounter: {
-           FILE_WRITE_OPS: 0,
-           FILE_READ_OPS: 0,
-           FILE_LARGE_READ_OPS: 0,
-           FILE_BYTES_READ: 0,
-           HDFS_BYTES_READ: 1187,
-           FILE_BYTES_WRITTEN: 1191230,
-           HDFS_LARGE_READ_OPS: 0,
-           HDFS_WRITE_OPS: 10,
-           HDFS_READ_OPS: 10,
-           HDFS_BYTES_WRITTEN: 276389736
-         },
-         org.apache.sqoop.submission.counter.SqoopCounters: {
-           ROWS_READ: 1000000
-         }
-       }
-     }
-   }
-
-
-* ERROR Response Example
-
-::
-
-  {
-    "submission": {
-      "progress": -1,
-      "last-update-date": 1415312390570,
-      "status": "FAILURE_ON_SUBMIT",
-      "error-summary": "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_0000:Error occurs during partitioner run",
-      "job": 1,
-      "creation-date": 1415312390570,
-      "to-schema": {
-        "created": 1415312390797,
-        "name": "text",
-        "columns": [
-          {
-            "name": "id",
-            "nullable": true,
-            "unsigned": null,
-            "type": "FIXED_POINT",
-            "size": null
-          },
-          {
-            "name": "txt",
-            "nullable": true,
-            "type": "TEXT",
-            "size": null
-          }
-        ]
-      },
-      "from-schema": {
-        "created": 1415312390778,
-        "name": "HDFS file",
-        "columns": [
-        ]
-      },
-      "error-details": "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_00"
-    }
-  }
-
-/v1/job/[jid]/stop or /v1/job/[jname]/stop - [PUT] - Stop Job
----------------------------------------------------------------------------------
-
-Abort a running job with name ``[jname]`` or with id ``[jid]``.
-
-* Method: ``PUT``
-* Format: ``JSON``
-* Request Content: ``None``
-* Response Content: ``Submission Record``
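-
-For example, a running job could be aborted with a single ``PUT`` request. The sketch below uses the third-party ``requests`` library; the base URL and job id are assumed example values for a local installation.
-
-::
-
-  import requests  # third-party HTTP client, used here only for illustration
-
-  base_url = "http://localhost:12000/sqoop"  # assumed local installation
-
-  # Stop the (example) job with id 2; the response body is a Submission Record.
-  response = requests.put(base_url + "/v1/job/2/stop")
-  print(response.json())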
-
-/v1/job/[jid]/status or /v1/job/[jname]/status - [GET] - Get Job Status
----------------------------------------------------------------------------------
-
-Get the status of a running job with name ``[jname]`` or with id ``[jid]``.
-
-* Method: ``GET``
-* Format: ``JSON``
-* Request Content: ``None``
-* Response Content: ``Submission Record``
-
-::
-
-  {
-      "submission": {
-          "progress": 0.25,
-          "last-update-date": 1415312603838,
-          "external-id": "job_1412137947693_0004",
-          "status": "RUNNING",
-          "job": 2,
-          "creation-date": 1415312531188,
-          "external-link": "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/"
-      }
-  }
-
-/v1/submissions? - [GET] - Get all job Submissions
-----------------------------------------------------------------------
-
-Get all the submissions for every job started in Sqoop.
-
-/v1/submissions?jname=[jname] - [GET] - Get Submissions by Job
-----------------------------------------------------------------------
-
-Retrieve all job submissions in the past for the given job. Each submission record will have details such as the status, counters and URLs for those submissions.
-
-Provide the name of the job in the ``[jname]`` part of the URL.
-
-* Method: ``GET``
-* Format: ``JSON``
-* Request Content: ``None``
-* Fields of Response:
-
-+--------------------------+--------------------------------------------------------------------------------------+
-|   Field                  | Description                                                                          |
-+==========================+======================================================================================+
-| ``progress``             | The progress of the running Sqoop job                                                |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``job``                  | The id of the Sqoop job                                                              |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``creation-date``        | The submission timestamp                                                             |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``last-update-date``     | The timestamp of the last status update                                              |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``status``               | The status of this job submission                                                    |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``external-id``          | The job id of Sqoop job running on Hadoop                                            |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``external-link``        | The link to track the job status on Hadoop                                           |
-+--------------------------+--------------------------------------------------------------------------------------+
-
-* Response Example:
-
-::
-
-  {
-    submissions: [
-      {
-        progress: -1,
-        last-update-date: 1415312809485,
-        external-id: "job_1412137947693_0004",
-        status: "SUCCEEDED",
-        job: 2,
-        creation-date: 1415312531188,
-        external-link: "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/",
-        counters: {
-          org.apache.hadoop.mapreduce.JobCounter: {
-            SLOTS_MILLIS_MAPS: 373553,
-            MB_MILLIS_MAPS: 382518272,
-            TOTAL_LAUNCHED_MAPS: 10,
-            MILLIS_MAPS: 373553,
-            VCORES_MILLIS_MAPS: 373553,
-            OTHER_LOCAL_MAPS: 10
-          },
-          org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter: {
-            BYTES_WRITTEN: 0
-          },
-          org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter: {
-            BYTES_READ: 0
-          },
-          org.apache.hadoop.mapreduce.TaskCounter: {
-            MAP_INPUT_RECORDS: 0,
-            MERGED_MAP_OUTPUTS: 0,
-            PHYSICAL_MEMORY_BYTES: 4065599488,
-            SPILLED_RECORDS: 0,
-            COMMITTED_HEAP_BYTES: 3439853568,
-            CPU_MILLISECONDS: 236900,
-            FAILED_SHUFFLE: 0,
-            VIRTUAL_MEMORY_BYTES: 15231422464,
-            SPLIT_RAW_BYTES: 1187,
-            MAP_OUTPUT_RECORDS: 1000000,
-            GC_TIME_MILLIS: 7282
-          },
-          org.apache.hadoop.mapreduce.FileSystemCounter: {
-            FILE_WRITE_OPS: 0,
-            FILE_READ_OPS: 0,
-            FILE_LARGE_READ_OPS: 0,
-            FILE_BYTES_READ: 0,
-            HDFS_BYTES_READ: 1187,
-            FILE_BYTES_WRITTEN: 1191230,
-            HDFS_LARGE_READ_OPS: 0,
-            HDFS_WRITE_OPS: 10,
-            HDFS_READ_OPS: 10,
-            HDFS_BYTES_WRITTEN: 276389736
-          },
-          org.apache.sqoop.submission.counter.SqoopCounters: {
-            ROWS_READ: 1000000
-          }
-        }
-      },
-      {
-        progress: -1,
-        last-update-date: 1415312390570,
-        status: "FAILURE_ON_SUBMIT",
-        error-summary: "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_0000:Error occurs during partitioner run",
-        job: 1,
-        creation-date: 1415312390570,
-        error-details: "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_0000:Error occurs during partitioner...."
-      }
-    ]
-  }
-
-/v1/authorization/roles/create - [POST] - Create Role
------------------------------------------------------
-
-Create a new role object. Provide a name for the role in the request.
-
-* Method: ``POST``
-* Format: ``JSON``
-* Fields of Request:
-
-+--------------------------+--------------------------------------------------------------------------------------+
-|   Field                  | Description                                                                          |
-+==========================+======================================================================================+
-| ``role``                 | The root of the post data in JSON                                                    |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``name``                 | The name of this role                                                                |
-+--------------------------+--------------------------------------------------------------------------------------+
-
-* Request Example:
-
-::
-
-  {
-    role: {
-        name: "testRole"
-    }
-  }
-
-/v1/authorization/role/[role-name]  - [DELETE] - Delete Role
-------------------------------------------------------------
-
-Delete a role with name [role-name]
-
-* Method: ``DELETE``
-* Format: ``JSON``
-* Request Content: ``None``
-* Response Content: ``None``
-
-/v1/authorization/roles?principal_type=[principal-type]&principal_name=[principal-name] - [GET]  Get all Roles by Principal
----------------------------------------------------------------------------------------------------------------------------
-
-Get all the roles, or only the roles for a given principal identified by the ``[principal-type]`` and ``[principal-name]`` parts.
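-
-For example, the roles held by a given user could be listed as follows (a sketch using the third-party ``requests`` library, an assumed local server and the example principal name used elsewhere in this document):
-
-::
-
-  import requests  # third-party HTTP client, used here only for illustration
-
-  base_url = "http://localhost:12000/sqoop"  # assumed local installation
-
-  response = requests.get(base_url + "/v1/authorization/roles",
-                          params={"principal_type": "USER",
-                                  "principal_name": "testPrincipalName"})
-  print(response.json())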
-
-/v1/authorization/principals?role_name=[rname] - [GET]  Get all Principals by Role
-----------------------------------------------------------------------------------
-
-Get all the principals for a given role identified by the ``[rname]`` part.
-
-/v1/authorization/roles/grant - [PUT] - Grant a Role to a Principal
--------------------------------------------------------------------
-
-Grant a role with ``[role-name]`` to a principal with ``[principal-type]`` and ``[principal-name]``.
-
-* Method: ``PUT``
-* Format: ``JSON``
-* Fields of Request:
-
-The same fields as Create Role, plus:
-
-+--------------------------+--------------------------------------------------------------------------------------+
-|   Field                  | Description                                                                          |
-+==========================+======================================================================================+
-| ``principals``           | The root of the post data in JSON                                                    |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``name``                 | The name of this principal                                                           |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``type``                 | The type of this principal, ("USER", "GROUP", "ROLE")                                |
-+--------------------------+--------------------------------------------------------------------------------------+
-
-* Request Example:
-
-::
-
-  {
-    roles: [{
-        name: "testRole"
-    }],
-    principals: [{
-        name: "testPrincipalName",
-        type: "USER"
-    }]
-  }
-
-* Response Content: ``None``
-
-/v1/authorization/roles/revoke - [PUT] - Revoke a Role from a Principal
------------------------------------------------------------------------
-
-Revoke a role with ``[role-name]`` from a principal with ``[principal-type]`` and ``[principal-name]``.
-
-* Method: ``PUT``
-* Format: ``JSON``
-* Fields of Request:
-
-The same fields as Grant Role.
-
-* Response Content: ``None``
-
-/v1/authorization/privileges/grant - [PUT] - Grant a Privilege to a Principal
------------------------------------------------------------------------------
-
-Grant a privilege with ``[resource-name]``, ``[resource-type]``, ``[action]`` and ``[with-grant-option]`` to a principal with ``[principal-type]`` and ``[principal-name]``.
-
-* Method: ``PUT``
-* Format: ``JSON``
-* Fields of Request:
-
-The same ``principals`` fields as Grant Role, plus:
-
-+--------------------------+--------------------------------------------------------------------------------------+
-|   Field                  | Description                                                                          |
-+==========================+======================================================================================+
-| ``privileges``           | The root of the post data in JSON                                                    |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``resource-name``        | The resource name of this privilege                                                  |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``resource-type``        | The resource type of this privilege, ("CONNECTOR", "LINK", "JOB")                    |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``action``               | The action type of this privilege, ("READ", "WRITE", "ALL")                          |
-+--------------------------+--------------------------------------------------------------------------------------+
-| ``with-grant-option``    | Whether this privilege is grantable                                                  |
-+--------------------------+--------------------------------------------------------------------------------------+
-
-* Request Example:
-
-::
-
-  {
-    privileges: [{
-        resource-name: "testResourceName",
-        resource-type: "LINK",
-        action: "READ",
-        with-grant-option: false
-    }],
-    principals: [{
-        name: "testPrincipalName",
-        type: "USER"
-    }]
-  }
-
-* Response Content: ``None``
-
-/v1/authorization/privileges/revoke - [PUT] - Revoke a Privilege from a Principal
------------------------------------------------------------------------------------
-
-Revoke a privilege with ``[resource-name]``, ``[resource-type]``, ``[action]`` and ``[with-grant-option]`` from a principal with ``[principal-type]`` and ``[principal-name]``.
-
-* Method: ``PUT``
-* Format: ``JSON``
-* Fields of Request:
-
-The same fields as Grant Privilege.
-
-* Response Content: ``None``
-
-/v1/authorization/privileges?principal_type=[principal-type]&principal_name=[principal-name]&resource_type=[resource-type]&resource_name=[resource-name] - [GET]  Get all Privileges by Principal (and Resource)
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-
-Get all the privileges, or only the privileges for a given principal identified by ``[principal-type]`` and ``[principal-name]`` (and optionally a given resource identified by ``[resource-type]`` and ``[resource-name]``).
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/Repository.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/Repository.rst b/docs/src/site/sphinx/Repository.rst
deleted file mode 100644
index 55daf2e..0000000
--- a/docs/src/site/sphinx/Repository.rst
+++ /dev/null
@@ -1,335 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-==========
-Repository
-==========
-
-This document contains additional information regarding the Sqoop repository, in which Sqoop stores its metadata.
-
-
-Sqoop Schema
-------------
-
-The DDL queries that create the Sqoop repository schema in the Derby database create the following tables:
-
-
-
-SQ_SYSTEM
-+++++++++
-Store for various state information
-
-      +----------------------------+
-      | SQ_SYSTEM                  |
-      +============================+
-      | SQM_ID: BIGINT PK          |
-      +----------------------------+
-      | SQM_KEY: VARCHAR(64)       |
-      +----------------------------+
-      | SQM_VALUE: VARCHAR(64)     |
-      +----------------------------+
-
-
-
-
-SQ_DIRECTION
-++++++++++++
-Directions
-
-      +---------------------------------------+-------------+
-      | SQ_DIRECTION                          |             |
-      +=======================================+=============+
-      | SQD_ID: BIGINT PK AUTO-GEN            |             |
-      +---------------------------------------+-------------+
-      | SQD_NAME: VARCHAR(64)                 | "FROM"|"TO" |
-      +---------------------------------------+-------------+
-
-
-
-
-SQ_CONFIGURABLE
-+++++++++++++++
-Configurable registration
-
-      +-----------------------------+----------------------+
-      | SQ_CONFIGURABLE             |                      |
-      +=============================+======================+
-      | SQC_ID: BIGINT PK AUTO-GEN  |                      |
-      +-----------------------------+----------------------+
-      | SQC_NAME: VARCHAR(64)       |                      |
-      +-----------------------------+----------------------+
-      | SQC_CLASS: VARCHAR(255)     |                      |
-      +-----------------------------+----------------------+
-      | SQC_TYPE: VARCHAR(32)       | "CONNECTOR"|"DRIVER" |
-      +-----------------------------+----------------------+
-      | SQC_VERSION: VARCHAR(64)    |                      |
-      +-----------------------------+----------------------+
-
-
-
-
-SQ_CONNECTOR_DIRECTIONS
-+++++++++++++++++++++++
-Connector directions
-
-      +------------------------------+------------------------------+
-      | SQ_CONNECTOR_DIRECTIONS      |                              |
-      +==============================+==============================+
-      | SQCD_ID: BIGINT PK AUTO-GEN  |                              |
-      +------------------------------+------------------------------+
-      | SQCD_CONNECTOR: BIGINT       | FK SQCD_CONNECTOR(SQC_ID)    |
-      +------------------------------+------------------------------+
-      | SQCD_DIRECTION: BIGINT       | FK SQCD_DIRECTION(SQD_ID)    |
-      +------------------------------+------------------------------+
-
-
-
-
-SQ_CONFIG
-+++++++++
-Config details
-
-      +-------------------------------------+------------------------------------------------------+
-      | SQ_CONFIG                           |                                                      |
-      +=====================================+======================================================+
-      | SQ_CFG_ID: BIGINT PK AUTO-GEN       |                                                      |
-      +-------------------------------------+------------------------------------------------------+
-      | SQ_CFG_CONNECTOR: BIGINT            | FK SQ_CFG_CONNECTOR(SQC_ID), NULL for driver         |
-      +-------------------------------------+------------------------------------------------------+
-      | SQ_CFG_NAME: VARCHAR(64)            |                                                      |
-      +-------------------------------------+------------------------------------------------------+
-      | SQ_CFG_TYPE: VARCHAR(32)            | "LINK"|"JOB"                                         |
-      +-------------------------------------+------------------------------------------------------+
-      | SQ_CFG_INDEX: SMALLINT              |                                                      |
-      +-------------------------------------+------------------------------------------------------+
-
-
-
-
-SQ_CONFIG_DIRECTIONS
-++++++++++++++++++++
-Config directions
-
-      +------------------------------+------------------------------+
-      | SQ_CONFIG_DIRECTIONS         |                              |
-      +==============================+==============================+
-      | SQCD_ID: BIGINT PK AUTO-GEN  |                              |
-      +------------------------------+------------------------------+
-      | SQCD_CONFIG: BIGINT          | FK SQCD_CONFIG(SQ_CFG_ID)    |
-      +------------------------------+------------------------------+
-      | SQCD_DIRECTION: BIGINT       | FK SQCD_DIRECTION(SQD_ID)    |
-      +------------------------------+------------------------------+
-
-
-
-
-SQ_INPUT
-++++++++
-Input details
-
-      +----------------------------+--------------------------+
-      | SQ_INPUT                   |                          |
-      +============================+==========================+
-      | SQI_ID: BIGINT PK AUTO-GEN |                          |
-      +----------------------------+--------------------------+
-      | SQI_NAME: VARCHAR(64)      |                          |
-      +----------------------------+--------------------------+
-      | SQI_CONFIG: BIGINT         | FK SQ_CONFIG(SQ_CFG_ID)  |
-      +----------------------------+--------------------------+
-      | SQI_INDEX: SMALLINT        |                          |
-      +----------------------------+--------------------------+
-      | SQI_TYPE: VARCHAR(32)      | "STRING"|"MAP"           |
-      +----------------------------+--------------------------+
-      | SQI_STRMASK: BOOLEAN       |                          |
-      +----------------------------+--------------------------+
-      | SQI_STRLENGTH: SMALLINT    |                          |
-      +----------------------------+--------------------------+
-      | SQI_ENUMVALS: VARCHAR(100) |                          |
-      +----------------------------+--------------------------+
-
-
-
-
-SQ_LINK
-+++++++
-Stored links
-
-      +-----------------------------------+--------------------------+
-      | SQ_LINK                           |                          |
-      +===================================+==========================+
-      | SQ_LNK_ID: BIGINT PK AUTO-GEN     |                          |
-      +-----------------------------------+--------------------------+
-      | SQ_LNK_NAME: VARCHAR(64)          |                          |
-      +-----------------------------------+--------------------------+
-      | SQ_LNK_CONNECTOR: BIGINT          | FK SQ_CONNECTOR(SQC_ID)  |
-      +-----------------------------------+--------------------------+
-      | SQ_LNK_CREATION_USER: VARCHAR(32) |                          |
-      +-----------------------------------+--------------------------+
-      | SQ_LNK_CREATION_DATE: TIMESTAMP   |                          |
-      +-----------------------------------+--------------------------+
-      | SQ_LNK_UPDATE_USER: VARCHAR(32)   |                          |
-      +-----------------------------------+--------------------------+
-      | SQ_LNK_UPDATE_DATE: TIMESTAMP     |                          |
-      +-----------------------------------+--------------------------+
-      | SQ_LNK_ENABLED: BOOLEAN           |                          |
-      +-----------------------------------+--------------------------+
-
-
-
-
-SQ_JOB
-++++++
-Stored jobs
-
-      +--------------------------------+-----------------------+
-      | SQ_JOB                         |                       |
-      +================================+=======================+
-      | SQB_ID: BIGINT PK AUTO-GEN     |                       |
-      +--------------------------------+-----------------------+
-      | SQB_NAME: VARCHAR(64)          |                       |
-      +--------------------------------+-----------------------+
-      | SQB_FROM_LINK: BIGINT          | FK SQ_LINK(SQ_LNK_ID) |
-      +--------------------------------+-----------------------+
-      | SQB_TO_LINK: BIGINT            | FK SQ_LINK(SQ_LNK_ID) |
-      +--------------------------------+-----------------------+
-      | SQB_CREATION_USER: VARCHAR(32) |                       |
-      +--------------------------------+-----------------------+
-      | SQB_CREATION_DATE: TIMESTAMP   |                       |
-      +--------------------------------+-----------------------+
-      | SQB_UPDATE_USER: VARCHAR(32)   |                       |
-      +--------------------------------+-----------------------+
-      | SQB_UPDATE_DATE: TIMESTAMP     |                       |
-      +--------------------------------+-----------------------+
-      | SQB_ENABLED: BOOLEAN           |                       |
-      +--------------------------------+-----------------------+
-
-
-
-
-SQ_LINK_INPUT
-+++++++++++++
-N:M relationship link and input
-
-      +----------------------------+-----------------------+
-      | SQ_LINK_INPUT              |                       |
-      +============================+=======================+
-      | SQ_LNKI_LINK: BIGINT PK    | FK SQ_LINK(SQ_LNK_ID) |
-      +----------------------------+-----------------------+
-      | SQ_LNKI_INPUT: BIGINT PK   | FK SQ_INPUT(SQI_ID)   |
-      +----------------------------+-----------------------+
-      | SQ_LNKI_VALUE: LONG VARCHAR|                       |
-      +----------------------------+-----------------------+
-
-
-
-
-SQ_JOB_INPUT
-++++++++++++
-N:M relationship job and input
-
-      +----------------------------+---------------------+
-      | SQ_JOB_INPUT               |                     |
-      +============================+=====================+
-      | SQBI_JOB: BIGINT PK        | FK SQ_JOB(SQB_ID)   |
-      +----------------------------+---------------------+
-      | SQBI_INPUT: BIGINT PK      | FK SQ_INPUT(SQI_ID) |
-      +----------------------------+---------------------+
-      | SQBI_VALUE: LONG VARCHAR   |                     |
-      +----------------------------+---------------------+
-
-
-
-
-SQ_SUBMISSION
-+++++++++++++
-List of submissions
-
-      +-----------------------------------+-------------------+
-      | SQ_SUBMISSION                     |                   |
-      +===================================+===================+
-      | SQS_ID: BIGINT PK                 |                   |
-      +-----------------------------------+-------------------+
-      | SQS_JOB: BIGINT                   | FK SQ_JOB(SQB_ID) |
-      +-----------------------------------+-------------------+
-      | SQS_STATUS: VARCHAR(20)           |                   |
-      +-----------------------------------+-------------------+
-      | SQS_CREATION_USER: VARCHAR(32)    |                   |
-      +-----------------------------------+-------------------+
-      | SQS_CREATION_DATE: TIMESTAMP      |                   |
-      +-----------------------------------+-------------------+
-      | SQS_UPDATE_USER: VARCHAR(32)      |                   |
-      +-----------------------------------+-------------------+
-      | SQS_UPDATE_DATE: TIMESTAMP        |                   |
-      +-----------------------------------+-------------------+
-      | SQS_EXTERNAL_ID: VARCHAR(50)      |                   |
-      +-----------------------------------+-------------------+
-      | SQS_EXTERNAL_LINK: VARCHAR(150)   |                   |
-      +-----------------------------------+-------------------+
-      | SQS_EXCEPTION: VARCHAR(150)       |                   |
-      +-----------------------------------+-------------------+
-      | SQS_EXCEPTION_TRACE: VARCHAR(750) |                   |
-      +-----------------------------------+-------------------+
-
-
-
-
-SQ_COUNTER_GROUP
-++++++++++++++++
-List of counter groups
-
-      +----------------------------+
-      | SQ_COUNTER_GROUP           |
-      +============================+
-      | SQG_ID: BIGINT PK          |
-      +----------------------------+
-      | SQG_NAME: VARCHAR(75)      |
-      +----------------------------+
-
-
-
-
-SQ_COUNTER
-++++++++++
-List of counters
-
-      +----------------------------+
-      | SQ_COUNTER                 |
-      +============================+
-      | SQR_ID: BIGINT PK          |
-      +----------------------------+
-      | SQR_NAME: VARCHAR(75)      |
-      +----------------------------+
-
-
-
-
-SQ_COUNTER_SUBMISSION
-+++++++++++++++++++++
-N:M Relationship
-
-      +----------------------------+--------------------------------+
-      | SQ_COUNTER_SUBMISSION      |                                |
-      +============================+================================+
-      | SQRS_GROUP: BIGINT PK      | FK SQ_COUNTER_GROUP(SQG_ID)    |
-      +----------------------------+--------------------------------+
-      | SQRS_COUNTER: BIGINT PK    | FK SQ_COUNTER(SQR_ID)          |
-      +----------------------------+--------------------------------+
-      | SQRS_SUBMISSION: BIGINT PK | FK SQ_SUBMISSION(SQS_ID)       |
-      +----------------------------+--------------------------------+
-      | SQRS_VALUE: BIGINT         |                                |
-      +----------------------------+--------------------------------+
-
-


[3/8] sqoop git commit: SQOOP-2694: Sqoop2: Doc: Register structure in sphinx for our docs (Jarek Jarcec Cecho via Kate Ting)

Posted by ka...@apache.org.
http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/dev/RESTAPI.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/dev/RESTAPI.rst b/docs/src/site/sphinx/dev/RESTAPI.rst
new file mode 100644
index 0000000..39aabc0
--- /dev/null
+++ b/docs/src/site/sphinx/dev/RESTAPI.rst
@@ -0,0 +1,1601 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+=========================
+Sqoop REST API Guide
+=========================
+
+This document explains how you can use the Sqoop REST API to build applications that interact with the Sqoop server.
+The REST API covers all aspects of managing Sqoop jobs and allows you to build an app in any programming language that can issue HTTP requests and parse JSON.
+
+.. contents:: Table of Contents
+
+Initialization
+=========================
+
+Before continuing further, make sure that the Sqoop server is running.
+
+Then find out the details of the Sqoop server: ``host``, ``port`` and ``webapp``, and keep them in mind. Note that the Sqoop server runs on Jetty. To exercise a REST API for Sqoop, you assemble and send an HTTP request to a URL corresponding to that API. Generally, the URL contains the ``host`` on which the Sqoop server is running, the ``port`` at which the Sqoop server is listening and the ``webapp``, the context path at which the Sqoop server is registered in the Jetty engine.
+
+Certain requests might need to contain some additional query parameters and post data. These parameters could be given via
+the HTTP headers, request body or both. All the content in the HTTP body is in ``JSON`` format.
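+
+As a quick illustration, the sketch below assembles such a URL and calls the ``/version`` endpoint described later in this guide. The host ``localhost``, port ``12000`` and webapp ``sqoop`` are only example values for a local installation, and the third-party ``requests`` library is used purely for brevity; substitute the details of your own server.
+
+::
+
+  import requests  # third-party HTTP client, used here only for illustration
+
+  # Example values -- replace them with the host, port and webapp of your server.
+  host, port, webapp = "localhost", 12000, "sqoop"
+  base_url = "http://{0}:{1}/{2}".format(host, port, webapp)
+
+  # Every REST call is an HTTP request against base_url + the API path.
+  response = requests.get(base_url + "/version")
+  print(response.json())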
+
+Understand Connector, Driver, Link and Job
+===========================================================
+
+To create and run a Sqoop Job, we need to provide config values for connecting to a data source and then processing the data in that data source. Processing might be either reading from or writing to the data source. Thus we have configurable entities such as the ``From`` and ``To`` parts of the connectors and the driver, each of which exposes configs with one or more inputs within them.
+
+For instance a connector that represents a relational data source such as MySQL will expose config classes for connecting to the database. Some of the relevant inputs are the connection string, driver class, the username and the password to connect to the database. These configs remain the same to read data from any of the tables within that database. Hence they are grouped under ``LinkConfiguration``.
+
+Each connector can support reading from and/or writing to the data source it represents. Reading from and writing to a data source are represented by ``From`` and ``To`` respectively. Specific configurations are required to perform the job of reading from or writing to the data source. These are grouped in the ``FromJobConfiguration`` and ``ToJobConfiguration`` objects of the connector.
+
+For instance, a connector that represents a relational data source such as MySQL will expose the table name to read from or the SQL query to use while reading data as a FromJobConfiguration. Similarly a connector that represents a data source such as HDFS, will expose the output directory to write to as a ToJobConfiguration.
+
+
+Objects
+==============
+
+This section covers all the objects that might exist in an API request and/or API response.
+
+Configs and Inputs
+------------------
+
+Before creating any link for a connector or a job with associated ``From`` and ``To`` links, the first thing to do is to get familiar with all the configurations that the connector exposes.
+
+Each config consists of the following information
+
++------------------+---------------------------------------------------------+
+|   Field          | Description                                             |
++==================+=========================================================+
+| ``id``           | The id of this config                                   |
++------------------+---------------------------------------------------------+
+| ``inputs``       | An array of inputs of this config                       |
++------------------+---------------------------------------------------------+
+| ``name``         | The unique name of this config per connector            |
++------------------+---------------------------------------------------------+
+| ``type``         | The type of this config (LINK/ JOB)                     |
++------------------+---------------------------------------------------------+
+
+A typical config object is shown below:
+
+::
+
+   {
+    id:7,
+    inputs:[
+      {
+         id: 25,
+         name: "throttlingConfig.numExtractors",
+         type: "INTEGER",
+         sensitive: false
+      },
+      {
+         id: 26,
+         name: "throttlingConfig.numLoaders",
+         type: "INTEGER",
+         sensitive: false
+       }
+    ],
+    name: "throttlingConfig",
+    type: "JOB"
+  }
+
+Each input object in a config is structured as follows:
+
++------------------+---------------------------------------------------------+
+|   Field          | Description                                             |
++==================+=========================================================+
+| ``id``           | The id of this input                                    |
++------------------+---------------------------------------------------------+
+| ``name``         | The unique name of this input per config                |
++------------------+---------------------------------------------------------+
+| ``type``         | The data type of this input field                       |
++------------------+---------------------------------------------------------+
+| ``size``         | The length of this input field                          |
++------------------+---------------------------------------------------------+
+| ``sensitive``    | Whether this input contains sensitive information       |
++------------------+---------------------------------------------------------+
+
+
+To send a filled config in the request, you should always use the config id and input id to map the values to their corresponding names.
+For example, the following request contains an input value ``com.mysql.jdbc.Driver`` with input id ``7``, inside a config with id ``4``, that belongs to a link with id ``3``.
+
+::
+
+      link: {
+            id: 3,
+            enabled: true,
+            link-config-values: [{
+                id: 4,
+                inputs: [{
+                    id: 7,
+                    name: "linkConfig.jdbcDriver",
+                    value: "com.mysql.jdbc.Driver",
+                    type: "STRING",
+                    size: 128,
+                    sensitive: false
+                }, {
+                    id: 8,
+                    name: "linkConfig.connectionString",
+                    value: "jdbc%3Amysql%3A%2F%2Fmysql.ent.cloudera.com%2Fsqoop",
+                    type: "STRING",
+                    size: 128,
+                    sensitive: false
+                },
+                ...
+                ]
+            }]
+      }
+
+Exception Response
+------------------
+
+Each operation on the Sqoop server might return an exception in the HTTP response. Remember to take this into account. The exception code and message can be found in both the header and the body of the response.
+
+Please jump to "Header Parameters" section to find how to get exception information from header.
+
+In the body, the exception is expressed in ``JSON`` format. An example of the exception is:
+
+::
+
+  {
+    "message":"DERBYREPO_0030:Unable to load specific job metadata from repository - Couldn't find job with id 2",
+    "stack-trace":[
+      {
+        "file":"DerbyRepositoryHandler.java",
+        "line":1111,
+        "class":"org.apache.sqoop.repository.derby.DerbyRepositoryHandler",
+        "method":"findJob"
+      },
+      {
+        "file":"JdbcRepository.java",
+        "line":451,
+        "class":"org.apache.sqoop.repository.JdbcRepository$16",
+        "method":"doIt"
+      },
+      {
+        "file":"JdbcRepository.java",
+        "line":90,
+        "class":"org.apache.sqoop.repository.JdbcRepository",
+        "method":"doWithConnection"
+      },
+      {
+        "file":"JdbcRepository.java",
+        "line":61,
+        "class":"org.apache.sqoop.repository.JdbcRepository",
+        "method":"doWithConnection"
+      },
+      {
+        "file":"JdbcRepository.java",
+        "line":448,
+        "class":"org.apache.sqoop.repository.JdbcRepository",
+        "method":"findJob"
+      },
+      {
+        "file":"JobRequestHandler.java",
+        "line":238,
+        "class":"org.apache.sqoop.handler.JobRequestHandler",
+        "method":"getJobs"
+      }
+    ],
+    "class":"org.apache.sqoop.common.SqoopException"
+  }
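+
+A client should therefore check for these error indicators before consuming the response body. The following sketch (using the third-party ``requests`` library and an assumed local server) inspects the ``sqoop-error-code`` and ``sqoop-error-message`` headers described in the "Header Parameters" section:
+
+::
+
+  import requests  # third-party HTTP client, used here only for illustration
+
+  base_url = "http://localhost:12000/sqoop"  # assumed local installation
+
+  response = requests.get(base_url + "/v1/connector/1")
+  if "sqoop-error-code" in response.headers:
+      # The same exception is also present in the JSON body of the response.
+      print("Sqoop error:", response.headers["sqoop-error-code"],
+            response.headers["sqoop-error-message"])
+  else:
+      print(response.json())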
+
+Config and Input Validation Status Response
+--------------------------------------------
+
+The configs and the inputs associated with the connectors also provide custom validation rules for the values given to these input fields. Sqoop applies these custom validators and their corresponding validation logic when config values for the LINK and JOB are posted.
+
+
+An example of an OK status with the persisted ID:
+::
+
+ {
+    "id": 3,
+    "validation-result": [
+        {}
+    ]
+ }
+
+An example of an ERROR status:
+::
+
+   {
+     "validation-result": [
+       {
+        "linkConfig": [
+          {
+            "message": "Invalid URI. URI must either be null or a valid URI. Here are a few valid example URIs: hdfs://example.com:8020/, hdfs://example.com/, file:///, file:///tmp, file://localhost/tmp",
+            "status": "ERROR"
+          }
+        ]
+      }
+     ]
+   }
+
+Job Submission Status Response
+------------------------------
+
+After starting a job, you can look up its running status. There are 7 possible statuses:
+
++-----------------------------+---------------------------------------------------------+
+|   Status                    | Description                                             |
++=============================+=========================================================+
+| ``BOOTING``                 | In the middle of submitting the job                     |
++-----------------------------+---------------------------------------------------------+
+| ``FAILURE_ON_SUBMIT``       | Unable to submit this job to remote cluster             |
++-----------------------------+---------------------------------------------------------+
+| ``RUNNING``                 | The job is running now                                  |
++-----------------------------+---------------------------------------------------------+
+| ``SUCCEEDED``               | Job finished successfully                               |
++-----------------------------+---------------------------------------------------------+
+| ``FAILED``                  | Job failed                                              |
++-----------------------------+---------------------------------------------------------+
+| ``NEVER_EXECUTED``          | The job has never been executed since created           |
++-----------------------------+---------------------------------------------------------+
+| ``UNKNOWN``                 | The status is unknown                                   |
++-----------------------------+---------------------------------------------------------+
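+
+For example, a client might poll the job status endpoint (``/v1/job/[jid]/status``) until one of the terminal statuses above is reached. The sketch below uses the third-party ``requests`` library with an assumed local server and an example job id:
+
+::
+
+  import time
+
+  import requests  # third-party HTTP client, used here only for illustration
+
+  base_url = "http://localhost:12000/sqoop"  # assumed local installation
+  terminal = {"SUCCEEDED", "FAILED", "FAILURE_ON_SUBMIT", "NEVER_EXECUTED", "UNKNOWN"}
+
+  status = "BOOTING"
+  while status not in terminal:
+      time.sleep(10)
+      # Job id 2 is only an example value.
+      submission = requests.get(base_url + "/v1/job/2/status").json()["submission"]
+      status = submission["status"]
+      print(status, submission.get("progress"))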
+
+Header Parameters
+=================
+
+For all the responses, the following parameters in the HTTP message header are available:
+
++---------------------------+----------+------------------------------------------------------------------------------+
+|   Parameter               | Required | Description                                                                  |
++===========================+==========+==============================================================================+
+| ``sqoop-error-code``      | false    | The error code when an error happens on the server side for this request     |
++---------------------------+----------+------------------------------------------------------------------------------+
+| ``sqoop-error-message``   | false    | The explanation for an error code                                            |
++---------------------------+----------+------------------------------------------------------------------------------+
+
+So far, there are only these 2 parameters in the header of the response message. They exist only when something goes wrong on the server,
+and they always come along with an exception message in the response body.
+
+REST APIs
+==========
+
+This section describes all the REST APIs that are supported by the Sqoop server.
+
+For all Sqoop requests, the following request parameters will be added automatically. However, this user name is only used in simple mode. In Kerberos mode, this user name will be ignored by the Sqoop server and the user name in the UGI, which is authenticated by the Kerberos server, will be used instead.
+
++---------------------------+---------------------------------------------------------+
+|   Parameter               | Description                                             |
++===========================+=========================================================+
+| ``user.name``             | The name of the user who makes the requests             |
++---------------------------+---------------------------------------------------------+
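+
+In simple mode a client can pass this parameter explicitly with every request. The sketch below shows this with the third-party ``requests`` library, an assumed local server and a hypothetical user name:
+
+::
+
+  import requests  # third-party HTTP client, used here only for illustration
+
+  base_url = "http://localhost:12000/sqoop"  # assumed local installation
+
+  # "sqoop_user" is a hypothetical user name; in Kerberos mode this
+  # parameter is ignored by the server.
+  response = requests.get(base_url + "/version", params={"user.name": "sqoop_user"})
+  print(response.json())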
+
+
+/version - [GET] - Get Sqoop Version
+-------------------------------------
+
+Get all the version metadata of the Sqoop software on the server side.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Fields of Response:
+
++--------------------+---------------------------------------------------------+
+|   Field            | Description                                             |
++====================+=========================================================+
+| ``source-revision``| The revision number of Sqoop source code                |
++--------------------+---------------------------------------------------------+
+| ``api-versions``   | The version of network protocol                         |
++--------------------+---------------------------------------------------------+
+| ``build-date``     | The Sqoop release date                                  |
++--------------------+---------------------------------------------------------+
+| ``user``           | The user who made the release                           |
++--------------------+---------------------------------------------------------+
+| ``source-url``     | The url of the source code trunk                        |
++--------------------+---------------------------------------------------------+
+| ``build-version``  | The version of Sqoop in the server side                 |
++--------------------+---------------------------------------------------------+
+
+
+* Response Example:
+
+::
+
+   {
+    source-url: "git://vbasavaraj.local/Users/vbasavaraj/Projects/SqoopRefactoring/sqoop2/common",
+    source-revision: "418c5f637c3f09b94ea7fc3b0a4610831373a25f",
+    build-version: "2.0.0-SNAPSHOT",
+    api-versions: [
+       "v1"
+     ],
+    user: "vbasavaraj",
+    build-date: "Mon Nov 3 08:18:21 PST 2014"
+   }
+
+/v1/connectors - [GET]  Get all Connectors
+-------------------------------------------
+
+Get all the connectors registered in Sqoop
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example
+
+::
+
+  {
+    connectors: [{
+        id: 1,
+        link-config: [],
+        job-config: {},
+        name: "hdfs-connector",
+        class: "org.apache.sqoop.connector.hdfs.HdfsConnector",
+        all-config-resources: {},
+        version: "2.0.0-SNAPSHOT"
+    }, {
+        id: 2,
+        link-config: [],
+        job-config: {},
+        name: "generic-jdbc-connector",
+        class: "org.apache.sqoop.connector.jdbc.GenericJdbcConnector",
+        all-config-resources: {},
+        version: "2.0.0-SNAPSHOT"
+    }]
+  }
+
+/v1/connector/[cname] or /v1/connector/[cid] - [GET] - Get Connector
+---------------------------------------------------------------------
+
+Provide the id or unique name of the connector in the URL ``[cid]`` or ``[cname]`` part.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Fields of Response:
+
++--------------------------+----------------------------------------------------------------------------------------+
+|   Field                  | Description                                                                            |
++==========================+========================================================================================+
+| ``id``                   | The id for the connector ( registered as a configurable )                              |
++--------------------------+----------------------------------------------------------------------------------------+
+| ``job-config``           | Connector job config and inputs for both FROM and TO                                   |
++--------------------------+----------------------------------------------------------------------------------------+
+| ``link-config``          | Connector link config and inputs                                                       |
++--------------------------+----------------------------------------------------------------------------------------+
+| ``all-config-resources`` | All config inputs labels and description for the given connector                       |
++--------------------------+----------------------------------------------------------------------------------------+
+| ``version``              | The build version required for config and input data upgrades                          |
++--------------------------+----------------------------------------------------------------------------------------+
+
+* Response Example:
+
+::
+
+   {
+    connector: {
+        id: 1,
+        job-config: {
+            TO: [{
+                id: 3,
+                inputs: [{
+                    id: 3,
+                    values: "TEXT_FILE,SEQUENCE_FILE",
+                    name: "toJobConfig.outputFormat",
+                    type: "ENUM",
+                    sensitive: false
+                }, {
+                    id: 4,
+                    values: "NONE,DEFAULT,DEFLATE,GZIP,BZIP2,LZO,LZ4,SNAPPY,CUSTOM",
+                    name: "toJobConfig.compression",
+                    type: "ENUM",
+                    sensitive: false
+                }, {
+                    id: 5,
+                    name: "toJobConfig.customCompression",
+                    type: "STRING",
+                    size: 255,
+                    sensitive: false
+                }, {
+                    id: 6,
+                    name: "toJobConfig.outputDirectory",
+                    type: "STRING",
+                    size: 255,
+                    sensitive: false
+                }],
+                name: "toJobConfig",
+                type: "JOB"
+            }],
+            FROM: [{
+                id: 2,
+                inputs: [{
+                    id: 2,
+                    name: "fromJobConfig.inputDirectory",
+                    type: "STRING",
+                    size: 255,
+                    sensitive: false
+                }],
+                name: "fromJobConfig",
+                type: "JOB"
+            }]
+        },
+        link-config: [{
+            id: 1,
+            inputs: [{
+                id: 1,
+                name: "linkConfig.uri",
+                type: "STRING",
+                size: 255,
+                sensitive: false
+            }],
+            name: "linkConfig",
+            type: "LINK"
+        }],
+        name: "hdfs-connector",
+        class: "org.apache.sqoop.connector.hdfs.HdfsConnector",
+        all-config-resources: {
+            fromJobConfig.label: "From Job configuration",
+                toJobConfig.ignored.label: "Ignored",
+                fromJobConfig.help: "Specifies information required to get data from Hadoop ecosystem",
+                toJobConfig.ignored.help: "This value is ignored",
+                toJobConfig.label: "ToJob configuration",
+                toJobConfig.storageType.label: "Storage type",
+                fromJobConfig.inputDirectory.label: "Input directory",
+                toJobConfig.outputFormat.label: "Output format",
+                toJobConfig.outputDirectory.label: "Output directory",
+                toJobConfig.outputDirectory.help: "Output directory for final data",
+                toJobConfig.compression.help: "Compression that should be used for the data",
+                toJobConfig.outputFormat.help: "Format in which data should be serialized",
+                toJobConfig.customCompression.label: "Custom compression format",
+                toJobConfig.compression.label: "Compression format",
+                linkConfig.label: "Link configuration",
+                toJobConfig.customCompression.help: "Full class name of the custom compression",
+                toJobConfig.storageType.help: "Target on Hadoop ecosystem where to store data",
+                linkConfig.help: "Here you supply information necessary to connect to HDFS",
+                linkConfig.uri.help: "HDFS URI used to connect to HDFS",
+                linkConfig.uri.label: "HDFS URI",
+                fromJobConfig.inputDirectory.help: "Directory that should be exported",
+                toJobConfig.help: "You must supply the information requested in order to get information where you want to store your data."
+        },
+        version: "2.0.0-SNAPSHOT"
+     }
+   }
+
+
+/v1/driver - [GET]- Get Sqoop Driver
+-----------------------------------------------
+
+The driver exposes configurations required for job execution.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Fields of Response:
+
++--------------------------+----------------------------------------------------------------------------------------------------+
+|   Field                  | Description                                                                                        |
++==========================+====================================================================================================+
+| ``id``                   | The id for the driver ( registered as a configurable )                                             |
++--------------------------+----------------------------------------------------------------------------------------------------+
+| ``job-config``           | Driver job config and inputs                                                                       |
++--------------------------+----------------------------------------------------------------------------------------------------+
+| ``version``              | The build version of the driver                                                                    |
++--------------------------+----------------------------------------------------------------------------------------------------+
+| ``all-config-resources`` | Driver exposed config and input labels and description                                             |
++--------------------------+----------------------------------------------------------------------------------------------------+
+
+* Response Example:
+
+::
+
+ {
+    id: 3,
+    job-config: [{
+        id: 7,
+        inputs: [{
+            id: 25,
+            name: "throttlingConfig.numExtractors",
+            type: "INTEGER",
+            sensitive: false
+        }, {
+            id: 26,
+            name: "throttlingConfig.numLoaders",
+            type: "INTEGER",
+            sensitive: false
+        }],
+        name: "throttlingConfig",
+        type: "JOB"
+    }],
+    all-config-resources: {
+        throttlingConfig.numExtractors.label: "Extractors",
+            throttlingConfig.numLoaders.help: "Number of loaders that Sqoop will use",
+            throttlingConfig.numLoaders.label: "Loaders",
+            throttlingConfig.label: "Throttling resources",
+            throttlingConfig.numExtractors.help: "Number of extractors that Sqoop will use",
+            throttlingConfig.help: "Set throttling boundaries to not overload your systems"
+    },
+    version: "1"
+ }
+
+/v1/links/ - [GET]  Get all links
+-------------------------------------------
+
+Get all the links created in Sqoop
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example
+
+::
+
+  {
+    links: [
+      {
+        id: 1,
+        enabled: true,
+        update-user: "root",
+        link-config-values: [],
+        name: "First Link",
+        creation-date: 1415309361756,
+        connector-id: 1,
+        update-date: 1415309361756,
+        creation-user: "root"
+      },
+      {
+        id: 2,
+        enabled: true,
+        update-user: "root",
+        link-config-values: [],
+        name: "Second Link",
+        creation-date: 1415309390807,
+        connector-id: 2,
+        update-date: 1415309390807,
+        creation-user: "root"
+      }
+    ]
+  }
+
+
+/v1/links?cname=[cname] - [GET]  Get all links by Connector
+------------------------------------------------------------
+Get all the links for a given connector identified by the ``[cname]`` part.
+
+
+/v1/link/[lname]  or /v1/link/[lid] - [GET] - Get Link
+-------------------------------------------------------------------------------
+
+Provide the id or unique name of the link in the URL ``[lid]`` or ``[lname]`` part.
+
+Get all the details of the link including the id, name, type and the corresponding config input values for the link
+
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example:
+
+::
+
+ {
+    link: {
+        id: 1,
+        enabled: true,
+        link-config-values: [{
+            id: 1,
+            inputs: [{
+                id: 1,
+                name: "linkConfig.uri",
+                value: "hdfs%3A%2F%2Fnamenode%3A8090",
+                type: "STRING",
+                size: 255,
+                sensitive: false
+            }],
+            name: "linkConfig",
+            type: "LINK"
+        }],
+        update-user: "root",
+        name: "First Link",
+        creation-date: 1415287846371,
+        connector-id: 1,
+        update-date: 1415287846371,
+        creation-user: "root"
+    }
+ }
+
+/v1/link - [POST] - Create Link
+---------------------------------------------------------
+
+Create a new link object. Provide values for the link config inputs that are required.
+
+* Method: ``POST``
+* Format: ``JSON``
+* Fields of Request:
+
++--------------------------+--------------------------------------------------------------------------------------+
+|   Field                  | Description                                                                          |
++==========================+======================================================================================+
+| ``link``                 | The root of the post data in JSON                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``id``                   | The id of the link can be left blank in the post data                                |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``enabled``              | Whether to enable this link (true/false)                                             |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``update-date``          | The last updated time of this link                                                   |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``creation-date``        | The creation time of this link                                                       |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``update-user``          | The user who updated this link                                                       |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``creation-user``        | The user who created this link                                                       |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``name``                 | The name of this link                                                                |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``link-config-values``   | Config input values for link config for the corresponding connector                  |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``connector-id``         | The id of the connector used for this link                                           |
++--------------------------+--------------------------------------------------------------------------------------+
+
+* Request Example:
+
+::
+
+  {
+    link: {
+        id: -1,
+        enabled: true,
+        link-config-values: [{
+            id: 1,
+            inputs: [{
+                id: 1,
+                name: "linkConfig.uri",
+                value: "hdfs%3A%2F%2Fvbsqoop-1.ent.cloudera.com%3A8020%2Fuser%2Froot%2Fjob1",
+                type: "STRING",
+                size: 255,
+                sensitive: false
+            }],
+            name: "testInput",
+            type: "LINK"
+        }],
+        update-user: "root",
+        name: "testLink",
+        creation-date: 1415202223048,
+        connector-id: 1,
+        update-date: 1415202223048,
+        creation-user: "root"
+    }
+  }
+
+* Fields of Response:
+
++---------------------------+--------------------------------------------------------------------------------------+
+|   Field                   | Description                                                                          |
++===========================+======================================================================================+
+| ``id``                    | The id assigned to this newly created link                                           |
++---------------------------+--------------------------------------------------------------------------------------+
+| ``validation-result``     | The validation status for the link config inputs given in the post data              |
++---------------------------+--------------------------------------------------------------------------------------+
+
+* ERROR Response Example:
+
+::
+
+   {
+     "validation-result": [
+         {
+             "linkConfig": [
+                 {
+                     "message": "Invalid URI. URI must either be null or a valid URI. Here are a few valid example URIs: hdfs://example.com:8020/, hdfs://example.com/, file:///, file:///tmp, file://localhost/tmp",
+                     "status": "ERROR"
+                 }
+             ]
+         }
+     ]
+   }
+
+
+/v1/link/[lname]  or /v1/link/[lid] - [PUT] - Update Link
+---------------------------------------------------------
+
+Update an existing link object with name [lname] or id [lid]. To make the procedure of filling inputs easier, the general practice
+is to get the link first and then change some of the input values.
+
+* Method: ``PUT``
+* Format: ``JSON``
+
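+* Fields of Request: the same as Create Link.
+
+* Request Example (a sketch only; it reuses the link record from the Get Link example above, with the ``linkConfig.uri`` value changed to an illustrative new value):
+
+::
+
+  {
+    link: {
+        id: 1,
+        enabled: true,
+        link-config-values: [{
+            id: 1,
+            inputs: [{
+                id: 1,
+                name: "linkConfig.uri",
+                value: "hdfs%3A%2F%2Fnamenode%3A8020",
+                type: "STRING",
+                size: 255,
+                sensitive: false
+            }],
+            name: "linkConfig",
+            type: "LINK"
+        }],
+        update-user: "root",
+        name: "First Link",
+        creation-date: 1415287846371,
+        connector-id: 1,
+        update-date: 1415287846371,
+        creation-user: "root"
+    }
+  }
+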
+* OK Response Example:
+
+::
+
+  {
+    "validation-result": [
+        {}
+    ]
+  }
+
+/v1/link/[lname]  or /v1/link/[lid]  - [DELETE] - Delete Link
+-----------------------------------------------------------------
+
+Delete a link with name ``[lname]`` or id ``[lid]``.
+
+* Method: ``DELETE``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/link/[lid]/enable  or /v1/link/[lname]/enable  - [PUT] - Enable Link
+--------------------------------------------------------------------------------
+
+Enable a link with id ``lid`` or name ``lname``
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/link/[lid]/disable  or /v1/link/[lname]/disable  - [PUT] - Disable Link
+--------------------------------------------------------------------------------
+
+Disable a link with id ``lid`` or name ``lname``
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/jobs/ - [GET]  Get all jobs
+-------------------------------------------
+
+Get all the jobs created in Sqoop
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example:
+
+::
+
+  {
+    jobs: [{
+        driver-config-values: [],
+        enabled: true,
+        from-connector-id: 1,
+        update-user: "root",
+        to-config-values: [],
+        to-connector-id: 2,
+        creation-date: 1415310157618,
+        update-date: 1415310157618,
+        creation-user: "root",
+        id: 1,
+        to-link-id: 2,
+        from-config-values: [],
+        name: "First Job",
+        from-link-id: 1
+    }, {
+        driver-config-values: [],
+        enabled: true,
+        from-connector-id: 2,
+        update-user: "root",
+        to-config-values: [],
+        to-connector-id: 1,
+        creation-date: 1415310650600,
+        update-date: 1415310650600,
+        creation-user: "root",
+        id: 2,
+        to-link-id: 1,
+        from-config-values: [],
+        name: "Second Job",
+        from-link-id: 2
+    }]
+  }
+
+/v1/jobs?cname=[cname] - [GET]  Get all jobs by connector
+------------------------------------------------------------
+Get all the jobs for a given connector identified by the ``[cname]`` part of the url.
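+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example (a sketch; the payload has the same shape as the "Get all jobs" response above, restricted to jobs that use the given connector, so the values below are illustrative only):
+
+::
+
+  {
+    jobs: [{
+        driver-config-values: [],
+        enabled: true,
+        from-connector-id: 1,
+        update-user: "root",
+        to-config-values: [],
+        to-connector-id: 2,
+        creation-date: 1415310157618,
+        update-date: 1415310157618,
+        creation-user: "root",
+        id: 1,
+        to-link-id: 2,
+        from-config-values: [],
+        name: "First Job",
+        from-link-id: 1
+    }]
+  }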
+
+
+/v1/job/[jname] or /v1/job/[jid] - [GET] - Get Job
+-----------------------------------------------------
+
+Provide the name or the id of the job in the ``[jname]`` or ``[jid]`` part of the url.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example:
+
+::
+
+  {
+    job: {
+        driver-config-values: [{
+                id: 7,
+                inputs: [{
+                    id: 25,
+                    name: "throttlingConfig.numExtractors",
+                    value: "3",
+                    type: "INTEGER",
+                    sensitive: false
+                }, {
+                    id: 26,
+                    name: "throttlingConfig.numLoaders",
+                    value: "3",
+                    type: "INTEGER",
+                    sensitive: false
+                }],
+                name: "throttlingConfig",
+                type: "JOB"
+            }],
+            enabled: true,
+            from-connector-id: 1,
+            update-user: "root",
+            to-config-values: [{
+                id: 6,
+                inputs: [{
+                    id: 19,
+                    name: "toJobConfig.schemaName",
+                    type: "STRING",
+                    size: 50,
+                    sensitive: false
+                }, {
+                    id: 20,
+                    name: "toJobConfig.tableName",
+                    value: "text",
+                    type: "STRING",
+                    size: 2000,
+                    sensitive: false
+                }, {
+                    id: 21,
+                    name: "toJobConfig.sql",
+                    type: "STRING",
+                    size: 50,
+                    sensitive: false
+                }, {
+                    id: 22,
+                    name: "toJobConfig.columns",
+                    type: "STRING",
+                    size: 50,
+                    sensitive: false
+                }, {
+                    id: 23,
+                    name: "toJobConfig.stageTableName",
+                    type: "STRING",
+                    size: 2000,
+                    sensitive: false
+                }, {
+                    id: 24,
+                    name: "toJobConfig.shouldClearStageTable",
+                    type: "BOOLEAN",
+                    sensitive: false
+                }],
+                name: "toJobConfig",
+                type: "JOB"
+            }],
+            to-connector-id: 2,
+            creation-date: 1415310157618,
+            update-date: 1415310157618,
+            creation-user: "root",
+            id: 1,
+            to-link-id: 2,
+            from-config-values: [{
+                id: 2,
+                inputs: [{
+                    id: 2,
+                    name: "fromJobConfig.inputDirectory",
+                    value: "hdfs%3A%2F%2Fvbsqoop-1.ent.cloudera.com%3A8020%2Fuser%2Froot%2Fjob1",
+                    type: "STRING",
+                    size: 255,
+                    sensitive: false
+                }],
+                name: "fromJobConfig",
+                type: "JOB"
+            }],
+            name: "First Job",
+            from-link-id: 1
+    }
+ }
+
+
+/v1/job - [POST] - Create Job
+---------------------------------------------------------
+
+Create a new job object with the corresponding config values.
+
+* Method: ``POST``
+* Format: ``JSON``
+
+* Fields of Request:
+
+
++--------------------------+--------------------------------------------------------------------------------------+
+|   Field                  | Description                                                                          |
++==========================+======================================================================================+
+| ``job``                  | The root of the post data in JSON                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``from-link-id``         | The id of the from link for the job                                                  |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``to-link-id``           | The id of the to link for the job                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``id``                   | The id of the job can be left blank in the post data                                 |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``enabled``              | Whether to enable this job (true/false)                                              |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``update-date``          | The last updated time of this job                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``creation-date``        | The creation time of this job                                                        |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``update-user``          | The user who updated this job                                                        |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``creation-user``        | The user who created this job                                                        |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``name``                 | The name of this job                                                                 |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``from-config-values``   | Config input values for FROM part of the job                                         |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``to-config-values``     | Config input values for TO part of the job                                           |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``driver-config-values`` | Config input values for driver                                                       |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``connector-id``         | The id of the connector used for this link                                           |
++--------------------------+--------------------------------------------------------------------------------------+
+
+
+* Request Example:
+
+::
+
+ {
+   job: {
+     driver-config-values: [
+       {
+         id: 7,
+         inputs: [
+           {
+             id: 25,
+             name: "throttlingConfig.numExtractors",
+             value: "3",
+             type: "INTEGER",
+             sensitive: false
+           },
+           {
+             id: 26,
+             name: "throttlingConfig.numLoaders",
+             value: "3",
+             type: "INTEGER",
+             sensitive: false
+           }
+         ],
+         name: "throttlingConfig",
+         type: "JOB"
+       }
+     ],
+     enabled: true,
+     from-connector-id: 1,
+     update-user: "root",
+     to-config-values: [
+       {
+         id: 6,
+         inputs: [
+           {
+             id: 19,
+             name: "toJobConfig.schemaName",
+             type: "STRING",
+             size: 50,
+             sensitive: false
+           },
+           {
+             id: 20,
+             name: "toJobConfig.tableName",
+             value: "text",
+             type: "STRING",
+             size: 2000,
+             sensitive: false
+           },
+           {
+             id: 21,
+             name: "toJobConfig.sql",
+             type: "STRING",
+             size: 50,
+             sensitive: false
+           },
+           {
+             id: 22,
+             name: "toJobConfig.columns",
+             type: "STRING",
+             size: 50,
+             sensitive: false
+           },
+           {
+             id: 23,
+             name: "toJobConfig.stageTableName",
+             type: "STRING",
+             size: 2000,
+             sensitive: false
+           },
+           {
+             id: 24,
+             name: "toJobConfig.shouldClearStageTable",
+             type: "BOOLEAN",
+             sensitive: false
+           }
+         ],
+         name: "toJobConfig",
+         type: "JOB"
+       }
+     ],
+     to-connector-id: 2,
+     creation-date: 1415310157618,
+     update-date: 1415310157618,
+     creation-user: "root",
+     id: -1,
+     to-link-id: 2,
+     from-config-values: [
+       {
+         id: 2,
+         inputs: [
+           {
+             id: 2,
+             name: "fromJobConfig.inputDirectory",
+             value: "hdfs%3A%2F%2Fvbsqoop-1.ent.cloudera.com%3A8020%2Fuser%2Froot%2Fjob1",
+             type: "STRING",
+             size: 255,
+             sensitive: false
+           }
+         ],
+         name: "fromJobConfig",
+         type: "JOB"
+       }
+     ],
+     name: "Test Job",
+     from-link-id: 1
+    }
+  }
+
+* Fields of Response:
+
++---------------------------+--------------------------------------------------------------------------------------+
+|   Field                   | Description                                                                          |
++===========================+======================================================================================+
+| ``id``                    | The id assigned to this newly created job                                            |
+---------------------------+--------------------------------------------------------------------------------------+
+| ``validation-result``     | The validation status for the job config and driver config inputs in the post data   |
++---------------------------+--------------------------------------------------------------------------------------+
+
+
+* ERROR Response Example:
+
+::
+
+   {
+     "validation-result": [
+         {
+             "linkConfig": [
+                 {
+                     "message": "Invalid URI. URI must either be null or a valid URI. Here are a few valid example URIs: hdfs://example.com:8020/, hdfs://example.com/, file:///, file:///tmp, file://localhost/tmp",
+                     "status": "ERROR"
+                 }
+             ]
+         }
+     ]
+   }
+
+
+/v1/job/[jid] - [PUT] - Update Job
+---------------------------------------------------------
+
+Update an existing job object with id [jid]. To make the procedure of filling inputs easier, the general practice
+is to get the existing job object first and then change some of the inputs.
+
+* Method: ``PUT``
+* Format: ``JSON``
+
+* Fields of Request: the same as Create Job.
+
+* OK Response Example:
+
+::
+
+  {
+    "validation-result": [
+        {}
+    ]
+  }
+
+
+/v1/job/[jid] - [DELETE] - Delete Job
+---------------------------------------------------------
+
+Delete a job with id ``jid``.
+
+* Method: ``DELETE``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/job/[jid]/enable - [PUT] - Enable Job
+---------------------------------------------------------
+
+Enable a job with id ``jid``.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/job/[jid]/disable - [PUT] - Disable Job
+---------------------------------------------------------
+
+Disable a job with id ``jid``.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+
+/v1/job/[jid]/start or /v1/job/[jname]/start - [PUT]- Start Job
+---------------------------------------------------------------------------------
+
+Start a job with name ``[jname]`` or with id ``[jid]`` to trigger the job execution
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``Submission Record``
+
+* BOOTING Response Example
+
+::
+
+  {
+    "submission": {
+      "progress": -1,
+      "last-update-date": 1415312531188,
+      "external-id": "job_1412137947693_0004",
+      "status": "BOOTING",
+      "job": 2,
+      "creation-date": 1415312531188,
+      "to-schema": {
+        "created": 1415312531426,
+        "name": "HDFS file",
+        "columns": []
+      },
+      "external-link": "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/",
+      "from-schema": {
+        "created": 1415312531342,
+        "name": "text",
+        "columns": [
+          {
+            "name": "id",
+            "nullable": true,
+            "unsigned": null,
+            "type": "FIXED_POINT",
+            "size": null
+          },
+          {
+            "name": "txt",
+            "nullable": true,
+            "type": "TEXT",
+            "size": null
+          }
+        ]
+      }
+    }
+  }
+
+* SUCCEEDED Response Example
+
+::
+
+   {
+     submission: {
+       progress: -1,
+       last-update-date: 1415312809485,
+       external-id: "job_1412137947693_0004",
+       status: "SUCCEEDED",
+       job: 2,
+       creation-date: 1415312531188,
+       external-link: "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/",
+       counters: {
+         org.apache.hadoop.mapreduce.JobCounter: {
+           SLOTS_MILLIS_MAPS: 373553,
+           MB_MILLIS_MAPS: 382518272,
+           TOTAL_LAUNCHED_MAPS: 10,
+           MILLIS_MAPS: 373553,
+           VCORES_MILLIS_MAPS: 373553,
+           OTHER_LOCAL_MAPS: 10
+         },
+         org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter: {
+           BYTES_WRITTEN: 0
+         },
+         org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter: {
+           BYTES_READ: 0
+         },
+         org.apache.hadoop.mapreduce.TaskCounter: {
+           MAP_INPUT_RECORDS: 0,
+           MERGED_MAP_OUTPUTS: 0,
+           PHYSICAL_MEMORY_BYTES: 4065599488,
+           SPILLED_RECORDS: 0,
+           COMMITTED_HEAP_BYTES: 3439853568,
+           CPU_MILLISECONDS: 236900,
+           FAILED_SHUFFLE: 0,
+           VIRTUAL_MEMORY_BYTES: 15231422464,
+           SPLIT_RAW_BYTES: 1187,
+           MAP_OUTPUT_RECORDS: 1000000,
+           GC_TIME_MILLIS: 7282
+         },
+         org.apache.hadoop.mapreduce.FileSystemCounter: {
+           FILE_WRITE_OPS: 0,
+           FILE_READ_OPS: 0,
+           FILE_LARGE_READ_OPS: 0,
+           FILE_BYTES_READ: 0,
+           HDFS_BYTES_READ: 1187,
+           FILE_BYTES_WRITTEN: 1191230,
+           HDFS_LARGE_READ_OPS: 0,
+           HDFS_WRITE_OPS: 10,
+           HDFS_READ_OPS: 10,
+           HDFS_BYTES_WRITTEN: 276389736
+         },
+         org.apache.sqoop.submission.counter.SqoopCounters: {
+           ROWS_READ: 1000000
+         }
+       }
+     }
+   }
+
+
+* ERROR Response Example
+
+::
+
+  {
+    "submission": {
+      "progress": -1,
+      "last-update-date": 1415312390570,
+      "status": "FAILURE_ON_SUBMIT",
+      "error-summary": "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_0000:Error occurs during partitioner run",
+      "job": 1,
+      "creation-date": 1415312390570,
+      "to-schema": {
+        "created": 1415312390797,
+        "name": "text",
+        "columns": [
+          {
+            "name": "id",
+            "nullable": true,
+            "unsigned": null,
+            "type": "FIXED_POINT",
+            "size": null
+          },
+          {
+            "name": "txt",
+            "nullable": true,
+            "type": "TEXT",
+            "size": null
+          }
+        ]
+      },
+      "from-schema": {
+        "created": 1415312390778,
+        "name": "HDFS file",
+        "columns": [
+        ]
+      },
+      "error-details": "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_00"
+    }
+  }
+
+/v1/job/[jid]/stop or /v1/job/[jname]/stop  - [PUT]- Stop Job
+---------------------------------------------------------------------------------
+
+Stop a job with name ``[jname]`` or with id ``[jid]`` to abort the running job.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``Submission Record``
+
+/v1/job/[jid]/status or /v1/job/[jname]/status  - [GET]- Get Job Status
+---------------------------------------------------------------------------------
+
+Get status of the running job with name ``[jname]`` or with id ``[jid]``
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``Submission Record``
+
+::
+
+  {
+      "submission": {
+          "progress": 0.25,
+          "last-update-date": 1415312603838,
+          "external-id": "job_1412137947693_0004",
+          "status": "RUNNING",
+          "job": 2,
+          "creation-date": 1415312531188,
+          "external-link": "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/"
+      }
+  }
+
+/v1/submissions? - [GET] - Get all job Submissions
+----------------------------------------------------------------------
+
+Get all the submissions for every job started in Sqoop
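+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: a ``submissions`` list; each record carries the same fields as in the example of the next section.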
+
+/v1/submissions?jname=[jname] - [GET] - Get Submissions by Job
+----------------------------------------------------------------------
+
+Retrieve all job submissions in the past for the given job. Each submission record will have details such as the status, counters and urls for those submissions.
+
+Provide the name of the job in the ``[jname]`` part of the url.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+* Fields of Response:
+
++--------------------------+--------------------------------------------------------------------------------------+
+|   Field                  | Description                                                                          |
++==========================+======================================================================================+
+| ``progress``             | The progress of the running Sqoop job                                                |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``job``                  | The id of the Sqoop job                                                              |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``creation-date``        | The submission timestamp                                                             |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``last-update-date``     | The timestamp of the last status update                                              |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``status``               | The status of this job submission                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``external-id``          | The job id of Sqoop job running on Hadoop                                            |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``external-link``        | The link to track the job status on Hadoop                                           |
++--------------------------+--------------------------------------------------------------------------------------+
+
+* Response Example:
+
+::
+
+  {
+    submissions: [
+      {
+        progress: -1,
+        last-update-date: 1415312809485,
+        external-id: "job_1412137947693_0004",
+        status: "SUCCEEDED",
+        job: 2,
+        creation-date: 1415312531188,
+        external-link: "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/",
+        counters: {
+          org.apache.hadoop.mapreduce.JobCounter: {
+            SLOTS_MILLIS_MAPS: 373553,
+            MB_MILLIS_MAPS: 382518272,
+            TOTAL_LAUNCHED_MAPS: 10,
+            MILLIS_MAPS: 373553,
+            VCORES_MILLIS_MAPS: 373553,
+            OTHER_LOCAL_MAPS: 10
+          },
+          org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter: {
+            BYTES_WRITTEN: 0
+          },
+          org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter: {
+            BYTES_READ: 0
+          },
+          org.apache.hadoop.mapreduce.TaskCounter: {
+            MAP_INPUT_RECORDS: 0,
+            MERGED_MAP_OUTPUTS: 0,
+            PHYSICAL_MEMORY_BYTES: 4065599488,
+            SPILLED_RECORDS: 0,
+            COMMITTED_HEAP_BYTES: 3439853568,
+            CPU_MILLISECONDS: 236900,
+            FAILED_SHUFFLE: 0,
+            VIRTUAL_MEMORY_BYTES: 15231422464,
+            SPLIT_RAW_BYTES: 1187,
+            MAP_OUTPUT_RECORDS: 1000000,
+            GC_TIME_MILLIS: 7282
+          },
+          org.apache.hadoop.mapreduce.FileSystemCounter: {
+            FILE_WRITE_OPS: 0,
+            FILE_READ_OPS: 0,
+            FILE_LARGE_READ_OPS: 0,
+            FILE_BYTES_READ: 0,
+            HDFS_BYTES_READ: 1187,
+            FILE_BYTES_WRITTEN: 1191230,
+            HDFS_LARGE_READ_OPS: 0,
+            HDFS_WRITE_OPS: 10,
+            HDFS_READ_OPS: 10,
+            HDFS_BYTES_WRITTEN: 276389736
+          },
+          org.apache.sqoop.submission.counter.SqoopCounters: {
+            ROWS_READ: 1000000
+          }
+        }
+      },
+      {
+        progress: -1,
+        last-update-date: 1415312390570,
+        status: "FAILURE_ON_SUBMIT",
+        error-summary: "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_0000:Error occurs during partitioner run",
+        job: 1,
+        creation-date: 1415312390570,
+        error-details: "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_0000:Error occurs during partitioner...."
+      }
+    ]
+  }
+
+/v1/authorization/roles/create - [POST] - Create Role
+-----------------------------------------------------
+
+Create a new role object. Provide a value for the required ``name`` field.
+
+* Method: ``POST``
+* Format: ``JSON``
+* Fields of Request:
+
++--------------------------+--------------------------------------------------------------------------------------+
+|   Field                  | Description                                                                          |
++==========================+======================================================================================+
+| ``role``                 | The root of the post data in JSON                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``name``                 | The name of this role                                                                |
++--------------------------+--------------------------------------------------------------------------------------+
+
+* Request Example:
+
+::
+
+  {
+    role: {
+        name: "testRole"
+    }
+  }
+
+/v1/authorization/role/[role-name]  - [DELETE] - Delete Role
+------------------------------------------------------------
+
+Delete a role with name ``[role-name]``.
+
+* Method: ``DELETE``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/authorization/roles?principal_type=[principal-type]&principal_name=[principal-name] - [GET]  Get all Roles by Principal
+---------------------------------------------------------------------------------------------------------------------------
+
+Get all the roles, or the roles for a given principal identified by the ``[principal-type]`` and ``[principal-name]`` parts.
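+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example (a sketch; the role record shape follows the Create Role request above, so the name is illustrative only):
+
+::
+
+  {
+    roles: [{
+        name: "testRole"
+    }]
+  }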
+
+/v1/authorization/principals?role_name=[rname] - [GET]  Get all Principals by Role
+----------------------------------------------------------------------------------
+
+Get all the principals for a given role identified by the ``[rname]`` part.
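+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example (a sketch; the principal record shape follows the Grant Role request below, so the values are illustrative only):
+
+::
+
+  {
+    principals: [{
+        name: "testPrincipalName",
+        type: "USER"
+    }]
+  }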
+
+/v1/authorization/roles/grant - [PUT] - Grant a Role to a Principal
+-------------------------------------------------------------------
+
+Grant a role with ``[role-name]`` to a principal with ``[principal-type]`` and ``[principal-name]``.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Fields of Request:
+
+The same as Create Role, plus:
+
++--------------------------+--------------------------------------------------------------------------------------+
+|   Field                  | Description                                                                          |
++==========================+======================================================================================+
+| ``principals``           | The root of the post data in JSON                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``name``                 | The name of this principal                                                           |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``type``                 | The type of this principal, ("USER", "GROUP", "ROLE")                                |
++--------------------------+--------------------------------------------------------------------------------------+
+
+* Request Example:
+
+::
+
+  {
+    roles: [{
+        name: "testRole"
+    }],
+    principals: [{
+        name: "testPrincipalName",
+        type: "USER"
+    }]
+  }
+
+* Response Content: ``None``
+
+/v1/authorization/roles/revoke - [PUT] - Revoke a Role from a Principal
+-----------------------------------------------------------------------
+
+Revoke a role with ``[role-name]`` from a principal with ``[principal-type]`` and ``[principal-name]``.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Fields of Request:
+
+The same as Grant Role
+
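+* Request Example (a sketch; it reuses the illustrative role and principal from the Grant Role request above):
+
+::
+
+  {
+    roles: [{
+        name: "testRole"
+    }],
+    principals: [{
+        name: "testPrincipalName",
+        type: "USER"
+    }]
+  }
+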
+* Response Content: ``None``
+
+/v1/authorization/privileges/grant - [PUT] - Grant a Privilege to a Principal
+-----------------------------------------------------------------------------
+
+Grant a privilege with ``[resource-name]``, ``[resource-type]``, ``[action]`` and ``[with-grant-option]`` to a principal with ``[principal-type]`` and ``[principal-name]``.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Fields of Request:
+
+The same as the ``principals`` fields in Grant Role, plus:
+
++--------------------------+--------------------------------------------------------------------------------------+
+|   Field                  | Description                                                                          |
++==========================+======================================================================================+
+| ``privileges``           | The root of the post data in JSON                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``resource-name``        | The resource name of this privilege                                                  |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``resource-type``        | The resource type of this privilege, ("CONNECTOR", "LINK", "JOB")                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``action``               | The action type of this privilege, ("READ", "WRITE", "ALL")                          |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``with-grant-option``    | Whether this privilege carries the grant option (true/false)                         |
++--------------------------+--------------------------------------------------------------------------------------+
+
+* Request Example:
+
+::
+
+  {
+    privileges: [{
+        resource-name: "testResourceName",
+        resource-type: "LINK",
+        action: "READ",
+        with-grant-option: false
+    }],
+    principals: [{
+        name: "testPrincipalName",
+        type: "USER"
+    }]
+  }
+
+* Response Content: ``None``
+
+/v1/authorization/privileges/revoke - [PUT] - Revoke a Privilege from a Principal
+-------------------------------------------------------------------------------------
+
+Revoke a privilege with ``[resource-name]``, ``[resource-type]``, ``[action]`` and ``[with-grant-option]`` from a principal with ``[principal-type]`` and ``[principal-name]``.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Fields of Request:
+
+The same as Grant Privilege
+
+* Response Content: ``None``
+
+/v1/authorization/privileges?principal_type=[principal-type]&principal_name=[principal-name]&resource_type=[resource-type]&resource_name=[resource-name] - [GET]  Get all Privileges by Principal (and Resource)
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+
+Get all the privileges, or the privileges for a given principal identified by ``[principal-type]`` and ``[principal-name]`` (optionally restricted to a given resource identified by ``[resource-type]`` and ``[resource-name]``).
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/dev/Repository.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/dev/Repository.rst b/docs/src/site/sphinx/dev/Repository.rst
new file mode 100644
index 0000000..55daf2e
--- /dev/null
+++ b/docs/src/site/sphinx/dev/Repository.rst
@@ -0,0 +1,335 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+==========
+Repository
+==========
+
+This section contains additional information about the Sqoop repository, the store in which Sqoop persists its metadata (connectors, links, jobs and submissions).
+
+
+Sqoop Schema
+------------
+
+The DDL queries that create the Sqoop repository schema in the Derby database create the following tables:
+
+
+
+SQ_SYSTEM
++++++++++
+Store for various state information
+
+      +----------------------------+
+      | SQ_SYSTEM                  |
+      +============================+
+      | SQM_ID: BIGINT PK          |
+      +----------------------------+
+      | SQM_KEY: VARCHAR(64)       |
+      +----------------------------+
+      | SQM_VALUE: VARCHAR(64)     |
+      +----------------------------+
+
+
+
+
+SQ_DIRECTION
+++++++++++++
+Directions
+
+      +---------------------------------------+-------------+
+      | SQ_DIRECTION                          |             |
+      +=======================================+=============+
+      | SQD_ID: BIGINT PK AUTO-GEN            |             |
+      +---------------------------------------+-------------+
+      | SQD_NAME: VARCHAR(64)                 | "FROM"|"TO" |
+      +---------------------------------------+-------------+
+
+
+
+
+SQ_CONFIGURABLE
++++++++++++++++
+Configurable registration
+
+      +-----------------------------+----------------------+
+      | SQ_CONFIGURABLE             |                      |
+      +=============================+======================+
+      | SQC_ID: BIGINT PK AUTO-GEN  |                      |
+      +-----------------------------+----------------------+
+      | SQC_NAME: VARCHAR(64)       |                      |
+      +-----------------------------+----------------------+
+      | SQC_CLASS: VARCHAR(255)     |                      |
+      +-----------------------------+----------------------+
+      | SQC_TYPE: VARCHAR(32)       | "CONNECTOR"|"DRIVER" |
+      +-----------------------------+----------------------+
+      | SQC_VERSION: VARCHAR(64)    |                      |
+      +-----------------------------+----------------------+
+
+
+
+
+SQ_CONNECTOR_DIRECTIONS
++++++++++++++++++++++++
+Connector directions
+
+      +------------------------------+------------------------------+
+      | SQ_CONNECTOR_DIRECTIONS      |                              |
+      +==============================+==============================+
+      | SQCD_ID: BIGINT PK AUTO-GEN  |                              |
+      +------------------------------+------------------------------+
+      | SQCD_CONNECTOR: BIGINT       | FK SQCD_CONNECTOR(SQC_ID)    |
+      +------------------------------+------------------------------+
+      | SQCD_DIRECTION: BIGINT       | FK SQCD_DIRECTION(SQD_ID)    |
+      +------------------------------+------------------------------+
+
+
+
+
+SQ_CONFIG
++++++++++
+Config details
+
+      +-------------------------------------+------------------------------------------------------+
+      | SQ_CONFIG                           |                                                      |
+      +=====================================+======================================================+
+      | SQ_CFG_ID: BIGINT PK AUTO-GEN       |                                                      |
+      +-------------------------------------+------------------------------------------------------+
+      | SQ_CFG_CONNECTOR: BIGINT            | FK SQ_CFG_CONNECTOR(SQC_ID), NULL for driver         |
+      +-------------------------------------+------------------------------------------------------+
+      | SQ_CFG_NAME: VARCHAR(64)            |                                                      |
+      +-------------------------------------+------------------------------------------------------+
+      | SQ_CFG_TYPE: VARCHAR(32)            | "LINK"|"JOB"                                         |
+      +-------------------------------------+------------------------------------------------------+
+      | SQ_CFG_INDEX: SMALLINT              |                                                      |
+      +-------------------------------------+------------------------------------------------------+
+
+
+
+
+SQ_CONFIG_DIRECTIONS
+++++++++++++++++++++
+Config directions
+
+      +------------------------------+------------------------------+
+      | SQ_CONFIG_DIRECTIONS         |                              |
+      +==============================+==============================+
+      | SQCD_ID: BIGINT PK AUTO-GEN  |                              |
+      +------------------------------+------------------------------+
+      | SQCD_CONFIG: BIGINT          | FK SQCD_CONFIG(SQ_CFG_ID)    |
+      +------------------------------+------------------------------+
+      | SQCD_DIRECTION: BIGINT       | FK SQCD_DIRECTION(SQD_ID)    |
+      +------------------------------+------------------------------+
+
+
+
+
+SQ_INPUT
+++++++++
+Input details
+
+      +----------------------------+--------------------------+
+      | SQ_INPUT                   |                          |
+      +============================+==========================+
+      | SQI_ID: BIGINT PK AUTO-GEN |                          |
+      +----------------------------+--------------------------+
+      | SQI_NAME: VARCHAR(64)      |                          |
+      +----------------------------+--------------------------+
+      | SQI_CONFIG: BIGINT         | FK SQ_CONFIG(SQ_CFG_ID)  |
+      +----------------------------+--------------------------+
+      | SQI_INDEX: SMALLINT        |                          |
+      +----------------------------+--------------------------+
+      | SQI_TYPE: VARCHAR(32)      | "STRING"|"MAP"           |
+      +----------------------------+--------------------------+
+      | SQI_STRMASK: BOOLEAN       |                          |
+      +----------------------------+--------------------------+
+      | SQI_STRLENGTH: SMALLINT    |                          |
+      +----------------------------+--------------------------+
+      | SQI_ENUMVALS: VARCHAR(100) |                          |
+      +----------------------------+--------------------------+
+
+
+
+
+SQ_LINK
++++++++
+Stored links
+
+      +-----------------------------------+--------------------------+
+      | SQ_LINK                           |                          |
+      +===================================+==========================+
+      | SQ_LNK_ID: BIGINT PK AUTO-GEN     |                          |
+      +-----------------------------------+--------------------------+
+      | SQ_LNK_NAME: VARCHAR(64)          |                          |
+      +-----------------------------------+--------------------------+
+      | SQ_LNK_CONNECTOR: BIGINT          | FK SQ_CONNECTOR(SQC_ID)  |
+      +-----------------------------------+--------------------------+
+      | SQ_LNK_CREATION_USER: VARCHAR(32) |                          |
+      +-----------------------------------+--------------------------+
+      | SQ_LNK_CREATION_DATE: TIMESTAMP   |                          |
+      +-----------------------------------+--------------------------+
+      | SQ_LNK_UPDATE_USER: VARCHAR(32)   |                          |
+      +-----------------------------------+--------------------------+
+      | SQ_LNK_UPDATE_DATE: TIMESTAMP     |                          |
+      +-----------------------------------+--------------------------+
+      | SQ_LNK_ENABLED: BOOLEAN           |                          |
+      +-----------------------------------+--------------------------+
+
+
+
+
+SQ_JOB
+++++++
+Stored jobs
+
+      +--------------------------------+-----------------------+
+      | SQ_JOB                         |                       |
+      +================================+=======================+
+      | SQB_ID: BIGINT PK AUTO-GEN     |                       |
+      +--------------------------------+-----------------------+
+      | SQB_NAME: VARCHAR(64)          |                       |
+      +--------------------------------+-----------------------+
+      | SQB_FROM_LINK: BIGINT          | FK SQ_LINK(SQ_LNK_ID) |
+      +--------------------------------+-----------------------+
+      | SQB_TO_LINK: BIGINT            | FK SQ_LINK(SQ_LNK_ID) |
+      +--------------------------------+-----------------------+
+      | SQB_CREATION_USER: VARCHAR(32) |                       |
+      +--------------------------------+-----------------------+
+      | SQB_CREATION_DATE: TIMESTAMP   |                       |
+      +--------------------------------+-----------------------+
+      | SQB_UPDATE_USER: VARCHAR(32)   |                       |
+      +--------------------------------+-----------------------+
+      | SQB_UPDATE_DATE: TIMESTAMP     |                       |
+      +--------------------------------+-----------------------+
+      | SQB_ENABLED: BOOLEAN           |                       |
+      +--------------------------------+-----------------------+
+
+
+
+
+SQ_LINK_INPUT
++++++++++++++
+N:M relationship between link and input
+
+      +----------------------------+-----------------------+
+      | SQ_LINK_INPUT              |                       |
+      +============================+=======================+
+      | SQ_LNKI_LINK: BIGINT PK    | FK SQ_LINK(SQ_LNK_ID) |
+      +----------------------------+-----------------------+
+      | SQ_LNKI_INPUT: BIGINT PK   | FK SQ_INPUT(SQI_ID)   |
+      +----------------------------+-----------------------+
+      | SQ_LNKI_VALUE: LONG VARCHAR|                       |
+      +----------------------------+-----------------------+
+
+
+
+
+SQ_JOB_INPUT
+++++++++++++
+N:M relationship between job and input
+
+      +----------------------------+---------------------+
+      | SQ_JOB_INPUT               |                     |
+      +============================+=====================+
+      | SQBI_JOB: BIGINT PK        | FK SQ_JOB(SQB_ID)   |
+      +----------------------------+---------------------+
+      | SQBI_INPUT: BIGINT PK      | FK SQ_INPUT(SQI_ID) |
+      +----------------------------+---------------------+
+      | SQBI_VALUE: LONG VARCHAR   |                     |
+      +----------------------------+---------------------+
+
+
+
+
+SQ_SUBMISSION
++++++++++++++
+List of submissions
+
+      +-----------------------------------+-------------------+
+      | SQ_SUBMISSION                     |                   |
+      +===================================+===================+
+      | SQS_ID: BIGINT PK                 |                   |
+      +-----------------------------------+-------------------+
+      | SQS_JOB: BIGINT                   | FK SQ_JOB(SQB_ID) |
+      +-----------------------------------+-------------------+
+      | SQS_STATUS: VARCHAR(20)           |                   |
+      +-----------------------------------+-------------------+
+      | SQS_CREATION_USER: VARCHAR(32)    |                   |
+      +-----------------------------------+-------------------+
+      | SQS_CREATION_DATE: TIMESTAMP      |                   |
+      +-----------------------------------+-------------------+
+      | SQS_UPDATE_USER: VARCHAR(32)      |                   |
+      +-----------------------------------+-------------------+
+      | SQS_UPDATE_DATE: TIMESTAMP        |                   |
+      +-----------------------------------+-------------------+
+      | SQS_EXTERNAL_ID: VARCHAR(50)      |                   |
+      +-----------------------------------+-------------------+
+      | SQS_EXTERNAL_LINK: VARCHAR(150)   |                   |
+      +-----------------------------------+-------------------+
+      | SQS_EXCEPTION: VARCHAR(150)       |                   |
+      +-----------------------------------+-------------------+
+      | SQS_EXCEPTION_TRACE: VARCHAR(750) |                   |
+      +-----------------------------------+-------------------+
+
+
+
+
+SQ_COUNTER_GROUP
+++++++++++++++++
+List of counter groups
+
+      +----------------------------+
+      | SQ_COUNTER_GROUP           |
+      +============================+
+      | SQG_ID: BIGINT PK          |
+      +----------------------------+
+      | SQG_NAME: VARCHAR(75)      |
+      +----------------------------+
+
+
+
+
+SQ_COUNTER
+++++++++++
+List of counters
+
+      +----------------------------+
+      | SQ_COUNTER                 |
+      +============================+
+      | SQR_ID: BIGINT PK          |
+      +----------------------------+
+      | SQR_NAME: VARCHAR(75)      |
+      +----------------------------+
+
+
+
+
+SQ_COUNTER_SUBMISSION
++++++++++++++++++++++
+N:M relationship between submissions and counters
+
+      +----------------------------+--------------------------------+
+      | SQ_COUNTER_SUBMISSION      |                                |
+      +============================+================================+
+      | SQRS_GROUP: BIGINT PK      | FK SQ_COUNTER_GROUP(SQG_ID)    |
+      +----------------------------+--------------------------------+
+      | SQRS_COUNTER: BIGINT PK    | FK SQ_COUNTER(SQR_ID)          |
+      +----------------------------+--------------------------------+
+      | SQRS_SUBMISSION: BIGINT PK | FK SQ_SUBMISSION(SQS_ID)       |
+      +----------------------------+--------------------------------+
+      | SQRS_VALUE: BIGINT         |                                |
+      +----------------------------+--------------------------------+
+
+

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/index.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/index.rst b/docs/src/site/sphinx/index.rst
index a18fad2..64bf951 100644
--- a/docs/src/site/sphinx/index.rst
+++ b/docs/src/site/sphinx/index.rst
@@ -20,68 +20,50 @@ Apache Sqoop documentation
 
 Apache Sqoop is a tool designed for efficiently transferring data between structured, semi-structured and unstructured data sources. Relational databases are examples of structured data sources with well defined schema for the data they store. Cassandra, HBase are examples of semi-structured data sources and HDFS is an example of unstructured data source that Sqoop can support.
 
-License
--------
+.. toctree::
+   :maxdepth: 3
+   :numbered:
+   :hidden:
 
-::
+   admin
+   user
+   dev
+   security
 
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
 
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
+Administrator Guide
+--------------------
+If you are an admin trying to set up Sqoop, check out the links below
 
+- `Sqoop Server and Client Installation <admin/Installation.html>`_
+- `Sqoop Server Upgrade <admin/Upgrade.html>`_
+- `Sqoop Tools <admin/Tools.html>`_
 
 User Guide
 ------------
 If you are excited to start using Sqoop you can follow the links below to get a quick overview of the system
 
-- `Sqoop 5 Minute Demo <Sqoop5MinutesDemo.html>`_
-- `Command Line Shell Usage Guide <CommandLineClient.html>`_
-- Connector guides
-
-  + `Generic JDBC Connector <Connector-GenericJDBC.html>`_
-  + `HDFS Connector <Connector-HDFS.html>`_
-  + `Kite Connector <Connector-Kite.html>`_
-  + `Kafka Connector <Connector-Kafka.html>`_
-  + `FTP Connector <Connector-FTP.html>`_
-  + `SFTP Connector <Connector-SFTP.html>`_
-
-- `Security Guide <SecurityGuideOnSqoop2.html>`_
+- `Sqoop 5 Minute Demo <user/Sqoop5MinutesDemo.html>`_
+- `Command Line Shell Usage Guide <user/CommandLineClient.html>`_
+- `Connectors <user/Connectors.html>`_
 
 Developer Guide
 -----------------
 
 If you are keen on contributing to Sqoop and get your hands dirty building connectors or interesting UI/applications for Sqoop internals check out the links below
 
-- `Building Sqoop 2 <BuildingSqoop2.html>`_
-- `Sqoop Development Environment Setup <DevEnv.html>`_
-- `Developing a Sqoop Connector with Connector API <ConnectorDevelopment.html>`_
-- `Developing Sqoop application with REST API <RESTAPI.html>`_
-- `Developing Sqoop application using Sqoop Java Client API <ClientAPI.html>`_
-- `Repository <Repository.html>`_
+- `Building Sqoop 2 <dev/BuildingSqoop2.html>`_
+- `Sqoop Development Environment Setup <dev/DevEnv.html>`_
+- `Developing a Sqoop Connector with Connector API <dev/ConnectorDevelopment.html>`_
+- `Developing Sqoop application with REST API <dev/RESTAPI.html>`_
+- `Developing Sqoop application using Sqoop Java Client API <dev/ClientAPI.html>`_
+- `Repository <dev/Repository.html>`_
 
-Administrator Guide
---------------------
-If you are a admin trying to set up Sqoop, check out the links below
+Security
+---------------
+- `Security Guide <security/SecurityGuideOnSqoop2.html>`_
 
-- `Sqoop Server and Client Installation <Installation.html>`_
-- `Sqoop Server Upgrade <Upgrade.html>`_
-- `Sqoop Tools <Tools.html>`_
-
-Sqoop Project Details
----------------------
+License
+-------
 
-- `Download Apache Sqoop <http://www.apache.org/dyn/closer.cgi/sqoop>`_
-- `Sqoop Apache Wiki <https://cwiki.apache.org/confluence/display/SQOOP/Home>`_
-- `Sqoop Issue Tracking (JIRA) <https://issues.apache.org/jira/browse/SQOOP>`_
-- `Sqoop Source Code <https://git-wip-us.apache.org/repos/asf?p=sqoop.git;a=shortlog;h=refs/heads/sqoop2>`_
+Sqoop is licensed under `Apache Software License v2 <http://www.apache.org/licenses/LICENSE-2.0>`_.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/sqoop/blob/3613843a/docs/src/site/sphinx/security.rst
----------------------------------------------------------------------
diff --git a/docs/src/site/sphinx/security.rst b/docs/src/site/sphinx/security.rst
new file mode 100644
index 0000000..959be78
--- /dev/null
+++ b/docs/src/site/sphinx/security.rst
@@ -0,0 +1,24 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+==============
+Security Guide
+==============
+
+.. toctree::
+   :glob:
+
+   security/*
\ No newline at end of file