Posted to common-commits@hadoop.apache.org by ar...@apache.org on 2018/05/22 20:15:09 UTC

[41/50] [abbrv] hadoop git commit: HDDS-89. Create ozone specific inline documentation as part of the build. Contributed by Elek, Marton.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/481bfdb9/hadoop-ozone/docs/themes/ozonedoc/static/js/ozonedoc.js
----------------------------------------------------------------------
diff --git a/hadoop-ozone/docs/themes/ozonedoc/static/js/ozonedoc.js b/hadoop-ozone/docs/themes/ozonedoc/static/js/ozonedoc.js
new file mode 100644
index 0000000..3f96f00
--- /dev/null
+++ b/hadoop-ozone/docs/themes/ozonedoc/static/js/ozonedoc.js
@@ -0,0 +1,23 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+$(
+  function(){
+    $("table").addClass("table table-condensed table-bordered table-striped");
+  }
+);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/481bfdb9/hadoop-ozone/docs/themes/ozonedoc/theme.toml
----------------------------------------------------------------------
diff --git a/hadoop-ozone/docs/themes/ozonedoc/theme.toml b/hadoop-ozone/docs/themes/ozonedoc/theme.toml
new file mode 100644
index 0000000..9f427fe
--- /dev/null
+++ b/hadoop-ozone/docs/themes/ozonedoc/theme.toml
@@ -0,0 +1,2 @@
+
+name = "Ozonedoc"

http://git-wip-us.apache.org/repos/asf/hadoop/blob/481bfdb9/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneCommandShell.md
----------------------------------------------------------------------
diff --git a/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneCommandShell.md b/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneCommandShell.md
deleted file mode 100644
index fc63742..0000000
--- a/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneCommandShell.md
+++ /dev/null
@@ -1,150 +0,0 @@
-<!---
-  Licensed under the Apache License, Version 2.0 (the "License");
-  you may not use this file except in compliance with the License.
-  You may obtain a copy of the License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License. See accompanying LICENSE file.
--->
-
-Ozone Command Shell
-===================
-
-The Ozone command shell provides a command line interface to work against ozone.
-Please note that this document assumes that the cluster is deployed
-with simple authentication.
-
-The Ozone commands take the following format.
-
-* `ozone oz --command_ http://hostname:port/volume/bucket/key -user
-<name> -root`
-
-The *port* specified in the command should match the port mentioned in the config
-property `hdds.rest.http-address`. This property can be set in `ozone-site.xml`.
-The default value for the port is `9880` and is used in the commands below.
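-
-For example, a plain curl probe against that port (the default `9880` is
-assumed here) is a quick way to confirm that the REST endpoint is reachable;
-any HTTP response, even an error about missing headers, means the port is open.
-
-* `curl -i http://localhost:9880/`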
-
-The *-root* option is a command line shortcut that allows *ozone oz*
-commands to be run as the user that started the cluster. This is useful
-when you want the commands to be run as an admin user. The only
-reason for this option is that it makes life easier for a developer.
-
-Ozone Volume Commands
---------------------
-
-The volume commands allow users to create, delete and list the volumes in the
-ozone cluster.
-
-### Create Volume
-
-Volumes can be created only by Admins. Here is an example of creating a volume.
-
-* `ozone oz -createVolume http://localhost:9880/hive -user bilbo -quota
-100TB -root`
-
-The above command creates a volume called `hive` owned by user `bilbo`. The
-`-root` option allows the command to be executed as user `hdfs` which is an
-admin in the cluster.
-
-### Update Volume
-
-Updates information like ownership and quota on an existing volume.
-
-* `ozone oz  -updateVolume  http://localhost:9880/hive -quota 500TB -root`
-
-The above command changes the volume quota of hive from 100TB to 500TB.
-
-### Delete Volume
-Deletes a Volume if it is empty.
-
-* `ozone oz -deleteVolume http://localhost:9880/hive -root`
-
-
-### Info Volume
-Info volume command allows the owner or the administrator of the cluster to read meta-data about a specific volume.
-
-* `ozone oz -infoVolume http://localhost:9880/hive -root`
-
-### List Volumes
-
-The list volume command can be used by an administrator to list volumes of any user. It can also be used by a user to list their own volumes.
-
-* `ozone oz -listVolume http://localhost:9880/ -user bilbo -root`
-
-The above command lists all volumes owned by user bilbo.
-
-Ozone Bucket Commands
---------------------
-
-Bucket commands follow a similar pattern to volume commands. However, bucket commands are designed to be run by the owner of the volume.
-The following examples assume that these commands are run by the owner of the volume or bucket.
-
-
-### Create Bucket
-
-Create bucket call allows the owner of a volume to create a bucket.
-
-* `ozone oz -createBucket http://localhost:9880/hive/january`
-
-This call creates a bucket called `january` in the volume called `hive`. If
-the volume does not exist, then this call will fail.
-
-
-### Update Bucket
-Updates bucket meta-data, like ACLs.
-
-* `ozone oz -updateBucket http://localhost:9880/hive/january  -addAcl
-user:spark:rw`
-
-### Delete Bucket
-Deletes a bucket if it is empty.
-
-* `ozone oz -deleteBucket http://localhost:9880/hive/january`
-
-### Info Bucket
-Returns information about a given bucket.
-
-* `ozone oz -infoBucket http://localhost:9880/hive/january`
-
-### List Buckets
-List buckets on a given volume.
-
-* `ozone oz -listBucket http://localhost:9880/hive`
-
-Ozone Key Commands
-------------------
-
-Ozone key commands allow users to put, delete and get keys from ozone buckets.
-
-### Put Key
-Creates or overwrites a key in the ozone store; the -file option points to the
-file you want to upload.
-
-* `ozone oz -putKey  http://localhost:9880/hive/january/processed.orc  -file
-processed.orc`
-
-### Get Key
-Downloads a file from the ozone bucket.
-
-* `ozone oz -getKey  http://localhost:9880/hive/january/processed.orc  -file
-  processed.orc.copy`
-
-### Delete Key
-Deletes a key  from the ozone store.
-
-* `ozone oz -deleteKey http://localhost:9880/hive/january/processed.orc`
-
-### Info Key
-Reads  key metadata from the ozone store.
-
-* `ozone oz -infoKey http://localhost:9880/hive/january/processed.orc`
-
-### List Keys
-List all keys in an ozone bucket.
-
-* `ozone oz -listKey  http://localhost:9880/hive/january`
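-
-Putting It Together
--------------------
-
-As a quick example, the commands above can be chained into a first session
-against a freshly started cluster. The URLs and names are the same sample
-values used throughout this document.
-
-* `ozone oz -createVolume http://localhost:9880/hive -user bilbo -quota 100TB -root`
-* `ozone oz -createBucket http://localhost:9880/hive/january`
-* `ozone oz -putKey http://localhost:9880/hive/january/processed.orc -file processed.orc`
-* `ozone oz -listKey http://localhost:9880/hive/january`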

http://git-wip-us.apache.org/repos/asf/hadoop/blob/481bfdb9/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneGettingStarted.md.vm
----------------------------------------------------------------------
diff --git a/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneGettingStarted.md.vm b/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneGettingStarted.md.vm
deleted file mode 100644
index 9e96098..0000000
--- a/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneGettingStarted.md.vm
+++ /dev/null
@@ -1,347 +0,0 @@
-<!---
-  Licensed under the Apache License, Version 2.0 (the "License");
-  you may not use this file except in compliance with the License.
-  You may obtain a copy of the License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License. See accompanying LICENSE file.
--->
-Ozone - Object store for Hadoop
-==============================
-
-Introduction
-------------
-Ozone is an object store for Hadoop. It is a redundant, distributed object
-store built by leveraging primitives present in HDFS. Ozone supports a REST
-API for accessing the store.
-
-Getting Started
----------------
-Ozone is a work in progress and currently lives in the hadoop source tree.
-The subprojects (ozone/hdds) are part of the hadoop source tree but are not
-compiled by default and are not part of the official releases. To
-use it, you have to build a package yourself and deploy a cluster.
-
-### Building Ozone
-
-To build Ozone, please check out the hadoop sources from github. Then
-check out the trunk branch and build it.
-
-`mvn clean package -DskipTests=true -Dmaven.javadoc.skip=true -Pdist -Phdds -Dtar -DskipShade`
-
-The skipShade option just makes compilation faster and is not really required.
-
-This will give you a tarball in your distribution directory. This is the
-tarball that can be used for deploying your hadoop cluster. Here is an
-example of the tarball that will be generated.
-
-* `~/apache/hadoop/hadoop-dist/target/${project.version}.tar.gz`
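-
-The tarball can then be unpacked on the machines where you want to run ozone.
-The target directory below is only an example:
-
-* `tar -xzf ~/apache/hadoop/hadoop-dist/target/${project.version}.tar.gz -C /opt`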
-
-At this point we have an option to set up a physical cluster or run ozone via
-docker.
-
-Running Ozone via Docker
-------------------------
-
-This assumes that you have a running docker setup on the machine. Please run
-the following commands to see ozone in action.
-
- Go to the directory where the docker compose files exist.
-
-
- - `cd hadoop-dist/target/compose/ozone`
-
-Tell docker to start ozone. This will start a KSM, an SCM and a single datanode in
-the background.
-
-
- - `docker-compose up -d`
-
-Now let us run some workload against ozone; to do that we will run freon.
-
-This will log into the datanode and run bash.
-
- - `docker-compose exec datanode bash`
-
-Now you can run the `ozone` command shell or freon, the ozone load generator.
-
-This is the command to run freon.
-
- - `ozone freon -mode offline -validateWrites -numOfVolumes 1 -numOfBuckets 10 -numOfKeys 100`
-
-You can check out the KSM UI to see the request information.
-
- - `http://localhost:9874/`
-
-If you need more datanodes you can scale up:
-
- - `docker-compose scale datanode=3`
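-
-When you are done, the same compose file can be used to inspect and stop the
-cluster. These are standard docker-compose commands, nothing ozone specific:
-
- - `docker-compose ps`
- - `docker-compose logs`
- - `docker-compose down`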
-
-Running Ozone using a real cluster
-----------------------------------
-
-Please proceed to set up a hadoop cluster by creating the hdfs-site.xml and
-other configuration files that are needed for your cluster.
-
-
-### Ozone Configuration
-
-Ozone relies on its own configuration file called `ozone-site.xml`. It is
-just for convenience and ease of management --  you can add these settings
-to `hdfs-site.xml`, if you don't want to keep ozone settings separate.
-This document refers to `ozone-site.xml` so that ozone settings are in one
-place  and not mingled with HDFS settings.
-
- * _*ozone.enabled*_  This is the most important setting for ozone.
- Currently, Ozone is an opt-in subsystem of HDFS. By default, Ozone is
- disabled. Setting this flag to `true` enables ozone in the HDFS cluster.
- Here is an example,
-
-```
-    <property>
-       <name>ozone.enabled</name>
-       <value>True</value>
-    </property>
-```
- *  _*ozone.metadata.dirs*_ Ozone is designed with modern hardware
- in mind. It tries to use SSDs effectively. So users can specify where the
- metadata must reside. Usually you pick your fastest disks (SSDs if
- you have them on your nodes). KSM, SCM and the datanodes will write the metadata
- to these disks. This is a required setting; if it is missing, Ozone will
- fail to come up. Here is an example,
-
-```
-   <property>
-      <name>ozone.metadata.dirs</name>
-      <value>/data/disk1/meta</value>
-   </property>
-```
-
-* _*ozone.scm.names*_ Ozone is built on top of the container framework. Storage
- Container Manager (SCM) is a distributed block service which is used by ozone
- and other storage services.
- This property allows datanodes to discover where SCM is, so that
- datanodes can send heartbeats to SCM. SCM is designed to be highly available
- and datanodes assume there are multiple instances of SCM which form a highly
- available ring. The HA feature of SCM is a work in progress, so for now we
- configure ozone.scm.names to be a single machine. Here is an example,
-
-```
-    <property>
-      <name>ozone.scm.names</name>
-      <value>scm.hadoop.apache.org</value>
-    </property>
-```
-
-* _*ozone.scm.datanode.id*_ Each datanode that speaks to SCM generates an ID,
-just as in HDFS. This is an optional setting. Please note that
-this path will be created by the datanodes if it doesn't already exist. Here is an
- example,
-
-```
-   <property>
-      <name>ozone.scm.datanode.id</name>
-      <value>/data/disk1/scm/meta/node/datanode.id</value>
-   </property>
-```
-
-* _*ozone.scm.block.client.address*_ Storage Container Manager (SCM) offers a
- set of services that can be used to build a distributed storage system. One
- of the services offered is the block service. KSM and HDFS would use this
- service. This property describes where KSM can discover SCM's block service
- endpoint. There are corresponding ports etc., but assuming that we are using
- the default ports, the server address is the only required field. Here is an
- example,
-
-```
-    <property>
-      <name>ozone.scm.block.client.address</name>
-      <value>scm.hadoop.apache.org</value>
-    </property>
-```
-
-* _*ozone.ksm.address*_ KSM server address. This is used by Ozonehandler and
-Ozone File System.
-
-```
-    <property>
-       <name>ozone.ksm.address</name>
-       <value>ksm.hadoop.apache.org</value>
-    </property>
-```
-
-* _*dfs.datanode.plugins*_ Datanode service plugins: the container manager part
- of ozone runs inside the datanode as a service plugin. To activate ozone
- you should define the service plugin implementation class. **Important:**
- it should be added to **hdfs-site.xml**, as the plugin must be activated
- as part of the normal HDFS Datanode bootstrap.
-
-```
-    <property>
-       <name>dfs.datanode.plugins</name>
-       <value>org.apache.hadoop.ozone.HddsDatanodeService</value>
-    </property>
-```
-
-Here is a quick summary of settings needed by Ozone.
-
-| Setting                        | Value                        | Comment |
-|--------------------------------|------------------------------|------------------------------------------------------------------|
-| ozone.enabled                  | True                         | This enables SCM and  containers in HDFS cluster.                |
-| ozone.metadata.dirs            | file path                    | The metadata will be stored here.                                |
-| ozone.scm.names                | SCM server name              | Hostname:port or IP:port address of SCM.                         |
-| ozone.scm.block.client.address | SCM server name and port     | Used by services like KSM                                        |
-| ozone.scm.client.address       | SCM server name and port     | Used by client side                                              |
-| ozone.scm.datanode.address     | SCM server name and port     | Used by datanode to talk to SCM                                  |
-| ozone.ksm.address              | KSM server name              | Used by Ozone handler and Ozone file system.                     |
-
- Here is a working example of `ozone-site.xml`.
-
-```
-    <?xml version="1.0" encoding="UTF-8"?>
-    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-    <configuration>
-        <property>
-          <name>ozone.enabled</name>
-          <value>True</value>
-        </property>
-
-        <property>
-          <name>ozone.metadata.dirs</name>
-          <value>/data/disk1/ozone/meta</value>
-        </property>
-
-        <property>
-          <name>ozone.scm.names</name>
-          <value>127.0.0.1</value>
-        </property>
-
-        <property>
-           <name>ozone.scm.client.address</name>
-           <value>127.0.0.1:9860</value>
-        </property>
-
-         <property>
-           <name>ozone.scm.block.client.address</name>
-           <value>127.0.0.1:9863</value>
-         </property>
-
-         <property>
-           <name>ozone.scm.datanode.address</name>
-           <value>127.0.0.1:9861</value>
-         </property>
-
-         <property>
-           <name>ozone.ksm.address</name>
-           <value>127.0.0.1:9874</value>
-         </property>
-    </configuration>
-```
-
-And don't forget to enable the datanode component by adding the
-following configuration to hdfs-site.xml:
-
-```
-    <property>
-       <name>dfs.datanode.plugins</name>
-       <value>org.apache.hadoop.ozone.HddsDatanodeService</value>
-    </property>
-```
-
-### Starting Ozone
-
-Ozone is designed to run concurrently with HDFS. The simplest way to [start
-HDFS](../hadoop-common/ClusterSetup.html) is to run `start-dfs.sh` from
-`$HADOOP/sbin`. Once HDFS
-is running, please verify it is fully functional by running some commands like
-
-   - *./hdfs dfs -mkdir /usr*
-   - *./hdfs dfs -ls /*
-
- Once you are sure that HDFS is running, start Ozone. To start ozone, you
- need to start SCM and KSM. Currently we assume that both KSM and SCM
- are running on the same node; this will change in the future.
-
- The first time you bring up Ozone, SCM must be initialized.
-
-   - `./ozone scm -init`
-
- Start SCM.
-
-   - `./ozone --daemon start scm`
-
- Once SCM gets started, KSM must be initialized.
-
-   - `./ozone ksm -createObjectStore`
-
- Start KSM.
-
-   - `./ozone --daemon start ksm`
-
-If you would like to start HDFS and Ozone together, you can do that by running
- a single command.
- - `$HADOOP/sbin/start-ozone.sh`
-
- This command will start HDFS and then start the ozone components.
-
- Once you have ozone running you can use these ozone [shell](./OzoneCommandShell.html)
- commands to  create a  volume, bucket and keys.
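-
- For example, the following two commands (taken from the shell documentation,
- with the default localhost port) are a quick smoke test of a new cluster:
-
-   - `ozone oz -createVolume http://localhost:9880/hive -user bilbo -quota 100TB -root`
-   - `ozone oz -infoVolume http://localhost:9880/hive -root`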
-
-### Diagnosing issues
-
-Ozone tries not to pollute the existing HDFS configuration and logging streams.
-So ozone logs are by default written to a file
-called `ozone.log`. This is controlled by the settings in the `log4j.properties`
-file in the hadoop configuration directory.
-
-Here are the log4j properties that are added by ozone.
-
-
-```
-   #
-   # Add a logger for ozone that is separate from the Datanode.
-   #
-   #log4j.debug=true
-   log4j.logger.org.apache.hadoop.ozone=DEBUG,OZONE,FILE
-
-   # Do not log into datanode logs. Remove this line to have single log.
-   log4j.additivity.org.apache.hadoop.ozone=false
-
-   # For development purposes, log both to console and log file.
-   log4j.appender.OZONE=org.apache.log4j.ConsoleAppender
-   log4j.appender.OZONE.Threshold=info
-   log4j.appender.OZONE.layout=org.apache.log4j.PatternLayout
-   log4j.appender.OZONE.layout.ConversionPattern=%d{ISO8601} [%t] %-5p \
-    %X{component} %X{function} %X{resource} %X{user} %X{request} - %m%n
-
-   # Real ozone logger that writes to ozone.log
-   log4j.appender.FILE=org.apache.log4j.DailyRollingFileAppender
-   log4j.appender.FILE.File=${hadoop.log.dir}/ozone.log
-   log4j.appender.FILE.Threshold=debug
-   log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
-   log4j.appender.FILE.layout.ConversionPattern=%d{ISO8601} [%t] %-5p \
-     (%F:%L) %X{function} %X{resource} %X{user} %X{request} - \
-      %m%n
-```
-
-If you would like to have a single datanode log instead of ozone output
-being written to ozone.log, please remove the following line or set it to true.
-
- ` log4j.additivity.org.apache.hadoop.ozone=false`
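-
-While debugging, it can be handy to follow that file as commands are issued.
-This assumes your logs land in the usual hadoop log directory pointed to by
-`HADOOP_LOG_DIR`:
-
- - `tail -f $HADOOP_LOG_DIR/ozone.log`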
-
-On the SCM/KSM side, you will be able to see
-
-  - `hadoop-hdfs-ksm-hostname.log`
-  - `hadoop-hdfs-scm-hostname.log`
-
-Please file any issues you see under the related issues:
-
- - [Object store in HDFS: HDFS-7240](https://issues.apache.org/jira/browse/HDFS-7240)
- - [Ozone File System: HDFS-13074](https://issues.apache.org/jira/browse/HDFS-13074)
- - [Building HDFS on top of new storage layer (HDDS): HDFS-10419](https://issues.apache.org/jira/browse/HDFS-10419)
-

http://git-wip-us.apache.org/repos/asf/hadoop/blob/481bfdb9/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneMetrics.md
----------------------------------------------------------------------
diff --git a/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneMetrics.md b/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneMetrics.md
deleted file mode 100644
index f5eccf6..0000000
--- a/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneMetrics.md
+++ /dev/null
@@ -1,166 +0,0 @@
-<!---
-  Licensed under the Apache License, Version 2.0 (the "License");
-  you may not use this file except in compliance with the License.
-  You may obtain a copy of the License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License. See accompanying LICENSE file.
--->
-
-
-
-HDFS Ozone Metrics
-===============
-
-<!-- MACRO{toc|fromDepth=0|toDepth=3} -->
-
-Overview
---------
-
-The container metrics that are used in HDFS Ozone.
-
-### Storage Container Metrics
-
-The metrics for various storage container operations in HDFS Ozone.
-
-Storage container is an optional service that can be enabled by setting
-'ozone.enabled' to true.
-These metrics are only available when ozone is enabled.
-
-Storage Container Metrics maintains a set of generic metrics for all
-container RPC calls that can be made to a datanode/container.
-
-Along with the total number of RPC calls, containers maintain a set of metrics
-for each RPC call. The following is the set of counters maintained for each RPC
-operation.
-
-*Total number of operations* - We maintain an array which counts how
-many times a specific operation has been performed.
-Eg.`NumCreateContainer` tells us how many times create container has been
-invoked on this datanode.
-
-*Total number of pending operations* - This is an array which counts how
-many times a specific operation is waiting to be processed from the client
-point of view.
-Eg.`NumPendingCreateContainer` tells us how many create container requests are
-waiting to be processed.
-
-*Average latency of each pending operation in nanoseconds* - The average latency
-of the operation from the client point of view.
-Eg. `CreateContainerLatencyAvgTime` - This tells us the average latency of
-Create Container from the client point of view.
-
-*Number of bytes involved in a specific command* - This is an array that is
-maintained for all operations, but makes sense only for read and write
-operations.
-
-While it is possible to read the bytes in update container, it really makes
-no sense, since no data stream is involved. Users are advised to use this
-metric only when it makes sense. Eg. `BytesReadChunk` -- tells us how
-many bytes have been read from this datanode using the Read Chunk operation.
-
-*Average Latency of each operation* - The average latency of the operation.
-Eg. `LatencyCreateContainerAvgTime` - This tells us the average latency of
-Create Container.
-
-*Quantiles for each of these operations* - The 50/75/90/95/99th percentile
-of these operations. Eg. `CreateContainerNanos60s50thPercentileLatency` --
-gives latency of the create container operations at the 50th percentile latency
-(1 minute granularity). We report 50th, 75th, 90th, 95th and 99th percentile
-for all RPCs.
-
-So this leads to the containers reporting these counters for each of these
-RPC operations.
-
-| Name | Description |
-|:---- |:---- |
-| `NumOps` | Total number of container operations |
-| `CreateContainer` | Create container operation |
-| `ReadContainer` | Read container operation |
-| `UpdateContainer` | Update container operations |
-| `DeleteContainer` | Delete container operations |
-| `ListContainer` | List container operations |
-| `PutKey` | Put key operations |
-| `GetKey` | Get key operations |
-| `DeleteKey` | Delete key operations |
-| `ListKey` | List key operations |
-| `ReadChunk` | Read chunk operations |
-| `DeleteChunk` | Delete chunk operations |
-| `WriteChunk` | Write chunk operations|
-| `ListChunk` | List chunk operations |
-| `CompactChunk` | Compact chunk operations |
-| `PutSmallFile` | Put small file operations |
-| `GetSmallFile` | Get small file operations |
-| `CloseContainer` | Close container operations |
-
-### Storage Container Manager Metrics
-
-The metrics for containers that are managed by the Storage Container Manager.
-
-Storage Container Manager (SCM) is a master service which keeps track of
-replicas of storage containers. It also manages all data nodes and their
-states, dealing with container reports and dispatching commands for execution.
-
-Following are the counters for containers:
-
-| Name | Description |
-|:---- |:---- |
-| `LastContainerReportSize` | Total size in bytes of all containers in latest container report that SCM received from datanode |
-| `LastContainerReportUsed` | Total number of bytes used by all containers in latest container report that SCM received from datanode |
-| `LastContainerReportKeyCount` | Total number of keys in all containers in latest container report that SCM received from datanode |
-| `LastContainerReportReadBytes` | Total number of bytes that have been read from all containers in latest container report that SCM received from datanode |
-| `LastContainerReportWriteBytes` | Total number of bytes that have been written into all containers in latest container report that SCM received from datanode |
-| `LastContainerReportReadCount` | Total number of times containers have been read from in latest container report that SCM received from datanode |
-| `LastContainerReportWriteCount` | Total number of times containers have been written to in latest container report that SCM received from datanode |
-| `ContainerReportSize` | Total size in bytes of all containers over whole cluster |
-| `ContainerReportUsed` | Total number of bytes used by all containers over whole cluster |
-| `ContainerReportKeyCount` | Total number of keys in all containers over whole cluster |
-| `ContainerReportReadBytes` | Total number of bytes that have been read from all containers over whole cluster |
-| `ContainerReportWriteBytes` | Total number of bytes that have been written into all containers over whole cluster |
-| `ContainerReportReadCount` | Total number of times containers have been read from over whole cluster |
-| `ContainerReportWriteCount` | Total number of times containers have been written to over whole cluster |
-
-### Key Space Metrics
-
-The metrics for various key space manager operations in HDFS Ozone.
-
-The Key Space Manager (KSM) is a service similar to the Namenode in HDFS.
-In the current design, KSM maintains the metadata of all volumes, buckets and keys.
-These metrics are only available when ozone is enabled.
-
-Following is the set of counters maintained for each key space operation.
-
-*Total number of operations* - We maintain an array which counts how
-many times a specific operation has been performed.
-Eg.`NumVolumeCreate` tells us how many times create volume has been
-invoked in KSM.
-
-*Total number of failed operations* - This counter is the counterpart of the one
-above.
-Eg.`NumVolumeCreateFails` tells us how many create volume invocations have
-failed in KSM.
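-
-These counters are published through the normal Hadoop metrics system, so one
-quick way to inspect them is the `/jmx` servlet of the KSM web UI. The port
-below is the default KSM UI port mentioned in the getting-started guide and
-may differ in your cluster:
-
-    curl -s "http://localhost:9874/jmx" | grep -i VolumeCreate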
-
-Following are the counters for each of the key space operations.
-
-| Name | Description |
-|:---- |:---- |
-| `VolumeCreate` | Create volume operation |
-| `VolumeUpdates` | Update volume property operation |
-| `VolumeInfos` | Get volume information operation |
-| `VolumeCheckAccesses` | Check volume access operation |
-| `VolumeDeletes` | Delete volume operation |
-| `VolumeLists` | List volume operation |
-| `BucketCreates` | Create bucket operation |
-| `BucketInfos` | Get bucket information operation |
-| `BucketUpdates` | Update bucket property operation |
-| `BucketDeletes` | Delete bucket operation |
-| `BucketLists` | List bucket operation |
-| `KeyAllocate` | Allocate key operation |
-| `KeyLookup` | Look up key operation |
-| `KeyDeletes` | Delete key operation |
-| `KeyLists` | List key operation |
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/481bfdb9/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneOverview.md
----------------------------------------------------------------------
diff --git a/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneOverview.md b/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneOverview.md
deleted file mode 100644
index 41d7dbd..0000000
--- a/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneOverview.md
+++ /dev/null
@@ -1,88 +0,0 @@
-<!---
-  Licensed under the Apache License, Version 2.0 (the "License");
-  you may not use this file except in compliance with the License.
-  You may obtain a copy of the License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License. See accompanying LICENSE file.
--->
-Ozone Overview
-==============
-
-
-Ozone is an Object store for Apache Hadoop. It aims to scale to billions of
-keys. The following is a high-level overview of the core components of Ozone.
-
-![Ozone Architecture Overview](images/ozoneoverview.png)
-
-The main elements of Ozone are:
-
-### Clients
-Ozone ships with a set of ready-made clients. They are Ozone CLI and Freon.
-
-    * [Ozone CLI](./OzoneCommandShell.html) is the command line interface like the 'hdfs' command.
-
-    * Freon is a load generation tool for Ozone.
-
-### REST Handler
-Ozone provides both an RPC (Remote Procedure Call) as well as a  REST
-(Representational State Transfer) style interface. This allows clients to be
-written in many languages quickly. Ozone strives to maintain a similar
-interface between REST and RPC. The Rest handler offers the REST protocol
-services of Ozone.
-
-For most purposes, a client can make one line change to switch from REST to
-RPC or vice versa.
-
-### Ozone File System
-Ozone file system (TODO: Add documentation) is a Hadoop compatible file system.
-This is the important user-visible component of ozone.
-This allows Hadoop services and applications like Hive/Spark to run against
-Ozone without any change.
-
-### Ozone Client
-This is like DFSClient in HDFS. This acts as the standard client to talk to
-Ozone. All other components that we have discussed so far rely on Ozone client
-(TODO: Add Ozone client documentation).
-
-### Key Space Manager
-Key Space Manager (KSM) takes care of Ozone's namespace.
-All ozone entities like volumes, buckets and keys are managed by KSM
-(TODO: Add KSM documentation). In Short, KSM is the metadata manager for Ozone.
-KSM talks to blockManager(SCM) to get blocks and passes it on to the Ozone
-client.  Ozone client writes data to these blocks.
-KSM will eventually be replicated via Apache Ratis for High Availability.
-
-### Storage Container Manager
-Storage Container Manager (SCM) is the block and cluster manager for Ozone.
-SCM along with data nodes offer a service called 'containers'.
-A container is a group of unrelated blocks that are managed together
-as a single entity.
-
-SCM offers the following abstractions.
-
-![SCM Abstractions](images/scmservices.png)
-#### Blocks
-Blocks are like blocks in HDFS. They are replicated stores of data.
-
-#### Containers
-A collection of blocks replicated and managed together.
-
-#### Pipelines
-SCM allows each container to choose its method of replication.
-For example, a container might decide that it needs only one copy of a  block
-and might choose a stand-alone pipeline. Another container might want to have
-a very high level of reliability and pick a RATIS based pipeline. In other
-words, SCM allows different kinds of replication strategies to co-exist.
-
-#### Pools
-A group of data nodes is called a pool. For scaling purposes,
-we define a pool as a set of machines. This makes management of datanodes
-easier.
-
-#### Nodes
-The data node where data is stored.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/481bfdb9/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneRest.md
----------------------------------------------------------------------
diff --git a/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneRest.md b/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneRest.md
deleted file mode 100644
index 13fe00d..0000000
--- a/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneRest.md
+++ /dev/null
@@ -1,549 +0,0 @@
-<!---
-  Licensed under the Apache License, Version 2.0 (the "License");
-  you may not use this file except in compliance with the License.
-  You may obtain a copy of the License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License. See accompanying LICENSE file.
--->
-
-Ozone REST APIs
-===================
-
-<!-- MACRO{toc|fromDepth=0|toDepth=1} -->
-
-Overview
---------
-
-The Ozone REST APIs allow users to access ozone via the REST protocol.
-
-Authentication and Authorization
---------------------
-
-For the time being, the default authentication mode of the REST API is an insecure
-access mode, which is *Simple* mode. Under this mode, the ozone server trusts the
-user name specified by the client and does not perform any authentication.
-
-The user name can be specified in an HTTP header:
-
-* `x-ozone-user: {USER_NAME}`
-
-For example, if you add the header *x-ozone-user: bilbo* to the HTTP request,
-then the operation will be executed as the *bilbo* user.
-In *Simple* mode, there is no real authorization either. A client can be
-granted administrator privilege by using the HTTP header
-
-* `Authorization: {AUTH_METHOD} {SIGNATURE}`
-
-For example, if you set the header *Authorization: OZONE root* in the HTTP request,
-then ozone will authorize the client with administrator privilege.
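-
-For example, a request that both names the acting user and requests
-administrator privilege carries both headers. The volume listing call used
-here is described later in this document:
-
-    curl -i -H "x-ozone-user: bilbo" -H "x-ozone-version: v1" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "Authorization:OZONE root" "http://localhost:9880/"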
-
-Common REST Headers
---------------------
-
-The following HTTP headers must be set for each REST call.
-
-| Property | Description |
-|:---- |:----
-| Authorization | The authorization field determines which authentication method is used by ozone. Currently only *simple* mode is supported; the corresponding value is *OZONE*. Optionally a user name can be set as *OZONE {USER_NAME}* to authorize as a particular user. |
-| Date | Standard HTTP header that represents dates. The format is - day of the week, month, day, year and time (military time format) in GMT. Any other time zone will be rejected by ozone server. Eg. *Date : Mon, Apr 4, 2016 06:22:00 GMT*. This field is required. |
-| x-ozone-version | A required HTTP header to indicate which version of the API this call is using. E.g. *x-ozone-version: v1*. Currently ozone only publishes the v1 API. |
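-
-The Date value must be expressed in GMT, matching the format used in the
-sample requests below. On most systems it can be generated with the standard
-`date` utility:
-
-    date -u "+%a, %d %b %Y %H:%M:%S GMT"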
-
-Common Reply Headers
---------------------
-
-The common reply headers are part of all Ozone server replies.
-
-| Property | Description |
-|:---- |:----
-| Date | This is the HTTP date header and it is set to server’s local time expressed in GMT. |
-| x-ozone-request-id | This is a UUID string that represents a unique request ID. This ID is used to track the request through the ozone system and is useful for debugging purposes. |
-| x-ozone-server-name | Fully qualified domain name of the server which handled the request. |
-
-Volume APIs
---------------------
-
-### Create a Volume
-
-This API allows admins to create a new storage volume.
-
-Schema:
-
-- `POST /{volume}?quota=<VOLUME_QUOTA>`
-
-Query Parameter:
-
-| Query Parameter | Value | Description |
-|:---- |:---- |:----
-| quota | long<BYTES \| MB \| GB \| TB> | Optional. Quota size in BYTEs, MBs, GBs or TBs |
-
-Sample HTTP POST request:
-
-    curl -i -X POST -H "x-ozone-user: bilbo" -H "x-ozone-version: v1" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "Authorization:OZONE root" "http://localhost:9880/volume-to-create"
-
-This request creates a volume as user *bilbo*; the authorization field is set to *OZONE root* because this call requires administration privilege. The client receives a response with zero content length.
-
-    HTTP/1.1 201 Created
-    x-ozone-server-name: localhost
-    x-ozone-request-id: 2173deb5-bbb7-4f0a-8236-f354784e3bae
-    Date: Tue, 27 Jun 2017 07:42:04 GMT
-    Content-Type: application/octet-stream
-    Content-Length: 0
-    Connection: keep-alive
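-
-The optional quota query parameter from the table above can be given on the
-same request; the quota value here is just an illustration:
-
-    curl -i -X POST -H "x-ozone-user: bilbo" -H "x-ozone-version: v1" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "Authorization:OZONE root" "http://localhost:9880/volume-to-create?quota=10GB"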
-
-### Update Volume
-
-This API allows administrators to update volume info such as ownership and quota. This API requires administration privilege.
-
-Schema:
-
-- `PUT /{volume}?quota=<VOLUME_QUOTA>`
-
-Query Parameter:
-
-| Query Parameter | Value | Description |
-|:---- |:---- |:----
-| quota | long<BYTES \| MB \| GB \| TB>  \| remove | Optional. Quota size in BYTEs, MBs, GBs or TBs. Or use string value *remove* to remove an existing quota for a volume. |
-
-Sample HTTP PUT request:
-
-    curl -X PUT -H "Authorization:OZONE root" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "x-ozone-version: v1" -H "x-ozone-user: john"  http://localhost:9880/volume-to-update
-
-this request modifies the owner of */volume-to-update* to *john*.
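-
-To drop an existing quota instead, the string value *remove* described in the
-table above can be used as the quota parameter:
-
-    curl -X PUT -H "Authorization:OZONE root" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "x-ozone-version: v1" "http://localhost:9880/volume-to-update?quota=remove"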
-
-### Delete Volume
-
-This API allows a user to delete a volume owned by themselves if the volume is empty. Administrators can delete volumes owned by any user.
-
-Schema:
-
-- `DELETE /{volume}`
-
-Sample HTTP DELETE request:
-
-    curl -i -X DELETE -H "Authorization:OZONE root" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "x-ozone-version: v1" -H "x-ozone-user: bilbo"  http://localhost:9880/volume-to-delete
-
-this request deletes an empty volume */volume-to-delete*. The client receives a zero length content.
-
-    HTTP/1.1 200 OK
-    x-ozone-server-name: localhost
-    x-ozone-request-id: 6af14c64-e3a9-40fe-9634-df60b7cbbc6a
-    Date: Tue, 27 Jun 2017 08:49:52 GMT
-    Content-Type: application/octet-stream
-    Content-Length: 0
-    Connection: keep-alive
-
-### Info Volume
-
-This API allows a user to read the info of a volume owned by themselves. Administrators can read the volume info of any user.
-
-Schema:
-
-- `GET /{volume}?info=volume`
-
-Query Parameter:
-
-| Query Parameter | Value | Description |
-|:---- |:---- |:----
-| info | "volume" | Required and enforced with this value. |
-
-Sample HTTP GET request:
-
-    curl -i -H "x-ozone-user: bilbo" -H "x-ozone-version: v1" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "Authorization:OZONE" "http://localhost:9880/volume-of-bilbo?info=volume"
-
-This request gets the info of volume */volume-of-bilbo*; the client receives a response with a JSON object of volume info.
-
-    HTTP/1.1 200 OK
-    x-ozone-server-name: localhost
-    x-ozone-request-id: a2224806-beaf-42dd-a68e-533cd7508f74
-    Date: Tue, 27 Jun 2017 07:55:35 GMT
-    Content-Type: application/octet-stream
-    Content-Length: 171
-    Connection: keep-alive
-
-    {
-      "owner" : { "name" : "bilbo" },
-      "quota" : { "unit" : "TB", "size" : 1048576 },
-      "volumeName" : "volume-of-bilbo",
-      "createdOn" : "Tue, 27 Jun 2017 07:42:04 GMT",
-      "createdBy" : "root"
-    }
-
-### List Volumes
-
-This API allows a user to list all volumes owned by themselves. Administrators can list all volumes owned by any user.
-
-Schema:
-
-- `GET /?prefix=<PREFIX>&max-keys=<MAX_RESULT_SIZE>&prev-key=<PREVIOUS_VOLUME_KEY>`
-
-Query Parameter:
-
-| Query Parameter | Value | Description |
-|:---- |:---- |:----
-| prefix | string | Optional. Only volumes with this prefix are included in the result. |
-| max-keys | int | Optional. Maximum number of volumes included in the result. Default is 1024 if not specified. |
-| prev-key | string | Optional. Volume name from where listing should start, this key is excluded in the result. It must be a valid volume name. |
-| root-scan | bool | Optional. List all volumes in the cluster if this is set to true. Default false. |
-
-Sample HTTP GET request:
-
-    curl -i -H "x-ozone-user: bilbo" -H "x-ozone-version: v1" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "Authorization:OZONE" "http://localhost:9880/?max-keys=100&prefix=Jan"
-
-This request gets all volumes owned by *bilbo* whose names contain the prefix *Jan*; the result contains at most *100* entries. The client receives a list of JSON objects, each of them describing the info of a volume.
-
-    HTTP/1.1 200 OK
-    x-ozone-server-name: localhost
-    x-ozone-request-id: 7fa0dce1-a8bd-4387-bc3c-1dac4b710bb1
-    Date: Tue, 27 Jun 2017 08:07:04 GMT
-    Content-Type: application/octet-stream
-    Content-Length: 602
-    Connection: keep-alive
-
-    {
-      "volumes" : [
-        {
-          "owner" : { "name" : "bilbo"},
-          "quota" : { "unit" : "TB", "size" : 2 },
-          "volumeName" : "Jan-vol1",
-          "createdOn" : "Tue, 27 Jun 2017 07:42:04 GMT",
-          "createdBy" : root
-      },
-      ...
-      ]
-    }
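-
-An administrator can list every volume in the cluster by adding the
-*root-scan* parameter described in the table above:
-
-    curl -i -H "x-ozone-version: v1" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "Authorization:OZONE root" "http://localhost:9880/?root-scan=true"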
-
-Bucket APIs
---------------------
-
-### Create Bucket
-
-This API allows a user to create a bucket in a volume.
-
-Schema:
-
-- `POST /{volume}/{bucket}`
-
-Additional HTTP Headers:
-
-| HTTP Header | Value | Description |
-|:---- |:---- |:----
-| x-ozone-acl | ozone ACLs | Optional. Ozone acls. |
-| x-ozone-storage-class | <DEFAULT \| ARCHIVE \| DISK \| RAM_DISK \| SSD > | Optional. Storage type for a volume. |
-| x-ozone-bucket-versioning | enabled/disabled | Optional. Whether to enable bucket versioning. |
-
-Sample HTTP POST request:
-
-    curl -i -X POST -H "x-ozone-user: bilbo" -H "x-ozone-version: v1" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "Authorization:OZONE" http://localhost:9880/volume-of-bilbo/bucket-0
-
-this request creates a bucket *bucket-0* under volume *volume-of-bilbo*.
-
-    HTTP/1.1 201 Created
-    x-ozone-server-name: localhost
-    x-ozone-request-id: 49acfeec-4c85-470a-872b-2eaebd8d751e
-    Date: Tue, 27 Jun 2017 08:55:25 GMT
-    Content-Type: application/octet-stream
-    Content-Length: 0
-    Connection: keep-alive
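-
-The optional headers from the table above can be combined on the same call,
-for example to ask for SSD storage and enable versioning on a new bucket
-(the bucket name and header values here are only an illustration):
-
-    curl -i -X POST -H "x-ozone-user: bilbo" -H "x-ozone-version: v1" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "Authorization:OZONE" -H "x-ozone-storage-class: SSD" -H "x-ozone-bucket-versioning: enabled" http://localhost:9880/volume-of-bilbo/bucket-1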
-
-### Update Bucket
-
-Updates bucket meta-data, like ACLs.
-
-Schema:
-
-- `PUT /{volume}/{bucket}`
-
-Additional HTTP Headers:
-
-| HTTP Header | Value | Description |
-|:---- |:---- |:----
-| x-ozone-acl | ozone ACLs | Optional. Ozone acls. |
-| x-ozone-bucket-versioning | enabled/disabled | Optional. Whether to enable bucket versioning. |
-
-Sample HTTP PUT request:
-
-    curl -i -X PUT -H "x-ozone-user: bilbo" -H "x-ozone-version: v1" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "Authorization:OZONE" -H "x-ozone-acl: ADD user:peregrin:rw" http://localhost:9880/volume-of-bilbo/bucket-to-update
-
-This request adds the ACL policy specified by the HTTP header *x-ozone-acl* to bucket */volume-of-bilbo/bucket-to-update*; the ACL field *ADD user:peregrin:rw* gives additional read/write permission to user *peregrin* on this bucket.
-
-    HTTP/1.1 200 OK
-    x-ozone-server-name: localhost
-    x-ozone-request-id: b061a295-5faf-4b98-94b9-8b3e87c8eb5e
-    Date: Tue, 27 Jun 2017 09:02:37 GMT
-    Content-Type: application/octet-stream
-    Content-Length: 0
-    Connection: keep-alive
-
-### Delete Bucket
-
-Deletes a bucket if it is empty. A user can only delete buckets owned by themselves, and administrators can delete buckets owned by any user, as long as they are empty.
-
-Schema:
-
-- `DELETE /{volume}/{bucket}`
-
-Sample HTTP DELETE request:
-
-    curl -i -X DELETE -H "Authorization:OZONE root" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "x-ozone-version: v1" -H "x-ozone-user:bilbo" "http://localhost:9880/volume-of-bilbo/bucket-0"
-
-this request deletes bucket */volume-of-bilbo/bucket-0*. The client receives a zero length content response.
-
-    HTTP/1.1 200 OK
-    x-ozone-server-name: localhost
-    x-ozone-request-id: f57acd7a-2116-4c2f-aa2f-5a483db81c9c
-    Date: Tue, 27 Jun 2017 09:16:52 GMT
-    Content-Type: application/octet-stream
-    Content-Length: 0
-    Connection: keep-alive
-
-
-### Info Bucket
-
-This API returns information about a given bucket.
-
-Schema:
-
-- `GET /{volume}/{bucket}?info=bucket`
-
-Query Parameters:
-
-| Query Parameter | Value | Description |
-|:---- |:---- |:----
-| info | "bucket" | Required and enforced with this value. |
-
-Sample HTTP GET request:
-
-    curl -i -H "x-ozone-user: bilbo" -H "x-ozone-version: v1" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "Authorization:OZONE" "http://localhost:9880/volume-of-bilbo/bucket-0?info=bucket"
-
-This request gets the info of bucket */volume-of-bilbo/bucket-0*. The client receives a response with a JSON object containing the bucket info.
-
-    HTTP/1.1 200 OK
-    x-ozone-server-name: localhost
-    x-ozone-request-id: f125485b-8cae-4c7f-a2d6-5b1fefd6f193
-    Date: Tue, 27 Jun 2017 09:08:31 GMT
-    Content-Type: application/json
-    Content-Length: 138
-    Connection: keep-alive
-
-    {
-      "volumeName" : "volume-of-bilbo",
-      "bucketName" : "bucket-0",
-      "createdOn" : "Tue, 27 Jun 2017 08:55:25 GMT",
-      "acls" : [ ],
-      "versioning" : "DISABLED",
-      "storageType" : "DISK"
-    }
-
-### List Buckets
-
-List buckets in a given volume.
-
-Schema:
-
-- `GET /{volume}?prefix=<PREFIX>&max-keys=<MAX_RESULT_SIZE>&prev-key=<PREVIOUS_BUCKET_KEY>`
-
-Query Parameters:
-
-| Query Parameter | Value | Description |
-|:---- |:---- |:----
-| prefix | string | Optional. Only buckets with this prefix are included in the result. |
-| max-keys | int | Optional. Maximum number of buckets included in the result. Default is 1024 if not specified. |
-| prev-key | string | Optional. Bucket name from where listing should start, this key is excluded in the result. It must be a valid bucket name. |
-
-Sample HTTP GET request:
-
-    curl -i -H "x-ozone-user: bilbo" -H "x-ozone-version: v1" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "Authorization:OZONE" "http://localhost:9880/volume-of-bilbo?max-keys=10"
-
-This request lists all the buckets under volume *volume-of-bilbo*, and the result contains at most 10 entries. The client receives a response with an array of JSON objects, each of them representing a bucket's info.
-
-    HTTP/1.1 200 OK
-    x-ozone-server-name: localhost
-    x-ozone-request-id: e048c3d5-169c-470f-9903-632d9f9e32d5
-    Date: Tue, 27 Jun 2017 09:12:18 GMT
-    Content-Type: application/octet-stream
-    Content-Length: 207
-    Connection: keep-alive
-
-    {
-      "buckets" : [ {
-        "volumeName" : "volume-of-bilbo",
-        "bucketName" : "bucket-0",
-        "createdOn" : "Tue, 27 Jun 2017 08:55:25 GMT",
-        "acls" : [ ],
-        "versioning" : null,
-        "storageType" : "DISK",
-        "bytesUsed" : 0,
-        "keyCount" : 0
-        },
-        ...
-      ]
-    }
-
-Key APIs
-------------------
-
-### Put Key
-
-This API allows a user to create or overwrite keys inside a bucket.
-
-Schema:
-
-- `PUT /{volume}/{bucket}/{key}`
-
-Additional HTTP headers:
-
-| HTTP Header | Value | Description |
-|:---- |:---- |:----
-| Content-MD5 | MD5 digest | Standard HTTP header, file hash. |
-
-Sample PUT HTTP request:
-
-    curl -X PUT -T /path/to/localfile -H "Authorization:OZONE" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "x-ozone-version: v1" -H "x-ozone-user:bilbo" "http://localhost:9880/volume-of-bilbo/bucket-0/file-0"
-
-this request uploads a local file */path/to/localfile* specified by option *-T* to ozone as user *bilbo*, mapped to ozone key */volume-of-bilbo/bucket-0/file-0*. The client receives a zero length content response.
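-
-The optional *Content-MD5* header from the table above lets the server verify
-the upload. Assuming the header carries the conventional base64-encoded binary
-MD5 digest, it can be computed inline with openssl:
-
-    curl -X PUT -T /path/to/localfile -H "Content-MD5: $(openssl dgst -md5 -binary /path/to/localfile | base64)" -H "Authorization:OZONE" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "x-ozone-version: v1" -H "x-ozone-user:bilbo" "http://localhost:9880/volume-of-bilbo/bucket-0/file-0"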
-
-### Get Key
-
-This API allows a user to get or download a key from an ozone bucket.
-
-Schema:
-
-- `GET /{volume}/{bucket}/{key}`
-
-Sample HTTP GET request:
-
-    curl -i -H "x-ozone-user: bilbo" -H "x-ozone-version: v1" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "Authorization:OZONE" "http://localhost:9880/volume-of-bilbo/bucket-0/file-0"
-
-this request reads the content of key */volume-of-bilbo/bucket-0/file-0*. If the content of the file is plain text, it can be directly dumped onto stdout.
-
-    HTTP/1.1 200 OK
-    Content-Type: application/octet-stream
-    x-ozone-server-name: localhost
-    x-ozone-request-id: 1bcd7de7-d8e3-46bb-afee-bdc933d383b8
-    Date: Tue, 27 Jun 2017 09:35:29 GMT
-    Content-Length: 6
-    Connection: keep-alive
-
-    Hello Ozone!
-
-If the file is not plain text, specify the *-O* option in the curl command and the file will be downloaded into the current working directory; the file name will be the same as the key. A sample request looks like the following:
-
-    curl -O -i -H "x-ozone-user: bilbo" -H "x-ozone-version: v1" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "Authorization:OZONE" "http://localhost:9880/volume-of-bilbo/bucket-0/file-1"
-
-The response looks like the following:
-
-    % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
-                                 Dload  Upload   Total   Spent    Left  Speed
-    100 6148k  100 6148k    0     0  24.0M      0 --:--:-- --:--:-- --:--:-- 24.1M
-
-### Delete Key
-
-This API allows a user to delete a key from a bucket.
-
-Schema:
-
-- `DELETE /{volume}/{bucket}/{key}`
-
-Sample HTTP DELETE request:
-
-    curl -i -X DELETE -H "Authorization:OZONE root" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "x-ozone-version: v1" -H "x-ozone-user:bilbo" "http://localhost:9880/volume-of-bilbo/bucket-0/file-0"
-
-this request deletes key */volume-of-bilbo/bucket-0/file-0*. The client receives a zero length content result:
-
-    HTTP/1.1 200 OK
-    x-ozone-server-name: localhost
-    x-ozone-request-id: f8c4a373-dd5f-4e3a-b6c4-ddf7e191fe91
-    Date: Tue, 27 Jun 2017 14:19:48 GMT
-    Content-Type: application/octet-stream
-    Content-Length: 0
-    Connection: keep-alive
-
-### Info Key
-
-This API returns information about a given key.
-
-Schema:
-
-- `GET /{volume}/{bucket}/{key}?info=key`
-
-Query Parameter:
-
-| Query Parameter | Value | Description |
-|:---- |:---- |:----
-| info | String, "key" | Required and enforced with this value. |
-
-Sample HTTP GET request:
-
-    curl -i -H "x-ozone-user: bilbo" -H "x-ozone-version: v1" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "Authorization:OZONE" "http://localhost:9880/volume-of-bilbo/bucket-0/file-0?info=key"
-
-This request returns information about the key */volume-of-bilbo/bucket-0/file-0*. The client receives a JSON object listing the attributes of the key.
-
-    HTTP/1.1 200 OK
-    x-ozone-server-name: localhost
-    x-ozone-request-id: c674343c-a0f2-49e4-bbd6-daa73e7dc131
-    Date: Mon, 03 Jul 2017 14:28:45 GMT
-    Content-Type: application/octet-stream
-    Content-Length: 73
-    Connection: keep-alive
-
-    {
-      "version" : 0,
-      "md5hash" : null,
-      "createdOn" : "Mon, 26 Jun 2017 04:23:30 GMT",
-      "modifiedOn" : "Mon, 26 Jun 2017 04:23:30 GMT",
-      "size" : 0,
-      "keyName" : "file-0"
-    }
-
-### List Keys
-
-This API allows a user to list keys in a bucket.
-
-Schema:
-
-- `GET /{volume}/{bucket}?prefix=<PREFIX>&max-keys=<MAX_RESULT_SIZE>&prev-key=<PREVIOUS_KEY>`
-
-Query Parameters:
-
-| Query Parameter | Value | Description |
-|:---- |:---- |:----
-| prefix | string | Optional. Only keys with this prefix are included in the result. |
-| max-keys | int | Optional. Maximum number of keys included in the result. Default is 1024 if not specified. |
-| prev-key | string | Optional. Key name from where listing should start, this key is excluded in the result. It must be a valid key name. |
-
-Sample HTTP GET request:
-
-    curl -i -H "x-ozone-user: bilbo" -H "x-ozone-version: v1" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "Authorization:OZONE" "http://localhost:9880/volume-of-bilbo/bucket-0/?max-keys=100&prefix=file"
-
-This request lists keys under bucket */volume-of-bilbo/bucket-0*; the listing result is filtered by prefix *file*. The client receives an array of JSON objects, each of them representing the info of a matched key.
-
-    HTTP/1.1 200 OK
-    x-ozone-server-name: localhost
-    x-ozone-request-id: 7f9fc970-9904-4c56-b671-83a086c6f555
-    Date: Tue, 27 Jun 2017 09:48:59 GMT
-    Content-Type: application/json
-    Content-Length: 209
-    Connection: keep-alive
-
-    {
-      "name" : null,
-      "prefix" : file,
-      "maxKeys" : 0,
-      "truncated" : false,
-      "keyList" : [ {
-          "version" : 0,
-          "md5hash" : null,
-          "createdOn" : "Mon, 26 Jun 2017 04:23:30 GMT",
-          "modifiedOn" : "Mon, 26 Jun 2017 04:23:30 GMT",
-          "size" : 0,
-          "keyName" : "file-0"
-          },
-          ...
-       ]
-    }
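-
-When a listing does not return everything, the *prev-key* parameter from the
-table above can be used to fetch the next page by passing the last key name
-returned by the previous call (here *file-0* from the sample output above):
-
-    curl -i -H "x-ozone-user: bilbo" -H "x-ozone-version: v1" -H "Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "Authorization:OZONE" "http://localhost:9880/volume-of-bilbo/bucket-0/?max-keys=100&prefix=file&prev-key=file-0"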

http://git-wip-us.apache.org/repos/asf/hadoop/blob/481bfdb9/hadoop-ozone/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-ozone/pom.xml b/hadoop-ozone/pom.xml
index 6687382..f605da2 100644
--- a/hadoop-ozone/pom.xml
+++ b/hadoop-ozone/pom.xml
@@ -36,6 +36,7 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
     <module>tools</module>
     <module>integration-test</module>
     <module>objectstore-service</module>
+    <module>docs</module>
   </modules>
 
   <dependencies>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/481bfdb9/hadoop-project/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index bcb816e..a916108 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -573,6 +573,11 @@
         <artifactId>hadoop-ozone-objectstore-service</artifactId>
         <version>${hdds.version}</version>
       </dependency>
+      <dependency>
+        <groupId>org.apache.hadoop</groupId>
+        <artifactId>hadoop-ozone-docs</artifactId>
+        <version>${hdds.version}</version>
+      </dependency>
 
       <dependency>
         <groupId>org.apache.hadoop</groupId>


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org