Posted to commits@ozone.apache.org by xy...@apache.org on 2020/07/02 16:38:54 UTC

[hadoop-ozone] branch master updated: HDDS-3891. Add the usage of ofs in doc. (#1143)

This is an automated email from the ASF dual-hosted git repository.

xyao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
     new 3aa2774  HDDS-3891. Add the usage of ofs in doc. (#1143)
3aa2774 is described below

commit 3aa277489092e5e9003ecfcd364172c38b9c35ff
Author: micah zhao <mi...@tencent.com>
AuthorDate: Fri Jul 3 00:38:41 2020 +0800

    HDDS-3891. Add the usage of ofs in doc. (#1143)
---
 hadoop-hdds/docs/content/interface/OzoneFS.md    | 58 ++++++++++++++++++++++--
 hadoop-hdds/docs/content/interface/OzoneFS.zh.md | 52 ++++++++++++++++++++-
 2 files changed, 106 insertions(+), 4 deletions(-)

diff --git a/hadoop-hdds/docs/content/interface/OzoneFS.md b/hadoop-hdds/docs/content/interface/OzoneFS.md
index cf269d4..98bf2f9 100644
--- a/hadoop-hdds/docs/content/interface/OzoneFS.md
+++ b/hadoop-hdds/docs/content/interface/OzoneFS.md
@@ -23,9 +23,12 @@ summary: Hadoop Compatible file system allows any application that expects an HD
 
 The Hadoop compatible file system interface allows storage backends like Ozone
 to be easily integrated into Hadoop eco-system.  Ozone file system is an
-Hadoop compatible file system.
+Hadoop compatible file system. Currently, Ozone supports two schemes: o3fs and ofs.
+The main difference between o3fs and ofs is that o3fs supports operations
+only within a single bucket, while ofs supports operations across all volumes and buckets.
+Refer to "Differences from existing o3fs" in ofs.md for details.
 
-## Setting up the Ozone file system (o3fs)
+## Setting up o3fs
 
 To create an ozone file system, we have to choose a bucket where the file system would live. This bucket will be used as the backend store for OzoneFileSystem. All the files and directories will be stored as keys in this bucket.
 
@@ -51,7 +54,7 @@ Please add the following entry to the core-site.xml.
 </property>
 {{< /highlight >}}
 
-This will make this bucket to be the default file system for HDFS dfs commands and register the o3fs file system type.
+This will make this bucket the default Hadoop compatible file system and register the o3fs file system type.
 
 You also need to add the ozone-filesystem-hadoop3.jar file to the classpath:
 
@@ -113,3 +116,52 @@ hdfs dfs -ls o3fs://bucket.volume.om-host.example.com:6789/key
 Note: Only port number from the config is used in this case, 
 whereas the host name in the config `ozone.om.address` is ignored.
 
+## Setting up ofs
+This is only a general introduction; for more detailed usage, refer to ofs.md.
+
+Please add the following entry to the core-site.xml.
+
+{{< highlight xml >}}
+<property>
+  <name>fs.ofs.impl</name>
+  <value>org.apache.hadoop.fs.ozone.RootedOzoneFileSystem</value>
+</property>
+<property>
+  <name>fs.defaultFS</name>
+  <value>ofs://om-host.example.com/</value>
+</property>
+{{< /highlight >}}
+
+This will make the root of all volumes and buckets the default Hadoop compatible file system and register the ofs file system type.
+
+You also need to add the ozone-filesystem-hadoop3.jar file to the classpath:
+
+{{< highlight bash >}}
+export HADOOP_CLASSPATH=/opt/ozone/share/ozonefs/lib/hadoop-ozone-filesystem-hadoop3-*.jar:$HADOOP_CLASSPATH
+{{< /highlight >}}
+
+(Note: with Hadoop 2.x, use the `hadoop-ozone-filesystem-hadoop2-*.jar`)
+
+Once the default file system has been set up, users can run commands like ls, put, mkdir, etc.
+For example:
+
+{{< highlight bash >}}
+hdfs dfs -ls /
+{{< /highlight >}}
+
+Note that ofs works on all volumes and buckets. Users can create volumes and buckets with mkdir, for example a volume named volume1 and a bucket named bucket1:
+
+{{< highlight bash >}}
+hdfs dfs -mkdir /volume1
+hdfs dfs -mkdir /volume1/bucket1
+{{< /highlight >}}
+
+
+Or use the put command to write a file to the bucket.
+
+{{< highlight bash >}}
+hdfs dfs -put /etc/hosts /volume1/bucket1/test
+{{< /highlight >}}
+
+For more usage, see: https://issues.apache.org/jira/secure/attachment/12987636/Design%20ofs%20v1.pdf
+
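The o3fs and ofs examples above differ only in where the volume and bucket appear in the URI: o3fs encodes them in the authority (`bucket.volume.om-host:port`), while ofs puts them in the path. As a rough sketch of that difference only (a hypothetical helper, not part of Ozone or Hadoop; it merely mirrors the URI formats shown in this document):

```python
# Illustrative only: resolve the two documented URI shapes to
# (volume, bucket, key). Not Ozone code.
from urllib.parse import urlparse

def resolve(uri):
    """Map an o3fs:// or ofs:// URI to (volume, bucket, key)."""
    p = urlparse(uri)
    if p.scheme == "o3fs":
        # o3fs authority is bucket.volume.om-host[:port]; the path is a key
        # inside that single bucket.
        bucket, volume = p.netloc.split(".")[:2]
        return volume, bucket, p.path.lstrip("/")
    if p.scheme == "ofs":
        # ofs authority is just the OM host; the first two path components
        # select the volume and bucket, the rest is the key.
        parts = p.path.lstrip("/").split("/", 2)
        volume = parts[0] if parts[0] else None
        bucket = parts[1] if len(parts) > 1 else None
        key = parts[2] if len(parts) > 2 else ""
        return volume, bucket, key
    raise ValueError("unknown scheme: " + p.scheme)
```

This is why `hdfs dfs -mkdir /volume1` under ofs creates a volume rather than a directory: the top two levels of the path namespace are the volume and bucket hierarchy itself.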
diff --git a/hadoop-hdds/docs/content/interface/OzoneFS.zh.md b/hadoop-hdds/docs/content/interface/OzoneFS.zh.md
index 0d35156..9969919 100644
--- a/hadoop-hdds/docs/content/interface/OzoneFS.zh.md
+++ b/hadoop-hdds/docs/content/interface/OzoneFS.zh.md
@@ -22,8 +22,10 @@ summary: Hadoop 文件系统兼容使得任何使用类 HDFS 接口的应用无
 -->
 
 The Hadoop compatible file system interface allows storage backends like Ozone to be easily integrated into the Hadoop ecosystem; the Ozone file system is a Hadoop compatible file system.
+Currently, Ozone supports two schemes: o3fs and ofs. The main difference is that o3fs supports operations
+only within a single bucket, while ofs supports operations across all volumes and buckets. For the specific differences, refer to "Differences from existing o3fs" in ofs.md.
 
-## Setting up the Ozone file system
+## Configuring and using o3fs
 
 To create an Ozone file system, we first choose a bucket to store its data. This bucket is used as the backend store for the Ozone file system, and all files and directories are stored as keys in this bucket.
 
@@ -106,4 +108,52 @@ hdfs dfs -ls o3fs://bucket.volume.om-host.example.com:6789/key
 
 Note: in this case, only the port number in the `ozone.om.address` configuration is used; the host name is ignored.
 
+## Configuring and using ofs
+This is only a general introduction; for more detailed usage, refer to ofs.md.
+
+Please add the following entry to core-site.xml:
+
+{{< highlight xml >}}
+<property>
+  <name>fs.ofs.impl</name>
+  <value>org.apache.hadoop.fs.ozone.RootedOzoneFileSystem</value>
+</property>
+<property>
+  <name>fs.defaultFS</name>
+  <value>ofs://om-host.example.com/</value>
+</property>
+{{< /highlight >}}
+
+This will make all the volumes and buckets of this OM the default file system for HDFS dfs commands, and register the ofs file system type.
+
+You also need to add the ozone-filesystem-hadoop3.jar file to the classpath:
+
+{{< highlight bash >}}
+export HADOOP_CLASSPATH=/opt/ozone/share/ozonefs/lib/hadoop-ozone-filesystem-hadoop3-*.jar:$HADOOP_CLASSPATH
+{{< /highlight >}}
+
+(Note: with Hadoop 2.x, add `hadoop-ozone-filesystem-hadoop2-*.jar` to the classpath instead.)
+
+Once the default file system has been set up, users can run commands like ls, put, mkdir, etc. For example:
+
+{{< highlight bash >}}
+hdfs dfs -ls /
+{{< /highlight >}}
+
+Note that ofs works on all volumes and buckets. Users can create volumes and buckets with mkdir, for example a volume named volume1 and a bucket named bucket1:
+
+{{< highlight bash >}}
+hdfs dfs -mkdir /volume1
+hdfs dfs -mkdir /volume1/bucket1
+{{< /highlight >}}
+
+
+Or use the put command to write a file to the bucket.
+
+{{< highlight bash >}}
+hdfs dfs -put /etc/hosts /volume1/bucket1/test
+{{< /highlight >}}
+
+For more usage, see: https://issues.apache.org/jira/secure/attachment/12987636/Design%20ofs%20v1.pdf
+
 


---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-commits-help@hadoop.apache.org