Posted to mapreduce-dev@hadoop.apache.org by Lei Xu <le...@cloudera.com> on 2017/11/03 06:29:27 UTC

Re: Reply: [DISCUSSION] Merging HDFS-7240 Object Store (Ozone) to trunk

Hey, Weiwei and Jitendra

Thanks a lot for this large effort to bring us ozone.

* Given the current state of the Ozone implementation, what are the major
benefits of using today’s Ozone over HDFS? Given that it is missing
features like HDFS-12680 and HDFS-12697, is disabled by default, and
that the Hadoop 3.0 release is closing, should we wait for a later merge
when Ozone is more mature? Or more generally, why should this merge
to a release branch happen now, when Ozone is not yet usable by users?
Staying on a feature branch still seems like the right place to me.
* For existing HDFS users, could you address the semantic gaps
between Ozone / Ozone File System and HDFS? It would be great to
illustrate the expected use cases for Ozone given its different
architecture and design decisions, such as no append, no atomic
rename, etc.
* A follow-up question: is it possible to run any of today’s Hadoop
applications (MR, Spark, Impala, Presto, etc.) on Ozone directly, or
against OzoneFileSystem? I think a performance / scalability gain or
extended functionality should be a prerequisite for the merge.
Additionally, I believe such tests will reveal the potential caveats,
if any.
* Ozone’s architecture shows great potential to address NN
scalability. However, it looks like an XXL effort to me, considering
that 1) the community has had multiple unfinished attempts to simply
separate namespace and block management within the same NN process,
and 2) many existing features like snapshot, append, erasure coding,
etc., are not straightforward to implement in today’s Ozone design.
Could you share your opinions on this matter?
* How stable is the Ozone client? Should we mark it as unstable for
now? Also, given the significant differences between OzoneClient and
HdfsClient, should we move it to a separate package or even a separate
project? I second Konstantin’s opinion to separate Ozone from HDFS.
* Please add sections to the end-user and system-admin oriented
documents for deploying and operating SCM, KSM, and also the chunk
servers on DataNodes. Additionally, the introduction in
“OzoneGettingStarted.md” still builds Ozone from the feature branch
HDFS-7240.

Best regards,

On Mon, Oct 23, 2017 at 11:10 AM, Jitendra Pandey
<ji...@hortonworks.com> wrote:
> I have filed https://issues.apache.org/jira/browse/HDFS-12697 to ensure ozone stays disabled in a secure environment.
> Since ozone is disabled by default and will not come with security on, it will not expose any new attack surface in a Hadoop deployment.
> Ozone security effort will need a detailed design and discussion on a community jira. Hopefully, that effort will start soon after the merge.
>
> Thanks
> jitendra
>
> On 10/20/17, 2:40 PM, "larry mccay" <lm...@apache.org> wrote:
>
>     All -
>
>     I broke this list of questions out into a separate DISCUSS thread where we
>     can iterate over how a security audit process at merge time might look and
>     whether it is even something that we want to take on.
>
>     I will try and continue discussion on that thread and drive that to some
>     conclusion before bringing it into any particular merge discussion.
>
>     thanks,
>
>     --larry
>
>     On Fri, Oct 20, 2017 at 12:37 PM, larry mccay <lm...@apache.org> wrote:
>
>     > I previously sent this same email from my work email and it doesn't seem
>     > to have gone through - resending from apache account (apologizing up
>     > front for the length)....
>     >
>     > For such sizable merges in Hadoop, I would like to start doing security
>     > audits in order to have an initial idea of the attack surface, the
>     > protections available for known threats, what sort of configuration is
>     > being used to launch processes, etc.
>     >
>     > I dug into the architecture documents while in the middle of this list -
>     > nice docs!
>     > I do intend to make a generic checklist like this for such security
>     > audits in the future, so a lot of this comes from that, but I tried to
>     > direct specific questions at those docs as well.
>     >
>     > 1. UIs
>     > I see there are at least two UIs - Storage Container Manager and Key Space
>     > Manager. There are a number of typical vulnerabilities that we find in UIs:
>     >
>     > 1.1. What sort of validation is being done on any accepted user input?
>     > (pointers to code would be appreciated)
>     > 1.2. What explicit protections have been built in for (pointers to code
>     > would be appreciated):
>     >   1.2.1. cross site scripting
>     >   1.2.2. cross site request forgery
>     >   1.2.3. click jacking (X-Frame-Options)
>     > 1.3. What sort of authentication is required for access to the UIs?
>     > 1.4. What authorization is available for determining who can access what
>     > capabilities of the UIs for either viewing, modifying data or affecting
>     > object stores and related processes?
>     > 1.5. Are the UIs built with proxying in mind by leveraging X-Forwarded
>     > headers?
>     > 1.6. Is there any input that will ultimately be persisted in configuration
>     > for executing shell commands or processes?
>     > 1.7. Do the UIs support the trusted proxy pattern with doas impersonation?
>     > 1.8. Is there TLS/SSL support?
>     >
>     > 2. REST APIs
>     >
>     > 2.1. Do the REST APIs support the trusted proxy pattern with doas
>     > impersonation capabilities?
>     > 2.2. What explicit protections have been built in for:
>     >   2.2.1. cross site scripting (XSS)
>     >   2.2.2. cross site request forgery (CSRF)
>     >   2.2.3. XML External Entity (XXE)
>     > 2.3. What is being used for authentication - Hadoop Auth Module?
>     > 2.4. Are there separate processes for the HTTP resources (UIs and REST
>     > endpoints) or are they part of existing HDFS processes?
>     > 2.5. Is there TLS/SSL support?
>     > 2.6. Are there new CLI commands and/or clients for accessing the REST APIs?
>     > 2.7. Bucket Level API allows for setting of ACLs on a bucket - what
>     > authorization is required here - is there a restrictive ACL set on creation?
>     > 2.8. Bucket Level API allows for deleting a bucket - I assume this is
>     > dependent on ACLs based access control?
>     > 2.9. Bucket Level API to list a bucket returns up to 1000 keys - is there
>     > paging available?
>     > 2.10. Storage Level APIs indicate “Signed with User Authorization” - what
>     > does this refer to exactly?
>     > 2.11. Object Level APIs indicate that there is no ACL support and only
>     > bucket owners can read and write - but there are ACL APIs on the Bucket
>     > Level - are they meaningless for now?
>     > 2.12. How does a REST client know which Ozone Handler to connect to or am
>     > I missing some well known NN type endpoint in the architecture doc
>     > somewhere?
>     >
>     > 3. Encryption
>     >
>     > 3.1. Is there any support for encryption of persisted data?
>     > 3.2. If so, is KMS and the hadoop key command used for key management?
>     >
>     > 4. Configuration
>     >
>     > 4.1. Are there any passwords or secrets being added to configuration?
>     > 4.2. If so, are they accessed via Configuration.getPassword() to allow for
>     > provisioning in credential providers? (see the sketch after this checklist)
>     > 4.3. Are there any settings that are used to launch docker containers or
>     > shell out any commands, etc?
>     >
>     > 5. HA
>     >
>     > 5.1. Are there provisions for HA?
>     > 5.2. Are we leveraging the existing HA capabilities in HDFS?
>     > 5.3. Is Storage Container Manager a SPOF?
>     > 5.4. I see HA listed in future work in the architecture doc - is this
>     > still an open issue?
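>     >
>     > A minimal sketch of how a component can read such a secret through the
>     > credential provider API via Configuration.getPassword(); the key name
>     > below is hypothetical, not taken from the Ozone code:
>     >
>     >     import java.io.IOException;
>     >     import java.util.Arrays;
>     >     import org.apache.hadoop.conf.Configuration;
>     >
>     >     public class SecretLookup {
>     >       public static void main(String[] args) throws IOException {
>     >         Configuration conf = new Configuration();
>     >         // getPassword() consults any configured credential providers
>     >         // first and falls back to the plain config value;
>     >         // "ozone.db.password" is a hypothetical key name.
>     >         char[] secret = conf.getPassword("ozone.db.password");
>     >         try {
>     >           System.out.println(secret == null ? "not set" : "length " + secret.length);
>     >         } finally {
>     >           if (secret != null) Arrays.fill(secret, '\0'); // don't keep secrets around
>     >         }
>     >       }
>     >     }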
>     >
>     > On Fri, Oct 20, 2017 at 11:19 AM, Anu Engineer <ae...@hortonworks.com>
>     > wrote:
>     >
>     >> Hi Steve,
>     >>
>     >> In addition to everything Weiwei mentioned (chapter 3 of the user guide),
>     >> if you really want to drill down into the REST protocol, you might want to
>     >> apply this patch and build ozone.
>     >>
>     >> https://issues.apache.org/jira/browse/HDFS-12690
>     >>
>     >> This will generate an Open API (https://www.openapis.org,
>     >> http://swagger.io) based specification which can be accessed from the KSM
>     >> UI or just as a JSON file.
>     >> Unfortunately, this patch is still at the code review stage, so you will
>     >> have to apply the patch and build it yourself.
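>     >>
>     >> As a rough sketch of consuming that spec once generated - the endpoint
>     >> host, port, and path below are assumptions for illustration, not from
>     >> the patch:
>     >>
>     >>     import java.io.BufferedReader;
>     >>     import java.io.InputStreamReader;
>     >>     import java.net.URL;
>     >>
>     >>     public class FetchOzoneOpenApiSpec {
>     >>       public static void main(String[] args) throws Exception {
>     >>         // Hypothetical location of the generated spec on the KSM web UI.
>     >>         URL spec = new URL("http://ksm.example.com:9874/swagger.json");
>     >>         try (BufferedReader r =
>     >>             new BufferedReader(new InputStreamReader(spec.openStream()))) {
>     >>           r.lines().forEach(System.out::println); // dump the JSON spec
>     >>         }
>     >>       }
>     >>     }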
>     >>
>     >> Thanks
>     >> Anu
>     >>
>     >>
>     >> On 10/20/17, 6:09 AM, "Yang Weiwei" <ch...@hotmail.com> wrote:
>     >>
>     >>     Hi Steve
>     >>
>     >>
>     >>     The code is available in HDFS-7240 feature branch, public git repo
>     >> here<https://github.com/apache/hadoop/tree/HDFS-7240>.
>     >>
>     >>     I am not sure if there is a "public" API for object stores, but the
>     >> design doc<https://issues.apache.org/jira/secure/attachment/12799549/ozone_user_v0.pdf>
>     >> uses the most common syntax, so I believe it should be compliant. You can
>     >> find the REST API doc here<https://github.com/apache/hadoop/blob/HDFS-7240/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/OzoneRest.md>
>     >> (with some example usages), and the command-line API
>     >> here<https://github.com/apache/hadoop/blob/HDFS-7240/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/OzoneCommandShell.md>.
>     >>
>     >>
>     >>     Looking forward to your feedback!
>     >>
>     >>
>     >>     --Weiwei
>     >>
>     >>
>     >>     ________________________________
>     >>     From: Steve Loughran <st...@hortonworks.com>
>     >>     Sent: 20 October 2017 11:49
>     >>     To: Yang Weiwei
>     >>     Cc: hdfs-dev@hadoop.apache.org; mapreduce-dev@hadoop.apache.org;
>     >> yarn-dev@hadoop.apache.org; common-dev@hadoop.apache.org
>     >>     Subject: Re: [DISCUSSION] Merging HDFS-7240 Object Store (Ozone) to trunk
>     >>
>     >>
>     >>     Wow, big piece of work
>     >>
>     >>     1. Where is a PR/branch on github with rendered docs for us to look
>     >> at?
>     >>     2. Have you made any public API changes related to object stores?
>     >> That's probably something I'll have opinions on more than implementation
>     >> details.
>     >>
>     >>     thanks
>     >>
>     >>     > On 19 Oct 2017, at 02:54, Yang Weiwei <ch...@hotmail.com>
>     >> wrote:
>     >>     >
>     >>     > Hello everyone,
>     >>     >
>     >>     >
>     >>     > I would like to start this thread to discuss merging Ozone
>     >> (HDFS-7240) to trunk. This feature implements an object store which can
>     >> co-exist with HDFS. Ozone is disabled by default. We have tested Ozone with
>     >> cluster sizes varying from 1 to 100 data nodes.
>     >>     >
>     >>     >
>     >>     >
>     >>     > The merge payload includes the following:
>     >>     >
>     >>     >  1.  All services, management scripts
>     >>     >  2.  Object store APIs, exposed via both REST and RPC
>     >>     >  3.  Master service UIs, command line interfaces
>     >>     >  4.  Pluggable pipeline Integration
>     >>     >  5.  Ozone File System (Hadoop compatible file system
>     >> implementation, passes all FileSystem contract tests)
>     >>     >  6.  Corona - a load generator for Ozone.
>     >>     >  7.  Essential documentation added to Hadoop site.
>     >>     >  8.  Version specific Ozone Documentation, accessible via service
>     >> UI.
>     >>     >  9.  Docker support for ozone, which enables faster development
>     >> cycles.
>     >>     >
>     >>     >
>     >>     > To build Ozone and run ozone using docker, please follow the
>     >> instructions in this wiki page:
>     >> https://cwiki.apache.org/confluence/display/HADOOP/Dev+cluster+with+docker
>     >>
>     >>
>     >>
>     >>     >
>     >>     >
>     >>     > We have built a passionate and diverse community to drive this
>     >> feature development. As a team, we have achieved significant progress in the
>     >> past 3 years since the first JIRA for HDFS-7240 was opened in Oct 2014. So far,
>     >> we have resolved almost 400 JIRAs by 20+ contributors/committers from
>     >> different countries and affiliations. We also want to thank the large
>     >> number of community members who were supportive of our efforts and
>     >> contributed ideas and participated in the design of ozone.
>     >>     >
>     >>     >
>     >>     > Please share your thoughts, thanks!
>     >>     >
>     >>     >
>     >>     > -- Weiwei Yang
>     >>
>     >>
>     >>
>     >>
>     >> ---------------------------------------------------------------------
>     >> To unsubscribe, e-mail: common-dev-unsubscribe@hadoop.apache.org
>     >> For additional commands, e-mail: common-dev-help@hadoop.apache.org
>     >>
>     >
>     >
>
>



-- 
Lei (Eddy) Xu
Software Engineer, Cloudera

---------------------------------------------------------------------
To unsubscribe, e-mail: mapreduce-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-help@hadoop.apache.org


Re: Reply: [DISCUSSION] Merging HDFS-7240 Object Store (Ozone) to trunk

Posted by Xiaoyu Yao <xy...@hortonworks.com>.
Hi Lei,

Thank you for your interest in Ozone. Let me answer each of the
specific questions.

> Given the current state of the Ozone implementation, what are the major
> benefits of using today’s Ozone over HDFS?


Scale - HDFS tops out at 500-700 million keys; Ozone's primary
use case is to go beyond that limit. Ozone can scale block space
and namespace independently. Ozone also provides object store
semantics, which are simpler and scale better.


Enabling new workloads on HDFS - For example, we see lots of
customers moving to a cloud-like model, where they have a compute
cluster and a storage cluster. This allows them to move workloads to
the cloud and back seamlessly. We see a marked increase in
Docker/Kubernetes-enabled workloads. The pain point for most
Docker/VM deployments is the lack of storage. Ozone is an excellent
fit for Docker/Kubernetes deployments.

Ease of management and use - Ozone has learned very
valuable lessons from HDFS. It comes with a good set of
management tools and tries to avoid a very complicated setup.


> Given that it is missing features like HDFS-12680 and HDFS-12697, and
> that the Hadoop 3.0 release is closing, should we wait for a later merge
> when Ozone is more mature?

Both HDFS-12680 (lease manager) and HDFS-12697 (Ozone services
stay disabled in a secure setup) have been resolved in the past few
weeks. We are targeting the merge for trunk, not 3.0.

> Or more generally, why should this merge to a release branch happen
> now, when Ozone is not yet usable by users? Staying on a feature
> branch still seems like the right place to me.

Let me repeat that we are not merging to the 3.0 branch. We are
merging to trunk, and we do not intend to backport this to the 3.0
code base.

Ozone is certainly usable. We have written and read billions of keys
into Ozone. I would say it is much like erasure coding at the time
that feature was merged. We want Ozone to be used/tested when people
start using the 3.1 release. Yes, it is an alpha feature; having an
alpha release out in the community is the best way to mature Ozone.


> For existing HDFS users, could you address the semantic gaps
> between Ozone / Ozone File System and HDFS?

Ozone File System offers a Hadoop-compatible file system. For the
first release, we are targeting YARN, Hive, and Spark as the principal
workloads. These applications are functional with Ozone File System.

> It would be great to illustrate the expected use cases for Ozone
> given its different architecture and design decisions.

We expect almost all real use cases of Ozone to come via Ozone File
System. Hence our focus has been to make sure that YARN, Hive, and
Spark work well with this system. Ozone File System does the right
magic on behalf of the users for now.
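
For illustration, here is a minimal sketch of what "works through the
FileSystem abstraction" means for applications. The o3fs:// URI below is
an assumption for illustration only; OzoneGettingStarted.md has the real
scheme and configuration, and the Ozone FS implementation must be on the
classpath and registered for this to run:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OzoneFsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Applications code against the generic FileSystem contract;
        // only the URI (hypothetical here) selects Ozone as the store.
        FileSystem fs = FileSystem.get(URI.create("o3fs://bucket.volume/"), conf);
        Path file = new Path("/demo/hello.txt");
        try (FSDataOutputStream out = fs.create(file)) {
          out.writeUTF("hello ozone");
        }
        try (FSDataInputStream in = fs.open(file)) {
          System.out.println(in.readUTF()); // prints "hello ozone"
        }
      }
    }

Because applications only see this contract, they need no Ozone-specific
changes.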

> Such as no append, no atomic rename, etc.

This is similar to S3 -- the rise of cloud-based object stores has made
this model much easier for Ozone to adopt. In fact, the work done by other
stacks (Hive, Spark, etc.) to enable big data workloads in the cloud is
extremely helpful for Ozone.


> A follow-up question: is it possible to run any of today’s Hadoop
> applications (MR, Spark, Impala, Presto, etc.) on Ozone directly, or
> against OzoneFileSystem? I think a performance / scalability gain or
> extended functionality should be a prerequisite for the merge.
> Additionally, I believe such tests will reveal the potential caveats,
> if any.

We have run MapReduce (pretty much all standard applications, along
with DistCp, work well), YARN, Hive, and Spark against Ozone with NO
modifications to MR, YARN, Hive, or Spark.

We have never tried out Impala or Presto, but if they are known to work
well against Hadoop-compatible file systems, I am hopeful that they
will work as well. Please feel free to test and report if you run into
any issues.


> * Ozone’s architecture shows great potential to address NN
> scalability. However, it looks like an XXL effort to me, considering
> that 1) the community has had multiple unfinished attempts to
> simply separate namespace and block management within the same NN
> process,

You are absolutely correct. We have learned from those experiences. We
think that separating namespace and block space in the same NN process
does not address the core issue of NN scale. And, as you clearly
mentioned, those attempts are unfinished.

With Ozone, we have separated out a block service. Once it is known
to be stable, we will use it in the NameNode, thus achieving the full
separation. Ozone FS and the Ozone object store are intermediate steps
toward solving the scale issue for HDFS.

> and 2) many existing features like snapshot, append, erasure coding,
> etc., are not straightforward to implement in today’s Ozone
> design. Could you share your opinions on this matter?

Ozone is well prepared to implement each of these features. We have
many design documents for Ozone posted in the sub-JIRAs. For example,
please take a look at the versioning doc to understand what Ozone’s
block layer really offers.

https://issues.apache.org/jira/secure/attachment/12877154/OzoneVersion.001.pdf

When you read through this doc (you are welcome to review the attached
patches for this feature, HDFS-12000), you will see that the Ozone
block layer supports append semantics. However, we have chosen not to
expose them via Ozone’s object layer, since people are used to reduced
capabilities in object stores. We will certainly expose these features
via HDFS.


> How stable is the Ozone client? Should we mark it as unstable for
> now? Also, given the significant differences between OzoneClient and
> HdfsClient, should we move it to a separate package or even a separate
> project? I second Konstantin’s opinion to separate Ozone from HDFS.

Ozone’s client libraries are already under a separate package. We
can always refactor them to better locations based on usage patterns.

We have used the Ozone client to write billions of keys to Ozone
clusters. Good catch on the Unstable annotation; the client APIs might
still change as the community starts using them.
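
For reference, a minimal sketch of how a client entry point could carry
Hadoop's stability annotations; the class itself is hypothetical, only
the annotations are the standard Hadoop ones:

    import org.apache.hadoop.classification.InterfaceAudience;
    import org.apache.hadoop.classification.InterfaceStability;

    // Signals that the API is internal and may change between releases;
    // downstream users and tooling can key off these annotations.
    @InterfaceAudience.Private
    @InterfaceStability.Unstable
    public class OzoneClientFacade { // hypothetical class name
      // client methods would live here
    }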


> * Please add sections to the end-user and system-admin oriented
> documents for deploying and operating SCM, KSM, and also the chunk
> servers on DataNodes. Additionally, the introduction in
> “OzoneGettingStarted.md” still builds Ozone from the feature branch
> HDFS-7240.

If you scroll down OzoneGettingStarted.md, you will see that it
has a section called “Running Ozone using a real cluster” that
contains all the instructions needed to run Ozone on a real physical
cluster. I will add a section at the top of this document so
that these links are easily discoverable. Thank you for pointing this
out.
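
As a flavor of what that section covers, here is a minimal sketch of the
cluster-mode settings expressed through the Configuration API. The key
names below are assumptions from the feature branch era, not a definitive
list; OzoneGettingStarted.md remains the authoritative reference:

    import org.apache.hadoop.conf.Configuration;

    public class OzoneClusterConfigSketch {
      public static void main(String[] args) {
        // These would normally live in ozone-site.xml; every key name
        // here is an assumption for illustration.
        Configuration conf = new Configuration();
        conf.setBoolean("ozone.enabled", true);              // Ozone is off by default
        conf.set("ozone.metadata.dirs", "/data/ozone/meta"); // SCM/KSM metadata location
        conf.set("ozone.scm.names", "scm.example.com");      // how DataNodes find the SCM
        System.out.println("ozone.enabled = " + conf.getBoolean("ozone.enabled", false));
      }
    }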

> still builds Ozone from the feature branch HDFS-7240.

Thank you for pointing this out. We left the instructions in the doc
that way so that, before the merge, the community has the right
instructions for building and deploying Ozone. I will file a blocking
JIRA to fix this before the release.

Thanks,
Xiaoyu






On 11/2/17, 11:29 PM, "Lei Xu" <le...@cloudera.com> wrote:

    Hey,  Weiwei and Jitendra
    
    Thanks a lot for this large effort to bring us ozone.
    
    * As the current state of Ozone implementation, what are the major
    benefits of using today’s Ozone over HDFS?  Giving that its missing
    features like HDFS-12680 and HDFS-12697, being disabled by default,
    and the closing of Hadoop 3.0 release, should we wait for a late merge
    when Ozone is more mature ? Or more generally, why should this merge
    to a release branch happen now, when Ozone is not yet usable by users?
    Staying on a feature branch seems like it's still the right place to
    me.
    * For the existing HDFS user, could you address the semantic gaps
    between Ozone / Ozone File System and HDFS. It would be great to
    illustrate what is the expected use cases for Ozone giving its
    different architecture and design decisions?  Like no append, no
    atomic rename and etc.
    * A follow question, was it able to run any of today’s Hadoop
    applications (MR, Spark, Impala, Presto and etc) on Ozone directly, or
    against OZoneFileSystem? I think a performance / scalability gain or
    extended functionality should be the prerequisites for the merge.
    Additionally, I believe such tests will reveal the potential caveats
    if any.
    * Ozone’s architecture shows great potential to address NN
    scalability.  However it looks like a XXL effort to me, considering
    the fact that 1) the community had multiple unfinished attempts to
    simply separate namespace and block management within the same NN
    process, and 2) many existing features like snapshot, append, erasure
    coding, and etc, are not straightforward to be implemented in today’s
    ozone design. Could you share your opinions on this matter?
    * How stable is the ozone client? Should we mark them as unstable for
    now? Also giving the significant difference between OzoneClient and
    HdfsClient, should move it to a separated package or even a project? I
    second Konstantin’s option to separate ozone from HDFS.
    * Please add sections to the end-user and system admin oriented
    documents for deploying and operating SCM, KSM, and also the chunk
    servers on DataNodes. Additionally, the introduction in
    “OZoneGettingStarted.md” is still building ozone from feature branch
    HDFS-7240.
    
    Best regards,
    
    On Mon, Oct 23, 2017 at 11:10 AM, Jitendra Pandey
    <ji...@hortonworks.com> wrote:
    > I have filed https://issues.apache.org/jira/browse/HDFS-12697 to ensure ozone stays disabled in a secure environment.
    > Since ozone is disabled by default and will not come with security on, it will not expose any new attack surface in a Hadoop deployment.
    > Ozone security effort will need a detailed design and discussion on a community jira. Hopefully, that effort will start soon after the merge.
    >
    > Thanks
    > jitendra
    >
    > On 10/20/17, 2:40 PM, "larry mccay" <lm...@apache.org> wrote:
    >
    >     All -
    >
    >     I broke this list of questions out into a separate DISCUSS thread where we
    >     can iterate over how a security audit process at merge time might look and
    >     whether it is even something that we want to take on.
    >
    >     I will try and continue discussion on that thread and drive that to some
    >     conclusion before bringing it into any particular merge discussion.
    >
    >     thanks,
    >
    >     --larry
    >
    >     On Fri, Oct 20, 2017 at 12:37 PM, larry mccay <lm...@apache.org> wrote:
    >
    >     > I previously sent this same email from my work email and it doesn't seem
    >     > to have gone through - resending from apache account (apologizing up from
    >     > for the length)....
    >     >
    >     > For such sizable merges in Hadoop, I would like to start doing security
    >     > audits in order to have an initial idea of the attack surface, the
    >     > protections available for known threats, what sort of configuration is
    >     > being used to launch processes, etc.
    >     >
    >     > I dug into the architecture documents while in the middle of this list -
    >     > nice docs!
    >     > I do intend to try and make a generic check list like this for such
    >     > security audits in the future so a lot of this is from that but I tried to
    >     > also direct specific questions from those docs as well.
    >     >
    >     > 1. UIs
    >     > I see there are at least two UIs - Storage Container Manager and Key Space
    >     > Manager. There are a number of typical vulnerabilities that we find in UIs
    >     >
    >     > 1.1. What sort of validation is being done on any accepted user input?
    >     > (pointers to code would be appreciated)
    >     > 1.2. What explicit protections have been built in for (pointers to code
    >     > would be appreciated):
    >     >   1.2.1. cross site scripting
    >     >   1.2.2. cross site request forgery
    >     >   1.2.3. click jacking (X-Frame-Options)
    >     > 1.3. What sort of authentication is required for access to the UIs?
    >     > 1.4. What authorization is available for determining who can access what
    >     > capabilities of the UIs for either viewing, modifying data or affecting
    >     > object stores and related processes?
    >     > 1.5. Are the UIs built with proxying in mind by leveraging X-Forwarded
    >     > headers?
    >     > 1.6. Is there any input that will ultimately be persisted in configuration
    >     > for executing shell commands or processes?
    >     > 1.7. Do the UIs support the trusted proxy pattern with doas impersonation?
    >     > 1.8. Is there TLS/SSL support?
    >     >
    >     > 2. REST APIs
    >     >
    >     > 2.1. Do the REST APIs support the trusted proxy pattern with doas
    >     > impersonation capabilities?
    >     > 2.2. What explicit protections have been built in for:
    >     >   2.2.1. cross site scripting (XSS)
    >     >   2.2.2. cross site request forgery (CSRF)
    >     >   2.2.3. XML External Entity (XXE)
    >     > 2.3. What is being used for authentication - Hadoop Auth Module?
    >     > 2.4. Are there separate processes for the HTTP resources (UIs and REST
    >     > endpoints) or are the part of existing HDFS processes?
    >     > 2.5. Is there TLS/SSL support?
    >     > 2.6. Are there new CLI commands and/or clients for access the REST APIs?
    >     > 2.7. Bucket Level API allows for setting of ACLs on a bucket - what
    >     > authorization is required here - is there a restrictive ACL set on creation?
    >     > 2.8. Bucket Level API allows for deleting a bucket - I assume this is
    >     > dependent on ACLs based access control?
    >     > 2.9. Bucket Level API to list bucket returns up to 1000 keys - is there
    >     > paging available?
    >     > 2.10. Storage Level APIs indicate “Signed with User Authorization” what
    >     > does this refer to exactly?
    >     > 2.11. Object Level APIs indicate that there is no ACL support and only
    >     > bucket owners can read and write - but there are ACL APIs on the Bucket
    >     > Level are they meaningless for now?
    >     > 2.12. How does a REST client know which Ozone Handler to connect to or am
    >     > I missing some well known NN type endpoint in the architecture doc
    >     > somewhere?
    >     >
    >     > 3. Encryption
    >     >
    >     > 3.1. Is there any support for encryption of persisted data?
    >     > 3.2. If so, is KMS and the hadoop key command used for key management?
    >     >
    >     > 4. Configuration
    >     >
    >     > 4.1. Are there any passwords or secrets being added to configuration?
    >     > 4.2. If so, are they accessed via Configuration.getPassword() to allow for
    >     > provisioning in credential providers?
    >     > 4.3. Are there any settings that are used to launch docker containers or
    >     > shell out any commands, etc?
    >     >
    >     > 5. HA
    >     >
    >     > 5.1. Are there provisions for HA?
    >     > 5.2. Are we leveraging the existing HA capabilities in HDFS?
    >     > 5.3. Is Storage Container Manager a SPOF?
    >     > 5.4. I see HA listed in future work in the architecture doc - is this
    >     > still an open issue?
    >     >
    >     > On Fri, Oct 20, 2017 at 11:19 AM, Anu Engineer <ae...@hortonworks.com>
    >     > wrote:
    >     >
    >     >> Hi Steve,
    >     >>
    >     >> In addition to everything Weiwei mentioned (chapter 3 of user guide), if
    >     >> you really want to drill down to REST protocol you might want to apply this
    >     >> patch and build ozone.
    >     >>
    >     >> https://issues.apache.org/jira/browse/HDFS-12690
    >     >>
    >     >> This will generate an Open API (https://www.openapis.org ,
    >     >> http://swagger.io) based specification which can be accessed from KSM UI
    >     >> or just as a json file.
    >     >> Unfortunately, this patch is still at code review stage, so you will have
    >     >> to apply the patch and build it yourself.
    >     >>
    >     >> Thanks
    >     >> Anu
    >     >>
    >     >>
    >     >> On 10/20/17, 6:09 AM, "Yang Weiwei" <ch...@hotmail.com> wrote:
    >     >>
    >     >>     Hi Steve
    >     >>
    >     >>
    >     >>     The code is available in HDFS-7240 feature branch, public git repo
    >     >> here<https://github.com/apache/hadoop/tree/HDFS-7240>.
    >     >>
    >     >>     I am not sure if there is a "public" API for object stores, but the
    >     >> design doc<https://issues.apache.org/jira/secure/attachment/1279954
    >     >> 9/ozone_user_v0.pdf> uses most common syntax so I believe it should be
    >     >> compliance. You can find the rest API doc here<https://github.com/apache
    >     >> /hadoop/blob/HDFS-7240/hadoop-hdfs-project/hadoop-hdfs/src/
    >     >> site/markdown/OzoneRest.md> (with some example usages), and commandline
    >     >> API here<https://github.com/apache/hadoop/blob/HDFS-7240/hadoop-
    >     >> hdfs-project/hadoop-hdfs/src/site/markdown/OzoneCommandShell.md>.
    >     >>
    >     >>
    >     >>     Look forward for your feedback!
    >     >>
    >     >>
    >     >>     --Weiwei
    >     >>
    >     >>
    >     >>     ________________________________
    >     >>     发件人: Steve Loughran <st...@hortonworks.com>
    >     >>     发送时间: 2017年10月20日 11:49
    >     >>     收件人: Yang Weiwei
    >     >>     抄送: hdfs-dev@hadoop.apache.org; mapreduce-dev@hadoop.apache.org;
    >     >> yarn-dev@hadoop.apache.org; common-dev@hadoop.apache.org
    >     >>     主题: Re: [DISCUSSION] Merging HDFS-7240 Object Store (Ozone) to trunk
    >     >>
    >     >>
    >     >>     Wow, big piece of work
    >     >>
    >     >>     1. Where is a PR/branch on github with rendered docs for us to look
    >     >> at?
    >     >>     2. Have you made any public APi changes related to object stores?
    >     >> That's probably something I'll have opinions on more than implementation
    >     >> details.
    >     >>
    >     >>     thanks
    >     >>
    >     >>     > On 19 Oct 2017, at 02:54, Yang Weiwei <ch...@hotmail.com>
    >     >> wrote:
    >     >>     >
    >     >>     > Hello everyone,
    >     >>     >
    >     >>     >
    >     >>     > I would like to start this thread to discuss merging Ozone
    >     >> (HDFS-7240) to trunk. This feature implements an object store which can
    >     >> co-exist with HDFS. Ozone is disabled by default. We have tested Ozone with
    >     >> cluster sizes varying from 1 to 100 data nodes.
    >     >>     >
    >     >>     >
    >     >>     >
    >     >>     > The merge payload includes the following:
    >     >>     >
    >     >>     >  1.  All services, management scripts
    >     >>     >  2.  Object store APIs, exposed via both REST and RPC
    >     >>     >  3.  Master service UIs, command line interfaces
    >     >>     >  4.  Pluggable pipeline Integration
    >     >>     >  5.  Ozone File System (Hadoop compatible file system
    >     >> implementation, passes all FileSystem contract tests)
    >     >>     >  6.  Corona - a load generator for Ozone.
    >     >>     >  7.  Essential documentation added to Hadoop site.
    >     >>     >  8.  Version specific Ozone Documentation, accessible via service
    >     >> UI.
    >     >>     >  9.  Docker support for ozone, which enables faster development
    >     >> cycles.
    >     >>     >
    >     >>     >
    >     >>     > To build Ozone and run ozone using docker, please follow
    >     >> instructions in this wiki page. https://cwiki.apache.org/confl
    >     >> uence/display/HADOOP/Dev+cluster+with+docker.
    >     >>     Dev cluster with docker - Hadoop - Apache Software Foundation<
    >     >> https://cwiki.apache.org/confluence/display/HADOO
    >     >> P/Dev+cluster+with+docker>
    >     >>     cwiki.apache.org
    >     >>     First, it uses a much more smaller common image which doesn't
    >     >> contains Hadoop. Second, the real Hadoop should be built from the source
    >     >> and the dist director should be ...
    >     >>
    >     >>
    >     >>
    >     >>     >
    >     >>     >
    >     >>     > We have built a passionate and diverse community to drive this
    >     >> feature development. As a team, we have achieved significant progress in
    >     >> past 3 years since first JIRA for HDFS-7240 was opened on Oct 2014. So far,
    >     >> we have resolved almost 400 JIRAs by 20+ contributors/committers from
    >     >> different countries and affiliations. We also want to thank the large
    >     >> number of community members who were supportive of our efforts and
    >     >> contributed ideas and participated in the design of ozone.
    >     >>     >
    >     >>     >
    >     >>     > Please share your thoughts, thanks!
    >     >>     >
    >     >>     >
    >     >>     > -- Weiwei Yang
    >     >>
    >     >>
    >     >>
    >     >>
    >     >> ---------------------------------------------------------------------
    >     >> To unsubscribe, e-mail: common-dev-unsubscribe@hadoop.apache.org
    >     >> For additional commands, e-mail: common-dev-help@hadoop.apache.org
    >     >>
    >     >
    >     >
    >
    >
    
    
    
    -- 
    Lei (Eddy) Xu
    Software Engineer, Cloudera
    
    ---------------------------------------------------------------------
    To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
    For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org
    
    


Re: 答复: [DISCUSSION] Merging HDFS-7240 Object Store (Ozone) to trunk

Posted by Xiaoyu Yao <xy...@hortonworks.com>.
Hi Lei,

Thank you for your interest in Ozone. Let me answer each of the
specific questions.

> As the current state of Ozone implementation, what are the major
> benefits of using today’s Ozone over HDFS?


Scale -  HDFS tops out at 500/700 million keys; Ozone primary 
use case is to go beyond that limit. Ozone can scale block space 
and namespace independently. Ozone also provides object store 
semantics which is simpler and scales better


Enabling new workloads on HDFS -  For example, we see lots of 
customers moving to a cloud-like model, where they have a compute 
cluster and storage cluster. This allows them to move workloads to 
the cloud and back seamlessly. We see a marked increase of 
Docker/Kubernetes enabled workloads. The pain point for most 
Docker/VM deployments is lack of storage. Ozone is an excellent 
fit for Docker/Kubernetes deployments.

Ease of Management and use - Ozone has learned very 
valuable lessons from HDFS. It comes with a good set of 
management tools and tries to avoid very complicated setup.


> Giving that its missing features like HDFS-12680 and HDFS-12697, and
> the closing of Hadoop 3.0 release, should we wait for a late merge
> when Ozone is more mature ?

Both HDFS-12680 (lease manager) and HDFS-12697 (Ozone services 
stay disabled in secure setup) are resolved in the past weeks. 
We are targeting the merge for trunk not 3.0.

> Or more generally, why should this merge to a release branch happen
> now, when Ozone is not yet usable by users? Staying on a feature
> branch seems like it's still the right place to Me.

Let me repeat that we are not merging to the 3.0 branch. We are
merging to trunk and we do not intend to backport this to 3.0 code
base.

Ozone is certainly usable. We have written and read billions of keys
into ozone. I would think that it more like Erasure coding when we
merged. We want ozone to be used/tested when people start using 3.1
release. Yes, it is an Alpha feature, having an Alpha release out in
the community is the best way to mature Ozone.


> For the existing HDFS user, could you address the semantic gaps
> between Ozone / Ozone File System and HDFS.

Ozone file system offers a Hadoop compatible file system. For the
first release, we are targeting YARN, Hive, and Spark as the principle
workloads. These applications are functional with Ozone file system.

> It would be great to illustrate what is the expected use cases for
> Ozone giving its different architecture and design decisions?

We expect almost all real use case of ozone to come via Ozone File
System. Hence our focus has been to make sure that (YARN, Hive and
Spark) work well with this system. Ozone file system does the right
magic on behalf of the users for now.

> Like no append, no atomic rename and etc.

This is similar to S3 -- the rise of cloud-based object stores has made 
it very easy for ozone. In fact, the work done by other stacks (Hive, Spark etc.) 
to enable big data workload in cloud is extremely helpful for ozone.


> A follow question, was it able to run any of today’s Hadoop
> applications (MR, Spark, Impala, Presto and etc) on Ozone directly, or
> against OZoneFileSystem? I think a performance / scalability gain or
> extended functionality should be the prerequisites for the merge.
> Additionally, I believe such tests will reveal the potential caveats
> if any.

We have run, Mapreduce (pretty much all standard applications along
with Distcp works well), YARN, Hive and Spark against ozone with NO
Modifications to MR,YARN, Hive or Spark.

We have never tried out Impala or Presto, but if they are known to work
well against Hadoop compatible file systems, I am hopeful that they
will work as well. Please feel free to test and report if you run into
any issues.


> * Ozone’s architecture shows great potential to address NN
> scalability.  However it looks like a XXL effort to me, considering
> the fact that 1) the community had multiple unfinished attempts to
> simply separate namespace and block management within the same NN
> process,

You are absolutely correct. We have learned from those experiences. We
think that separating namespace and block space in the same NN process
does not address the core issue of NN scale. And, also as you clearly 
mentioned they are unfinished. 

With Ozone, we have separated out a block service. Once it is known 
to be stable, we will use that in Namenode, thus achieving the full 
separation. Ozone FS and Ozone object store are intermediate steps
 to solving the scale issue for HDFS.

> *and 2) many existing features like snapshot, append, erasure coding,
> and etc, are not straightforward to be implemented in today’s ozone
> design. Could you share your opinions on this matter?

Ozone is well prepared to implement each of these features. We have
many design documents for ozone posted in the sub-JIRAs. For example,
please take a look at the versioning doc to understand how Ozone’s block
layer really offers.

https://issues.apache.org/jira/secure/attachment/12877154/OzoneVersion
.001.pdf

When you read thru this doc (you are welcome to review the attached
patches for this feature, HDFS-12000) and you will see that ozone
block layer supports append semantics. However, we have chosen not to
expose it via Ozone’s object layer since people are used to reduced
capabilities in object stores. We will certainly expose these features
via HDFS.


> How stable is the ozone client? Should we mark them as unstable for
> now? Also giving the significant difference between OzoneClient and
> HdfsClient, should move it to a separated package or even a project? I
> second Konstantin’s option to separate ozone from HDFS.

The ozone’s client libraries are already under a separate package.  We
can always refactor them to better locations based on usage patterns.

We have used ozone client to write billions of keys to ozone clusters.
Good catch on Unstable annotation, they might still change as
community starts using them.


> * Please add sections to the end-user and system admin oriented
> documents for deploying and operating SCM, KSM, and also the chunk
> servers on DataNodes. Additionally, the introduction in
> “OZoneGettingStarted.md” is still building ozone from feature branch
> HDFS-7240.

If you scroll down the OzoneGettingStarted.md, you will see that it
has a section called, “Running Ozone using a real cluster”, that 
contains all instructions needed to run ozone in a real physical
cluster. I will add a section part to the top of this document, so
that these links are easily discoverable. Thank you for pointing this
out.

> still building ozone from feature branch HDFS-7240.

Thank you for pointing this out. We left the instructions in the doc
in that way so that before the merge community has the right instructions
for building and deploying ozone. I will file a blocking JIRA to fix
this before release.

Thanks,
Xiaoyu






On 11/2/17, 11:29 PM, "Lei Xu" <le...@cloudera.com> wrote:

    Hey,  Weiwei and Jitendra
    
    Thanks a lot for this large effort to bring us ozone.
    
    * As the current state of Ozone implementation, what are the major
    benefits of using today’s Ozone over HDFS?  Giving that its missing
    features like HDFS-12680 and HDFS-12697, being disabled by default,
    and the closing of Hadoop 3.0 release, should we wait for a late merge
    when Ozone is more mature ? Or more generally, why should this merge
    to a release branch happen now, when Ozone is not yet usable by users?
    Staying on a feature branch seems like it's still the right place to
    me.
    * For the existing HDFS user, could you address the semantic gaps
    between Ozone / Ozone File System and HDFS. It would be great to
    illustrate what is the expected use cases for Ozone giving its
    different architecture and design decisions?  Like no append, no
    atomic rename and etc.
    * A follow question, was it able to run any of today’s Hadoop
    applications (MR, Spark, Impala, Presto and etc) on Ozone directly, or
    against OZoneFileSystem? I think a performance / scalability gain or
    extended functionality should be the prerequisites for the merge.
    Additionally, I believe such tests will reveal the potential caveats
    if any.
    * Ozone’s architecture shows great potential to address NN
    scalability.  However it looks like a XXL effort to me, considering
    the fact that 1) the community had multiple unfinished attempts to
    simply separate namespace and block management within the same NN
    process, and 2) many existing features like snapshot, append, erasure
    coding, and etc, are not straightforward to be implemented in today’s
    ozone design. Could you share your opinions on this matter?
    * How stable is the ozone client? Should we mark them as unstable for
    now? Also giving the significant difference between OzoneClient and
    HdfsClient, should move it to a separated package or even a project? I
    second Konstantin’s option to separate ozone from HDFS.
    * Please add sections to the end-user and system admin oriented
    documents for deploying and operating SCM, KSM, and also the chunk
    servers on DataNodes. Additionally, the introduction in
    “OZoneGettingStarted.md” is still building ozone from feature branch
    HDFS-7240.
    
    Best regards,
    
    On Mon, Oct 23, 2017 at 11:10 AM, Jitendra Pandey
    <ji...@hortonworks.com> wrote:
    > I have filed https://issues.apache.org/jira/browse/HDFS-12697 to ensure ozone stays disabled in a secure environment.
    > Since ozone is disabled by default and will not come with security on, it will not expose any new attack surface in a Hadoop deployment.
    > Ozone security effort will need a detailed design and discussion on a community jira. Hopefully, that effort will start soon after the merge.
    >
    > Thanks
    > jitendra
    >
    > On 10/20/17, 2:40 PM, "larry mccay" <lm...@apache.org> wrote:
    >
    >     All -
    >
    >     I broke this list of questions out into a separate DISCUSS thread where we
    >     can iterate over how a security audit process at merge time might look and
    >     whether it is even something that we want to take on.
    >
    >     I will try and continue discussion on that thread and drive that to some
    >     conclusion before bringing it into any particular merge discussion.
    >
    >     thanks,
    >
    >     --larry
    >
    >     On Fri, Oct 20, 2017 at 12:37 PM, larry mccay <lm...@apache.org> wrote:
    >
    >     > I previously sent this same email from my work email and it doesn't seem
    >     > to have gone through - resending from apache account (apologizing up from
    >     > for the length)....
    >     >
    >     > For such sizable merges in Hadoop, I would like to start doing security
    >     > audits in order to have an initial idea of the attack surface, the
    >     > protections available for known threats, what sort of configuration is
    >     > being used to launch processes, etc.
    >     >
    >     > I dug into the architecture documents while in the middle of this list -
    >     > nice docs!
    >     > I do intend to try and make a generic check list like this for such
    >     > security audits in the future so a lot of this is from that but I tried to
    >     > also direct specific questions from those docs as well.
    >     >
    >     > 1. UIs
    >     > I see there are at least two UIs - Storage Container Manager and Key Space
    >     > Manager. There are a number of typical vulnerabilities that we find in UIs
    >     >
    >     > 1.1. What sort of validation is being done on any accepted user input?
    >     > (pointers to code would be appreciated)
    >     > 1.2. What explicit protections have been built in for (pointers to code
    >     > would be appreciated):
    >     >   1.2.1. cross site scripting
    >     >   1.2.2. cross site request forgery
    >     >   1.2.3. click jacking (X-Frame-Options)
    >     > 1.3. What sort of authentication is required for access to the UIs?
    >     > 1.4. What authorization is available for determining who can access what
    >     > capabilities of the UIs for either viewing, modifying data or affecting
    >     > object stores and related processes?
    >     > 1.5. Are the UIs built with proxying in mind by leveraging X-Forwarded
    >     > headers?
    >     > 1.6. Is there any input that will ultimately be persisted in configuration
    >     > for executing shell commands or processes?
    >     > 1.7. Do the UIs support the trusted proxy pattern with doas impersonation?
    >     > 1.8. Is there TLS/SSL support?
    >     >
    >     > 2. REST APIs
    >     >
    >     > 2.1. Do the REST APIs support the trusted proxy pattern with doas
    >     > impersonation capabilities?
    >     > 2.2. What explicit protections have been built in for:
    >     >   2.2.1. cross site scripting (XSS)
    >     >   2.2.2. cross site request forgery (CSRF)
    >     >   2.2.3. XML External Entity (XXE)
    >     > 2.3. What is being used for authentication - Hadoop Auth Module?
    >     > 2.4. Are there separate processes for the HTTP resources (UIs and REST
    >     > endpoints) or are the part of existing HDFS processes?
    >     > 2.5. Is there TLS/SSL support?
    >     > 2.6. Are there new CLI commands and/or clients for access the REST APIs?
    >     > 2.7. Bucket Level API allows for setting of ACLs on a bucket - what
    >     > authorization is required here - is there a restrictive ACL set on creation?
    >     > 2.8. Bucket Level API allows for deleting a bucket - I assume this is
    >     > dependent on ACLs based access control?
    >     > 2.9. Bucket Level API to list bucket returns up to 1000 keys - is there
    >     > paging available?
    >     > 2.10. Storage Level APIs indicate “Signed with User Authorization” what
    >     > does this refer to exactly?
    >     > 2.11. Object Level APIs indicate that there is no ACL support and only
    >     > bucket owners can read and write - but there are ACL APIs on the Bucket
    >     > Level are they meaningless for now?
    >     > 2.12. How does a REST client know which Ozone Handler to connect to or am
    >     > I missing some well known NN type endpoint in the architecture doc
    >     > somewhere?
    >     >
    >     > 3. Encryption
    >     >
    >     > 3.1. Is there any support for encryption of persisted data?
    >     > 3.2. If so, is KMS and the hadoop key command used for key management?
    >     >
    >     > 4. Configuration
    >     >
    >     > 4.1. Are there any passwords or secrets being added to configuration?
    >     > 4.2. If so, are they accessed via Configuration.getPassword() to allow for
    >     > provisioning in credential providers?
    >     > 4.3. Are there any settings that are used to launch docker containers or
    >     > shell out any commands, etc?
    >     >
    >     > 5. HA
    >     >
    >     > 5.1. Are there provisions for HA?
    >     > 5.2. Are we leveraging the existing HA capabilities in HDFS?
    >     > 5.3. Is Storage Container Manager a SPOF?
    >     > 5.4. I see HA listed in future work in the architecture doc - is this
    >     > still an open issue?
    >     >
    >     > On Fri, Oct 20, 2017 at 11:19 AM, Anu Engineer <ae...@hortonworks.com>
    >     > wrote:
    >     >
    >     >> Hi Steve,
    >     >>
    >     >> In addition to everything Weiwei mentioned (chapter 3 of user guide), if
    >     >> you really want to drill down to REST protocol you might want to apply this
    >     >> patch and build ozone.
    >     >>
    >     >> https://issues.apache.org/jira/browse/HDFS-12690
    >     >>
    >     >> This will generate an Open API (https://www.openapis.org ,
    >     >> http://swagger.io) based specification which can be accessed from KSM UI
    >     >> or just as a json file.
    >     >> Unfortunately, this patch is still at code review stage, so you will have
    >     >> to apply the patch and build it yourself.
    >     >>
    >     >> Thanks
    >     >> Anu
    >     >>
    >     >>
    >     >> On 10/20/17, 6:09 AM, "Yang Weiwei" <ch...@hotmail.com> wrote:
    >     >>
    >     >>     Hi Steve
    >     >>
    >     >>
    >     >>     The code is available in HDFS-7240 feature branch, public git repo
    >     >> here<https://github.com/apache/hadoop/tree/HDFS-7240>.
    >     >>
    >     >>     I am not sure if there is a "public" API for object stores, but the
    >     >> design doc<https://issues.apache.org/jira/secure/attachment/1279954
    >     >> 9/ozone_user_v0.pdf> uses most common syntax so I believe it should be
    >     >> compliance. You can find the rest API doc here<https://github.com/apache
    >     >> /hadoop/blob/HDFS-7240/hadoop-hdfs-project/hadoop-hdfs/src/
    >     >> site/markdown/OzoneRest.md> (with some example usages), and commandline
    >     >> API here<https://github.com/apache/hadoop/blob/HDFS-7240/hadoop-
    >     >> hdfs-project/hadoop-hdfs/src/site/markdown/OzoneCommandShell.md>.
    >     >>
    >     >>
    >     >>     Look forward for your feedback!
    >     >>
    >     >>
    >     >>     --Weiwei
    >     >>
    >     >>
    >     >>     ________________________________
    >     >>     发件人: Steve Loughran <st...@hortonworks.com>
    >     >>     发送时间: 2017年10月20日 11:49
    >     >>     收件人: Yang Weiwei
    >     >>     抄送: hdfs-dev@hadoop.apache.org; mapreduce-dev@hadoop.apache.org;
    >     >> yarn-dev@hadoop.apache.org; common-dev@hadoop.apache.org
    >     >>     主题: Re: [DISCUSSION] Merging HDFS-7240 Object Store (Ozone) to trunk
    >     >>
    >     >>
    >     >>     Wow, big piece of work
    >     >>
    >     >>     1. Where is a PR/branch on github with rendered docs for us to look
    >     >> at?
    >     >>     2. Have you made any public APi changes related to object stores?
    >     >> That's probably something I'll have opinions on more than implementation
    >     >> details.
    >     >>
    >     >>     thanks
    >     >>
    >     >>     > On 19 Oct 2017, at 02:54, Yang Weiwei <ch...@hotmail.com>
    >     >> wrote:
    >     >>     >
    >     >>     > Hello everyone,
    >     >>     >
    >     >>     >
    >     >>     > I would like to start this thread to discuss merging Ozone
    >     >> (HDFS-7240) to trunk. This feature implements an object store which can
    >     >> co-exist with HDFS. Ozone is disabled by default. We have tested Ozone with
    >     >> cluster sizes varying from 1 to 100 data nodes.
    >     >>     >
    >     >>     >
    >     >>     >
    >     >>     > The merge payload includes the following:
    >     >>     >
    >     >>     >  1.  All services, management scripts
    >     >>     >  2.  Object store APIs, exposed via both REST and RPC
    >     >>     >  3.  Master service UIs, command line interfaces
    >     >>     >  4.  Pluggable pipeline Integration
    >     >>     >  5.  Ozone File System (Hadoop compatible file system
    >     >> implementation, passes all FileSystem contract tests)
    >     >>     >  6.  Corona - a load generator for Ozone.
    >     >>     >  7.  Essential documentation added to Hadoop site.
    >     >>     >  8.  Version specific Ozone Documentation, accessible via service
    >     >> UI.
    >     >>     >  9.  Docker support for ozone, which enables faster development
    >     >> cycles.
    >     >>     >
    >     >>     >
    >     >>     > To build Ozone and run ozone using docker, please follow
    >     >> instructions in this wiki page. https://cwiki.apache.org/confl
    >     >> uence/display/HADOOP/Dev+cluster+with+docker.
    >     >>     Dev cluster with docker - Hadoop - Apache Software Foundation<
    >     >> https://cwiki.apache.org/confluence/display/HADOO
    >     >> P/Dev+cluster+with+docker>
    >     >>     cwiki.apache.org
    >     >>     First, it uses a much more smaller common image which doesn't
    >     >> contains Hadoop. Second, the real Hadoop should be built from the source
    >     >> and the dist director should be ...
    >     >>
    >     >>
    >     >>
    >     >>     >
    >     >>     >
    >     >>     > We have built a passionate and diverse community to drive this
    >     >> feature development. As a team, we have achieved significant progress in
    >     >> past 3 years since first JIRA for HDFS-7240 was opened on Oct 2014. So far,
    >     >> we have resolved almost 400 JIRAs by 20+ contributors/committers from
    >     >> different countries and affiliations. We also want to thank the large
    >     >> number of community members who were supportive of our efforts and
    >     >> contributed ideas and participated in the design of ozone.
    >     >>     >
    >     >>     >
    >     >>     > Please share your thoughts, thanks!
    >     >>     >
    >     >>     >
    >     >>     > -- Weiwei Yang
    >     >>
    >     >>
    >     >>
    >     >>
    >     >> ---------------------------------------------------------------------
    >     >> To unsubscribe, e-mail: common-dev-unsubscribe@hadoop.apache.org
    >     >> For additional commands, e-mail: common-dev-help@hadoop.apache.org
    >     >>
    >     >
    >     >
    >
    >
    
    
    
    -- 
    Lei (Eddy) Xu
    Software Engineer, Cloudera
    
    ---------------------------------------------------------------------
    To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
    For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org
    
    


Re: 答复: [DISCUSSION] Merging HDFS-7240 Object Store (Ozone) to trunk

Posted by Xiaoyu Yao <xy...@hortonworks.com>.
Hi Lei,

Thank you for your interest in Ozone. Let me answer each of the
specific questions.

> As the current state of Ozone implementation, what are the major
> benefits of using today’s Ozone over HDFS?


Scale - HDFS tops out at 500-700 million keys; Ozone's primary
use case is to go beyond that limit. Ozone can scale block space
and namespace independently. Ozone also provides object store
semantics, which are simpler and scale better.


Enabling new workloads on HDFS - For example, we see lots of
customers moving to a cloud-like model, where they have a compute
cluster and a storage cluster. This allows them to move workloads to
the cloud and back seamlessly. We see a marked increase in
Docker/Kubernetes-enabled workloads. The pain point for most
Docker/VM deployments is the lack of storage. Ozone is an excellent
fit for Docker/Kubernetes deployments.

Ease of management and use - Ozone has learned valuable
lessons from HDFS. It comes with a good set of management
tools and avoids an overly complicated setup.


> Giving that its missing features like HDFS-12680 and HDFS-12697, and
> the closing of Hadoop 3.0 release, should we wait for a late merge
> when Ozone is more mature ?

Both HDFS-12680 (lease manager) and HDFS-12697 (Ozone services
stay disabled in a secure setup) were resolved in the past few weeks.
We are targeting the merge for trunk, not 3.0.

> Or more generally, why should this merge to a release branch happen
> now, when Ozone is not yet usable by users? Staying on a feature
> branch seems like it's still the right place to Me.

Let me repeat that we are not merging to the 3.0 branch. We are
merging to trunk, and we do not intend to backport this to the 3.0
code base.

Ozone is certainly usable. We have written and read billions of keys
in ozone. I would compare it to Erasure Coding at the time it was
merged. We want ozone to be used and tested when people start using
the 3.1 release. Yes, it is an Alpha feature; having an Alpha release
out in the community is the best way to mature Ozone.


> For the existing HDFS user, could you address the semantic gaps
> between Ozone / Ozone File System and HDFS.

The Ozone file system offers a Hadoop-compatible file system. For the
first release, we are targeting YARN, Hive, and Spark as the principal
workloads. These applications are functional on top of the Ozone file
system.
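
To make this concrete, here is a minimal sketch of what an application
sees through the standard Hadoop FileSystem API. The o3:// scheme, the
bucket.volume path layout, and the fs.o3.impl key are my assumptions
for illustration only; please check the docs on the branch for the
exact configuration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OzoneFsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical wiring: map the assumed o3:// scheme to the Ozone FS class.
        conf.set("fs.o3.impl", "org.apache.hadoop.fs.ozone.OzoneFileSystem");

        // The application only sees a Hadoop-compatible FileSystem; volumes
        // and buckets are resolved by the Ozone FS behind this URI.
        Path file = new Path("o3://bucket.volume/user/demo/hello.txt");
        try (FileSystem fs = FileSystem.get(file.toUri(), conf);
             FSDataOutputStream out = fs.create(file)) {
          out.writeUTF("hello ozone");
        }
      }
    }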

> It would be great to illustrate what is the expected use cases for
> Ozone giving its different architecture and design decisions?

We expect almost all real use cases of ozone to come via Ozone File
System. Hence our focus has been to make sure that YARN, Hive, and
Spark work well with this system. For now, the Ozone file system does
the right magic on behalf of the users.

> Like no append, no atomic rename and etc.

This is similar to S3 -- the rise of cloud-based object stores has made
things much easier for ozone. In fact, the work done by other stacks
(Hive, Spark, etc.) to enable big data workloads in the cloud is
extremely helpful for ozone.
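
For applications that do care about these gaps, the defensive patterns
are the same ones already used against S3-style stores. A small sketch,
using only the generic FileSystem API (nothing here is Ozone-specific):

    import java.io.IOException;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public final class ObjectStoreSafeOps {
      // rename() on object stores is typically neither atomic nor guaranteed;
      // it returns false on failure, so the result must always be checked.
      static void renameOrFail(FileSystem fs, Path src, Path dst) throws IOException {
        if (!fs.rename(src, dst)) {
          throw new IOException("rename failed: " + src + " -> " + dst);
        }
      }

      // append() commonly throws UnsupportedOperationException on
      // object-store-backed file systems, so callers should treat it as optional.
      static boolean tryAppend(FileSystem fs, Path p, byte[] data) throws IOException {
        try (FSDataOutputStream out = fs.append(p)) {
          out.write(data);
          return true;
        } catch (UnsupportedOperationException e) {
          return false;
        }
      }
    }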


> A follow question, was it able to run any of today’s Hadoop
> applications (MR, Spark, Impala, Presto and etc) on Ozone directly, or
> against OZoneFileSystem? I think a performance / scalability gain or
> extended functionality should be the prerequisites for the merge.
> Additionally, I believe such tests will reveal the potential caveats
> if any.

We have run MapReduce (pretty much all standard applications, and
DistCp works well), YARN, Hive, and Spark against ozone with NO
modifications to MR, YARN, Hive, or Spark.

We have never tried out Impala or Presto, but if they are known to work
well against Hadoop-compatible file systems, I am hopeful that they
will work as well. Please feel free to test and report if you run into
any issues.
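
To illustrate "no modifications": a stock MapReduce driver only needs
its input and output paths changed. Everything below is standard MR
wiring; only the o3:// URIs are an assumption for illustration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class OzoneJobDriver {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "job-on-ozone");
        job.setJarByClass(OzoneJobDriver.class);
        // Mapper/Reducer setup elided; the application code is unchanged.
        // Input and output simply live behind Hadoop-compatible URIs.
        FileInputFormat.addInputPath(job, new Path("o3://bucket.volume/input"));
        FileOutputFormat.setOutputPath(job, new Path("o3://bucket.volume/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }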


> * Ozone’s architecture shows great potential to address NN
> scalability.  However it looks like a XXL effort to me, considering
> the fact that 1) the community had multiple unfinished attempts to
> simply separate namespace and block management within the same NN
> process,

You are absolutely correct. We have learned from those experiences. We
think that separating namespace and block space in the same NN process
does not address the core issue of NN scale. And, as you mentioned,
those attempts are unfinished.

With Ozone, we have separated out a block service. Once it is known
to be stable, we will use it in the Namenode, thus achieving the full
separation. Ozone FS and the Ozone object store are intermediate steps
toward solving the scale issue for HDFS.

> *and 2) many existing features like snapshot, append, erasure coding,
> and etc, are not straightforward to be implemented in today’s ozone
> design. Could you share your opinions on this matter?

Ozone is well prepared to implement each of these features. We have
many design documents for ozone posted in the sub-JIRAs. For example,
please take a look at the versioning doc to understand what Ozone's
block layer really offers.

https://issues.apache.org/jira/secure/attachment/12877154/OzoneVersion.001.pdf

When you read through this doc (you are welcome to review the attached
patches for this feature, HDFS-12000), you will see that the ozone
block layer supports append semantics. However, we have chosen not to
expose it via Ozone's object layer, since people are used to reduced
capabilities in object stores. We will certainly expose these features
via HDFS.


> How stable is the ozone client? Should we mark them as unstable for
> now? Also giving the significant difference between OzoneClient and
> HdfsClient, should move it to a separated package or even a project? I
> second Konstantin’s option to separate ozone from HDFS.

The ozone client libraries are already under a separate package. We
can always refactor them to better locations based on usage patterns.

We have used the ozone client to write billions of keys to ozone
clusters. Good catch on the Unstable annotation; these APIs might still
change as the community starts using them, so we should mark them
accordingly.
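
Concretely, that is just Hadoop's standard interface-classification
annotations on the client entry points; a sketch (the class name below
is illustrative, not the actual client class):

    import org.apache.hadoop.classification.InterfaceAudience;
    import org.apache.hadoop.classification.InterfaceStability;

    // Public-facing, but explicitly subject to change between releases;
    // consumers then know not to depend on this API staying stable.
    @InterfaceAudience.Public
    @InterfaceStability.Unstable
    public class OzoneClientFacade {
      // client methods ...
    }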


> * Please add sections to the end-user and system admin oriented
> documents for deploying and operating SCM, KSM, and also the chunk
> servers on DataNodes. Additionally, the introduction in
> “OZoneGettingStarted.md” is still building ozone from feature branch
> HDFS-7240.

If you scroll down OzoneGettingStarted.md, you will see that it
has a section called "Running Ozone using a real cluster" that
contains all the instructions needed to run ozone on a real physical
cluster. I will add a pointer to it at the top of this document, so
that these links are easily discoverable. Thank you for pointing this
out.

> still building ozone from feature branch HDFS-7240.

Thank you for pointing this out. We left the instructions in the doc
that way so that, before the merge, the community has the right
instructions for building and deploying ozone. I will file a blocking
JIRA to fix this before the release.

Thanks,
Xiaoyu






On 11/2/17, 11:29 PM, "Lei Xu" <le...@cloudera.com> wrote:

    Hey,  Weiwei and Jitendra
    
    Thanks a lot for this large effort to bring us ozone.
    
    * As the current state of Ozone implementation, what are the major
    benefits of using today’s Ozone over HDFS?  Giving that its missing
    features like HDFS-12680 and HDFS-12697, being disabled by default,
    and the closing of Hadoop 3.0 release, should we wait for a late merge
    when Ozone is more mature ? Or more generally, why should this merge
    to a release branch happen now, when Ozone is not yet usable by users?
    Staying on a feature branch seems like it's still the right place to
    me.
    * For the existing HDFS user, could you address the semantic gaps
    between Ozone / Ozone File System and HDFS. It would be great to
    illustrate what is the expected use cases for Ozone giving its
    different architecture and design decisions?  Like no append, no
    atomic rename and etc.
    * A follow question, was it able to run any of today’s Hadoop
    applications (MR, Spark, Impala, Presto and etc) on Ozone directly, or
    against OZoneFileSystem? I think a performance / scalability gain or
    extended functionality should be the prerequisites for the merge.
    Additionally, I believe such tests will reveal the potential caveats
    if any.
    * Ozone’s architecture shows great potential to address NN
    scalability.  However it looks like a XXL effort to me, considering
    the fact that 1) the community had multiple unfinished attempts to
    simply separate namespace and block management within the same NN
    process, and 2) many existing features like snapshot, append, erasure
    coding, and etc, are not straightforward to be implemented in today’s
    ozone design. Could you share your opinions on this matter?
    * How stable is the ozone client? Should we mark them as unstable for
    now? Also giving the significant difference between OzoneClient and
    HdfsClient, should move it to a separated package or even a project? I
    second Konstantin’s option to separate ozone from HDFS.
    * Please add sections to the end-user and system admin oriented
    documents for deploying and operating SCM, KSM, and also the chunk
    servers on DataNodes. Additionally, the introduction in
    “OZoneGettingStarted.md” is still building ozone from feature branch
    HDFS-7240.
    
    Best regards,
    
    On Mon, Oct 23, 2017 at 11:10 AM, Jitendra Pandey
    <ji...@hortonworks.com> wrote:
    > I have filed https://issues.apache.org/jira/browse/HDFS-12697 to ensure ozone stays disabled in a secure environment.
    > Since ozone is disabled by default and will not come with security on, it will not expose any new attack surface in a Hadoop deployment.
    > Ozone security effort will need a detailed design and discussion on a community jira. Hopefully, that effort will start soon after the merge.
    >
    > Thanks
    > jitendra
    >
    > On 10/20/17, 2:40 PM, "larry mccay" <lm...@apache.org> wrote:
    >
    >     All -
    >
    >     I broke this list of questions out into a separate DISCUSS thread where we
    >     can iterate over how a security audit process at merge time might look and
    >     whether it is even something that we want to take on.
    >
    >     I will try and continue discussion on that thread and drive that to some
    >     conclusion before bringing it into any particular merge discussion.
    >
    >     thanks,
    >
    >     --larry
    >
    >     On Fri, Oct 20, 2017 at 12:37 PM, larry mccay <lm...@apache.org> wrote:
    >
    >     > I previously sent this same email from my work email and it doesn't seem
    >     > to have gone through - resending from apache account (apologizing up from
    >     > for the length)....
    >     >
    >     > For such sizable merges in Hadoop, I would like to start doing security
    >     > audits in order to have an initial idea of the attack surface, the
    >     > protections available for known threats, what sort of configuration is
    >     > being used to launch processes, etc.
    >     >
    >     > I dug into the architecture documents while in the middle of this list -
    >     > nice docs!
    >     > I do intend to try and make a generic check list like this for such
    >     > security audits in the future so a lot of this is from that but I tried to
    >     > also direct specific questions from those docs as well.
    >     >
    >     > 1. UIs
    >     > I see there are at least two UIs - Storage Container Manager and Key Space
    >     > Manager. There are a number of typical vulnerabilities that we find in UIs
    >     >
    >     > 1.1. What sort of validation is being done on any accepted user input?
    >     > (pointers to code would be appreciated)
    >     > 1.2. What explicit protections have been built in for (pointers to code
    >     > would be appreciated):
    >     >   1.2.1. cross site scripting
    >     >   1.2.2. cross site request forgery
    >     >   1.2.3. click jacking (X-Frame-Options)
    >     > 1.3. What sort of authentication is required for access to the UIs?
    >     > 1.4. What authorization is available for determining who can access what
    >     > capabilities of the UIs for either viewing, modifying data or affecting
    >     > object stores and related processes?
    >     > 1.5. Are the UIs built with proxying in mind by leveraging X-Forwarded
    >     > headers?
    >     > 1.6. Is there any input that will ultimately be persisted in configuration
    >     > for executing shell commands or processes?
    >     > 1.7. Do the UIs support the trusted proxy pattern with doas impersonation?
    >     > 1.8. Is there TLS/SSL support?
    >     >
    >     > 2. REST APIs
    >     >
    >     > 2.1. Do the REST APIs support the trusted proxy pattern with doas
    >     > impersonation capabilities?
    >     > 2.2. What explicit protections have been built in for:
    >     >   2.2.1. cross site scripting (XSS)
    >     >   2.2.2. cross site request forgery (CSRF)
    >     >   2.2.3. XML External Entity (XXE)
    >     > 2.3. What is being used for authentication - Hadoop Auth Module?
    >     > 2.4. Are there separate processes for the HTTP resources (UIs and REST
    >     > endpoints) or are the part of existing HDFS processes?
    >     > 2.5. Is there TLS/SSL support?
    >     > 2.6. Are there new CLI commands and/or clients for access the REST APIs?
    >     > 2.7. Bucket Level API allows for setting of ACLs on a bucket - what
    >     > authorization is required here - is there a restrictive ACL set on creation?
    >     > 2.8. Bucket Level API allows for deleting a bucket - I assume this is
    >     > dependent on ACLs based access control?
    >     > 2.9. Bucket Level API to list bucket returns up to 1000 keys - is there
    >     > paging available?
    >     > 2.10. Storage Level APIs indicate “Signed with User Authorization” what
    >     > does this refer to exactly?
    >     > 2.11. Object Level APIs indicate that there is no ACL support and only
    >     > bucket owners can read and write - but there are ACL APIs on the Bucket
    >     > Level are they meaningless for now?
    >     > 2.12. How does a REST client know which Ozone Handler to connect to or am
    >     > I missing some well known NN type endpoint in the architecture doc
    >     > somewhere?
    >     >
    >     > 3. Encryption
    >     >
    >     > 3.1. Is there any support for encryption of persisted data?
    >     > 3.2. If so, is KMS and the hadoop key command used for key management?
    >     >
    >     > 4. Configuration
    >     >
    >     > 4.1. Are there any passwords or secrets being added to configuration?
    >     > 4.2. If so, are they accessed via Configuration.getPassword() to allow for
    >     > provisioning in credential providers?
    >     > 4.3. Are there any settings that are used to launch docker containers or
    >     > shell out any commands, etc?
    >     >
    >     > 5. HA
    >     >
    >     > 5.1. Are there provisions for HA?
    >     > 5.2. Are we leveraging the existing HA capabilities in HDFS?
    >     > 5.3. Is Storage Container Manager a SPOF?
    >     > 5.4. I see HA listed in future work in the architecture doc - is this
    >     > still an open issue?
    >     >
    >     > On Fri, Oct 20, 2017 at 11:19 AM, Anu Engineer <ae...@hortonworks.com>
    >     > wrote:
    >     >
    >     >> Hi Steve,
    >     >>
    >     >> In addition to everything Weiwei mentioned (chapter 3 of user guide), if
    >     >> you really want to drill down to REST protocol you might want to apply this
    >     >> patch and build ozone.
    >     >>
    >     >> https://issues.apache.org/jira/browse/HDFS-12690
    >     >>
    >     >> This will generate an Open API (https://www.openapis.org ,
    >     >> http://swagger.io) based specification which can be accessed from KSM UI
    >     >> or just as a json file.
    >     >> Unfortunately, this patch is still at code review stage, so you will have
    >     >> to apply the patch and build it yourself.
    >     >>
    >     >> Thanks
    >     >> Anu
    >     >>
    >     >>
    >     >> On 10/20/17, 6:09 AM, "Yang Weiwei" <ch...@hotmail.com> wrote:
    >     >>
    >     >>     Hi Steve
    >     >>
    >     >>
    >     >>     The code is available in HDFS-7240 feature branch, public git repo
    >     >> here<https://github.com/apache/hadoop/tree/HDFS-7240>.
    >     >>
    >     >>     I am not sure if there is a "public" API for object stores, but the
    >     >> design doc <https://issues.apache.org/jira/secure/attachment/12799549/ozone_user_v0.pdf>
    >     >> uses the most common syntax, so I believe it should be compliant. You can
    >     >> find the REST API doc here <https://github.com/apache/hadoop/blob/HDFS-7240/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/OzoneRest.md>
    >     >> (with some example usages), and the command-line API here
    >     >> <https://github.com/apache/hadoop/blob/HDFS-7240/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/OzoneCommandShell.md>.
    >     >>
    >     >>
    >     >>     Looking forward to your feedback!
    >     >>
    >     >>
    >     >>     --Weiwei
    >     >>
    >     >>
    >     >>     ________________________________
    >     >>     From: Steve Loughran <st...@hortonworks.com>
    >     >>     Sent: October 20, 2017 11:49
    >     >>     To: Yang Weiwei
    >     >>     Cc: hdfs-dev@hadoop.apache.org; mapreduce-dev@hadoop.apache.org;
    >     >> yarn-dev@hadoop.apache.org; common-dev@hadoop.apache.org
    >     >>     Subject: Re: [DISCUSSION] Merging HDFS-7240 Object Store (Ozone) to trunk
    >     >>
    >     >>
    >     >>     Wow, big piece of work
    >     >>
    >     >>     1. Where is a PR/branch on github with rendered docs for us to look
    >     >> at?
    >     >>     2. Have you made any public API changes related to object stores?
    >     >> That's probably something I'll have opinions on more than implementation
    >     >> details.
    >     >>
    >     >>     thanks
    >     >>
    >     >>     > On 19 Oct 2017, at 02:54, Yang Weiwei <ch...@hotmail.com>
    >     >> wrote:
    >     >>     >
    >     >>     > Hello everyone,
    >     >>     >
    >     >>     >
    >     >>     > I would like to start this thread to discuss merging Ozone
    >     >> (HDFS-7240) to trunk. This feature implements an object store which can
    >     >> co-exist with HDFS. Ozone is disabled by default. We have tested Ozone with
    >     >> cluster sizes varying from 1 to 100 data nodes.
    >     >>     >
    >     >>     >
    >     >>     >
    >     >>     > The merge payload includes the following:
    >     >>     >
    >     >>     >  1.  All services, management scripts
    >     >>     >  2.  Object store APIs, exposed via both REST and RPC
    >     >>     >  3.  Master service UIs, command line interfaces
    >     >>     >  4.  Pluggable pipeline Integration
    >     >>     >  5.  Ozone File System (Hadoop compatible file system
    >     >> implementation, passes all FileSystem contract tests)
    >     >>     >  6.  Corona - a load generator for Ozone.
    >     >>     >  7.  Essential documentation added to Hadoop site.
    >     >>     >  8.  Version specific Ozone Documentation, accessible via service
    >     >> UI.
    >     >>     >  9.  Docker support for ozone, which enables faster development
    >     >> cycles.
    >     >>     >
    >     >>     >
    >     >>     > To build Ozone and run ozone using docker, please follow
    >     >> instructions in this wiki page: https://cwiki.apache.org/confluence/display/HADOOP/Dev+cluster+with+docker
    >     >>
    >     >>
    >     >>
    >     >>     >
    >     >>     >
    >     >>     > We have built a passionate and diverse community to drive this
    >     >> feature development. As a team, we have achieved significant progress in
    >     >> the past 3 years since the first JIRA for HDFS-7240 was opened in Oct 2014. So far,
    >     >> we have resolved almost 400 JIRAs by 20+ contributors/committers from
    >     >> different countries and affiliations. We also want to thank the large
    >     >> number of community members who were supportive of our efforts and
    >     >> contributed ideas and participated in the design of ozone.
    >     >>     >
    >     >>     >
    >     >>     > Please share your thoughts, thanks!
    >     >>     >
    >     >>     >
    >     >>     > -- Weiwei Yang
    >     >>
    >     >>
    >     >>
    >     >>
    >     >> ---------------------------------------------------------------------
    >     >> To unsubscribe, e-mail: common-dev-unsubscribe@hadoop.apache.org
    >     >> For additional commands, e-mail: common-dev-help@hadoop.apache.org
    >     >>
    >     >
    >     >
    >
    >
    
    
    
    -- 
    Lei (Eddy) Xu
    Software Engineer, Cloudera
    
    ---------------------------------------------------------------------
    To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
    For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org
    
    

