Posted to issues@ozone.apache.org by "Marton Elek (Jira)" <ji...@apache.org> on 2020/04/02 11:13:00 UTC

[jira] [Comment Edited] (HDDS-3001) NFS support for Ozone

    [ https://issues.apache.org/jira/browse/HDDS-3001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17073546#comment-17073546 ] 

Marton Elek edited comment on HDDS-3001 at 4/2/20, 11:12 AM:
-------------------------------------------------------------

bq. Marton Elek We plan to make random writes work similar to HDFS.

If it will be part of this branch, it would be great to include a high-level overview (or just create a separate issue and start the discussion there?)

I have a few questions about the random write support:

 * Let's assume that all the use cases which require random writes can be supported with a limited number of containers (for example, to provide storage for git repos, a few 5G containers should be enough).
 * Let's assume that open pipelines can be fixed in case of any problem *without* closing the containers / pipelines (which was suggested by [~jitendra] independently of this question).
 * Let's say we can mark the containers to keep them open and never close them.

Wouldn't random writes be solved by these three actions? In that case we wouldn't need the HDFS-type of workaround; we could just read / write the same container at any time.

(To be clear: I am not against using any kind of initial implementation, I am just interested in continuing the discussion and understanding the arguments behind the different possibilities.)
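
Just to make the comparison concrete, a rough sketch of the difference (purely illustrative Java; {{KeyStore}}, {{OpenContainer}} and {{writeAt}} are made-up names for this discussion, not real Ozone client APIs):

{code:java}
// Purely illustrative sketch for the discussion above; KeyStore,
// OpenContainer and writeAt are made-up names, NOT real Ozone client APIs.

interface KeyStore {
    byte[] readKey(String key);
    void writeKey(String key, byte[] data);        // whole-key (re)write only
}

interface OpenContainer {
    void writeAt(long offset, byte[] update);      // in-place random write
}

class RandomWriteSketch {

    // HDFS-style workaround: a small random update forces a full read
    // followed by a full rewrite of the key.
    static void updateViaRewrite(KeyStore store, String key, int offset, byte[] update) {
        byte[] data = store.readKey(key);
        System.arraycopy(update, 0, data, offset, update.length);
        store.writeKey(key, data);                  // rewrites the entire object
    }

    // If a limited set of containers can be marked to stay OPEN (the three
    // assumptions above), the same update becomes one in-place write.
    static void updateInPlace(OpenContainer container, long offset, byte[] update) {
        container.writeAt(offset, update);          // no copy of the untouched data
    }
}
{code}

With the in-place variant the cost of a random write is proportional to the update itself, not to the whole key, which is the main reason I'm interested in the "keep the container open" direction.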



> NFS support for Ozone
> ---------------------
>
>                 Key: HDDS-3001
>                 URL: https://issues.apache.org/jira/browse/HDDS-3001
>             Project: Hadoop Distributed Data Store
>          Issue Type: New Feature
>          Components: Ozone Filesystem
>    Affects Versions: 0.5.0
>            Reporter: Prashant Pogde
>            Assignee: Prashant Pogde
>            Priority: Major
>         Attachments: NFS Support for Ozone.pdf
>
>
> Provide NFS support for Ozone



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org