Posted to hdfs-dev@hadoop.apache.org by "Weiwei Yang (JIRA)" <ji...@apache.org> on 2017/06/02 10:22:04 UTC

[jira] [Resolved] (HDFS-11917) Why does a file smaller than one block size initially take a full block when using the HDFS NFS gateway

     [ https://issues.apache.org/jira/browse/HDFS-11917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang resolved HDFS-11917.
--------------------------------
    Resolution: Not A Problem
      Assignee: Weiwei Yang

> Why does a file smaller than one block size initially take a full block when using the HDFS NFS gateway
> -------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-11917
>                 URL: https://issues.apache.org/jira/browse/HDFS-11917
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: nfs
>    Affects Versions: 2.8.0
>            Reporter: BINGHUI WANG
>            Assignee: Weiwei Yang
>
> I used the Linux shell to put a file into HDFS through the HDFS NFS gateway. I found that a file smaller than one block (128M) still takes a whole block (128M) of HDFS storage when written this way, but after a few minutes the excess storage is released.
> e.g.: If I put a 60M file into HDFS through the HDFS NFS gateway, it takes one full block (128M) at first. After a few minutes the excess storage (68M) is released,
> and the file ends up using only 60M of HDFS storage.
> Why does this happen?
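> A minimal sketch of the reproduction from the shell, assuming the gateway export maps to the HDFS root and is mounted at /mnt/hdfs (both the mount point and file names are examples, not from the original report):
>
>   # Create a 60M test file and copy it in through the NFS mount
>   dd if=/dev/zero of=/tmp/sample.dat bs=1M count=60
>   cp /tmp/sample.dat /mnt/hdfs/sample.dat
>
>   # Checked right after the copy, usage is reported as a full 128M block
>   hdfs dfs -du -h /sample.dat
>
>   # Checked again a few minutes later, usage drops to the actual 60M
>   hdfs dfs -du -h /sample.dat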



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org