Posted to common-dev@hadoop.apache.org by "Hudson (JIRA)" <ji...@apache.org> on 2008/03/01 13:16:51 UTC

[jira] Commented: (HADOOP-2845) dfsadmin disk utilization report on Solaris is wrong

    [ https://issues.apache.org/jira/browse/HADOOP-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12574100#action_12574100 ] 

Hudson commented on HADOOP-2845:
--------------------------------

Integrated in Hadoop-trunk #416 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/416/])

> dfsadmin disk utilization report on Solaris is wrong
> ----------------------------------------------------
>
>                 Key: HADOOP-2845
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2845
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 0.16.0
>            Reporter: Martin Traverso
>            Assignee: Martin Traverso
>             Fix For: 0.17.0
>
>         Attachments: HADOOP-2845-1.patch, HADOOP-2845-2.patch, HADOOP-2845.patch
>
>
> dfsadmin reports 2x disk utilization on some platforms (Solaris, MacOS). The reason for this is that org.apache.hadoop.fs.DU relies on du's default block size when reporting sizes and assumes 1024-byte blocks. This works fine on Linux, but on Solaris and MacOS du defaults to 512-byte blocks when reporting disk usage.
> DU should use "du -sk" instead of "du -s" to force the command to report sizes in 1024-byte blocks.

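As a rough illustration of the fix described above (this is not the actual org.apache.hadoop.fs.DU source; the DuSketch class and getUsedBytes method are hypothetical names for this sketch), passing -k forces du to report in 1024-byte units on Linux, Solaris, and MacOS alike, so the parsed value no longer depends on the platform's default block size:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Hypothetical sketch, not the real Hadoop DU class.
    public class DuSketch {
        // Returns the disk usage of the given path in bytes.
        // "du -sk" reports in 1024-byte (KiB) units on every
        // platform, unlike plain "du -s", whose unit varies.
        public static long getUsedBytes(String path) throws Exception {
            Process p = Runtime.getRuntime()
                .exec(new String[] {"du", "-sk", path});
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line = in.readLine(); // e.g. "123456\t/data/dfs"
                if (line == null || p.waitFor() != 0) {
                    throw new RuntimeException("du failed for " + path);
                }
                long kilobytes = Long.parseLong(line.split("\\s+")[0]);
                return kilobytes * 1024L;    // KiB -> bytes
            }
        }
    }

With plain "du -s", the same parsing code would double-count on systems that report 512-byte blocks, which matches the 2x utilization the issue describes.
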
-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.