Posted to common-issues@hadoop.apache.org by "Dr. Martin Menzel (JIRA)" <ji...@apache.org> on 2009/12/29 10:40:29 UTC

[jira] Commented: (HADOOP-6319) Capacity reporting incorrect on Solaris

    [ https://issues.apache.org/jira/browse/HADOOP-6319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12795031#action_12795031 ] 

Dr. Martin Menzel commented on HADOOP-6319:
-------------------------------------------

I had the same problem on Solaris, and I did exactly what Allen mentioned. In more detail, here is what I did:

1) Create a ZFS filesystem in the global zone

zfs create rpool/srv/hadoop 
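
You can check that the filesystem was created with zfs list:

zfs list rpool/srv/hadoop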

2) Set the mountpoint to legacy and set a quota

zfs set mountpoint=legacy rpool/srv/hadoop

zfs set quota=50G rpool/srv/hadoop
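
To double-check both settings before handing the dataset to the zone:

zfs get mountpoint,quota rpool/srv/hadoop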

3) Add the dataset to the Hadoop zone

zonecfg -z <hadoopzone>
zonecfg:hadoopzone> add dataset
zonecfg:hadoopzone:dataset> set name=rpool/srv/hadoop
zonecfg:hadoopzone:dataset> end
zonecfg:hadoopzone> verify
zonecfg:hadoopzone> commit
zonecfg:hadoopzone> exit
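
Note: if the dataset is not visible inside the zone afterwards, the zone may need a reboot for the new configuration to take effect:

zoneadm -z <hadoopzone> reboot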

4) Log in to your Hadoop zone

zlogin hadoopzone

5) Set the mountpoint for the ZFS filesystem

zfs set mountpoint=/srv/hadoop rpool/srv/hadoop
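
Inside the zone, df should now show the quota as the filesystem size:

df -k /srv/hadoop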

Then Hadoop recognizes the 50G as its capacity. Remember to configure Hadoop so that it uses a data dir under /srv/hadoop.

In core-site.xml I set:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/srv/hadoop/tmp/hadoop-${user.name}</value>
  </property>
</configuration>
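
This works because dfs.data.dir defaults to ${hadoop.tmp.dir}/dfs/data, so the DataNode ends up storing its blocks under /srv/hadoop. Alternatively (just a sketch, I have not tried this variant; the exact path is up to you) you could point dfs.data.dir at the ZFS filesystem directly in hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/srv/hadoop/dfs/data</value>
  </property>
</configuration>

Afterwards you can verify that the 50G shows up as Configured Capacity with:

hadoop dfsadmin -report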

I hope this information helps other Solaris users.

Martin

> Capacity reporting incorrect on Solaris
> ---------------------------------------
>
>                 Key: HADOOP-6319
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6319
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 0.20.1
>            Reporter: Doug Judd
>         Attachments: solaris-hadoop.patch
>
>
> When trying to get Hadoop up and running on Solaris on a ZFS filesystem, I encountered a problem where the capacity reported was zero:
> Configured Capacity: 0 (0 KB)
> It looks like the problem is with the 'df' output:
> $ df -k /data/hadoop 
> Filesystem           1024-blocks        Used   Available Capacity  Mounted on
> /                              0     7186354    20490274    26%    /
> The following patch (applied to trunk) fixes the problem.  Though the real problem is with 'df', I suspect the patch is harmless enough to include?
> Index: src/java/org/apache/hadoop/fs/DF.java
> ===================================================================
> --- src/java/org/apache/hadoop/fs/DF.java	(revision 826471)
> +++ src/java/org/apache/hadoop/fs/DF.java	(working copy)
> @@ -181,7 +181,11 @@
>          this.percentUsed = Integer.parseInt(tokens.nextToken());
>          this.mount = tokens.nextToken();
>          break;
> -   }
> +    }
> +
> +    if (this.capacity == 0)
> +	this.capacity = this.used + this.available;
> +    
>    }
>  
>    public static void main(String[] args) throws Exception {

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.