Posted to hdfs-user@hadoop.apache.org by ta...@nttdata.co.jp on 2010/01/04 10:09:49 UTC

fuse-dfs

Hi,

I get the following error when trying to mount HDFS via fuse-dfs.

[fuse-dfs]$ ./fuse_dfs_wrapper.sh -d dfs://drbd-test-vm03:8020 /mnt/hdfs/
port=8020,server=drbd-test-vm03
fuse-dfs didn't recognize /mnt/hdfs,-2
[fuse-dfs]$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       9047928   5934252   2650076  70% /
/dev/xvda1              101086     13230     82637  14% /boot
tmpfs                  1048576         0   1048576   0% /dev/shm
fuse                   9043968         0   9043968   0% /mnt/hdfs
[fuse-dfs]$ ls -ltr /mnt/hdfs/
total 0
?--------- ? ? ? ?            ? t.class
[fuse-dfs]$ ls -ltr /mnt/hdfs/
ls: reading directory /mnt/hdfs/: Input/output error total 0

We use Red Hat Enterprise Linux 5 Update 2 with kernel-xen-2.6.18-92.1.17.0.2.el5, kernel-headers-2.6.18-92.1.17.0.2.el5,
kernel-xen-devel-2.6.18-92.1.17.0.2.el5,
hadoop-0.20.0, and fuse-2.7.4.

I am not sure what the reason for this error is.
Does anyone know what I am doing wrong, or what could be causing it?

Best regards,
Tadashi

RE: fuse-dfs

Posted by ta...@nttdata.co.jp.
Hi Eli,

> What kernel (distro/update and bitness) are you using? 

$ rpm -qa | grep kernel
kernel-headers-2.6.18-92.1.17.0.2.el5
kernel-xen-devel-2.6.18-92.1.17.0.2.el5
kernel-xen-2.6.18-92.1.17.0.2.el5

> What ant commands did you use to build fuse?

$ cd $HADOOP_HOME
$ ant compile-c++-libhdfs -Dlibhdfs=1 -Dcompile.c++=1
$ ln -s c++/Linux-amd64-64/lib/ build/libhdfs 
$ ant compile-contrib -Dlibhdfs=1 -Dfusedfs

> fuse-dfs works for me using hadoop-0.20.1+152 
> on both ubuntu and centos hosts.

I am using hadoop-0.20.1+152 on Enterprise Linux.

$cat /etc/redhat-release
Enterprise Linux Enterprise Linux Server release 5.2 (Carthage)

I am using hadoop-0.20.1+152 on VM that is operating on
Oracle VM Server2.1.5(Xen Base).

Best Regards,
Tadashi.
> -----Original Message-----
> From: Eli Collins [mailto:eli@cloudera.com]
> Sent: Thursday, January 14, 2010 2:47 PM
> To: hdfs-user@hadoop.apache.org
> Subject: Re: fuse-dfs
> 
> Hey Tadashi,
> 
> What kernel (distro/update and bitness) are you using? What ant
> commands did you use to build fuse?
> 
> fuse-dfs works for me using hadoop-0.20.1+152 on both ubuntu and centos hosts.
> 
> Thanks,
> Eli
> 
> On Wed, Jan 13, 2010 at 9:41 PM,  <ta...@nttdata.co.jp> wrote:
> > Hi Eli,
> >
> > Thank you for your reply.
> >
> >> Looks like there's a bug parsing the name. To confirm, what are the
> >> contents of your directory in hdfs? Is there a /I?
> > NO , there isn't.
> > $ ./bin/hadoop dfs -lsr /
> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11 /opt
> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11 /opt/hadoop
> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11
> /opt/hadoop/current
> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11
> /opt/hadoop/current/tmp
> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11
> /opt/hadoop/current/tmp/mapred
> > drwx-wx-wx   - root supergroup          0 2010-01-14 00:11
> /opt/hadoop/current/tmp/mapred/system
> > -rw-------   2 root supergroup          4 2010-01-14 00:11
> /opt/hadoop/current/tmp/mapred/system/jobtracker.info
> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:36 /user
> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:41 /user/root
> >
> >> IIRC there was asimilar bug fixed in 20.1, could you try that or cdh2?
> > I am using hadoop-0.20.1+152 that I downloaded from the following site.
> > http://archive.cloudera.com/redhat/cdh/testing/SRPMS/
> > hadoop-0.20-0.20.1+152-1.src.rpm
> >
> > However, the same error still occurs.
> >
> > Best Regards,
> > Tadashi.
> >> -----Original Message-----
> >> From: Eli Collins [mailto:eli@cloudera.com]
> >> Sent: Wednesday, January 13, 2010 4:00 PM
> >> To: hdfs-user@hadoop.apache.org
> >> Subject: Re: fuse-dfs
> >>
> >> Thanks for running w/ tracing enabled.
> >>
> >> > fuse_dfs TRACE - readdir /
> >> >   unique: 48, error: 0 (Success), outsize: 112
> >> > unique: 49, opcode: LOOKUP (1), nodeid: 1, insize: 42
> >> > LOOKUP /l
> >> > fuse_dfs TRACE - getattr /l
> >> >   unique: 49, error: -2 (No such file or directory), outsize: 16
> >>
> >> Looks like there's a bug parsing the name. To confirm, what are the
> >> contents of your directory in hdfs? Is there a /I? IIRC there was a
> >> similar bug fixed in 20.1, could you try that or cdh2?
> >>
> >> Thanks,
> >> Eli
> >

RE: fuse-dfs

Posted by ta...@nttdata.co.jp.
Hi Eli,

Thank you for your reply.

From your explanation, I understand that this is not currently an issue.

Best regards,
Tadashi
> -----Original Message-----
> From: Eli Collins [mailto:eli@cloudera.com]
> Sent: Monday, January 18, 2010 2:17 PM
> To: hdfs-user@hadoop.apache.org
> Subject: Re: fuse-dfs
> 
> > Is the thing that fuse-dfs doesn't implement getxattr a problem?
> 
> Not really. Extended attributes aren't used widely in userspace
> programs. Most programs that can use them often support them not being
> available as not all file systems have extended attributes. There may
> be a use for them in fuse-dfs in the future but currently it's not an
> issue that they're not supported.
> 
> Thanks,
> Eli

Re: fuse-dfs

Posted by Eli Collins <el...@cloudera.com>.
> Is the thing that fuse-dfs doesn't implement getxattr a problem?

Not really. Extended attributes aren't widely used by userspace
programs, and most programs that do use them can handle their absence,
since not all file systems support extended attributes. There may be a
use for them in fuse-dfs in the future, but for now it's not an issue
that they're not supported.

Thanks,
Eli

RE: fuse-dfs

Posted by ta...@nttdata.co.jp.
Hi Eli,

Thank you for your reply.

I understand that this is a separate issue.

Is the thing that fuse-dfs doesn't implement getxattr a problem?

Best regards,
Tadashi

> -----Original Message-----
> From: Eli Collins [mailto:eli@cloudera.com]
> Sent: Saturday, January 16, 2010 3:55 PM
> To: hdfs-user@hadoop.apache.org
> Subject: Re: fuse-dfs
> 
> Hey Tadashi,
> 
> That's a separate issue (fuse-dfs doesn't implement getxattr but ls
> should fall back on using getattr which it does support).
> 
> Thanks,
> Eli
> 
> On Fri, Jan 15, 2010 at 2:04 AM,  <ta...@nttdata.co.jp> wrote:
> > Hi Eli,
> >
> > It is further information.
> >
> > When I executed "ls -ltr /mnt/hdfs" several times in Term B,
> >
> > It became the following different result in Term A:
> >
> > unique: 3, opcode: GETXATTR (22), nodeid: 1, insize: 72
> >   unique: 3, error: -38 (Function not implemented), outsize: 16
> >
> > It would be greatly appreciated if this information is useful for your
> analysis.
> >
> > Best regards,
> > Tadashi
> >
> >> -----Original Message-----
> >> From: Eli Collins [mailto:eli@cloudera.com]
> >> Sent: Thursday, January 14, 2010 2:47 PM
> >> To: hdfs-user@hadoop.apache.org
> >> Subject: Re: fuse-dfs
> >>
> >> Hey Tadashi,
> >>
> >> What kernel (distro/update and bitness) are you using? What ant
> >> commands did you use to build fuse?
> >>
> >> fuse-dfs works for me using hadoop-0.20.1+152 on both ubuntu and centos
> hosts.
> >>
> >> Thanks,
> >> Eli
> >>
> >> On Wed, Jan 13, 2010 at 9:41 PM,  <ta...@nttdata.co.jp> wrote:
> >> > Hi Eli,
> >> >
> >> > Thank you for your reply.
> >> >
> >> >> Looks like there's a bug parsing the name. To confirm, what are the
> >> >> contents of your directory in hdfs? Is there a /I?
> >> > NO , there isn't.
> >> > $ ./bin/hadoop dfs -lsr /
> >> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11 /opt
> >> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11 /opt/hadoop
> >> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11
> >> /opt/hadoop/current
> >> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11
> >> /opt/hadoop/current/tmp
> >> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11
> >> /opt/hadoop/current/tmp/mapred
> >> > drwx-wx-wx   - root supergroup          0 2010-01-14 00:11
> >> /opt/hadoop/current/tmp/mapred/system
> >> > -rw-------   2 root supergroup          4 2010-01-14 00:11
> >> /opt/hadoop/current/tmp/mapred/system/jobtracker.info
> >> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:36 /user
> >> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:41 /user/root
> >> >
> >> >> IIRC there was asimilar bug fixed in 20.1, could you try that or cdh2?
> >> > I am using hadoop-0.20.1+152 that I downloaded from the following site.
> >> > http://archive.cloudera.com/redhat/cdh/testing/SRPMS/
> >> > hadoop-0.20-0.20.1+152-1.src.rpm
> >> >
> >> > However, the same error still occurs.
> >> >
> >> > Best Regards,
> >> > Tadashi.
> >> >> -----Original Message-----
> >> >> From: Eli Collins [mailto:eli@cloudera.com]
> >> >> Sent: Wednesday, January 13, 2010 4:00 PM
> >> >> To: hdfs-user@hadoop.apache.org
> >> >> Subject: Re: fuse-dfs
> >> >>
> >> >> Thanks for running w/ tracing enabled.
> >> >>
> >> >> > fuse_dfs TRACE - readdir /
> >> >> >   unique: 48, error: 0 (Success), outsize: 112
> >> >> > unique: 49, opcode: LOOKUP (1), nodeid: 1, insize: 42
> >> >> > LOOKUP /l
> >> >> > fuse_dfs TRACE - getattr /l
> >> >> >   unique: 49, error: -2 (No such file or directory), outsize: 16
> >> >>
> >> >> Looks like there's a bug parsing the name. To confirm, what are the
> >> >> contents of your directory in hdfs? Is there a /I? IIRC there was a
> >> >> similar bug fixed in 20.1, could you try that or cdh2?
> >> >>
> >> >> Thanks,
> >> >> Eli
> >> >
> >

Re: fuse-dfs

Posted by Eli Collins <el...@cloudera.com>.
Hey Tadashi,

That's a separate issue (fuse-dfs doesn't implement getxattr but ls
should fall back on using getattr which it does support).

Thanks,
Eli

On Fri, Jan 15, 2010 at 2:04 AM,  <ta...@nttdata.co.jp> wrote:
> Hi Eli,
>
> It is further information.
>
> When I executed "ls -ltr /mnt/hdfs" several times in Term B,
>
> It became the following different result in Term A:
>
> unique: 3, opcode: GETXATTR (22), nodeid: 1, insize: 72
>   unique: 3, error: -38 (Function not implemented), outsize: 16
>
> It would be greatly appreciated if this information is useful for your analysis.
>
> Best regards,
> Tadashi
>
>> -----Original Message-----
>> From: Eli Collins [mailto:eli@cloudera.com]
>> Sent: Thursday, January 14, 2010 2:47 PM
>> To: hdfs-user@hadoop.apache.org
>> Subject: Re: fuse-dfs
>>
>> Hey Tadashi,
>>
>> What kernel (distro/update and bitness) are you using? What ant
>> commands did you use to build fuse?
>>
>> fuse-dfs works for me using hadoop-0.20.1+152 on both ubuntu and centos hosts.
>>
>> Thanks,
>> Eli
>>
>> On Wed, Jan 13, 2010 at 9:41 PM,  <ta...@nttdata.co.jp> wrote:
>> > Hi Eli,
>> >
>> > Thank you for your reply.
>> >
>> >> Looks like there's a bug parsing the name. To confirm, what are the
>> >> contents of your directory in hdfs? Is there a /I?
>> > NO , there isn't.
>> > $ ./bin/hadoop dfs -lsr /
>> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11 /opt
>> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11 /opt/hadoop
>> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11
>> /opt/hadoop/current
>> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11
>> /opt/hadoop/current/tmp
>> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11
>> /opt/hadoop/current/tmp/mapred
>> > drwx-wx-wx   - root supergroup          0 2010-01-14 00:11
>> /opt/hadoop/current/tmp/mapred/system
>> > -rw-------   2 root supergroup          4 2010-01-14 00:11
>> /opt/hadoop/current/tmp/mapred/system/jobtracker.info
>> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:36 /user
>> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:41 /user/root
>> >
>> >> IIRC there was asimilar bug fixed in 20.1, could you try that or cdh2?
>> > I am using hadoop-0.20.1+152 that I downloaded from the following site.
>> > http://archive.cloudera.com/redhat/cdh/testing/SRPMS/
>> > hadoop-0.20-0.20.1+152-1.src.rpm
>> >
>> > However, the same error still occurs.
>> >
>> > Best Regards,
>> > Tadashi.
>> >> -----Original Message-----
>> >> From: Eli Collins [mailto:eli@cloudera.com]
>> >> Sent: Wednesday, January 13, 2010 4:00 PM
>> >> To: hdfs-user@hadoop.apache.org
>> >> Subject: Re: fuse-dfs
>> >>
>> >> Thanks for running w/ tracing enabled.
>> >>
>> >> > fuse_dfs TRACE - readdir /
>> >> >   unique: 48, error: 0 (Success), outsize: 112
>> >> > unique: 49, opcode: LOOKUP (1), nodeid: 1, insize: 42
>> >> > LOOKUP /l
>> >> > fuse_dfs TRACE - getattr /l
>> >> >   unique: 49, error: -2 (No such file or directory), outsize: 16
>> >>
>> >> Looks like there's a bug parsing the name. To confirm, what are the
>> >> contents of your directory in hdfs? Is there a /I? IIRC there was a
>> >> similar bug fixed in 20.1, could you try that or cdh2?
>> >>
>> >> Thanks,
>> >> Eli
>> >
>

RE: fuse-dfs

Posted by ta...@nttdata.co.jp.
Hi Eli,

Here is some further information.

When I executed "ls -ltr /mnt/hdfs" several times in Term B,

the following different result appeared in Term A:

unique: 3, opcode: GETXATTR (22), nodeid: 1, insize: 72
   unique: 3, error: -38 (Function not implemented), outsize: 16

I hope this information is useful for your analysis.

Best regards,
Tadashi

> -----Original Message-----
> From: Eli Collins [mailto:eli@cloudera.com]
> Sent: Thursday, January 14, 2010 2:47 PM
> To: hdfs-user@hadoop.apache.org
> Subject: Re: fuse-dfs
> 
> Hey Tadashi,
> 
> What kernel (distro/update and bitness) are you using? What ant
> commands did you use to build fuse?
> 
> fuse-dfs works for me using hadoop-0.20.1+152 on both ubuntu and centos hosts.
> 
> Thanks,
> Eli
> 
> On Wed, Jan 13, 2010 at 9:41 PM,  <ta...@nttdata.co.jp> wrote:
> > Hi Eli,
> >
> > Thank you for your reply.
> >
> >> Looks like there's a bug parsing the name. To confirm, what are the
> >> contents of your directory in hdfs? Is there a /I?
> > NO , there isn't.
> > $ ./bin/hadoop dfs -lsr /
> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11 /opt
> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11 /opt/hadoop
> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11
> /opt/hadoop/current
> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11
> /opt/hadoop/current/tmp
> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:11
> /opt/hadoop/current/tmp/mapred
> > drwx-wx-wx   - root supergroup          0 2010-01-14 00:11
> /opt/hadoop/current/tmp/mapred/system
> > -rw-------   2 root supergroup          4 2010-01-14 00:11
> /opt/hadoop/current/tmp/mapred/system/jobtracker.info
> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:36 /user
> > drwxr-xr-x   - root supergroup          0 2010-01-14 00:41 /user/root
> >
> >> IIRC there was asimilar bug fixed in 20.1, could you try that or cdh2?
> > I am using hadoop-0.20.1+152 that I downloaded from the following site.
> > http://archive.cloudera.com/redhat/cdh/testing/SRPMS/
> > hadoop-0.20-0.20.1+152-1.src.rpm
> >
> > However, the same error still occurs.
> >
> > Best Regards,
> > Tadashi.
> >> -----Original Message-----
> >> From: Eli Collins [mailto:eli@cloudera.com]
> >> Sent: Wednesday, January 13, 2010 4:00 PM
> >> To: hdfs-user@hadoop.apache.org
> >> Subject: Re: fuse-dfs
> >>
> >> Thanks for running w/ tracing enabled.
> >>
> >> > fuse_dfs TRACE - readdir /
> >> >   unique: 48, error: 0 (Success), outsize: 112
> >> > unique: 49, opcode: LOOKUP (1), nodeid: 1, insize: 42
> >> > LOOKUP /l
> >> > fuse_dfs TRACE - getattr /l
> >> >   unique: 49, error: -2 (No such file or directory), outsize: 16
> >>
> >> Looks like there's a bug parsing the name. To confirm, what are the
> >> contents of your directory in hdfs? Is there a /I? IIRC there was a
> >> similar bug fixed in 20.1, could you try that or cdh2?
> >>
> >> Thanks,
> >> Eli
> >

Re: fuse-dfs

Posted by Eli Collins <el...@cloudera.com>.
Hey Tadashi,

What kernel (distro/update and bitness) are you using? What ant
commands did you use to build fuse?

fuse-dfs works for me using hadoop-0.20.1+152 on both ubuntu and centos hosts.

Thanks,
Eli

On Wed, Jan 13, 2010 at 9:41 PM,  <ta...@nttdata.co.jp> wrote:
> Hi Eli,
>
> Thank you for your reply.
>
>> Looks like there's a bug parsing the name. To confirm, what are the
>> contents of your directory in hdfs? Is there a /I?
> NO , there isn't.
> $ ./bin/hadoop dfs -lsr /
> drwxr-xr-x   - root supergroup          0 2010-01-14 00:11 /opt
> drwxr-xr-x   - root supergroup          0 2010-01-14 00:11 /opt/hadoop
> drwxr-xr-x   - root supergroup          0 2010-01-14 00:11 /opt/hadoop/current
> drwxr-xr-x   - root supergroup          0 2010-01-14 00:11 /opt/hadoop/current/tmp
> drwxr-xr-x   - root supergroup          0 2010-01-14 00:11 /opt/hadoop/current/tmp/mapred
> drwx-wx-wx   - root supergroup          0 2010-01-14 00:11 /opt/hadoop/current/tmp/mapred/system
> -rw-------   2 root supergroup          4 2010-01-14 00:11 /opt/hadoop/current/tmp/mapred/system/jobtracker.info
> drwxr-xr-x   - root supergroup          0 2010-01-14 00:36 /user
> drwxr-xr-x   - root supergroup          0 2010-01-14 00:41 /user/root
>
>> IIRC there was asimilar bug fixed in 20.1, could you try that or cdh2?
> I am using hadoop-0.20.1+152 that I downloaded from the following site.
> http://archive.cloudera.com/redhat/cdh/testing/SRPMS/
> hadoop-0.20-0.20.1+152-1.src.rpm
>
> However, the same error still occurs.
>
> Best Regards,
> Tadashi.
>> -----Original Message-----
>> From: Eli Collins [mailto:eli@cloudera.com]
>> Sent: Wednesday, January 13, 2010 4:00 PM
>> To: hdfs-user@hadoop.apache.org
>> Subject: Re: fuse-dfs
>>
>> Thanks for running w/ tracing enabled.
>>
>> > fuse_dfs TRACE - readdir /
>> >   unique: 48, error: 0 (Success), outsize: 112
>> > unique: 49, opcode: LOOKUP (1), nodeid: 1, insize: 42
>> > LOOKUP /l
>> > fuse_dfs TRACE - getattr /l
>> >   unique: 49, error: -2 (No such file or directory), outsize: 16
>>
>> Looks like there's a bug parsing the name. To confirm, what are the
>> contents of your directory in hdfs? Is there a /I? IIRC there was a
>> similar bug fixed in 20.1, could you try that or cdh2?
>>
>> Thanks,
>> Eli
>

RE: fuse-dfs

Posted by ta...@nttdata.co.jp.
Hi Eli,

Thank you for your reply.

> Looks like there's a bug parsing the name. To confirm, what are the
> contents of your directory in hdfs? Is there a /I? 
No, there isn't.
$ ./bin/hadoop dfs -lsr /
drwxr-xr-x   - root supergroup          0 2010-01-14 00:11 /opt
drwxr-xr-x   - root supergroup          0 2010-01-14 00:11 /opt/hadoop
drwxr-xr-x   - root supergroup          0 2010-01-14 00:11 /opt/hadoop/current
drwxr-xr-x   - root supergroup          0 2010-01-14 00:11 /opt/hadoop/current/tmp
drwxr-xr-x   - root supergroup          0 2010-01-14 00:11 /opt/hadoop/current/tmp/mapred
drwx-wx-wx   - root supergroup          0 2010-01-14 00:11 /opt/hadoop/current/tmp/mapred/system
-rw-------   2 root supergroup          4 2010-01-14 00:11 /opt/hadoop/current/tmp/mapred/system/jobtracker.info
drwxr-xr-x   - root supergroup          0 2010-01-14 00:36 /user
drwxr-xr-x   - root supergroup          0 2010-01-14 00:41 /user/root

> IIRC there was asimilar bug fixed in 20.1, could you try that or cdh2?
I am using hadoop-0.20.1+152 that I downloaded from the following site.
http://archive.cloudera.com/redhat/cdh/testing/SRPMS/
hadoop-0.20-0.20.1+152-1.src.rpm

However, the same error still occurs.

Best Regards,
Tadashi.
> -----Original Message-----
> From: Eli Collins [mailto:eli@cloudera.com]
> Sent: Wednesday, January 13, 2010 4:00 PM
> To: hdfs-user@hadoop.apache.org
> Subject: Re: fuse-dfs
> 
> Thanks for running w/ tracing enabled.
> 
> > fuse_dfs TRACE - readdir /
> >   unique: 48, error: 0 (Success), outsize: 112
> > unique: 49, opcode: LOOKUP (1), nodeid: 1, insize: 42
> > LOOKUP /l
> > fuse_dfs TRACE - getattr /l
> >   unique: 49, error: -2 (No such file or directory), outsize: 16
> 
> Looks like there's a bug parsing the name. To confirm, what are the
> contents of your directory in hdfs? Is there a /I? IIRC there was a
> similar bug fixed in 20.1, could you try that or cdh2?
> 
> Thanks,
> Eli

Re: fuse-dfs

Posted by Eli Collins <el...@cloudera.com>.
Thanks for running w/ tracing enabled.

> fuse_dfs TRACE - readdir /
>   unique: 48, error: 0 (Success), outsize: 112
> unique: 49, opcode: LOOKUP (1), nodeid: 1, insize: 42
> LOOKUP /l
> fuse_dfs TRACE - getattr /l
>   unique: 49, error: -2 (No such file or directory), outsize: 16

Looks like there's a bug parsing the name. To confirm, what are the
contents of your directory in hdfs? Is there a /I? IIRC there was a
similar bug fixed in 20.1, could you try that or cdh2?

Thanks,
Eli

RE: fuse-dfs

Posted by ta...@nttdata.co.jp.
Hi Eli,

Thank you for your reply and advice.

I uncommented "// #define DOTRACE" in fuse_dfs.h.

$ cd $HADOOP_HOME/src/contrib/fuse-dfs/src
$ vi fuse_dfs.h
     ...
     53 #define DOTRACE        <- changed from: //#define DOTRACE
     54 #ifdef DOTRACE
     ...
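
The same edit can be scripted instead of done by hand in vi; a small sketch using sed, run here on a two-line excerpt rather than the real file (which lives under src/contrib/fuse-dfs/src):

```shell
# Create a stand-in excerpt of fuse_dfs.h containing the commented-out macro.
cat > fuse_dfs_excerpt.h <<'EOF'
//#define DOTRACE
#ifdef DOTRACE
EOF

# Uncomment the DOTRACE definition in place.
sed -i 's|^//#define DOTRACE|#define DOTRACE|' fuse_dfs_excerpt.h

# Show the result: the macro is now defined.
head -n 1 fuse_dfs_excerpt.h
```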

Then, I re-compiled that.

$ cd $HADOOP_HOME
$ ant compile-c++-libhdfs -Dlibhdfs=1 -Dcompile.c++=1
$ ln -s c++/Linux-amd64-64/lib/ build/libhdfs
$ ant compile-contrib -Dlibhdfs=1 -Dfusedfs=1

However, the result was the same.
I'm using Hadoop 0.20.1.

■Term A
$ ./fuse_dfs_wrapper.sh dfs://drbd-test-vm03:8020 /mnt/hdfs/ -d
port=8020,server=drbd-test-vm03
fuse-dfs didn't recognize /mnt/hdfs/,-2
fuse-dfs ignoring option -d
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
INIT: 7.8
flags=0x00000003
max_readahead=0x00020000
   INIT: 7.8
   flags=0x00000001
   max_readahead=0x00020000
   max_write=0x00020000
   unique: 1, error: 0 (Success), outsize: 40
		...
■Term B
$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       9047928   6148328   2436000  72% /
/dev/xvda1              101086     13230     82637  14% /boot
tmpfs                  1048576         0   1048576   0% /dev/shm
/dev/drbd0             4925336    141244   4533892   4% /drbd
$ ls -ltr /mnt/hdfs
ls: reading directory /mnt/hdfs: Input/output error
total 0

I executed "ls -ltr /mnt/hdfs" several times until a different result came out.
$ ls -ltr /mnt/hdfs
total 0
?--------- ? ? ? ?            ? l
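
Before digging further into the trace, it can help to confirm that the mount is actually registered with the kernel; a hypothetical helper (not from the thread; the mount point path is an assumption) that scans /proc/mounts:

```shell
# Hypothetical helper: report whether a given path is currently a fuse mount.
is_fuse_mounted() {
  # /proc/mounts fields: device, mountpoint, fstype, options, dump, pass
  awk -v mp="$1" '$2 == mp && $3 ~ /^fuse/ {found=1} END {exit !found}' /proc/mounts
}

if is_fuse_mounted /mnt/hdfs; then
  echo "mounted"
else
  echo "not mounted"
fi
```

If the path shows up in df (as above) but the filesystem misbehaves, the mount itself is registered and the problem lies in fuse-dfs's request handling rather than in the mount step.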

■Results
When I executed "df" in Term B,
the following result appeared in Term A:

unique: 2, opcode: STATFS (17), nodeid: 1, insize: 40
fuse_dfs TRACE - statfs /
   unique: 2, error: 0 (Success), outsize: 96

When I executed "ls -ltr /mnt/hdfs" in Term B,
the following result appeared in Term A:

unique: 3, opcode: GETATTR (3), nodeid: 1, insize: 40
fuse_dfs TRACE - getattr /
   unique: 3, error: 0 (Success), outsize: 112
unique: 4, opcode: GETXATTR (22), nodeid: 1, insize: 72
   unique: 4, error: -38 (Function not implemented), outsize: 16
unique: 5, opcode: GETATTR (3), nodeid: 1, insize: 40
fuse_dfs TRACE - getattr /
   unique: 5, error: 0 (Success), outsize: 112
unique: 6, opcode: OPENDIR (27), nodeid: 1, insize: 48
   unique: 6, error: 0 (Success), outsize: 32
unique: 7, opcode: GETATTR (3), nodeid: 1, insize: 40
fuse_dfs TRACE - getattr /
   unique: 7, error: 0 (Success), outsize: 112
unique: 8, opcode: READDIR (28), nodeid: 1, insize: 64
fuse_dfs TRACE - readdir /
   unique: 8, error: 0 (Success), outsize: 104
unique: 9, opcode: RELEASEDIR (29), nodeid: 1, insize: 64
   unique: 9, error: 0 (Success), outsize: 16

When I executed "ls -ltr /mnt/hdfs" several times in Term B,
the following different result appeared in Term A:

unique: 44, opcode: GETATTR (3), nodeid: 1, insize: 40
fuse_dfs TRACE - getattr /
   unique: 44, error: 0 (Success), outsize: 112
unique: 45, opcode: GETATTR (3), nodeid: 1, insize: 40
fuse_dfs TRACE - getattr /
   unique: 45, error: 0 (Success), outsize: 112
unique: 46, opcode: OPENDIR (27), nodeid: 1, insize: 48
   unique: 46, error: 0 (Success), outsize: 32
unique: 47, opcode: GETATTR (3), nodeid: 1, insize: 40
fuse_dfs TRACE - getattr /
   unique: 47, error: 0 (Success), outsize: 112
unique: 48, opcode: READDIR (28), nodeid: 1, insize: 64
fuse_dfs TRACE - readdir /
   unique: 48, error: 0 (Success), outsize: 112
unique: 49, opcode: LOOKUP (1), nodeid: 1, insize: 42
LOOKUP /l
fuse_dfs TRACE - getattr /l
   unique: 49, error: -2 (No such file or directory), outsize: 16
unique: 50, opcode: READDIR (28), nodeid: 1, insize: 64
   unique: 50, error: 0 (Success), outsize: 16
unique: 51, opcode: RELEASEDIR (29), nodeid: 1, insize: 64
   unique: 51, error: 0 (Success), outsize: 16
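
Long traces like the one above are easier to scan if the successful replies are filtered out first; a small sketch over a saved copy of the trace (the file name is hypothetical, standing in for redirecting the wrapper's debug output, e.g. `./fuse_dfs_wrapper.sh ... -d 2> trace.log`):

```shell
# Save a fragment of the trace to a file (stand-in for the real capture).
cat > trace.log <<'EOF'
fuse_dfs TRACE - readdir /
   unique: 48, error: 0 (Success), outsize: 112
LOOKUP /l
fuse_dfs TRACE - getattr /l
   unique: 49, error: -2 (No such file or directory), outsize: 16
   unique: 3, error: -38 (Function not implemented), outsize: 16
EOF

# Keep only replies whose error code is negative, i.e. actual failures.
grep -E 'error: -[0-9]+' trace.log
```

On the trace in this thread, that leaves just the failing getattr on /l and the unimplemented getxattr reply.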

Is there anything else I can do?

Best Regards,
Tadashi.

> -----Original Message-----
> From: Eli Collins [mailto:eli@cloudera.com]
> Sent: Wednesday, January 13, 2010 5:20 AM
> To: hdfs-user@hadoop.apache.org
> Subject: Re: fuse-dfs
> 
> Hey Tadashi,
> 
> Think the next step is to uncomment "// #define DOTRACE" in
> fuse_dfs.h, re-compile and see what output that yields. I'd also be
> curious if you see the issue using 20.1.
> 
> Thanks,
> Eli
> 
> 2010/1/11  <ta...@nttdata.co.jp>:
> > Hi Eli,
> >
> > Thank you for your reply,
> >
> >> Is this the same exact hostname (drbd-test-vm03:8020) you use for
> >> fs.default.name in hadoop-site.xml/core-site.xml? They need to match
> >> up.
> >
> > I use the same exact hostname(drbd-test-vm03).
> >
> > Core-site.xml
> > <property>
> >  <name>fs.default.name</name>
> >  <value>hdfs://drbd-test-vm03/</value>
> > </property>
> >
> > Are there anything else I should check?
> >
> > Best regards,
> > Tadashi
> >> -----Original Message-----
> >> From: Eli Collins [mailto:eli@cloudera.com]
> >> Sent: Sunday, January 10, 2010 6:03 PM
> >> To: hdfs-user@hadoop.apache.org
> >> Subject: Re: fuse-dfs
> >>
> >> > I executed the following command with term A:
> >> > ./fuse_dfs_wrapper.sh dfs://drbd-test-vm03:8020 /mnt/hdfs -d
> >>
> >> Is this the same exact hostname (drbd-test-vm03:8020) you use for
> >> fs.default.name in hadoop-site.xml/core-site.xml? They need to match
> >> up.
> >>
> >> Thanks,
> >> Eli
> >

Re: fuse-dfs

Posted by Eli Collins <el...@cloudera.com>.
Hey Tadashi,

Think the next step is to uncomment "// #define DOTRACE" in
fuse_dfs.h, re-compile and see what output that yields. I'd also be
curious if you see the issue using 20.1.

Thanks,
Eli

2010/1/11  <ta...@nttdata.co.jp>:
> Hi Eli,
>
> Thank you for your reply,
>
>> Is this the same exact hostname (drbd-test-vm03:8020) you use for
>> fs.default.name in hadoop-site.xml/core-site.xml? They need to match
>> up.
>
> I use the same exact hostname(drbd-test-vm03).
>
> Core-site.xml
> <property>
>  <name>fs.default.name</name>
>  <value>hdfs://drbd-test-vm03/</value>
> </property>
>
> Are there anything else I should check?
>
> Best regards,
> Tadashi
>> -----Original Message-----
>> From: Eli Collins [mailto:eli@cloudera.com]
>> Sent: Sunday, January 10, 2010 6:03 PM
>> To: hdfs-user@hadoop.apache.org
>> Subject: Re: fuse-dfs
>>
>> > I executed the following command with term A:
>> > ./fuse_dfs_wrapper.sh dfs://drbd-test-vm03:8020 /mnt/hdfs -d
>>
>> Is this the same exact hostname (drbd-test-vm03:8020) you use for
>> fs.default.name in hadoop-site.xml/core-site.xml? They need to match
>> up.
>>
>> Thanks,
>> Eli
>

RE: fuse-dfs

Posted by ta...@nttdata.co.jp.
Hi Eli,

Thank you for your reply,

> Is this the same exact hostname (drbd-test-vm03:8020) you use for
> fs.default.name in hadoop-site.xml/core-site.xml? They need to match
> up.

I use the same exact hostname(drbd-test-vm03).

core-site.xml
<property>
 <name>fs.default.name</name>
 <value>hdfs://drbd-test-vm03/</value>
</property>
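
For reference, since the mount command specifies the port explicitly (dfs://drbd-test-vm03:8020), it may be worth making the port explicit in fs.default.name as well, so that both sides resolve to exactly the same authority; a sketch using the hostname and port from this thread (8020 is the default NameNode IPC port):

```xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://drbd-test-vm03:8020/</value>
</property>
```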

Is there anything else I should check?

Best regards,
Tadashi
> -----Original Message-----
> From: Eli Collins [mailto:eli@cloudera.com]
> Sent: Sunday, January 10, 2010 6:03 PM
> To: hdfs-user@hadoop.apache.org
> Subject: Re: fuse-dfs
> 
> > I executed the following command with term A:
> > ./fuse_dfs_wrapper.sh dfs://drbd-test-vm03:8020 /mnt/hdfs -d
> 
> Is this the same exact hostname (drbd-test-vm03:8020) you use for
> fs.default.name in hadoop-site.xml/core-site.xml? They need to match
> up.
> 
> Thanks,
> Eli

Re: fuse-dfs

Posted by Eli Collins <el...@cloudera.com>.
> I executed the following command with term A:
> ./fuse_dfs_wrapper.sh dfs://drbd-test-vm03:8020 /mnt/hdfs -d

Is this the same exact hostname (drbd-test-vm03:8020) you use for
fs.default.name in hadoop-site.xml/core-site.xml? They need to match
up.

Thanks,
Eli

RE: fuse-dfs

Posted by ta...@nttdata.co.jp.
Hi Eli,

Thank you for your reply.

The version of Hadoop that I am using is 0.20.0.

I executed the following command in Term A:
./fuse_dfs_wrapper.sh dfs://drbd-test-vm03:8020 /mnt/hdfs -d

■Term A
[root@host03 fuse-dfs]# ./fuse_dfs_wrapper.sh dfs://drbd-test-vm03:8020 /mnt/hdfs -d
port=8020,server=drbd-test-vm03
fuse-dfs didn't recognize /mnt/hdfs,-2
fuse-dfs ignoring option -d
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
INIT: 7.8
flags=0x00000003
max_readahead=0x00020000
   INIT: 7.8
   flags=0x00000001
   max_readahead=0x00020000
   max_write=0x00020000
   unique: 1, error: 0 (Success), outsize: 40
unique: 2, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 2, error: 0 (Success), outsize: 112
unique: 3, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 3, error: 0 (Success), outsize: 112
unique: 4, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 4, error: 0 (Success), outsize: 112
unique: 5, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 5, error: 0 (Success), outsize: 112
unique: 6, opcode: OPENDIR (27), nodeid: 1, insize: 48
   unique: 6, error: 0 (Success), outsize: 32
unique: 7, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 7, error: 0 (Success), outsize: 112
unique: 8, opcode: READDIR (28), nodeid: 1, insize: 64
   unique: 8, error: 0 (Success), outsize: 120
unique: 9, opcode: RELEASEDIR (29), nodeid: 1, insize: 64
   unique: 9, error: 0 (Success), outsize: 16
unique: 10, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 10, error: 0 (Success), outsize: 112
unique: 11, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 11, error: 0 (Success), outsize: 112
unique: 12, opcode: OPENDIR (27), nodeid: 1, insize: 48
   unique: 12, error: 0 (Success), outsize: 32
unique: 13, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 13, error: 0 (Success), outsize: 112
unique: 14, opcode: READDIR (28), nodeid: 1, insize: 64
   unique: 14, error: 0 (Success), outsize: 104
unique: 15, opcode: RELEASEDIR (29), nodeid: 1, insize: 64
   unique: 15, error: 0 (Success), outsize: 16
■Term B
[root@host03 fuse-dfs]# ls /mnt/hdfs/
ls: reading directory /mnt/hdfs/: Input/output error



When I executed the ls command in Term B,
the following result was displayed in Term A.

unique: 10, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 10, error: 0 (Success), outsize: 112
unique: 11, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 11, error: 0 (Success), outsize: 112
unique: 12, opcode: OPENDIR (27), nodeid: 1, insize: 48
   unique: 12, error: 0 (Success), outsize: 32
unique: 13, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 13, error: 0 (Success), outsize: 112
unique: 14, opcode: READDIR (28), nodeid: 1, insize: 64
   unique: 14, error: 0 (Success), outsize: 104
unique: 15, opcode: RELEASEDIR (29), nodeid: 1, insize: 64
   unique: 15, error: 0 (Success), outsize: 16

I couldn't determine the cause, because no error appeared in the trace
even when it was started in debug mode.

Is there anything else I should check?

Best regards,
Tadashi

> -----Original Message-----
> From: Eli Collins [mailto:eli@cloudera.com]
> Sent: Tuesday, January 05, 2010 5:58 PM
> To: hdfs-user@hadoop.apache.org
> Subject: Re: fuse-dfs
> 
> Hey Tadashi,
> 
> What version of hadoop are you using?  What is the debug output if you
> just execute the following in one term and the ls in the other?
> 
> ./fuse_dfs_wrapper.sh dfs://drbd-test-vm03:8020 /mnt/hdfs -d
> 
> Thanks,
> Eli
> 

Re: fuse-dfs

Posted by Eli Collins <el...@cloudera.com>.
Hey Tadashi,

What version of hadoop are you using?  What is the debug output if you
just execute the following in one term and the ls in the other?

./fuse_dfs_wrapper.sh dfs://drbd-test-vm03:8020 /mnt/hdfs -d

Thanks,
Eli


Re: fuse-dfs

Posted by Eli Collins <el...@cloudera.com>.
For those following along, this issue turned out to be HDFS-961. I
uploaded a patch to the jira.

https://issues.apache.org/jira/browse/HDFS-961

Thanks,
Eli
