Posted to hdfs-user@hadoop.apache.org by Klaus Nagel <da...@gibtsdochgar.net> on 2010/01/13 05:23:11 UTC

fuse_dfs dfs problem

Hello, I am using Hadoop 0.20.1 and have run into a small problem with it...
I hope someone can help...

I have a 3-node setup and have set dfs.replication and dfs.replication.max
to 1 in hdfs-site.xml.
That works fine when putting a file into the Hadoop filesystem
(e.g. ./hadoop fs -put ~/debian-503-i386-businesscard.iso abc.iso)
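For reference, the relevant part of hdfs-site.xml would look roughly like this (a minimal sketch based on the property names mentioned above, with the values described in this post):

```xml
<configuration>
  <!-- default replication requested for newly created files -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <!-- hard upper bound the namenode will accept -->
  <property>
    <name>dfs.replication.max</name>
    <value>1</value>
  </property>
</configuration>
```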

When I try the same thing through fuse_dfs, I get the following error
message from the fuse_dfs_wrapper.sh script:

LOOKUP /temp/test.test
   unique: 21, error: -2 (No such file or directory), outsize: 16
unique: 22, opcode: CREATE (35), nodeid: 7, insize: 58
WARN: hdfs does not truly support O_CREATE && O_EXCL
Exception in thread "Thread-6" org.apache.hadoop.ipc.RemoteException:
java.io.IOException: failed to create file /temp/test.test on client
10.8.0.1.
Requested replication 3 exceeds maximum 1
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1074)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:977)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:377)
  ...
...
...


The same messages appear in the namenode log:
2010-01-13 04:36:57,183 WARN org.apache.hadoop.hdfs.StateChange: DIR*
NameSystem.startFile: failed to create file /temp/test.test on client
10.8.0.1.
Requested replication 3 exceeds maximum 1
2010-01-13 04:36:57,183 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 4 on 9000, call create(/temp/test.test, rwxr-xr-x,
DFSClient_814881830$
Requested replication 3 exceeds maximum 1
java.io.IOException: failed to create file /temp/test.test on client
10.8.0.1.
Requested replication 3 exceeds maximum 1
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1074)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:977)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:377)
...
...

I hope someone can help me solve this problem.
Best regards, Klaus


Re: fuse_dfs dfs problem

Posted by Klaus Nagel <da...@gibtsdochgar.net>.
Thanks Eli, that was exactly my problem...

> Hey Klaus,
>
> That's HDFS-856, you can apply the patch from the jira. The fix will
> also be in the next cdh2 release.
>
> Thanks,
> Eli
>



Re: fuse_dfs dfs problem

Posted by fe...@gmail.com.
Thanks.

2010/1/20 Eli Collins <el...@cloudera.com>:
> Hey Sergey,
>
> Here's a link to the jira: http://issues.apache.org/jira/browse/HDFS-856
>
> You can find a patch under the file attachment section, here's a direct link:
>
> http://issues.apache.org/jira/secure/attachment/12429027/HADOOP-856.patch
>
> Thanks,
> Eli
>
> On Wed, Jan 20, 2010 at 8:03 AM,  <fe...@gmail.com> wrote:
>> Hello, Eli could you please point me - where I can get this patch (from jira)
>> to fix this issue ?
>>
>> Regards,
>> Sergey S. Ropchan
>>
>> 2010/1/13 Eli Collins <el...@cloudera.com>:
>>> Hey Klaus,
>>>
>>> That's HDFS-856, you can apply the patch from the jira. The fix will
>>> also be in the next cdh2 release.
>>>
>>> Thanks,
>>> Eli
>>>

Re: fuse_dfs dfs problem

Posted by Eli Collins <el...@cloudera.com>.
Hey Sergey,

Here's a link to the jira: http://issues.apache.org/jira/browse/HDFS-856

You can find a patch under the file attachment section, here's a direct link:

http://issues.apache.org/jira/secure/attachment/12429027/HADOOP-856.patch

Thanks,
Eli

On Wed, Jan 20, 2010 at 8:03 AM,  <fe...@gmail.com> wrote:
> Hello, Eli could you please point me - where I can get this patch (from jira)
> to fix this issue ?
>
> Regards,
> Sergey S. Ropchan
>
> 2010/1/13 Eli Collins <el...@cloudera.com>:
>> Hey Klaus,
>>
>> That's HDFS-856, you can apply the patch from the jira. The fix will
>> also be in the next cdh2 release.
>>
>> Thanks,
>> Eli
>>

Re: fuse_dfs dfs problem

Posted by fe...@gmail.com.
Hello Eli, could you please point me to where I can get this patch (from jira)
to fix this issue?

Regards,
Sergey S. Ropchan

2010/1/13 Eli Collins <el...@cloudera.com>:
> Hey Klaus,
>
> That's HDFS-856, you can apply the patch from the jira. The fix will
> also be in the next cdh2 release.
>
> Thanks,
> Eli
>

Re: fuse_dfs dfs problem

Posted by Eli Collins <el...@cloudera.com>.
Hey Klaus,

That's HDFS-856; you can apply the patch from the jira. The fix will
also be in the next CDH2 release.

Thanks,
Eli
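
The "Requested replication 3 exceeds maximum 1" message suggests the fuse_dfs client is asking for the stock default replication (3) rather than the value configured on the cluster. Until a patched build is deployed, one unverified workaround (an assumption on my part, not something confirmed in this thread) is to make sure the hdfs-site.xml on the CLASSPATH that fuse_dfs_wrapper.sh exports also contains:

```xml
<!-- client-side default replication; this may have no effect if
     fuse_dfs hardcodes the value, which is what the HDFS-856
     patch addresses -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
```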
