Posted to mapreduce-user@hadoop.apache.org by Jeremy Hanna <je...@gmail.com> on 2011/02/10 00:23:52 UTC

IllegalArgumentException when doing fs.open with an s3n prefix path

Anyone know why I would be getting an error doing a FileSystem.open on a file with an s3n prefix?

For the input path "s3n://backlog.dev/1296648900000/", I get the following stack trace:

java.lang.IllegalArgumentException: This file system object (hdfs://ip-10-114-89-36.ec2.internal:9000) does not support access to the request path 's3n://backlog.dev/1296648900000/32763897924550656' You possibly called FileSystem.get(conf) when you should of called FileSystem.get(uri, conf) to obtain a file system supporting your path.
	at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:351)
	at org.apache.hadoop.hdfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:99)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:155)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:178)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:396)
	at analytics.hadoop.socialdata.RawSignalFileInputFormat$MultiFileLineRecordReader.<init>(RawSignalFileInputFormat.java:53)
	at analytics.hadoop.socialdata.RawSignalFileInputFormat.getRecordReader(RawSignalFileInputFormat.java:22)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:343)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:312)
	at org.apache.hadoop.mapred.Child.main(Child.java:170)

Incidentally, I'm running on Elastic MapReduce with Hadoop 0.20 (which I assume is the latest 0.20 version).

Re: IllegalArgumentException when doing fs.open with an s3n prefix path

Posted by Jeremy Hanna <je...@gmail.com>.
Wow, sorry, that was just my sad excuse.  Thanks again.

On Feb 9, 2011, at 6:53 PM, Andrew Hitchcock wrote:

> Ah, nice catch. I'll go fix that message now :)

Re: IllegalArgumentException when doing fs.open with an s3n prefix path

Posted by Andrew Hitchcock <ad...@gmail.com>.
Ah, nice catch. I'll go fix that message now :)

On Wed, Feb 9, 2011 at 4:50 PM, Jeremy Hanna <je...@gmail.com> wrote:
> Bah - you're right.  I don't know why I thought the real error was obscured, aside from being distracted by "you should of", which should be "you should have".
>
> Thanks and apologies...
>
> Jeremy

Re: IllegalArgumentException when doing fs.open with an s3n prefix path

Posted by Jeremy Hanna <je...@gmail.com>.
Bah - you're right.  I don't know why I thought the real error was obscured, aside from being distracted by "you should of", which should be "you should have".

Thanks and apologies...

Jeremy

On Feb 9, 2011, at 6:10 PM, Andrew Hitchcock wrote:

> "This file system object (hdfs://ip-10-114-89-36.ec2.internal:9000)
> does not support access to the request path
> 's3n://backlog.dev/1296648900000/32763897924550656' You possibly
> called FileSystem.get(conf) when you should of called
> FileSystem.get(uri, conf) to obtain a file system supporting your
> path."
> 
> That explains the error. You should always use the two-parameter get
> method when requesting FileSystem objects.
> 
> Also, Elastic MapReduce is based on the Hadoop 0.20 branch. It has all
> the patches from Hadoop 0.20.2 plus some additional ones from that
> branch and other places.
> 
> Andrew


Re: IllegalArgumentException when doing fs.open with an s3n prefix path

Posted by Andrew Hitchcock <ad...@gmail.com>.
"This file system object (hdfs://ip-10-114-89-36.ec2.internal:9000)
does not support access to the request path
's3n://backlog.dev/1296648900000/32763897924550656' You possibly
called FileSystem.get(conf) when you should of called
FileSystem.get(uri, conf) to obtain a file system supporting your
path."

That explains the error. You should always use the two-parameter get
method when requesting FileSystem objects.
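
A minimal sketch of the two-parameter pattern, using the bucket path from the stack trace above (the class name and surrounding boilerplate are illustrative, not from the original code):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3nOpenSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        Path input = new Path("s3n://backlog.dev/1296648900000/32763897924550656");

        // Wrong: FileSystem.get(conf) returns the cluster's default file
        // system (HDFS here), which throws IllegalArgumentException when
        // handed an s3n:// path.
        // FileSystem fs = FileSystem.get(conf);

        // Right: pass the path's URI so Hadoop resolves the file system
        // implementation that matches the s3n scheme.
        FileSystem fs = FileSystem.get(input.toUri(), conf);
        FSDataInputStream in = fs.open(input);
        in.close();
    }
}
```

Equivalently, input.getFileSystem(conf) resolves the correct file system from the path itself.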

Also, Elastic MapReduce is based on the Hadoop 0.20 branch. It has all
the patches from Hadoop 0.20.2 plus some additional ones from that
branch and other places.

Andrew

On Wed, Feb 9, 2011 at 3:23 PM, Jeremy Hanna <je...@gmail.com> wrote:
> Anyone know why I would be getting an error doing a FileSystem.open on a file with an s3n prefix?