Posted to user@pig.apache.org by Mohit Anchlia <mo...@gmail.com> on 2013/03/05 00:32:26 UTC

Unable to upload to S3

I am trying to upload to S3 using pig but I get:

grunt> store A into 's3://BBBBBCCKIAJV5KGMZVA:KKKKxmw5F7I4AWd6rDRA@/bucket/1/2/a';
2013-03-04 18:24:39,475 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
2013-03-04 18:24:39,528 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1002: Unable to store alias A
Details at logfile: /data-ebs/misc/pig/pig_1362439271484.log
java.lang.IllegalArgumentException: Invalid hostname in URI s3://BBBBBCCKIAJV5KGMZVA:KKKKxmw5F7I4AWd6rDRA@/bucket/1/2/a
        at org.apache.hadoop.fs.s3.S3Credentials.initialize(S3Credentials.java:41)
        at org.apache.hadoop.fs.s3native.NativeS3FileSystem.initialize(NativeS3FileSystem.java:478)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1453)
        at org.apache.hadoop.fs.FileSystem.access$100(FileSystem.java:69)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1487)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1469)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:235)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:191)
        at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:131)
        at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$InputOutputFileVisitor.visit(InputOutputFileValidator.java:80)
        at org.apache.pig.newplan.logical.relational.LOStore.accept(LOStore.java:77)
        at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:64)

Re: Unable to upload to S3

Posted by Mohit Anchlia <mo...@gmail.com>.
The keys given here were changed, so they are not my real keys. I did find
that putting the keys in the pig.properties file fixed it.
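
For anyone who lands on this thread later, a minimal sketch of what those
pig.properties entries can look like. The property names are the standard
Hadoop S3 credential keys that Eli mentions below; the values are
placeholders, not real credentials:

# pig.properties -- placeholder values, substitute your own keys
# used by s3:// URIs (block-based S3 filesystem)
fs.s3.awsAccessKeyId=YOUR_ACCESS_KEY_ID
fs.s3.awsSecretAccessKey=YOUR_SECRET_ACCESS_KEY
# used by s3n:// URIs (native S3 filesystem)
fs.s3n.awsAccessKeyId=YOUR_ACCESS_KEY_ID
fs.s3n.awsSecretAccessKey=YOUR_SECRET_ACCESS_KEY

With the credentials in the properties file (or in core-site.xml), the store
path no longer needs to embed them, which also keeps the keys out of logs
and shell history.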

On Mon, Mar 4, 2013 at 4:16 PM, Eli Finkelshteyn <el...@thebackplane.com> wrote:

> That looks like an awsSecretAccessKey. If that's what it is, you probably
> want to change it now, because you definitely don't want everyone on this
> list knowing your secret key. You also definitely do not want to specify
> your secret key as part of your bucket name. That stuff should be in your
> pig.properties file specified as "fs.s3.awsSecretAccessKey=some_key."
> There's more info in this thread.
>
> Eli
>
> On Mar 4, 2013, at 4:07 PM, Aniket Mokashi wrote:
>
> > What's BBBBBCCKIAJV5KGMZVA:KKKKxmw5F7I4AWd6rDRA@?
> >
> > To work with S3-
> > 1. Your path should be - s3n://bucket-name/key
> > 2. Have your aws keys in core-site.xml
> >
> >
> >
> > On Mon, Mar 4, 2013 at 3:32 PM, Mohit Anchlia <mohitanchlia@gmail.com> wrote:
> >
> >> I am trying to upload to S3 using pig but I get:
> >>
> >> grunt> store A into 's3://BBBBBCCKIAJV5KGMZVA:KKKKxmw5F7I4AWd6rDRA@/bucket/1/2/a';
> >> 2013-03-04 18:24:39,475 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
> >> 2013-03-04 18:24:39,528 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1002: Unable to store alias A
> >> Details at logfile: /data-ebs/misc/pig/pig_1362439271484.log
> >> java.lang.IllegalArgumentException: Invalid hostname in URI s3://BBBBBCCKIAJV5KGMZVA:KKKKxmw5F7I4AWd6rDRA@/bucket/1/2/a
> >>         at org.apache.hadoop.fs.s3.S3Credentials.initialize(S3Credentials.java:41)
> >>         at org.apache.hadoop.fs.s3native.NativeS3FileSystem.initialize(NativeS3FileSystem.java:478)
> >>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1453)
> >>         at org.apache.hadoop.fs.FileSystem.access$100(FileSystem.java:69)
> >>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1487)
> >>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1469)
> >>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:235)
> >>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:191)
> >>         at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:131)
> >>         at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$InputOutputFileVisitor.visit(InputOutputFileValidator.java:80)
> >>         at org.apache.pig.newplan.logical.relational.LOStore.accept(LOStore.java:77)
> >>         at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:64)
> >>
> >
> >
> >
> > --
> > "...:::Aniket:::... Quetzalco@tl"
>
>

Re: Unable to upload to S3

Posted by Eli Finkelshteyn <el...@thebackplane.com>.
That looks like an awsSecretAccessKey. If that's what it is, you probably want to change it now, because you definitely don't want everyone on this list knowing your secret key. You also definitely do not want to specify your secret key as part of your bucket name. That stuff should be in your pig.properties file specified as "fs.s3.awsSecretAccessKey=some_key." There's more info in this thread.

Eli

On Mar 4, 2013, at 4:07 PM, Aniket Mokashi wrote:

> What's BBBBBCCKIAJV5KGMZVA:KKKKxmw5F7I4AWd6rDRA@?
> 
> To work with S3-
> 1. Your path should be - s3n://bucket-name/key
> 2. Have your aws keys in core-site.xml
> 
> 
> 
> On Mon, Mar 4, 2013 at 3:32 PM, Mohit Anchlia <mo...@gmail.com> wrote:
> 
>> I am trying to upload to S3 using pig but I get:
>>
>> grunt> store A into 's3://BBBBBCCKIAJV5KGMZVA:KKKKxmw5F7I4AWd6rDRA@/bucket/1/2/a';
>> 2013-03-04 18:24:39,475 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
>> 2013-03-04 18:24:39,528 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1002: Unable to store alias A
>> Details at logfile: /data-ebs/misc/pig/pig_1362439271484.log
>> java.lang.IllegalArgumentException: Invalid hostname in URI s3://BBBBBCCKIAJV5KGMZVA:KKKKxmw5F7I4AWd6rDRA@/bucket/1/2/a
>>         at org.apache.hadoop.fs.s3.S3Credentials.initialize(S3Credentials.java:41)
>>         at org.apache.hadoop.fs.s3native.NativeS3FileSystem.initialize(NativeS3FileSystem.java:478)
>>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1453)
>>         at org.apache.hadoop.fs.FileSystem.access$100(FileSystem.java:69)
>>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1487)
>>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1469)
>>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:235)
>>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:191)
>>         at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:131)
>>         at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$InputOutputFileVisitor.visit(InputOutputFileValidator.java:80)
>>         at org.apache.pig.newplan.logical.relational.LOStore.accept(LOStore.java:77)
>>         at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:64)
>> 
> 
> 
> 
> -- 
> "...:::Aniket:::... Quetzalco@tl"


Re: Unable to upload to S3

Posted by Aniket Mokashi <an...@gmail.com>.
What's BBBBBCCKIAJV5KGMZVA:KKKKxmw5F7I4AWd6rDRA@?

To work with S3-
1. Your path should be - s3n://bucket-name/key
2. Have your aws keys in core-site.xml (a sketch of both steps follows below)
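
For later readers, a rough sketch of what those two steps can look like.
The property names are the standard Hadoop s3n credential keys; the bucket
name, output path, and credential values below are hypothetical placeholders:

<!-- core-site.xml: credentials for the s3n:// native S3 filesystem -->
<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>YOUR_ACCESS_KEY_ID</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>YOUR_SECRET_ACCESS_KEY</value>
</property>

The store statement then references only the bucket and key, with no
credentials embedded in the URI:

grunt> store A into 's3n://your-bucket-name/1/2/a';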



On Mon, Mar 4, 2013 at 3:32 PM, Mohit Anchlia <mo...@gmail.com> wrote:

> I am trying to upload to S3 using pig but I get:
>
> grunt> store A into 's3://BBBBBCCKIAJV5KGMZVA:KKKKxmw5F7I4AWd6rDRA@/bucket/1/2/a';
> 2013-03-04 18:24:39,475 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
> 2013-03-04 18:24:39,528 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1002: Unable to store alias A
> Details at logfile: /data-ebs/misc/pig/pig_1362439271484.log
> java.lang.IllegalArgumentException: Invalid hostname in URI s3://BBBBBCCKIAJV5KGMZVA:KKKKxmw5F7I4AWd6rDRA@/bucket/1/2/a
>         at org.apache.hadoop.fs.s3.S3Credentials.initialize(S3Credentials.java:41)
>         at org.apache.hadoop.fs.s3native.NativeS3FileSystem.initialize(NativeS3FileSystem.java:478)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1453)
>         at org.apache.hadoop.fs.FileSystem.access$100(FileSystem.java:69)
>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1487)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1469)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:235)
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:191)
>         at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:131)
>         at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$InputOutputFileVisitor.visit(InputOutputFileValidator.java:80)
>         at org.apache.pig.newplan.logical.relational.LOStore.accept(LOStore.java:77)
>         at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:64)
>



-- 
"...:::Aniket:::... Quetzalco@tl"