Posted to issues@hive.apache.org by "Vihang Karajgaonkar (JIRA)" <ji...@apache.org> on 2016/12/22 22:15:58 UTC

[jira] [Commented] (HIVE-15502) CTAS on S3 is broken with credentials exception

    [ https://issues.apache.org/jira/browse/HIVE-15502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15771243#comment-15771243 ] 

Vihang Karajgaonkar commented on HIVE-15502:
--------------------------------------------

[~stakiar] Shouldn't these keys also be present in core-site.xml for the map tasks to succeed?
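For illustration, a minimal sketch of the kind of core-site.xml entries being suggested here, assuming the same fs.s3a.* keys already listed in the hive-site.xml below (the placeholder values are not real credentials):

{code}
<configuration>
  <!-- Sketch only: mirrors the fs.s3a.* credentials from hive-site.xml so that
       processes reading core-site.xml (e.g. the map tasks) can also authenticate
       against S3. -->
  <property>
    <name>fs.s3a.access.key</name>
    <value>[ACCESS-KEY]</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>[SECRET-KEY]</value>
  </property>
</configuration>
{code}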

> CTAS on S3 is broken with credentials exception
> -----------------------------------------------
>
>                 Key: HIVE-15502
>                 URL: https://issues.apache.org/jira/browse/HIVE-15502
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive
>            Reporter: Sahil Takiar
>            Assignee: Sahil Takiar
>
> Simple CTAS queries that read from S3 and write to the local filesystem throw the following exception:
> {code}
> com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
> 	at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
> 	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3521)
> 	at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
> 	at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
> 	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
> 	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
> 	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
> 	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
> 	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
> 	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
> 	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
> 	at org.apache.hadoop.hive.ql.exec.Utilities.isEmptyPath(Utilities.java:2308)
> 	at org.apache.hadoop.hive.ql.exec.Utilities.isEmptyPath(Utilities.java:2304)
> 	at org.apache.hadoop.hive.ql.exec.Utilities.getInputPaths(Utilities.java:3013)
> 	at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:342)
> 	at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
> 	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
> 	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
> 	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2168)
> 	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1824)
> 	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1511)
> 	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1222)
> 	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1212)
> 	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
> 	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
> 	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:400)
> 	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:777)
> 	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:715)
> 	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:642)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> 	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Job Submission failed with exception 'com.amazonaws.AmazonClientException(Unable to load AWS credentials from any provider in the chain)'
> {code}
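> For reference, a minimal sketch of a query of the shape described above, assuming a hypothetical external table {{s3_table}} whose location points at an S3A bucket (table and bucket names are illustrative only):
> {code}
> -- Source table backed by S3; names are hypothetical.
> CREATE EXTERNAL TABLE s3_table (id INT, name STRING)
>   LOCATION 's3a://example-bucket/path/';
> -- CTAS that reads from S3 and writes the new table to the local warehouse directory.
> CREATE TABLE local_copy AS SELECT * FROM s3_table;
> {code}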
> This seems to happen only when the map tasks try to connect to S3. My {{hive-site.xml}} has the following entries:
> {code}
> <configuration>
>   <property>
>     <name>mapreduce.framework.name</name>
>     <value>local</value>
>   </property>
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>local</value>
>   </property>
>   <property>
>     <name>fs.default.name</name>
>     <value>file:///</value>
>   </property>
>   <property>
>     <name>fs.s3a.access.key</name>
>     <value>[ACCESS-KEY]</value>
>   </property>
>   <property>
>     <name>fs.s3a.secret.key</name>
>     <value>[SECRET-KEY]</value>
>   </property>
> </configuration>
> {code}
> I've also noticed that I now need to copy the AWS S3 SDK jars into Hive's lib folder before running Hive locally.


