Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2016/12/14 18:52:59 UTC

[jira] [Commented] (HADOOP-13905) Cannot run wordcount example when there's a mounttable configured with a link to s3a.

    [ https://issues.apache.org/jira/browse/HADOOP-13905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15749127#comment-15749127 ] 

Steve Loughran commented on HADOOP-13905:
-----------------------------------------

Looks like you've gone into a corner of the code nobody else has.

Following the stack trace, it's failing because a filesystem has to have a valid port if an authority is needed, and "authority needed" is determined by whether uri.getAuthority() != null, which isn't really true of object stores, where the authority is a bucket name with no meaningful port. Now, for the AbstractFileSystem integration in HADOOP-11262, [~PieterReuse] had s3a declaring that it didn't need an authority. Somehow ViewFs isn't picking that up, no doubt because of this extra indirection.
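
To make that concrete, here is a small standalone sketch of the two checks visible in the stack trace (ChRootedFs.<init> and AbstractFileSystem.getUri), paraphrased rather than quoted from the source; the default ports are assumptions (none for s3a, 8020 for hdfs):

    import java.net.URI;

    // Illustrates why the s3a mounttable link trips the port check while
    // the hdfs links pass it.
    public class S3aPortCheckDemo {
      public static void main(String[] args) {
        check(URI.create("s3a://cloudply-hadoop-demo/"), -1);  // fails
        check(URI.create("hdfs://namenode:9000"), 8020);       // passes
      }

      static void check(URI uri, int defaultPort) {
        // ChRootedFs derives "authority needed" purely from the target URI,
        // so the bucket name makes it true for s3a, and s3a's declaration
        // that no authority is required never comes into play.
        boolean authorityNeeded = uri.getAuthority() != null;
        // AbstractFileSystem requires a valid default port whenever an
        // authority is needed; for s3a that port is -1, hence the
        // HadoopIllegalArgumentException in the report below.
        if (defaultPort < 0 && authorityNeeded) {
          System.out.println(uri + " => default port " + defaultPort
              + " is not valid");
        } else {
          System.out.println(uri + " => OK");
        }
      }
    }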

Pieter? Any thoughts?

> Cannot run wordcount example when there's a mounttable configured with a link to s3a.
> -------------------------------------------------------------------------------------
>
>                 Key: HADOOP-13905
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13905
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 2.8.0
>            Reporter: Oleg Khaschansky
>
> I have a 3-node setup: namenode/slave/client. The client's default fs is viewfs, with the following mounttable:
> <configuration>
>     <property>
>         <name>fs.viewfs.mounttable.hadoopDemo.homedir</name>
>         <value>/home</value>
>     </property>
>     <property>
>         <name>fs.viewfs.mounttable.hadoopDemo.link./home</name>
>         <value>hdfs://namenode:9000</value>
>     </property>
>     <property>
>         <name>fs.viewfs.mounttable.hadoopDemo.link./tmp</name>
>         <value>hdfs://namenode:9000/tmp</value>
>     </property>
>     <property>
>         <name>fs.viewfs.mounttable.hadoopDemo.link./user</name>
>         <value>hdfs://namenode:9000/user</value>
>     </property>
>     <property>
>         <name>fs.viewfs.mounttable.hadoopDemo.link./s3a</name>
>         <value>s3a://cloudply-hadoop-demo/</value>
>     </property>
> </configuration>
> s3a credentials are configured in core-site.xml on the client node. I am able to view/modify the contents of the /s3a mount with hdfs commands. But when I run the wordcount example with the following command (even though it does not access the s3a mount at all):
> hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.8.0-SNAPSHOT-sources.jar org.apache.hadoop.examples.WordCount /home/input /home/output
> it fails with the following exception: 
> 16/12/14 16:08:33 INFO client.RMProxy: Connecting to ResourceManager at namenode/172.18.0.2:8032
> 16/12/14 16:08:33 INFO mapreduce.Cluster: Failed to use org.apache.hadoop.mapred.YarnClientProtocolProvider due to error:
> java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
>         at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:136)
>         at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:165)
>         at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:250)
>         at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:342)
>         at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:339)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1711)
>         at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:339)
>         at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:456)
>         at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:482)
>         at org.apache.hadoop.mapred.YARNRunner.<init>(YARNRunner.java:148)
>         at org.apache.hadoop.mapred.YARNRunner.<init>(YARNRunner.java:132)
>         at org.apache.hadoop.mapred.YARNRunner.<init>(YARNRunner.java:122)
>         at org.apache.hadoop.mapred.YarnClientProtocolProvider.create(YarnClientProtocolProvider.java:34)
>         at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:111)
>         at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:98)
>         at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:91)
>         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1311)
>         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1711)
>         at org.apache.hadoop.mapreduce.Job.connect(Job.java:1307)
>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:1335)
>         at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1359)
>         at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> Caused by: java.lang.reflect.InvocationTargetException
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>         at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:134)
>         ... 32 more
> Caused by: org.apache.hadoop.HadoopIllegalArgumentException: FileSystem implementation error -  default port -1 is not valid
>         at org.apache.hadoop.fs.AbstractFileSystem.getUri(AbstractFileSystem.java:306)
>         at org.apache.hadoop.fs.AbstractFileSystem.<init>(AbstractFileSystem.java:266)
>         at org.apache.hadoop.fs.viewfs.ChRootedFs.<init>(ChRootedFs.java:102)
>         at org.apache.hadoop.fs.viewfs.ViewFs$1.getTargetFileSystem(ViewFs.java:220)
>         at org.apache.hadoop.fs.viewfs.ViewFs$1.getTargetFileSystem(ViewFs.java:209)
>         at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:261)
>         at org.apache.hadoop.fs.viewfs.InodeTree.<init>(InodeTree.java:333)
>         at org.apache.hadoop.fs.viewfs.ViewFs$1.<init>(ViewFs.java:209)
>         at org.apache.hadoop.fs.viewfs.ViewFs.<init>(ViewFs.java:209)
>         ... 37 more
> Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>         at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:133)
>         at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:98)
>         at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:91)
>         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1311)
>         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1711)
>         at org.apache.hadoop.mapreduce.Job.connect(Job.java:1307)
>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:1335)
>         at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1359)
>         at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> When I remove the s3a mount from the mounttable, I am able to run the wordcount example.
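> For completeness, a minimal sketch of the two code paths as I read the stack traces (an assumption, not something isolated in a test): the hdfs shell goes through the FileSystem API, which works, while YARNRunner goes through the FileContext/AbstractFileSystem API, which is where ViewFs fails during initialization:
>
>     import java.net.URI;
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.fs.FileContext;
>     import org.apache.hadoop.fs.FileSystem;
>     import org.apache.hadoop.fs.Path;
>
>     // Hypothetical repro; the class name is illustrative. Assumes the
>     // mounttable above is picked up from core-site.xml on the classpath.
>     public class ViewFsTwoPaths {
>       public static void main(String[] args) throws Exception {
>         Configuration conf = new Configuration();
>
>         // Path 1: the FileSystem API, which "hdfs dfs" uses. This works.
>         FileSystem fs = FileSystem.get(URI.create("viewfs://hadoopDemo/"), conf);
>         fs.listStatus(new Path("/home"));
>
>         // Path 2: the FileContext/AbstractFileSystem API, which
>         // YARNRunner.<init> uses. With the s3a link present, ViewFs
>         // initialization throws the HadoopIllegalArgumentException here.
>         FileContext fc = FileContext.getFileContext(conf);
>         fc.util().listStatus(new Path("/home"));
>       }
>     }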


