Posted to hdfs-dev@hadoop.apache.org by "John George (Resolved) (JIRA)" <ji...@apache.org> on 2011/10/17 22:17:11 UTC

[jira] [Resolved] (HDFS-2457) har://hftp-<hostname>:<port>/ or har://hftp-<hostname>/ does not seem to work

     [ https://issues.apache.org/jira/browse/HDFS-2457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John George resolved HDFS-2457.
-------------------------------

    Resolution: Not A Problem
    
> har://hftp-<hostname>:<port>/ or har://hftp-<hostname>/ does not seem to work
> ------------------------------------------------------------------------------
>
>                 Key: HDFS-2457
>                 URL: https://issues.apache.org/jira/browse/HDFS-2457
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.20.205.0
>            Reporter: Rajit Saha
>            Assignee: John George
>
> $ hadoop  jar $HADOOP_HOME/hadoop-examples.jar sort  har://hftp-<Namenode hostname>:50070/tmp/ARCHIVE.har/random/part-00000 /tmp/out
> Running on 15 nodes to sort from
> har://hftp-<Namenode hostname>:50070/tmp/ARCHIVE.har/random/part-00000 into
> hdfs://<Namenode hostname>/tmp/out with 27 reduces.
> Job started: Sat Oct 15 02:04:44 UTC 2011
> 11/10/15 02:04:44 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 43 for hadoopqa on
> <Namenode hostname>:8020
> 11/10/15 02:04:44 INFO security.TokenCache: Got dt for
> hdfs://<Namenode hostname>/user/hadoopqa/.staging/job_201110142346_0038;uri=<Namenode hostname>:8020;t.service=<Namenode hostname>:8020
> 11/10/15 02:04:45 INFO mapred.JobClient: Cleaning up the staging area
> hdfs://<Namenode hostname>/user/hadoopqa/.staging/job_201110142346_0038
> java.io.IOException: Can't seek!
>         at org.apache.hadoop.hdfs.HftpFileSystem$3.seek(HftpFileSystem.java:359)
>         at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:37)
>         at org.apache.hadoop.fs.HarFileSystem$HarMetaData.parseMetaData(HarFileSystem.java:1055)
>         at org.apache.hadoop.fs.HarFileSystem$HarMetaData.access$000(HarFileSystem.java:966)
>         at org.apache.hadoop.fs.HarFileSystem.initialize(HarFileSystem.java:137)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:241)
>         at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:91)
>         at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:79)
>         at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:160)
>         at org.apache.hadoop.mapred.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:40)
>         at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
>         at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:981)
>         at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:973)
>         at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:172)
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:889)
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:842)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
>         at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:842)
>         at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:816)
>         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1253)
>         at org.apache.hadoop.examples.Sort.run(Sort.java:176)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>         at org.apache.hadoop.examples.Sort.main(Sort.java:187)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>         at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>         at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> $ hadoop  jar $HADOOP_HOME/hadoop-examples.jar sort  har://hftp-<Namenode hostname>/tmp/ARCHIVE.har/random/part-00000 /tmp/out
> This also gives the same error.
> Is this expected?
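
Context on the failure: the stack trace above shows HarFileSystem.initialize() parsing the archive's index and calling seek() on a stream opened through HftpFileSystem, whose read stream does not support seeking (HftpFileSystem$3.seek throws "Can't seek!"). Below is a minimal Java sketch, not taken from the report, that exercises the same seek path directly over hftp; the namenode hostname, port, index path, and the HftpSeekCheck class name are placeholders chosen for illustration.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Minimal sketch: open a file over hftp and attempt a seek, which is the
    // operation HarFileSystem's metadata parser performs on the archive index.
    public class HftpSeekCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Placeholder namenode host; 50070 is the default HTTP/hftp port.
            FileSystem hftp = FileSystem.get(
                    URI.create("hftp://namenode.example.com:50070/"), conf);

            // Placeholder path; the HAR index file is what HarFileSystem reads
            // when initializing a har://hftp-... filesystem.
            FSDataInputStream in = hftp.open(new Path("/tmp/ARCHIVE.har/_index"));
            try {
                // Expected to fail on 0.20.205 with java.io.IOException: Can't seek!
                // because the hftp input stream only supports sequential reads.
                in.seek(128L);
            } finally {
                in.close();
            }
        }
    }

If that reproduces, the error in the sort job is not specific to the har layer itself but to seeking over hftp, which would be consistent with the "Not A Problem" resolution.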
