Posted to common-dev@hadoop.apache.org by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org> on 2008/04/08 04:09:25 UTC

[jira] Issue Comment Edited: (HADOOP-1373) checkPath() throws IllegalArgumentException

    [ https://issues.apache.org/jira/browse/HADOOP-1373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12586628#action_12586628 ] 

szetszwo edited comment on HADOOP-1373 at 4/7/08 7:08 PM:
------------------------------------------------------------------------

-1 for adding public String convertHostNameToLower(String hostName) to FileSystem.
Is there a good reason to add this method to the FileSystem public API?

> Could change the test to check checkPath()?
+1, the JUnit test should test the problem described in this JIRA rather than the newly created methods.
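
Not part of the original comment; purely to make the suggestion concrete, here is a minimal JUnit 3 style sketch of a test aimed at the mixed-case authority from this JIRA. It uses plain java.net.URI rather than a real FileSystem, and the helper authoritiesMatch() is hypothetical, standing in for whatever comparison checkPath() ends up performing.

{code:java}
import java.net.URI;

import junit.framework.TestCase;

// Illustrative sketch for HADOOP-1373: the configured name-node authority and
// the path authority differ only by case and should be treated as the same FS.
public class TestCheckPathCaseSketch extends TestCase {

  // Hypothetical stand-in for the comparison checkPath() should perform;
  // this is not an actual Hadoop method.
  private static boolean authoritiesMatch(URI fsUri, URI pathUri) {
    String a = fsUri.getAuthority();
    String b = pathUri.getAuthority();
    return a == null ? b == null : a.equalsIgnoreCase(b);
  }

  public void testMixedCaseAuthorityIsAccepted() throws Exception {
    URI fsUri = new URI("hdfs://my-host:7017");
    URI pathUri =
        new URI("hdfs://MY-HOST:7017/benchmarks/TestDFSIO/io_control/in_file_test_io_7");
    // With case-insensitive matching, the mixed-case host names the same file system.
    assertTrue(authoritiesMatch(fsUri, pathUri));
  }
}
{code}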

      was (Author: szetszwo):
    -1 for adding public String convertHostNameToLower(String hostName) to FileSystem.

Is there a good reason to add this method to the FileSystem public API?

Also, the JUnit test should test the problem described in this JIRA but not the newly created methods.
  
> checkPath() throws IllegalArgumentException
> -------------------------------------------
>
>                 Key: HADOOP-1373
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1373
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 0.12.3
>         Environment: Windows, Linux
>            Reporter: Konstantin Shvachko
>            Assignee: Edward J. Yoon
>            Priority: Blocker
>             Fix For: 0.17.0
>
>         Attachments: 1373.patch, 1373_v02.patch
>
>
> This was introduced recently in one of the patches committed around 05/15 or 05/14.
> I am running TestDFSIO on a two-node cluster. Here is the exception I get:
> 07/05/15 19:14:53 INFO mapred.TaskInProgress: Error from task_0001_m_000007_0: java.lang.IllegalArgumentException: Wrong FS: hdfs://MY-HOST:7017/benchmarks/TestDFSIO/io_control/in_file_test_io_7, expected: hdfs://my-host:7017
>     at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:230)
>     at org.apache.hadoop.dfs.DistributedFileSystem$RawDistributedFileSystem.getPath(DistributedFileSystem.java:110)
>     at org.apache.hadoop.dfs.DistributedFileSystem$RawDistributedFileSystem.exists(DistributedFileSystem.java:170)
>     at org.apache.hadoop.fs.FilterFileSystem.exists(FilterFileSystem.java:168)
>     at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:335)
>     at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1162)
>     at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1156)
>     at org.apache.hadoop.mapred.SequenceFileRecordReader.<init>(SequenceFileRecordReader.java:40)
>     at org.apache.hadoop.mapred.SequenceFileInputFormat.getRecordReader(SequenceFileInputFormat.java:54)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:149)
>     at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1709)
> I confess that my config on one of the machines specifies the name-node as "MY-HOST:7017" and on the other one as "my-host:7017".
> But that was acceptable before and, as far as I understand, it should stay that way in the future.
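
Not part of the original report; as an aside for readers, a hedged sketch of how the "Wrong FS" error above can arise, assuming the check boils down to a plain case-sensitive comparison of the URI authority. The class and method below are illustrative only and do not reproduce the actual Hadoop source.

{code:java}
import java.net.URI;

// Illustrative only; not the actual Hadoop code. Shows how a strict,
// case-sensitive authority comparison rejects hdfs://MY-HOST:7017 when the
// file system was created for hdfs://my-host:7017.
class CheckPathSketch {
  private final URI fsUri;

  CheckPathSketch(URI fsUri) {
    this.fsUri = fsUri;
  }

  void checkPath(URI pathUri) {
    String scheme = pathUri.getScheme();
    if (scheme == null) {
      return;  // no scheme: nothing to compare against
    }
    String fsAuthority = String.valueOf(fsUri.getAuthority());
    String pathAuthority = String.valueOf(pathUri.getAuthority());
    // Using equals() here makes the host-name case significant; comparing with
    // equalsIgnoreCase() (or normalizing the authority once, internally) would
    // accept both spellings without exposing a new helper on FileSystem.
    if (!scheme.equalsIgnoreCase(fsUri.getScheme())
        || !pathAuthority.equals(fsAuthority)) {
      throw new IllegalArgumentException(
          "Wrong FS: " + pathUri + ", expected: " + fsUri);
    }
  }
}
{code}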

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.