Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2013/07/10 17:13:48 UTC

[jira] [Commented] (HADOOP-9712) Write contract tests for FTP filesystem, fix places where it breaks

    [ https://issues.apache.org/jira/browse/HADOOP-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704641#comment-13704641 ] 

Steve Loughran commented on HADOOP-9712:
----------------------------------------

The parent -004 patch contains tests for the FTP client; this JIRA will look at the problems those tests throw up.


h4. Changes in the latest patch

* connection refusals are wrapped via NetUtils
* if you can't log in, the username is included in the exception
* not found => {{FileNotFoundException}}
* file found in a {{mkdir()}} => {{ParentNotDirectoryException}}
* {{FTPFileSystem.exists()}} downgraded IOExceptions to {{FTPException}}, which extends {{RuntimeException}}. This is potentially dangerous, as it could prevent code that expects failures to be represented as IOExceptions from catching them. The IOException is now rethrown as-is, so that problems don't get hidden.
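The exception mapping above can be sketched roughly as follows. This is illustrative only, not the actual patch: the condition strings and the method name are invented, and {{ParentNotDirectoryException}} is stood in for with a local class so the example compiles without Hadoop on the classpath.

```java
import java.io.FileNotFoundException;
import java.io.IOException;

// Sketch of the exception mapping described above. The real code lives in
// FTPFileSystem; this stand-in just shows the intended shape.
public class FtpErrorMapping {

    // Local stand-in for org.apache.hadoop.fs.ParentNotDirectoryException.
    static class ParentNotDirectoryException extends IOException {
        ParentNotDirectoryException(String msg) { super(msg); }
    }

    // Translate a low-level FTP failure into the IOException subclass
    // that FileSystem callers are written to catch.
    static IOException toIOException(String condition, String path, String user) {
        switch (condition) {
            case "not-found":
                return new FileNotFoundException("File " + path + " does not exist");
            case "parent-is-file":
                return new ParentNotDirectoryException(
                        "Cannot mkdir " + path + ": a parent is a file");
            case "login-failed":
                // include the username so login failures are diagnosable
                return new IOException("Login failed for user " + user);
            default:
                return new IOException(condition + " on " + path);
        }
    }

    public static void main(String[] args) {
        System.out.println(
            toIOException("not-found", "/test/x", null).getClass().getSimpleName());
        System.out.println(toIOException("login-failed", "/", "stevel").getMessage());
    }
}
```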

h4. Bugs

* rename doesn't appear to work, even within the same directory (it explicitly doesn't handle renames across directories). Maybe the whole operation should be marked as unsupported.
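For reference, the semantics the contract test expects of a same-directory rename can be sketched with {{java.nio.file}} on a local temp directory; nothing here is FTP-specific, it just shows the postcondition being checked.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the rename semantics the contract test expects, demonstrated
// on a local temp directory rather than against FTPFileSystem.
public class RenameSemantics {

    // A rename succeeds iff afterwards only the destination exists.
    static boolean renameWithinDir(Path src, Path dst) throws IOException {
        Files.move(src, dst);
        return !Files.exists(src) && Files.exists(dst);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("rename-contract");
        Path src = Files.createFile(dir.resolve("source.txt"));
        Path dst = dir.resolve("renamed.txt");
        System.out.println(renameWithinDir(src, dst)); // prints true
    }
}
```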

* throws {{FileNotFoundException}} when trying to delete a path that doesn't exist, instead of returning false:
{code}
Running org.apache.hadoop.fs.contract.ftp.TestFTPDeleteContract
Tests run: 6, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.793 sec <<< FAILURE!
testDeleteNonexistentFileRecursive(org.apache.hadoop.fs.contract.ftp.TestFTPDeleteContract)  Time elapsed: 430 sec  <<< ERROR!
java.io.FileNotFoundException: File ftp:/linuxvm/home/stevel/test/testDeleteEmptyDirRecursive does not exist.
	at org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:434)
	at org.apache.hadoop.fs.ftp.FTPFileSystem.delete(FTPFileSystem.java:317)
	at org.apache.hadoop.fs.ftp.FTPFileSystem.delete(FTPFileSystem.java:294)
	at org.apache.hadoop.fs.contract.AbstractDeleteContractTest.testDeleteNonexistentFileRecursive(AbstractDeleteContractTest.java:50)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
{code}

This is easy to fix; I just wanted to note its existence.
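The fix would be for {{delete()}} to return false on a missing path rather than let the {{FileNotFoundException}} from {{getFileStatus()}} escape. A minimal sketch, with a plain {{Set<String>}} standing in for the remote FTP namespace:

```java
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

// Sketch of the proposed fix: delete() on a nonexistent path returns false
// instead of propagating getFileStatus()'s FileNotFoundException.
public class DeleteSemantics {

    static boolean delete(Set<String> files, String path) throws IOException {
        if (!files.contains(path)) {
            return false; // previously: FileNotFoundException from getFileStatus()
        }
        files.remove(path);
        return true;
    }

    public static void main(String[] args) throws IOException {
        Set<String> fs = new HashSet<>();
        fs.add("/test/existing");
        System.out.println(delete(fs, "/test/missing"));  // prints false
        System.out.println(delete(fs, "/test/existing")); // prints true
    }
}
```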

h4. FTP Ambiguities

* throws a plain IOException when trying to {{create}} over a non-empty directory with overwrite==true. HDFS throws a {{FileAlreadyExistsException}}, which I propose mimicking.
{code}
testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.contract.ftp.TestFTPCreateContract)  Time elapsed: 1027 sec  <<< ERROR!
java.io.IOException: Directory: ftp:/ubuntu/home/stevel/test/testOverwriteNonEmptyDirectory is not empty.
	at org.apache.hadoop.fs.ftp.FTPFileSystem.delete(FTPFileSystem.java:323)
	at org.apache.hadoop.fs.ftp.FTPFileSystem.delete(FTPFileSystem.java:304)
	at org.apache.hadoop.fs.ftp.FTPFileSystem.create(FTPFileSystem.java:224)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:888)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:869)
	at org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:130)
	at org.apache.hadoop.fs.contract.AbstractCreateContractTest.testOverwriteNonEmptyDirectory(AbstractCreateContractTest.java:115)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
{code}
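The proposed behaviour, failing with {{FileAlreadyExistsException}} whenever the destination is a directory, as HDFS does, could look roughly like this. {{FileAlreadyExistsException}} is stood in for with a local class so the sketch compiles without Hadoop; the enum and method names are invented for illustration.

```java
import java.io.IOException;

// Sketch of the proposed create() precondition check. Hadoop's
// org.apache.hadoop.fs.FileAlreadyExistsException is stood in for here.
public class CreatePrecondition {

    static class FileAlreadyExistsException extends IOException {
        FileAlreadyExistsException(String msg) { super(msg); }
    }

    enum Existing { NONE, FILE, DIRECTORY }

    static void checkCreate(Existing existing, boolean overwrite, String path)
            throws IOException {
        if (existing == Existing.DIRECTORY) {
            // HDFS refuses to create over a directory regardless of overwrite
            throw new FileAlreadyExistsException(path + " is a directory");
        }
        if (existing == Existing.FILE && !overwrite) {
            throw new FileAlreadyExistsException(path + " already exists");
        }
        // otherwise: proceed to open the output stream
    }

    public static void main(String[] args) {
        try {
            checkCreate(Existing.DIRECTORY, true, "/test/dir");
        } catch (IOException e) {
            System.out.println(e.getClass().getSimpleName());
        }
    }
}
```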

                
> Write contract tests for FTP filesystem, fix places where it breaks
> -------------------------------------------------------------------
>
>                 Key: HADOOP-9712
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9712
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 1.2.0, 3.0.0, 2.1.0-beta
>            Reporter: Steve Loughran
>            Priority: Minor
>
> implement the abstract contract tests for S3, identify where it is failing to meet expectations and, where possible, fix. 
> FTPFS appears to be the least tested (& presumably used) hadoop filesystem implementation; there may be some bug reports that have been around for years that could drive test cases and fixes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira