Posted to dev@sqoop.apache.org by "Illya Yalovyy (JIRA)" <ji...@apache.org> on 2017/02/16 21:47:41 UTC

[jira] [Created] (SQOOP-3136) Sqoop should work well with non-default file systems

Illya Yalovyy created SQOOP-3136:
------------------------------------

             Summary: Sqoop should work well with non-default file systems
                 Key: SQOOP-3136
                 URL: https://issues.apache.org/jira/browse/SQOOP-3136
             Project: Sqoop
          Issue Type: Improvement
          Components: connectors/hdfs
    Affects Versions: 1.4.5
            Reporter: Illya Yalovyy


Currently, Sqoop assumes the default file system for IO operations, which makes it hard to use other FileSystem implementations as a source or destination. Here is an example:

{code}
sqoop import --connect <JDBC CONNECTION> --table table1 --driver <JDBC DRIVER> --username root --password **** --delete-target-dir --target-dir s3a://some-bucket/tmp/sqoop
...
17/02/15 19:16:59 ERROR tool.ImportTool: Imported Failed: Wrong FS: s3a://some-bucket/tmp/sqoop, expected: hdfs://<DNS>:8020
{code}
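
As a rough illustration of the kind of change this implies (a sketch under assumed names, not the actual Sqoop code), IO helpers should resolve the FileSystem from the path being operated on rather than from the default file system. The class and method below are hypothetical:

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper, for illustration only.
public class TargetDirCleaner {

  /**
   * Deletes the target directory, resolving the FileSystem from the
   * path itself so URIs like s3a://bucket/... work as well as HDFS.
   */
  public static void deleteTargetDir(Configuration conf, String targetDir)
      throws IOException {
    Path path = new Path(targetDir);

    // Problematic pattern: FileSystem.get(conf) returns the *default*
    // file system (fs.defaultFS, usually hdfs://...), which rejects
    // paths on other schemes with a "Wrong FS" error like the one above.
    // FileSystem fs = FileSystem.get(conf);

    // Path.getFileSystem(conf) picks the implementation matching the
    // path's scheme (hdfs://, s3a://, file://, ...).
    FileSystem fs = path.getFileSystem(conf);

    if (fs.exists(path)) {
      fs.delete(path, true); // recursive delete
    }
  }
}
{code}

With the file system resolved per path, options such as --delete-target-dir and --target-dir would behave the same way for hdfs://, s3a://, or any other registered scheme.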



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)