Posted to dev@sqoop.apache.org by "Boglarka Egyed (JIRA)" <ji...@apache.org> on 2017/02/17 14:58:41 UTC

[jira] [Assigned] (SQOOP-3136) Sqoop should work well with non-default file systems

     [ https://issues.apache.org/jira/browse/SQOOP-3136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Boglarka Egyed reassigned SQOOP-3136:
-------------------------------------

    Assignee:     (was: Boglarka Egyed)

> Sqoop should work well with non-default file systems
> ----------------------------------------------------
>
>                 Key: SQOOP-3136
>                 URL: https://issues.apache.org/jira/browse/SQOOP-3136
>             Project: Sqoop
>          Issue Type: Improvement
>          Components: connectors/hdfs
>    Affects Versions: 1.4.5
>            Reporter: Illya Yalovyy
>         Attachments: SQOOP-3136.patch
>
>
> Currently Sqoop assumes the default file system when it comes to IO operations. This makes it hard to use other FileSystem implementations as a source or destination. Here is an example:
> {code}
> sqoop import --connect <JDBC CONNECTION> --table table1 --driver <JDBC DRIVER> --username root --password **** --delete-target-dir --target-dir s3a://some-bucket/tmp/sqoop
> ...
> 17/02/15 19:16:59 ERROR tool.ImportTool: Imported Failed: Wrong FS: s3a://some-bucket/tmp/sqoop, expected: hdfs://<DNS>:8020
> {code}
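> A minimal sketch of the relevant Hadoop API pattern (illustrative only, not the attached patch; the class and method names below are hypothetical): resolving the FileSystem from the target Path handles non-default schemes such as s3a://, whereas FileSystem.get(conf) always binds to fs.defaultFS.
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class TargetDirCleaner {
>   // Hypothetical helper mirroring what --delete-target-dir needs to do.
>   public static void deleteTargetDir(Configuration conf, String targetDir) throws java.io.IOException {
>     Path target = new Path(targetDir);
>
>     // Problematic: FileSystem.get(conf) always returns the default FS
>     // (e.g. hdfs://<DNS>:8020), so an s3a:// target fails with "Wrong FS".
>     // FileSystem fs = FileSystem.get(conf);
>
>     // Path-aware: picks the FileSystem implementation matching the path's scheme.
>     FileSystem fs = target.getFileSystem(conf);
>
>     if (fs.exists(target)) {
>       fs.delete(target, true); // recursive delete
>     }
>   }
> }
> {code}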



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)