Posted to dev@sqoop.apache.org by "Qian Xu (JIRA)" <ji...@apache.org> on 2015/03/24 08:52:53 UTC

[jira] [Commented] (SQOOP-2257) Parquet target for imports with Hive overwrite option does not work

    [ https://issues.apache.org/jira/browse/SQOOP-2257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377450#comment-14377450 ] 

Qian Xu commented on SQOOP-2257:
--------------------------------

[~pavasgarg] Thanks for your feedback. I've provided a patch. Could you please confirm whether it resolves the issue for you?

> Parquet target for imports with Hive overwrite option does not work
> -------------------------------------------------------------------
>
>                 Key: SQOOP-2257
>                 URL: https://issues.apache.org/jira/browse/SQOOP-2257
>             Project: Sqoop
>          Issue Type: Bug
>          Components: hive-integration
>    Affects Versions: 1.4.5
>            Reporter: Pavas Garg
>            Assignee: Qian Xu
>            Priority: Critical
>         Attachments: SQOOP-2257.patch
>
>
> Importing Parquet data into a Hive table fails when the import is run a second time with the --hive-overwrite option set.
> 1. Run a successful Sqoop import with --hive-import.
> 2. Run the same import again with the --hive-overwrite option to overwrite the previously loaded data.
> Observed error:
> {code}
> ERROR sqoop.Sqoop: Got exception running Sqoop: org.kitesdk.data.DatasetExistsException: Metadata already exists for dataset: foo.bar
> org.kitesdk.data.DatasetExistsException: Metadata already exists for dataset: foo.bar
> 	at org.kitesdk.data.spi.hive.HiveManagedMetadataProvider.create(HiveManagedMetadataProvider.java:51)
> 	at org.kitesdk.data.spi.hive.HiveManagedDatasetRepository.create(HiveManagedDatasetRepository.java:77)
> 	at org.kitesdk.data.Datasets.create(Datasets.java:239)
> 	at org.kitesdk.data.Datasets.create(Datasets.java:307)
> 	at org.apache.sqoop.mapreduce.ParquetJob.createDataset(ParquetJob.java:102)
> 	at org.apache.sqoop.mapreduce.ParquetJob.configureImportJob(ParquetJob.java:89)
> 	at org.apache.sqoop.mapreduce.DataDrivenImportJob.configureMapper(DataDrivenImportJob.java:106)
> 	at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:260)
> 	at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:668)
> 	at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:118)
> 	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
> 	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
> 	at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> 	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
> 	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
> 	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
> 	at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
> {code}
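
For reference, the reproduction steps above correspond to commands along these lines (a sketch only; the JDBC connection string, database, and table names are hypothetical placeholders):

{code}
# 1. Initial import into Hive as Parquet (connection/table names are examples)
sqoop import \
  --connect jdbc:mysql://db.example.com/foo \
  --table bar \
  --hive-import \
  --as-parquetfile

# 2. Re-run with --hive-overwrite to replace the previously loaded data;
#    this second run hits the DatasetExistsException shown in the stack trace
sqoop import \
  --connect jdbc:mysql://db.example.com/foo \
  --table bar \
  --hive-import \
  --hive-overwrite \
  --as-parquetfile
{code}

The failure originates in ParquetJob.createDataset(), where the Kite SDK's Datasets.create() is invoked unconditionally and throws once Hive metadata for the dataset already exists.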



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)