Posted to dev@sqoop.apache.org by "Jarek Jarcec Cecho (JIRA)" <ji...@apache.org> on 2015/03/24 14:44:54 UTC

[jira] [Resolved] (SQOOP-2257) Parquet target for imports with Hive overwrite option does not work

     [ https://issues.apache.org/jira/browse/SQOOP-2257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jarek Jarcec Cecho resolved SQOOP-2257.
---------------------------------------
       Resolution: Fixed
    Fix Version/s: 1.4.6

Thank you for your contribution [~stanleyxu2005]!

> Parquet target for imports with Hive overwrite option does not work
> -------------------------------------------------------------------
>
>                 Key: SQOOP-2257
>                 URL: https://issues.apache.org/jira/browse/SQOOP-2257
>             Project: Sqoop
>          Issue Type: Bug
>          Components: hive-integration
>    Affects Versions: 1.4.5
>            Reporter: Pavas Garg
>            Assignee: Qian Xu
>            Priority: Critical
>             Fix For: 1.4.6
>
>         Attachments: SQOOP-2257.patch
>
>
> A Parquet data import into a Hive table may fail when run a second time with the --hive-overwrite option set.
> 1. Run a successful Sqoop import with --hive-import.
> 2. Run the same import again with the --hive-overwrite option to overwrite the previously loaded data.
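> The two runs above can be sketched as command lines (the connection string, database, and table names below are placeholders, not taken from the report):
> {code}
> # First import creates the Hive table backed by Parquet files
> sqoop import --connect jdbc:mysql://db.example.com/foo --table bar \
>     --hive-import --as-parquetfile
>
> # Second run with --hive-overwrite hits the DatasetExistsException below
> sqoop import --connect jdbc:mysql://db.example.com/foo --table bar \
>     --hive-import --hive-overwrite --as-parquetfile
> {code}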
> Observed error:
> {code}
> ERROR sqoop.Sqoop: Got exception running Sqoop: org.kitesdk.data.DatasetExistsException: Metadata already exists for dataset: foo.bar
> org.kitesdk.data.DatasetExistsException: Metadata already exists for dataset: foo.bar
> 	at org.kitesdk.data.spi.hive.HiveManagedMetadataProvider.create(HiveManagedMetadataProvider.java:51)
> 	at org.kitesdk.data.spi.hive.HiveManagedDatasetRepository.create(HiveManagedDatasetRepository.java:77)
> 	at org.kitesdk.data.Datasets.create(Datasets.java:239)
> 	at org.kitesdk.data.Datasets.create(Datasets.java:307)
> 	at org.apache.sqoop.mapreduce.ParquetJob.createDataset(ParquetJob.java:102)
> 	at org.apache.sqoop.mapreduce.ParquetJob.configureImportJob(ParquetJob.java:89)
> 	at org.apache.sqoop.mapreduce.DataDrivenImportJob.configureMapper(DataDrivenImportJob.java:106)
> 	at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:260)
> 	at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:668)
> 	at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:118)
> 	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
> 	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
> 	at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> 	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
> 	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
> 	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
> 	at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)