Posted to dev@sqoop.apache.org by "Gaurav Kohli (JIRA)" <ji...@apache.org> on 2015/09/10 14:37:46 UTC

[jira] [Commented] (SQOOP-2192) SQOOP EXPORT for the ORC file HIVE TABLE Failing

    [ https://issues.apache.org/jira/browse/SQOOP-2192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14738676#comment-14738676 ] 

Gaurav Kohli commented on SQOOP-2192:
-------------------------------------

Is there any plan for this to be implemented soon?

> SQOOP EXPORT for the ORC file HIVE TABLE Failing
> ------------------------------------------------
>
>                 Key: SQOOP-2192
>                 URL: https://issues.apache.org/jira/browse/SQOOP-2192
>             Project: Sqoop
>          Issue Type: Bug
>          Components: hive-integration
>    Affects Versions: 1.4.5
>         Environment: Hadoop 2.6.0
> Hive 1.0.0
> Sqoop 1.4.5
>            Reporter: Sunil Kumar
>            Assignee: Venkat Ranganathan
>
> We are trying to export an RDBMS table to a Hive table so that we can run Hive delete and update queries on the exported Hive table. For Hive to support delete and update queries, the following is required:
> 1. The table must be declared with the transactional property
> 2. The table must be stored in ORC format
> 3. The table must be bucketed
> To do that, I have created the Hive table using hcat:
> create table bookinfo (
>     md5 STRING, isbn STRING, bookid STRING, booktitle STRING, author STRING,
>     yearofpub STRING, publisher STRING, imageurls STRING, imageurlm STRING,
>     imageurll STRING, price DOUBLE, totalrating DOUBLE, totalusers BIGINT,
>     maxrating INT, minrating INT, avgrating DOUBLE, rawscore DOUBLE, norm_score DOUBLE)
> clustered by (md5) into 10 buckets
> stored as orc
> TBLPROPERTIES ('transactional'='true');
> Then I run the Sqoop import:
> sqoop import --verbose --connect 'RDBMS_JDBC_URL' --driver JDBC_DRIVER --table bookinfo --null-string '\\N' --null-non-string '\\N' --username USER --password PASSWORD --hcatalog-database hive_test_trans --hcatalog-table bookinfo --hcatalog-storage-stanza "stored as orc" -m 1
> The following exception is thrown:
> 15/03/09 16:28:59 ERROR tool.ImportTool: Encountered IOException running import job: org.apache.hive.hcatalog.common.HCatException : 2016 : Error operation not supported : Store into a partition with bucket definition from Pig/Mapreduce is not supported
>         at org.apache.hive.hcatalog.mapreduce.HCatOutputFormat.setOutput(HCatOutputFormat.java:109)
>         at org.apache.hive.hcatalog.mapreduce.HCatOutputFormat.setOutput(HCatOutputFormat.java:70)
>         at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureHCat(SqoopHCatUtilities.java:339)
>         at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureImportOutputFormat(SqoopHCatUtilities.java:753)
>         at org.apache.sqoop.mapreduce.ImportJobBase.configureOutputFormat(ImportJobBase.java:98)
>         at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:240)
>         at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:665)
>         at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
>         at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:601)
>         at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>         at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
>         at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
>         at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
>         at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
> Please let me know if any further details are required.
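A workaround that is sometimes used while HCatalog cannot write into bucketed/transactional tables is to import into a plain, non-bucketed ORC staging table first and then copy the rows into the ACID table from Hive. The sketch below is illustrative only and has not been verified against this exact setup; the staging table name bookinfo_staging and the session settings are assumptions, and the connection placeholders are the same as in the report above.

    -- Hive: non-transactional, non-bucketed ORC staging table with the same columns
    create table bookinfo_staging (
        md5 STRING, isbn STRING, bookid STRING, booktitle STRING, author STRING,
        yearofpub STRING, publisher STRING, imageurls STRING, imageurlm STRING,
        imageurll STRING, price DOUBLE, totalrating DOUBLE, totalusers BIGINT,
        maxrating INT, minrating INT, avgrating DOUBLE, rawscore DOUBLE, norm_score DOUBLE)
    stored as orc;

    # Sqoop: import into the staging table; it has no bucket definition, so HCatOutputFormat accepts it
    sqoop import --verbose --connect 'RDBMS_JDBC_URL' --driver JDBC_DRIVER --table bookinfo \
        --null-string '\\N' --null-non-string '\\N' --username USER --password PASSWORD \
        --hcatalog-database hive_test_trans --hcatalog-table bookinfo_staging \
        --hcatalog-storage-stanza "stored as orc" -m 1

    -- Hive: copy the rows into the transactional, bucketed table
    -- (ACID sessions in Hive 1.x typically need settings along these lines)
    set hive.support.concurrency=true;
    set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
    set hive.enforce.bucketing=true;
    insert into table bookinfo select * from bookinfo_staging;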



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)