Posted to dev@hive.apache.org by "Ashutosh Chauhan (JIRA)" <ji...@apache.org> on 2014/03/23 18:51:45 UTC

[jira] [Updated] (HIVE-5863) INSERT OVERWRITE TABLE fails in vectorized mode for ORC format target table

     [ https://issues.apache.org/jira/browse/HIVE-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ashutosh Chauhan updated HIVE-5863:
-----------------------------------

    Fix Version/s: 0.13.0

> INSERT OVERWRITE TABLE fails in vectorized mode for ORC format target table
> ---------------------------------------------------------------------------
>
>                 Key: HIVE-5863
>                 URL: https://issues.apache.org/jira/browse/HIVE-5863
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.13.0
>            Reporter: Eric Hanson
>            Assignee: Remus Rusanu
>             Fix For: 0.13.0
>
>
> create table store(s_store_key int, s_city string)
> stored as orc;
> set hive.vectorized.execution.enabled = true;
> insert overwrite table store
> select cint, cstring1
> from alltypesorc;
> alltypesorc is a test table that is checked into the Hive source tree.
> Expected result: data is added to the store table.
> Actual result:
> Total MapReduce jobs = 3
> Launching Job 1 out of 3
> Number of reduce tasks is set to 0 since there's no reduce operator
> Starting Job = job_201311191600_0007, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201311191600_0007
> Kill Command = c:\Hadoop\hadoop-1.1.0-SNAPSHOT\bin\hadoop.cmd job  -kill job_201311191600_0007
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
> 2013-11-20 16:39:53,271 Stage-1 map = 0%,  reduce = 0%
> 2013-11-20 16:40:20,375 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201311191600_0007 with errors
> Error during job, obtaining debugging information...
> Job Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201311191600_0007
> Examining task ID: task_201311191600_0007_m_000002 (and more) from job job_201311191600_0007
> Task with the most failures(4):
> -----
> Task ID:
>   task_201311191600_0007_m_000000
> URL:
>   http://localhost:50030/taskdetails.jsp?jobid=job_201311191600_0007&tipid=task_201311191600_0007_m_000000
> -----
> Diagnostic Messages for this Task:
> java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
>         at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:181)
>         at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
>         at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
>         at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
>         at org.apache.hadoop.mapred.Child.main(Child.java:260)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
>         at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:45)
>         at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:163)
>         ... 8 more
> Caused by: java.lang.ClassCastException: org.apache.hadoop.hive.ql.io.orc.OrcStruct cannot be cast to [Ljava.lang.Object;
>         at org.apache.hadoop.hive.serde2.objectinspector.StandardStructObjectInspector.getStructFieldData(StandardStructObjectInspector.java:173)
>         at org.apache.hadoop.hive.ql.io.orc.WriterImpl$StructTreeWriter.write(WriterImpl.java:1349)
>         at org.apache.hadoop.hive.ql.io.orc.WriterImpl.addRow(WriterImpl.java:1962)
>         at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat$OrcRecordWriter.write(OrcOutputFormat.java:78)
>         at org.apache.hadoop.hive.ql.exec.vector.VectorFileSinkOperator.processOp(VectorFileSinkOperator.java:159)
>         at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:489)
>         at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:827)
>         at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.processOp(VectorSelectOperator.java:129)
>         at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:489)
>         at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:827)
>         at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:91)
>         at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:489)
>         at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:827)
>         at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:43)
>         ... 9 more
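>
> Reading the trace, the ORC WriterImpl appears to have been configured with a StandardStructObjectInspector, which expects each row as an Object[] (or a List), while the row object actually handed to it on the vectorized write path is an OrcStruct. The short Java sketch below only illustrates that shape mismatch; the class and method names in it are invented for the example and are not Hive APIs.
>
>   // Illustrative stand-ins only, not Hive classes.
>   public class CastMismatchSketch {
>
>       // Mimics a writer-side struct row such as OrcStruct: its fields are held
>       // internally and the object itself is not an Object[].
>       static final class StructLikeRow {
>           private final Object[] fields;
>           StructLikeRow(Object... fields) { this.fields = fields; }
>           Object getField(int i) { return fields[i]; }   // fields come out via accessors
>       }
>
>       // Mimics an inspector that assumes rows are plain Object[] arrays, which is
>       // roughly what the failing cast at StandardStructObjectInspector.java:173 does.
>       static Object getStructFieldData(Object row, int fieldId) {
>           return ((Object[]) row)[fieldId];
>       }
>
>       public static void main(String[] args) {
>           Object arrayRow = new Object[] { 1, "a" };
>           System.out.println(getStructFieldData(arrayRow, 1));   // fine: row really is an Object[]
>
>           Object structRow = new StructLikeRow(1, "a");
>           System.out.println(getStructFieldData(structRow, 1));  // throws ClassCastException:
>                                                                  // StructLikeRow cannot be cast to [Ljava.lang.Object;
>       }
>   }
>
> In other words, the failure looks like a mismatch between the row objects produced by the vectorized operators and the object inspector the ORC writer uses to pull fields out of them, rather than a problem with the data being inserted.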



--
This message was sent by Atlassian JIRA
(v6.2#6252)