Posted to dev@hive.apache.org by "Adrian Lange (JIRA)" <ji...@apache.org> on 2014/07/02 21:28:25 UTC

[jira] [Commented] (HIVE-6914) parquet-hive cannot write nested map (map value is map)

    [ https://issues.apache.org/jira/browse/HIVE-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14050596#comment-14050596 ] 

Adrian Lange commented on HIVE-6914:
------------------------------------

I get the same error, no matter what file format I use for the source table. I've tried Textfile, Sequencefile, RCFile, and ORC.
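
For reference, here is roughly what I'm running (a sketch, not my exact DDL; the column type is my reading of the schema quoted below, and the source table's STORED AS clause is the part I've been varying):

-- nested map column: the map value is itself a map
CREATE TABLE text_mmap (
  m MAP<STRING, MAP<STRING, STRING>>
)
STORED AS TEXTFILE;  -- same failure with SEQUENCEFILE, RCFILE, and ORC

CREATE TABLE parquet_mmap (
  m MAP<STRING, MAP<STRING, STRING>>
)
STORED AS PARQUET;

-- this is the statement that throws the ParquetEncodingException below
INSERT OVERWRITE TABLE parquet_mmap SELECT * FROM text_mmap;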

> parquet-hive cannot write nested map (map value is map)
> -------------------------------------------------------
>
>                 Key: HIVE-6914
>                 URL: https://issues.apache.org/jira/browse/HIVE-6914
>             Project: Hive
>          Issue Type: Bug
>          Components: File Formats
>    Affects Versions: 0.13.0
>            Reporter: Tongjie Chen
>
> // table schema (identical for both plain text version and parquet version)
> hive> desc text_mmap;
> m    map<string,map<string,string>>
> // sample nested map entry
> {"level1":{"level2_key1":"value1","level2_key2":"value2"}}
> The following query will fail:
> insert overwrite table parquet_mmap select * from text_mmap;
> Caused by: parquet.io.ParquetEncodingException: This should be an ArrayWritable or MapWritable: org.apache.hadoop.hive.ql.io.parquet.writable.BinaryWritable@f2f8106
> at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeData(DataWritableWriter.java:85)
> at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeArray(DataWritableWriter.java:118)
> at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeData(DataWritableWriter.java:80)
> at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeData(DataWritableWriter.java:82)
> at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:55)
> at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
> at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
> at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:115)
> at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:81)
> at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:37)
> at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:77)
> at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:90)
> at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:622)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
> at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
> at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
> at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:540)
> ... 9 more


