Posted to dev@hive.apache.org by "Teng Yutong (JIRA)" <ji...@apache.org> on 2016/07/21 07:58:20 UTC

[jira] [Created] (HIVE-14306) Hive Failed to read Parquet Files generated by SparkSQL

Teng Yutong created HIVE-14306:
----------------------------------

             Summary: Hive Failed to read Parquet Files generated by SparkSQL
                 Key: HIVE-14306
                 URL: https://issues.apache.org/jira/browse/HIVE-14306
             Project: Hive
          Issue Type: Bug
          Components: CLI
    Affects Versions: 1.2.1
            Reporter: Teng Yutong


I'm trying to implement the following process:

1. create a Hive Parquet table A using the Hive CLI
2. create an external table B with the same schema as A, but pointing to an existing HDFS folder that contains one CSV file
3. execute `insert into A select * from B` using SparkSQL
4. query table A.
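The steps above can be sketched roughly as follows (the table is named `call_center` after the file path in the exception below; the columns, delimiter, and HDFS location are illustrative assumptions, not from the original report):

```sql
-- Step 1: in the Hive CLI, create the Parquet-backed table A
CREATE TABLE call_center (
  cc_id INT,
  cc_name STRING
) STORED AS PARQUET;

-- Step 2: in the Hive CLI, create external table B over an existing
-- HDFS folder that contains one CSV file
CREATE EXTERNAL TABLE call_center_csv (
  cc_id INT,
  cc_name STRING
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  LOCATION '/user/hive/data/call_center_csv';

-- Step 3: run the insert from SparkSQL (e.g. the spark-sql shell)
INSERT INTO call_center SELECT * FROM call_center_csv;

-- Step 4: back in the Hive CLI, this query throws the exception
SELECT * FROM call_center;
```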

Weird things happen in steps 3 and 4.

If the `insert into` statement is executed by SparkSQL, the Hive CLI throws an exception when querying table A:
```
Failed with exception java.io.IOException:parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file hdfs://NEOInciteDataNode-1:8020/user/hive/warehouse/call_center/part-r-00000-b9b6962d-cbab-452b-835b-c10c6221b8fa.gz.parquet
```

But SparkSQL can query table A without any trouble...

If the `insert` statement is executed by the Hive CLI, querying table A in the Hive CLI works just fine...

So, am I doing something wrong, or is this a bug?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)